The results of this test are important because they highlight the role of randomness in Meta advertising.
Test Results
I ran an A/B test of three different ad sets. The breakdown of performance looks like this…
- Ad Set A: 80 conversions
- Ad Set B: 100 conversions
- Ad Set C: 86 conversions
Would you act on these results?
Meta thinks you should. According to the A/B test results, Meta has 59% confidence that Ad Set B is the winner. Maybe more convincing: there's only a 14% chance that Ad Set A would win if the test were repeated.
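Meta doesn't publish the math behind those confidence figures, and I'm not sharing spend or impressions per ad set here, but there's a rough sanity check you can run yourself (a sketch of my own, not Meta's methodology): if Ad Sets A and B truly performed the same and received similar delivery, each of their 180 combined conversions should be about equally likely to land in either one. A simple binomial test makes the point:

```python
# Rough sanity check (not Meta's methodology): assuming A and B got similar
# delivery, how surprising is a 100 vs. 80 split of 180 conversions if both
# ad sets truly perform the same?
from scipy.stats import binomtest

result = binomtest(k=100, n=180, p=0.5, alternative="two-sided")
print(f"p-value: {result.pvalue:.3f}")  # roughly 0.16 -- well above the usual 0.05 bar
```

In other words, a gap this size is nowhere near statistically significant on its own.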
The Twist
Well, guess what? All three ad sets were completely identical. Each one was optimized for conversions using Advantage+ Audience and all placements. All three used the same ads.
And yet, Ad Set B ended up with 25 percent more conversions than Ad Set A. Why?
The reason is simple: randomness is always a factor. Differences like this even out as volume increases, but beware of random results from small sample sizes.
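To see how easily randomness produces gaps like this, here's a small simulation sketch (my own illustration, not part of the original test). It assumes three identical ad sets that each average about 90 conversions, models their counts as independent Poisson draws, and measures how often the "winner" beats the "loser" by 25% or more purely by chance:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
trials = 100_000
mean_conversions = 90  # roughly the average of the three real ad sets

# Simulate conversion counts for three identical ad sets in each test.
counts = rng.poisson(mean_conversions, size=(trials, 3))

# How often does the best ad set beat the worst by 25% or more, purely by chance?
gap = counts.max(axis=1) / counts.min(axis=1)
print(f"Best ad set leads by 25%+ in {(gap >= 1.25).mean():.0%} of simulated tests")
```

The exact percentage isn't the point. The point is that a 25% gap between truly identical ad sets is not a rare event at this volume.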
Advertisers make a big mistake by over-optimizing based on differences that could be random.
Go here to learn more about my test.