You’ll Get Worse Results When Testing

It’s important that you understand this…

A/B Tests

If you run an A/B test, your audience will be divided so that there’s no overlap. Per Meta:

We show each version to a segment of your audience and ensure nobody sees both, then determine which version performs best.

That’s good! But by spreading your audience across multiple campaigns and ad sets, you won’t get optimal results. Know that going in.

[Image: Meta A/B Test]

Unscientific Test

Even if you don’t run a true A/B test, you can run into suboptimal results. You might create multiple campaigns or ad sets to determine whether one optimization works better than another, for example. That will lead to auction overlap.

[Image: Meta Ad Test]

The more campaigns and ad sets you create, the more competition you generate for yourself.

Testing and Optimal Results

I’m not saying that you shouldn’t test. You should. But you should understand that testing isn’t the ideal environment for results.

When you test, temper your expectations. Don't expect any one version to produce amazing results during the test itself. Instead, look for the version that performs best relative to the others.

Then shut off the other versions so that the winning version has a better chance to perform. Once it’s free from auction overlap and it’s not restricted by an A/B test, you should start getting better results.
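If you export your test results and want a rough sanity check on "which version performed best," a simple two-proportion z-test works. This is my own illustration with hypothetical numbers, not Meta's methodology; Meta's A/B test tooling declares its own winner, so treat this as a back-of-the-envelope check.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates; returns (z statistic, two-tailed p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: version A converted 120 of 4,000 people reached,
# version B converted 150 of 4,000.
z, p = two_proportion_z_test(120, 4000, 150, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A large p-value (say, above 0.05) means the gap between versions may just be noise, which is another reason not to obsess over small differences in test-phase numbers.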

It’s a reminder that these tests aren’t meant to run indefinitely. You want to reach a point where you find a winner so that you can leverage it and get the best possible results.
