Test Results: Advantage+ Audience vs. Detailed Targeting and Lookalikes

We should always test our assumptions. We may think that something works, or maybe it worked at one time, but it’s important to verify that it remains the path forward.

Testing our targeting strategies was the focus of a recent blog post, and I ran a test of my own as an example. This post will highlight the setup and results of the test.

I tested using the following three targeting strategies:

  1. Advantage+ Audience without suggestions
  2. Detailed Targeting with Advantage Detailed Targeting
  3. Lookalike Audiences with Advantage Lookalike

It’s important to understand that the results of this test are not universal. I will address some of the potential contributing factors at the end of this post.

Here’s what we’ll cover:

  • Campaign Basics
  • Targeting
  • A/B Test Setup
  • Surface Level Data
  • Conversion Results
  • Quality
  • Remarketing and Prospecting Distribution
  • Potential Contributing Factors
  • What it Means

My goal isn’t to convince you that your approach is right or wrong. My hope is that my test inspires you to run a similar one of your own so that you can validate or invalidate your assumptions.

Let’s begin…

Campaign Basics

I created a campaign using the Sales objective.

Sales Objective

Within that campaign, I created three ad sets. Each used the following settings…

1. Performance Goal: Maximize conversions with Complete Registration conversion event.

Maximize Conversions Performance Goal

My goal is to get registrations for a lead magnet. The reason I’m using the Sales objective is to get access to Audience Segments data (I’ll address that later).

2. Attribution Setting: 1-day click.

Attribution Setting

I recommend using a 1-day click attribution setting for most non-purchase events.

3. Budget: $25/day per ad set ($750 per ad set overall)

Daily Budget

The total spent on the test was about $2,250.

4. Locations: United States, Canada, and Australia.

Locations

I would normally include the United Kingdom, but it is no longer allowed for split testing.

5. Placements: Advantage+ Placements.

Advantage+ Placements

6. Ads: One static and one using the Flexible Ad Format. The Flexible version used four different images.

Each ad sent people to a different landing page with a unique form. All three landing pages and forms appear identical to the user. This was done so that I could confirm results in my CRM — not just the number of registrations using each form, but what these people did once they subscribed.
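
If you build campaigns programmatically, the settings above map fairly directly to Meta’s Marketing API. Here’s a rough sketch using the facebook_business Python SDK. The account ID, pixel ID, and access token are placeholders, and field names can shift between API versions, so check the current docs rather than treating this as my exact build.

```python
# Rough sketch of the campaign and one ad set described above, using Meta's
# facebook_business SDK. Account ID, pixel ID, and token are placeholders.
from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccount import AdAccount

FacebookAdsApi.init(access_token="YOUR_ACCESS_TOKEN")
account = AdAccount("act_YOUR_AD_ACCOUNT_ID")

# Sales objective campaign; budgets live on the ad sets for this test
campaign = account.create_campaign(params={
    "name": "Targeting Test - Sales",
    "objective": "OUTCOME_SALES",
    "special_ad_categories": [],
    "status": "PAUSED",
})

# One of the three ad sets: Complete Registration optimization, 1-day click
# attribution, $25/day, US/CA/AU. Leaving placements unset defaults to
# Advantage+ placements.
ad_set = account.create_ad_set(params={
    "name": "Advantage+ Audience - No Suggestions",
    "campaign_id": campaign["id"],
    "daily_budget": 2500,  # in cents: $25/day
    "billing_event": "IMPRESSIONS",
    "optimization_goal": "OFFSITE_CONVERSIONS",
    "promoted_object": {
        "pixel_id": "YOUR_PIXEL_ID",
        "custom_event_type": "COMPLETE_REGISTRATION",
    },
    "attribution_spec": [{"event_type": "CLICK_THROUGH", "window_days": 1}],
    "targeting": {
        "geo_locations": {"countries": ["US", "CA", "AU"]},
        # Recent API versions expose an Advantage+ audience flag here;
        # verify against your API version before relying on it.
        "targeting_automation": {"advantage_audience": 1},
    },
    "status": "PAUSED",
})
print("Created ad set:", ad_set["id"])
```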

Targeting

Each ad set utilized a different targeting approach.

1. Advantage+ Audience without suggestions.

Advantage+ Audience

There isn’t much to show here. This allows the algorithm to do whatever it wants.

2. Detailed Targeting with Advantage Detailed Targeting.

Detailed Targeting

I used Original Audiences and selected the following detailed targeting options:

  • Digital Marketing Strategist
  • Advertising agency (marketing)
  • Jon Loomer Digital (website)
  • Digital marketing (marketing)
  • Online advertising (marketing)
  • Social media marketing (marketing)

Because I’m optimizing for conversions, Advantage Detailed Targeting is automatically turned on. I cannot prevent the audience from expanding.

3. Lookalike Audiences with Advantage Lookalike.

Lookalike Audiences

I selected lookalike audiences based on the following sources:

  • Customer List
  • Power Hitters Club – Elite (Active Member)
  • All Purchases – JonLoomer.com – 180 Days

Because I’m optimizing for conversions, Advantage Lookalike is automatically turned on and can’t be turned off.

A/B Test Setup

I ran an A/B test of these three ad sets in Experiments. The key metric for finding a winner was Cost Per Result. That “result” was a registration.

A/B Test

I ran the test for 30 days and chose not to have it end early if Meta found a winner.

A/B Test

I’m glad I did it this way because Meta’s confidence in the winner wasn’t particularly high, and the projected winner changed a couple of times. This allowed the test to play out until the end.

Surface Level Data

Before we get to the conversion results, I found this interesting. Beyond testing how these three approaches would perform, I was curious whether the cost of delivery would differ much. That, of course, could have an impact on overall performance.

Ads Manager Results

The difference in CPM is minor, but it could be impactful. It was $0.68 cheaper to deliver ads using Advantage+ Audience than Lookalikes. The difference in CPM between Advantage+ Audience and Detailed Targeting was $0.89.

While this may not seem like much (it’s not), that resulted in the delivery of between 1,500 and 2,000 more impressions when using Advantage+ Audience. It doesn’t mean that a lower CPM will lead to more results, but we should bookmark this metric for later.
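
To put that CPM gap in perspective, impressions are simply spend divided by CPM, times 1,000. The CPM values in the sketch below are hypothetical; they’re only there to show how a gap of $0.68 to $0.89 at roughly $750 of spend per ad set lands in that 1,500 to 2,000 impression range.

```python
# Impressions delivered = (spend / CPM) * 1,000. The CPMs below are
# hypothetical, chosen only to illustrate the scale of the gap at ~$750
# of spend per ad set.
def impressions(spend: float, cpm: float) -> int:
    return int(spend / cpm * 1000)

spend = 750.0  # approximate spend per ad set over 30 days
cpms = {
    "Advantage+ Audience": 18.00,  # hypothetical baseline
    "Lookalikes": 18.68,           # hypothetical: $0.68 higher
    "Detailed Targeting": 18.89,   # hypothetical: $0.89 higher
}

base = impressions(spend, cpms["Advantage+ Audience"])
for name, cpm in cpms.items():
    delivered = impressions(spend, cpm)
    print(f"{name}: {delivered:,} impressions ({base - delivered:,} fewer than Advantage+)")
```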

Conversion Results

According to Ads Manager, Advantage+ Audience led to 9 more registrations than Detailed Targeting and 36 more than Lookalikes.

Ads Manager Results

The overall costs for these results weren’t great, but that’s also consistent with what I’ve seen when running split tests. Because these tests prevent overlap, delivery will be less efficient. Of course, “good results” weren’t the goal here.

The difference between Advantage+ Audience and Detailed Targeting may not be statistically significant, but the gap between both of those and Lookalikes certainly appears to be. The A/B test results support this read.

A/B Test Results

It’s possible that if the test were run again, Detailed Targeting would come out ahead (Meta estimates a 36% chance of that happening). But, it’s very unlikely (under 5%) that Lookalikes would come out on top.
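
If you want a quick significance check outside of Meta’s own estimates, a chi-square test on registrations vs. non-converting impressions per ad set does the job. The counts below are made up for illustration only; plug in your own numbers from Ads Manager.

```python
# Chi-square test comparing conversion rates across the three ad sets.
# The registration and impression counts are made up for illustration.
from scipy.stats import chi2_contingency

registrations = {"Advantage+ Audience": 150, "Detailed Targeting": 141, "Lookalikes": 114}
impressions = {"Advantage+ Audience": 41_700, "Detailed Targeting": 39_700, "Lookalikes": 40_100}

# Rows: ad sets. Columns: [registrations, impressions without a registration]
table = [[registrations[k], impressions[k] - registrations[k]] for k in registrations]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value suggests the conversion rates genuinely differ;
# a large one means randomness could plausibly explain the gap.
```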

Recall that each ad sent people to a different landing page that utilized a different form. This way, registrants were given a unique tag so that I knew which audience they were in. These landing pages and forms were only used for the test.

Keep in mind that the results in Ads Manager reflect all registrations, and this can include registrations for other lead magnets. This could happen if someone who subscribes to the lead magnet I’m promoting then subscribes to another (I email about other lead magnets in my nurture sequence).

The numbers from my CRM aren’t much different, but they are different.

The disparity is greater when looking at the “true” results. Advantage+ Audience led to 14 more registrations than Detailed Targeting and 43 more than Lookalikes.

At least some of this difference might be related to the slight differences in CPMs. Keep in mind, though, that Lookalikes had the second lowest CPM of the three targeting strategies, yet performed the worst.

Quality

One of the first arguments I hear from advertisers against leveraging Advantage+ Audience over old-school targeting approaches is that it’s more likely to lead to low-quality results. Was that the case here?

I was prepared to measure this. It’s one of the reasons that I used unique forms for each ad set. It allowed me to get a deeper understanding of whether these registrants did anything else.

I’d consider my funnel atypical compared to most businesses that collect registrations. I don’t expect many registrants to buy from me within 30 days. I look at it as more of a long-tail impact, and many of the people who buy from me do so years later.

Because of that, we can’t make any reasonable assessment of registration quality based on sales at this stage. While two purchases have come in via Advantage+ Audience and two from Detailed Targeting so far, that’s hardly statistically significant. And it could change dramatically in a matter of months or years (and I don’t want to wait until then to publish this post).

But, there is another way to assess quality, and I first applied this when comparing lead quality from instant forms vs. website forms. Have these registrants performed a funnel event by clicking specific links in my emails?

Once again, the count of “quality clicks” is incomplete, but we can make some initial evaluations. Here’s where we stand at this moment…

While Advantage+ Audience led to a higher volume of registrations, it was not at the expense of quality. It generated 17% more quality registrants than Detailed Targeting and 54% more than Lookalikes.

These numbers are imperfect and incomplete since, as I said, a true evaluation of whether the registrations were “quality” can’t be made for quite some time. But it at least shows the difference in engagement. If someone hasn’t engaged with my emails, they are less likely to become a customer.
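
Mechanically, this quality check is just a tally of registrations (tagged by the form they came through) against email click events. Here’s a sketch of that tally, assuming a hypothetical CRM export with tag and quality_click columns; your CRM’s export will look different.

```python
# Tally "quality" registrants (those who clicked a funnel link in an email)
# per targeting tag. Assumes a hypothetical CRM export with columns:
#   email, tag (which test form they registered through), quality_click (0/1)
import pandas as pd

crm = pd.read_csv("crm_export.csv")

summary = (
    crm.groupby("tag")
       .agg(registrants=("email", "count"), quality=("quality_click", "sum"))
)
summary["quality_rate"] = (summary["quality"] / summary["registrants"]).round(3)
print(summary.sort_values("quality", ascending=False))
```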

Remarketing and Prospecting Distribution

I promised I’d get back to this when I explained using the Sales objective at the top. I could have used the Leads objective (or even Engagement), but I chose Sales for one reason: Access to data using Audience Segments.

When running a Sales campaign (Advantage+ Shopping or manual), some advertisers have access to Audience Segments for reporting.

Audience Segments

Once you define your Engaged Audience and Existing Customers, you can use breakdowns to see how your budget and results are distributed between remarketing (Engaged Audience and Existing Customers) and prospecting (New Audience).

This isn’t necessarily meaningful on its own, but I find it interesting. It gives us an idea of how Meta finds the people who are likely to perform our goal event. I used this as the primary way to compare distribution across four different targeting approaches in another test.

Within that test, I saw remarketing take up 25 to 35% of my budget, regardless of the targeting approach. In that case, I ran each ad set concurrently and didn’t run an A/B test. This test could be different since it’s a true A/B test.

Here are the breakdowns…

Breakdown by Audience Segments

It’s a lot of numbers, but the distribution between remarketing and prospecting is very similar in all three cases.

  • Advantage+ Audience: 9.2% remarketing, 90.8% prospecting
  • Detailed Targeting: 10.1% remarketing, 89.9% prospecting
  • Lookalikes: 8.7% remarketing, 91.3% prospecting

More remarketing spend went to Detailed Targeting, though I wouldn’t consider the difference statistically significant. The split within remarketing was a bit more notable, however. Advantage+ Audience spent $10 on existing customers, whereas the other two approaches spent around $5 or less. Not a lot, obviously.

Maybe somewhat surprising is that more remarketing registrations came from Detailed Targeting (25 vs. 16 for Lookalikes and 14 for Advantage+ Audience). While that looks like a significant percentage difference, we’re dealing with very small sample sizes here that may be impacted by randomness.

My primary takeaway is that the distribution between remarketing and prospecting is about the same for all three approaches. My theory for why it’s so much lower than in the other test I referenced is that an A/B test splits a finite (and comparatively smaller) remarketing audience three ways. There isn’t as much remarketing to go around.

Potential Contributing Factors

It’s important to understand that my results are unique to my situation. They are impacted by factors specific to my business, and you may see different results.

1. The Detailed Targeting selected.

Some advertisers swear by detailed targeting. Maybe they have certain options that are much more precise and make using them an advantage. Maybe I would have seen different results had I used a different selection of interests and behaviors.

These are all fair points. But remember that, no matter what we select, the audience is expanded when optimizing for conversions. This is why I have my doubts about the impact of choosing specific detailed targeting options.

2. The Lookalike Audiences selected.

The lookalike audiences that I selected are based on sources that are important to my business. They include both prior registrants and paying customers. But, this was also my worst performing ad set. Maybe different lookalike audiences would have changed things.

Once again, I’m not wholly convinced of this, because lookalike audiences are also expanded when optimizing for conversions. I doubt that any of my lookalike audiences are so different that the algorithm wouldn’t eventually end up showing my ads to the same people once the audience is expanded.

But, I can’t ignore the possibility. I was surprised that lookalikes performed so much worse than the other two, and the ones I selected could have contributed to those results.

3. Activity and history on my account.

This one is based primarily on theory because Meta isn’t particularly clear about it. We know that if audience suggestions aren’t provided when using Advantage+ Audience, Meta will prioritize conversion history, pixel data, and prior engagement with your ads.

Advantage+ Audience

It’s possible that I’m at an advantage because I have extensive history on my account. My website drives more than 100,000 visitors per month, and I have about a decade of pixel data.

Yes, this is possible. We just don’t know that for sure. Many advertisers jump into a new account and automatically assume that Advantage+ Audience won’t be effective without that history. Test it before making that assumption.

4. Industry.

It’s entirely possible that how each of these three approaches performs will differ based on the industry. Maybe some industries have detailed targeting that clearly makes a difference. That doesn’t seem to be the case for me, even though there are detailed targeting options that clearly fit my potential customer.

And… once again, we can’t ignore that your detailed targeting inputs will be expanded when optimizing for conversions.

5. Location.

Some of the responses I’ve received from advertisers regarding the viability of Advantage+ Audience refer specifically to their location. They say that Advantage+ Audience does not work where they are. Maybe that’s the case. I can’t say for sure.

6. Randomness.

One of the biggest mistakes advertisers make is failing to account for randomness. Especially when results are close, don’t ignore the potential impact of random distribution. The more data we have, the less of a factor it becomes.

One of the tests on my list is to compare the results of three ad sets with identical targeting. What will happen? I’m not sure. But part of me is hoping for chaos.
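
In the meantime, a quick simulation previews what chance alone can do: three ad sets with the exact same true conversion rate will still report noticeably different registration counts. The rate and impression numbers below are made up, purely to illustrate the spread.

```python
# Simulate three ad sets with IDENTICAL true conversion rates to see how much
# the reported results can differ by chance alone. The rate and impression
# counts are made-up illustration values.
import random

random.seed(7)
true_rate = 0.0035     # same underlying conversion rate for all three ad sets
impressions = 40_000   # roughly comparable delivery per ad set
trials = 5

for trial in range(1, trials + 1):
    counts = [
        sum(1 for _ in range(impressions) if random.random() < true_rate)
        for _ in range(3)
    ]
    print(f"Trial {trial}: registrations per ad set = {counts}, spread = {max(counts) - min(counts)}")
```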

What it Means

As I said at the top, my goal with this test wasn’t to prove anything universally. My primary goal was to validate or invalidate my assumptions. I’ve been using Advantage+ Audience for a while now. I haven’t used detailed targeting or lookalikes for quite some time. But, these results validate that my approach is working for me.

Another goal for publishing these results is to inspire advertisers to create similar tests. Whether you use Advantage+ Audience, detailed targeting, lookalike audiences, or something else, validate or invalidate your assumptions.

A far too common response that I get from advertisers about why they don’t use Advantage+ Audience is something along the lines of, “This will never work for me because…” It’s based on an assumption.

That assumption might stem from the inability to restrict gender and age with Advantage+ Audience. But, as I’ve discussed, you should test that assumption as well, especially when optimizing for purchases.

Bottom line: These results suggest that Advantage+ Audience without suggestions can be just as effective as, if not more effective than, detailed targeting and lookalikes. If that’s the case, you can save a lot of the time and energy you spend worrying about targeting.

Test this yourself and report back.

Your Turn

Have you run a similar A/B test of targeting strategies? What did you learn?

Let me know in the comments below!