7 A/B Testing Best Practices to Drive Greater Ad Performance

September 27th, 2023

A/B testing is one of the most effective ways to improve performance marketing efficiency.

Experimenting with multiple versions of your ad gives you real-world data that backs up your ad’s effectiveness. You can use this data to optimise your ad, minimising risk and improving performance.

In short, testing shows whether your ad is likely to generate results before you put your full budget behind it. These stats from 99Firms show why A/B testing is important:

  • 1 in 8 experiments drives significant change.
  • 60% of businesses A/B test their landing pages.
  • Just 7% of companies find it difficult to perform A/B tests.
  • A/B testing helped Bing achieve a 12% increase in revenue.

A/B testing works slightly differently on each ad platform, but the same rules apply. Here are 7 A/B testing best practices to help you boost ad performance, drive efficiency, and get more from your paid media budget.

1. Always Be Testing

A/B testing isn’t a one-time thing. It’s an ongoing process that helps you improve your ad performance, and ensures your ads continue to resonate with your audience.

Continuous ad testing can also help you:

  • Learn more about your audience and ad performance. 
  • Figure out what not to do in future campaigns.
  • Offer fresh ad creative to your audience on a regular basis, preventing campaign fatigue.

You don’t need a big budget to test your ads regularly. In fact, if you’re working with a reduced budget, testing is even more important, as it helps you get better results from your limited spend.

Here’s what Facebook Ads expert Andrea Vahl recommends to get the best results from split testing on a budget:

In order to set up a proper split test, you need to have your budget set up at the ad set level, not campaign budget optimisation.

Andrea Vahl

Marketing Consultant & Author

Setting your budget at the ad set level gives you more control over how much you spend on each ad. If a challenger ad performs better than the control, you can increase spend on the challenger while decreasing spend on the control ad.

Automated campaign types like Google’s Performance Max or Facebook’s Advantage+ use machine learning to better understand your audience and ad performance. But continuous testing can help you optimise your campaigns more quickly, giving you more control over your ad spend. Aaron Young at Define Digital Academy says:

Despite all the improvements that Google has made in its learning, you will still get faster results through running regular and scheduled split testing of your ad copies.

Aaron Young

Founder, Define Digital Academy


2. Understand What You’re Testing For

First, you need to decide which metrics indicate success. The most common advertising KPIs for measuring ad performance are:

  • Click-through rate.
  • Conversion rate.
  • Average order value.
  • Return on investment.

Choose one of these as your primary metric; you’ll use it to evaluate success at the end of the test run.
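If you’re unsure exactly how these KPIs are calculated, here’s a minimal sketch in Python. The campaign figures are made up purely for illustration; swap in your own numbers:

```python
# Hypothetical campaign totals, purely for illustration.
impressions = 120_000
clicks = 3_600
conversions = 180
revenue = 9_000.0   # revenue attributed to the ad, in your currency
ad_spend = 2_500.0  # total cost of running the ad

ctr = clicks / impressions              # click-through rate
conversion_rate = conversions / clicks  # conversions per click
average_order_value = revenue / conversions
roas = revenue / ad_spend               # return on ad spend
roi = (revenue - ad_spend) / ad_spend   # return on investment

print(f"CTR: {ctr:.2%}")
print(f"Conversion rate: {conversion_rate:.2%}")
print(f"AOV: {average_order_value:.2f}")
print(f"ROAS: {roas:.2f}x")
print(f"ROI: {roi:.2%}")
```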

Here’s an example from underwear brand Underoutfit. They ran a split test campaign on Facebook Ads to see if user-generated videos could boost ad revenue.


They discovered that adding branded content ads to their standard Facebook advertising strategy led to:

  • 47% higher click-through rate compared with standard ads alone.
  • 31% lower cost per sale.
  • 38% higher return on ad spend.

These metrics helped Underoutfit validate their hypothesis that user-generated content would lead to an uplift in revenue.

It’s OK if your results don’t support your hypothesis, or even lead to a reduction in clicks or conversions. That’s why testing is important — you can find out what works before you overhaul your entire campaign. This valuable information can shape your future ads for the best possible results.

3. Test Multiple Variables (But Only One at a Time)

It’s a good idea to test multiple variables in your ads — but not at the same time. Limiting your tests to one variable per experiment helps you understand its impact on performance.

Prioritise testing the variables that are likely to have the biggest impact on conversions. This will differ from company to company and campaign to campaign, so use the data you already have to make these initial decisions. 

Alex Jackson, Paid Media Team Lead at Hallam Internet, says:

When A/B testing, you should pretend you’re back in high school science. Approach it like an experiment. You need to have a hypothesis to start with. And you need to be methodical by only changing one variable at a time. Figure out what you think might make your ad more successful, and tweak that while keeping everything else the same.

Alex Jackson

Paid Media Team Lead, Hallam Internet

PPC variables you can test include:

  • Ad copy and messaging — Changing, adding, or removing just a single word can have an impact on your results.
  • Ad design — Consider testing variables like colour, image placement, and font.
  • Ad images — Test the impact of using stock photos versus your own custom images, or experiment with different stock photos (e.g. with and without people).
  • The keywords you bid on — Test which keywords give you a higher click-through/conversion rate for the same ad.
  • Landing pages — Experiment with different landing page messaging and design for the same ad.
  • Calls-to-action — Changing the action you want your audience to take can impact results.

Here’s an example of a Facebook ad image test from Wirecutter. A creative showing the article’s actual content (a comparison of different in-ear headphones) performed 75% better than a more generic image of a model wearing earphones.

This suggests a photo that reflects the actual content resonates more with Wirecutter’s audience than generic stock imagery.

The ad creator could now spend time testing different photos of wireless headphones to see which performs best. But with such strong results, it’s probably more valuable to move on to testing another aspect of the ad, such as the article title or ad copy.

4. Give Your Tests Time

Ending experiments early can lead to unreliable data. If your control ad gets 10,000 impressions but your challenger ad only gets 1,000, your results won’t be comparable.

Your results need to be statistically significant before you can act on them. So it’s a good idea to run your split test for at least a week, or longer if possible.


Ben Heath, Facebook Ads expert and founder at Heath Media, says:

For me, the appropriate length of time to assess a new Facebook Ad or Instagram Ad is about three to seven days. That will vary a lot depending on how many conversions you’re generating through that ad. The more conversions, the faster you can make a decision.

Ben Heath

Founder, Heath Media

Ultimately, statistical significance is more important than speed when A/B testing your ads. So give them as long as they need to make sure your data is reliable and accurate.
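If you want a quick way to check significance yourself, here’s a minimal sketch in Python using the statsmodels library. It runs a two-proportion z-test comparing conversion rates for a control and a challenger ad; the click and conversion figures, and the 95% confidence threshold, are assumptions purely for illustration:

```python
# pip install statsmodels
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results after a week of running both ads.
control_clicks, control_conversions = 4_800, 192        # 4.0% conversion rate
challenger_clicks, challenger_conversions = 4_750, 238  # ~5.0% conversion rate

# Two-sided z-test: is the difference in conversion rate likely to be real,
# or could it just be random noise?
stat, p_value = proportions_ztest(
    count=[challenger_conversions, control_conversions],
    nobs=[challenger_clicks, control_clicks],
)

print(f"p-value: {p_value:.4f}")
if p_value < 0.05:  # common 95% confidence threshold
    print("Statistically significant: the challenger likely outperforms the control.")
else:
    print("Not yet significant: keep the test running or gather more data.")
```

Many ad platforms report a confidence level for you, but a quick check like this is useful when you’re combining exported results or comparing performance across channels.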

5. Select Your Sample Carefully

Be mindful of who you include when selecting the audience for your A/B test campaign. Your test samples should be as similar as possible to get the most reliable results.

The most reliable A/B testing samples are:

  • Highly targeted — Select your most relevant audience.
  • Large — The greater your dataset, the more reliable your results.
  • Random — Samples shouldn’t be split by any specific characteristic.
  • Even — Control and challenger ads are seen by the same number of unique users.

Most ad platforms with an A/B testing function will split your audience for you. In Google Ads, for example, you can allocate a percentage of your budget and traffic to each version of your ad.

This puts your control and challenger ads in front of an equal number of audience members without you having to manually select a sample.

Make sure you have enough traffic and a high enough budget to achieve statistical significance, especially if you want to see results quickly. You should also take steps to remove invalid and fake users from your ad traffic, so they don’t skew your results and give you unreliable data. Look out for these signs of invalid traffic (and find out how to fix it).
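If you’re not sure what counts as enough traffic, a quick power calculation gives you a ballpark figure. Here’s a minimal sketch in Python using statsmodels; the baseline conversion rate, the uplift you want to detect, and the 80% power / 95% confidence settings are all assumptions you should replace with your own numbers:

```python
# pip install statsmodels
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.04   # assumed current conversion rate (4%)
minimum_uplift = 0.01  # smallest improvement worth detecting (4% -> 5%)

# Convert the two conversion rates into a standardised effect size.
effect_size = proportion_effectsize(baseline_rate + minimum_uplift, baseline_rate)

# Solve for the number of users needed in each variant.
sample_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,   # 95% confidence
    power=0.8,    # 80% chance of detecting a real uplift
    alternative="two-sided",
)

print(f"You need roughly {sample_per_variant:,.0f} users per variant.")
```

If the figure is higher than your traffic or budget allows, you can test for a larger uplift, measure a higher-volume metric like click-through rate, or simply let the test run for longer.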

6. Evaluate Your Test Results

When your campaign ends, analyse the performance of each of your tested ads. If there’s a clear loser (for example, conversions are down 20% on one variant), you don’t need to spend much more time analysing your results; just discard the poor-performing variant and move on to the next test.

But in many cases, the results are less clear cut. So you’ll need to perform a more thorough analysis before ending your experiment.

Take a look at the metrics you chose beforehand and measure your results against your goals:

  • Which ad was more effective?
  • Do you have enough data to back it up?
  • Are you confident enough to put your budget behind the winning ad?

If you’re happy with the statistical significance of your test results, apply any changes across your campaign.

If you’re not confident in your results, go back to the drawing board. Give the experiment more time to run, or simply stop the experiment and start again with a new idea. A fresh idea can pay off: one company found that changing its CTA from “Request a quote” to “Request pricing” increased click-through rate by 162%.

If you’ve tested more than one variant, it’s a good idea to validate your results before you implement the winning variant for your entire audience. Re-run your experiment using just the control and best-performing challenger ad to increase your confidence in the results.

7. Track & Repeat

When you’ve refined one aspect of your campaign, it’s time to move on to the next. Continuous testing enables you to constantly optimise your ads and keep things fresh for your audience, so they don’t get bored of your campaigns.

Start again at step two: set a hypothesis, decide which variables you’re going to test for, and redesign or rewrite your ad. Don’t forget to track what you’ve changed and your results for future reference. 

Remove Invalid Traffic to Boost A/B Testing Accuracy

Implementing these A/B testing best practices is a great way to boost the performance of your ad campaigns. But your results may be less reliable than you think if there’s a lot of invalid traffic clicking your ads.

Our 2024 Wasted Ad Spend report found that almost 70% of respondents receive fake or spam leads from their paid media campaigns. These leads often come from bots and invalid traffic, which click your ads (costing you money) without offering any return. And to add insult to injury, they can massively skew your A/B test results.

Lunio stops fake and invalid users engaging with your ads, so you can get more reliable results and increase your marketing efficiency. Learn more about how Lunio tackles invalid traffic, then book a demo to get started.
