
13 best practices for A/B testing mobile ads

By John Koetsier September 17, 2021

You’re kicking off a mobile ad campaign. You want crazy good results. You want massively high ROI and incredible ROAS. And you don’t want to spend 70% of your budget teaching the platforms what works to attract the best users for your app. So how do you do A/B testing to find the best ads?

Here are 13 different (and lucky!) things to consider …

1. There are algorithms for that (but start with a clue)

You know the algorithms do a lot of the work in finding the best versions of your ads, but you also know that there needs to be a starting point for your A/B tests. And that you just might be a little bit smarter than AI, at least for a few years yet.

There’s also a financial challenge if you just rely on Facebook’s or Google’s or any other ad platform’s machine learning algorithms to find the best ad. The reality is that they can chew through a lot of budget in a short time running split tests just to find out what you already know.

This doesn’t mean they’re bad. Doesn’t mean they’re not helpful. Doesn’t mean you don’t want to use them. And it doesn’t mean that it’s impossible to find out that actually, you were completely wrong about finding the best way to get new players, customers, and people in your app.

But let’s assume you’re smart and you have a clue. Start somewhere intelligent, and let the algorithms refine your tests.

And then, yes, occasionally throw caution to the winds and try a few campaigns that are just completely out of left field, just to see if AI has a card or two up its sleeve and it can teach you a few things about how to connect with your core audience.

2. Formulate a plan

What are you going to test first? What’s the full list of things you want to test? Do you have a hypothesis? Random action may unexpectedly produce a Picasso … but it’s not likely. Start your A/B testing with at least some of the scientific rigor that you hope to harness in selecting winning campaigns, creative, and copy.

What do you expect to happen? (OK, hope will happen.)

Document, report, and iterate.

3. Pick an audience

We’d all like to appeal to everyone but we’re not all Google. (And even then some people choose DuckDuckGo or Bing.) Even the simplest hypercasual game has different groups of people who want to use it; pick which one you’re going to target with your first A/B test.

Typically you’d start with the expected highest-value players, customers, or people. As you proceed with additional tests, at some point you will want to switch targeting to next-most-valuable categories. And, be open to the possibility that some groups will be either more or less valuable than you first imagined.

4. Test one thing at a time

There’s a time and a place for starting with totally different ad sets, calls to action, brand promises, etc. As you get deeper into optimization, however, A/B tests work best with smaller, definable, quantifiable changes.

If you change two things at once you won’t know what influenced your target audiences.

(That said, yes, we can get very meta here and postulate that test A with red text and a green logo does better than test B with green text and a red logo, and so on. So: limit the changes to see impact more clearly, but use your best judgement as you go.)

Caveat:

If you have massive budget and/or lots of time, consider kicking off multivariate testing so you can change multiple things at the same time. It’s more complicated and significantly more expensive, but it will get you deeper answers faster. Be sure, however, you have enough budget to reach statistical significance: it’s going to be much harder than A/B testing.

5. Assign cohorts to those audiences

OK, I’m cheating here by calling this #5 in best practices for A/B testing because it’s technically something to do AFTER your split testing, but not toooo egregiously. (I hope.)

Keep track of the cohorts of new customers/players/users you get in your app based on the segment that you were targeting. Work with product/development/live ops to customize their onboarding and app experience.

6. Be patient

Waiting sucks.

If you can get Chipotle via delivery drone in 5 minutes flat (it’s real, I promise), why can’t you get your A/B tests just as quickly? Because they need some time to develop.

The last thing you want is to prematurely declare a victor and make poor decisions. You could theoretically accelerate spend to burn through budget faster and get a result sooner, but does that really prove the point? Or does it just show that at this time of the morning, or that time of night, people want X rather than Y?

Better to give it some time, walk away, have a coffee, play a game, do some other work, and come back a couple days later to analyze the data.

If the platform itself doesn’t offer it, Google “A/B testing significance calculator” to easily check if you have enough data to be authoritative.
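If you’d rather sanity-check it yourself, here’s a minimal sketch (in Python, with made-up numbers) of the standard two-proportion z-test that most of those calculators run under the hood:

```python
# A minimal two-proportion z-test: the same kind of check most online
# A/B significance calculators perform. All numbers below are hypothetical.
from math import sqrt, erfc

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se                                     # z-score of the lift
    return erfc(abs(z) / sqrt(2))                            # two-sided p-value

# Variant B looks better, but is it significant at the usual 5% level?
p_value = ab_significance(conv_a=120, n_a=2500, conv_b=145, n_b=2500)
verdict = "call it" if p_value < 0.05 else "keep the test running"
print(f"p-value: {p_value:.3f} -> {verdict}")
```

Anything at or above 0.05 is the math telling you that you don’t yet have enough data to be authoritative.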

7. Declare victory. Then start a new war.

Found a winner? Pop the champagne, break out the cigars, and drop the beat on the party music.

But do that all in VR, because there is no finish line, all wins are contingent, all success is temporary. Once you’ve found a great ad, start a new series of testing.

8. Test for freeeeee

A/B testing is expensive. You burn budget to find the best way to burn more budget. If you’re just starting out, you can kick off your testing for free in person. You can do some of it for free on social.

Caveat: free doesn’t really scale, and it’s much more inductive than deductive, more directional than definitive.

But it can still have value.

Another way to (somewhat) cheaply get insight is via surveys. Pop-up mobile surveys can cost you a dollar per person on Pollfish or similar tools, so you can spend $300 or $500 to get insights that might cost you $3,000 or $5,000 in real ad campaigns. Another caveat: surveys aren’t real-world ad tests. They don’t have to deal with ad blindness the way real ads in the real world do.

So be warned: YMMV. Real ads in the wild over a significant interval of time with a significant number of views and actions are the gold standard here.

9. Pick an outcome to test for that matters

It’s tempting to pick click-through rate as the determining factor in your split testing. It’s quick, obvious, and available right in the same platform you’re doing the test in. While I’m sure there are cases where this makes sense for you, generally speaking this is a Very Bad Idea™.

Test for a variable as far down your funnel as you can. That might be app installs, but the deeper you can go — engagement, sign-ups, purchases — the more useful the results of your A/B testing will be.

Rule of thumb: test for something you care about. Ideally, a KPI that is critical to the success of your business.

10. Be honest about the results

If you get results back from your 5,000 install A/B test and version B is 1.5% better than version A … you have a problem. The “improvement” is probably well within the margin of error and therefore illusory.

It’s tempting to declare victory and move on, but be honest: you might have two awesome results (if both convert well) or two complete dogs (basically blank ads work better).
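For a rough sense of why that 1.5% is probably noise, here’s a back-of-the-envelope sketch. The numbers are assumptions (roughly 2,500 installs per variant, a 10% downstream conversion rate, and “1.5% better” read as a relative lift), since the scenario above doesn’t specify them:

```python
# Back-of-the-envelope margin of error for the scenario above.
# Hypothetical numbers: ~2,500 installs per variant, a 10% downstream
# conversion rate, and "1.5% better" read as a 1.5% relative lift.
from math import sqrt

n_per_variant = 2500
base_rate = 0.10
observed_lift = base_rate * 0.015                      # 0.15 percentage points

# 95% margin of error on the difference between two proportions
se_diff = sqrt(2 * base_rate * (1 - base_rate) / n_per_variant)
moe_95 = 1.96 * se_diff

print(f"observed lift: {observed_lift:.2%}, 95% margin of error: ±{moe_95:.2%}")
# observed lift: 0.15%, 95% margin of error: ±1.66% -> well inside the noise
```

Even if you read it as a full 1.5 percentage point lift, under those same assumptions you’d still be inside that margin.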

Suck it up. Restart with a blank sheet. Stickhandle the impatience of your boss.

11. Pick smart things to optimize

Look. The world is full of examples like “people from Azmenistan think purple is the color of death, so your ad was like a funeral invitation.” (Why yes, that is a fake country from The Expendables 3.)

Few things are so upfront and obvious.

Test variables that matter.

Calls to action, offers, promises, and value statements probably fit the bill. Key character featured, if your app is a game, is very likely significant. Gameplay featured, particularly in a video ad, is clearly important. Font size might be, especially at the extremes.

Changing a button color from sapphire to cerulean? Maybe not quite so much.

12. BONUS: Know when not to A/B test

When VP GrowFastNowAtAllCosts tells you to put the pedal to the metal, it’s not time to spend 2-3 days on A/B testing. When you have a seriously small ad budget, live A/B testing isn’t going to be your best option.

When you work for a brand that thinks Apple is way too fast and loose with their brand guidelines and takes three weeks to approve moving a piece of punctuation … yep … not the place for A/B testing. When there’s a HIPPO in residence so that the Highest Paid Person’s Opinion matters more than anyone else’s, and more than any data … also probably not a good context for A/B testing.

13. LUCKY BONUS: A/B test your Android (and now iOS!) app listings

Great, your ads are amazing. They’re driving a huge number of clicks and traffic to your app listing.

But … is it converting?

Any lack of conversion adds marketing cost, even if you only pay for installs: a lower conversion rate means more impressions and clicks per install, which makes an ad partner less likely to show your ad, which means your bid has to go up.
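Here’s a rough sketch of that math, with made-up numbers, treating the ad network as ranking your CPI bid by its expected revenue per thousand impressions (eCPM):

```python
# Why a weak listing page pushes your bids up (all numbers hypothetical).
# Networks effectively rank CPI bids by expected revenue per impression.
cpi_bid = 2.00             # dollars you pay per install
ctr = 0.02                 # clicks per impression
cvr_strong_listing = 0.40  # installs per click with a listing page that converts
cvr_weak_listing = 0.20    # installs per click with a listing page that doesn't

def ecpm(installs_per_click: float) -> float:
    """Expected network revenue per 1,000 impressions for this ad."""
    return cpi_bid * ctr * installs_per_click * 1000

print(f"eCPM with a strong listing: ${ecpm(cvr_strong_listing):.2f}")  # $16.00
print(f"eCPM with a weak listing:   ${ecpm(cvr_weak_listing):.2f}")    # $8.00
# To win the same inventory with the weak listing, the CPI bid has to double.
```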

So optimize your app listing page. Google Play has enabled that for a long time; very soon iOS will be joining the party.

One more thing: Apple is releasing the ability to optimize your App Store listing for different purposes. So if people use your app in very different ways — say remittances as well as payments in your fintech app — you can optimize your marketing for one and show an App Store listing customized to that purpose, which should increase conversion rate.

Need help with next-gen marketing measurement?

If you’re looking for solutions that help you zig when the world zags — and reap the benefits — you should chat with Singular. Book some time for a chat, and one of our experts will walk you through how the platform could help you supercharge growth.
