Why you’re losing 50% of your ad effectiveness if you’re not using creative reporting

What really makes ads work?

This simple question is the billion-dollar puzzle that drives the adtech industry. For marketers, finding the answer unlocks the door to optimizing growth.

– who you send your ads to matters
– where people see them matters
– how often people see your ads matters
– the brand attached to them matters

But creative outweighs them all combined. And not by a little.

Why creative is so overwhelmingly important

All of these things are important, naturally. But your advertising effectiveness is mostly determined by one critical quality: the creative.

According to Nielsen, the quality, messaging, and context of your creative are responsible for as much as 49% of all sales lift. How many people see your ads accounts for just 22%. Targeting the right kinds of people? Only 9%.

Why?

Creative is emotionally powerful.

In fact, a study published in the Journal of Advertising found that ad creativity impacts 13 key variables in five separate stages of the ad experience, from brand awareness to liking, accepting/rejecting claims, and future brand intentions. For 12 of those 13 variables, great creative drives positive impact, and poor creative gets ignored.

According to Ipsos, a massive 75% of an ad’s ability to make a brand impression is due to creative. Creative is so important that ads that win awards, Ipsos says, generate a full eleven times more share growth.

It’s hard to overstate the importance of this finding.

If you succeed at everything else in advertising, but fail in creative, you are leaving almost 50% of your results on the table according to Nielsen, and 75% of your potential results on the table according to Ipsos.

That is why you need creative reporting

Singular offers creative reporting because it’s so critical. It’s something you can get in many places, of course, in silos. Facebook, for instance, offers creative reporting that tells you what images performed well on its platform.

That’s great.

But what top marketers need is an understanding of how their creative is performing across all their ad partners.

One major Singular customer, for instance, works with 20+ different channels including Facebook, Snapchat, Instagram, Twitter, Pinterest, and Google at any given time. In each, this client is using between 15 and 30 different creative units.

Yep.

That’s roughly 300 to 600 different combinations of platform and creative at any given moment.

That’s not just mildly challenging to measure as a marketer. It’s basically impossible without automated help. The problem is that you don’t know which creative might resonate with which audience.

But you absolutely need to.

Some images will work well in one context and bomb in another. Some videos will resonate with the unique demographic slice that ad partner A accesses, and draw a collective yawn from ad partner B’s audience. And a playable ad that hits one ad network audience’s behavior graph may not touch another.

Creative reporting is the solution.

“Singular’s Creative Reporting determines asset level ROI across more media sources than any other provider,” says Singular senior product marketing manager Saadi Muslu. “With it, you can quickly identify creative performance against any dimension and metric, group similar creatives regardless of minor copy or compression differences, or group creatives by keyword using tags based on any dimension.”

Why we sometimes ignore creative: piping and wiring

There is a lot of infrastructure in the modern marketing department.

The data explosion and almost 7,000 marketing technology tools haven’t helped with this, and the large numbers of ad networks and partners we work with add to it. In fact — and it’s something we’ll be releasing data on next month — Singular data indicates that top-performing marketers work with a much wider variety of ad partners than average or poor-performing marketers.

(Watch for that report soon!)

But there’s another challenge to all this martech/adtech piping and wiring.

Sometimes it’s easier to focus on the pipes and the wires than on what they’re actually carrying. The world of marketing technology seems concrete, observable, and controllable. If we create a drip email flow, we can set it up, schedule it, and press save. If we’re initiating an ad campaign, we set parameters, initiate buys, and monitor performance.

Creative doesn’t work that way (although AI is getting better at helping).

There’s no button to press for great copy, compelling images, or a funny video. It’s not linear, doesn’t follow a defined process, and can’t be switched on and off.

Magic: melding art and science, data and creativity

That’s where the magic enters, however.

When we pair marketing designers’ and writers’ creativity with insights from marketing data — like those in Singular’s Creative Reporting — we can set creativity free to try dozens of different things, and let data decide which resonate, which penetrate, and which generate productive results.

Nothing could be simpler, even at scale.

Another of Singular’s clients builds an astonishing 50 videos each and every week. Pairing that level of creativity with the data that indicates which ones work would be a tough task, manually. But letting machines do what machines do well makes it possible.

And it tells marketers, once and for all, what makes their ads work.

Next step: learn more about Creative Reporting.

Using attribution data to calculate mobile ads LTV

Eric Benjamin Seufert is the owner of Mobile Dev Memo, a popular mobile advertising trade blog. He also runs Platform and Publishing efforts at N3TWORK, a mobile gaming company based in San Francisco, and published Freemium Economics, a book about the freemium business model. You can follow Eric on Twitter.

Note: if you’re looking for ad monetization with perhaps less effort than Eric’s method below, talk to your Singular customer service representative (and stay tuned for additional announcements).

Various macro market forces have aligned over the past two years to create a commercial opportunity for app developers to generate significant revenue from in-app advertising. New genres like hypercasual games, along with legacy gaming genres and non-gaming categories, have built large businesses out of serving rich media video and playable ads to their users, constructing deep, sophisticated monetization loops that enrich the user experience and produce far less usability friction than some in-app purchases.

But unfortunately, while talented, analytical product designers are able to increase ad revenues with in-game data by deconstructing player behavior and optimizing the placement of ads, user acquisition managers have less data at their disposal for optimizing the acquisition funnel for this type of monetization. Building an acquisition pipeline around in-app ads monetization is challenging because many of the inputs needed to create an LTV model for in-app ads are unavailable or obfuscated. This is evidenced by the fact that a Google search for “mobile app LTV model” yields hundreds of results across a broad range of statistical rigor, while a search for “mobile app ads LTV model” yields almost nothing helpful.

Why is mobile ads LTV so difficult to calculate?

For one, the immediate revenue impact of an ad click within an app isn’t knowable by the developer and is largely outside of their control. Developers get eCPM data from their ad network partners on a monthly basis when they are paid, but they can’t really know what any given click is worth because of the way eCPMs are derived (ad networks usually get paid for app installs, not for impressions, so eCPM is a synthetic metric).

Secondly, app developers can’t track ad clicks within their apps, only impressions. So while a developer might understand which users see the most ads in their app and can aggregate that data into average ad views per day (potentially split by source), ad view counts alone don’t contribute much to an understanding of ads LTV, since most ad revenue is driven by the installs that happen after a user clicks on an ad.

Thirdly, for most developers (to borrow conceptually from IAP monetization), there are multiple “stores” from which ad-viewing (and, hopefully, clicking) users can “purchase”: each of the networks an app developer is running ads from, versus the single App Store or Google Play Store from which the developer gathers information. So not only is it more onerous to consolidate revenue data for ads, it also further muddies the monetization waters: even if CPMs for various networks can be cast forward to impute revenue, there’s no certainty around what the impression makeup will look like in an app in a given country on a go-forward basis (in other words: just because Network X served 50% of my ads in the US this month, I have no idea if it will serve 50% of my ads in the US next month).

For digging into problems that contain multiple unknown, variable inputs, I often start from the standpoint of: If I knew everything, how would I solve this? For building an ads LTV model, a very broad, conceptual calculation might look like:
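In rough notation (a reconstruction from the description that follows, not the author’s exact formula, with CPM expressed as revenue per thousand impressions):

```latex
\text{Lifetime Ad Revenue}(A, B, C) \;\approx\; \sum_{m \,\in\, \text{months}} \text{Ad Views}_m(A, B, C) \times \frac{\text{Blended CPM}_m(B, C)}{1000}
```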

What this means is: for a given user who was acquired via Channel A, is using Platform B, and lives in Geography C, the lifetime ad revenue they are expected to generate is the sum, over months, of the Monthly Ad Views we estimate for users of that profile (e.g. Channel A, Platform B, Geography C) times the monthly blended CPM of ad impressions served to users of that profile, divided by 1,000 (since CPM is revenue per thousand impressions).

In this equation, using user attribution data of the form that Singular provides alongside internal behavioral data, we can come up with Lifetime Ad Views broken down by acquisition channel, platform, and geography pretty easily: this is more or less a simple dimensionalized cumulative ad views curve over time that’d be derived in the same way as a cumulative IAP revenue curve.

But the Blended CPM component of this equation is very messy. This is because:

  • Ad networks don’t communicate CPMs by user, only at the geo level; [Editorial note: there is some significant change happening here; we will keep you posted on new developments.]
  • Most developers run many networks in their mediation mix, and that mix changes month-over-month;
  • Impression, click, and video completion counts can be calculated at the user level via mediation services like Tapdaq and ironSource, but as of now those counts don’t come with revenue data.

Note that in the medium-term future, many of the above issues with data availability and transparency will be ameliorated by in-app header bidding (for a good read on that topic, see this article by Dom Bracher of Tapdaq). In the meantime, there are some steps we can take to back into reasonable estimates of blended CPMs for the level of granularity that our attribution data gives us and which is valuable for the purposes of user acquisition (read: provides an LTV that can be bid against on user acquisition channels).

But until that manifests, user acquisition managers are left with some gaps in the data they can use to construct ads LTV estimates. The first glaring gap is the network composition of the impression pool: assuming a diverse mediation pool, there’s no way to know which networks will be filling what percentage of overall impressions in the next month. And the second is the CPMs that will be achieved across those networks on a forward-looking basis, since that’s almost entirely dependent on whether users install apps from the ads they view.

The only way to get around these two gaps is to lean on historical data as a hint at what the future will look like (which violates a key rule of value investing but is nonetheless helpful in forming a view of what’s to come). In this case, we want to look at past CPM performance and past network impression composition for guidance on what to expect on any given future month.

Estimating mobile ads LTV in Python

To showcase how to do that, we can build a simple script in Python, starting with the generation of some random sample data. This data considers an app that is serving ads from Facebook, Unity, and Applovin to users in the US, Canada, and UK:

[code]
import pandas as pd
import numpy as np
from itertools import product

geos = [ 'US', 'CA', 'UK' ]
platforms = [ 'iOS', 'Android' ]
networks = [ 'Facebook', 'Unity', 'Applovin' ]

def create_historical_ad_network_data( geos, platforms, networks ):
    # One row per geo/platform/network combination
    history = pd.DataFrame( list( product( geos, platforms, networks ) ),
        columns=[ 'geo', 'platform', 'network' ] )

    # Three months of randomized CPMs, impression counts, and impression shares
    for i in range( 1, 4 ):
        history[ 'cpm-' + str( i ) ] = np.random.randint( 1, 10, size=len( history ) )
        history[ 'imp-' + str( i ) ] = np.random.randint( 100, 1000, size=len( history ) )
        history[ 'imp-share-' + str( i ) ] = history[ 'imp-' + str( i ) ] / history[ 'imp-' + str( i ) ].sum()

    return history

history = create_historical_ad_network_data( geos, platforms, networks )
print( history )
[/code]

Running this code generates a Pandas DataFrame that looks something like this (your numbers will vary as they’re randomly generated):

[code / table]
geo platform network cpm-1 imp-1 imp-share-1 cpm-2 imp-2 \
0 US iOS Facebook 2 729 0.070374 9 549 
1 US iOS Unity 7 914 0.088232 3 203 
2 US iOS Applovin 7 826 0.079737 4 100 
3 US Android Facebook 2 271 0.026161 2 128 
4 US Android Unity 5 121 0.011681 9 240 
5 US Android Applovin 6 922 0.089005 9 784 
6 CA iOS Facebook 2 831 0.080220 9 889 
7 CA iOS Unity 8 483 0.046626 5 876 
8 CA iOS Applovin 7 236 0.022782 9 642 
9 CA Android Facebook 8 486 0.046916 4 523 
10 CA Android Unity 1 371 0.035814 5 639 
11 CA Android Applovin 8 588 0.056762 7 339 
12 UK iOS Facebook 2 850 0.082054 8 680 
13 UK iOS Unity 7 409 0.039483 3 310 
14 UK iOS Applovin 1 291 0.028092 5 471 
15 UK Android Facebook 7 370 0.035718 6 381 
16 UK Android Unity 3 707 0.068250 6 117 
17 UK Android Applovin 3 954 0.092094 3 581

imp-share-2 cpm-3 imp-3 imp-share-3 
0 0.064955 8 980 0.104433 
1 0.024018 4 417 0.044437 
2 0.011832 3 157 0.016731 
3 0.015144 7 686 0.073103 
4 0.028396 3 550 0.058610 
5 0.092759 8 103 0.010976 
6 0.105182 1 539 0.057438 
7 0.103644 6 679 0.072357 
8 0.075958 5 883 0.094096 
9 0.061879 1 212 0.022592 
10 0.075603 8 775 0.082587 
11 0.040109 6 378 0.040281 
12 0.080454 6 622 0.066283 
13 0.036678 8 402 0.042839 
14 0.055726 7 182 0.019395 
15 0.045078 2 623 0.066390 
16 0.013843 2 842 0.089727 
17 0.068741 1 354 0.037724
[/code]

One thing to consider at this point is that we have to assume, on a month-to-month basis, that any user in a given country will be exposed to the same network composition as any other user on the same platform (that is, the ratio of Applovin ads served to users in the US on iOS is the same for all users of an app in a given month). This almost certainly isn’t strictly true: for any given impression, the type of device a user is on (e.g. iPhone XS Max vs. iPhone 6) and other user-specific information will influence which network fills the impression. But in general, this assumption is probably safe enough to employ in the model.

Another thing to point out is that retention is captured in the Monthly Ad Views estimate that is tied to source channel. One common confusion in building an ads LTV model is that there are ad networks involved on both sides of the funnel: the network a user is acquired from and the networks a user monetizes with via ads served in the app. In the construction of our model, we capture “user quality” in the Monthly Ad Views component from Part A, which encompasses retention in the same way that a traditional IAP-based LTV curve does. So there’s no reason to include “user quality” in Part B of the equation, since it already informs Part A.

Given this, the next step in approximating Part B is to get a historical share of each network, aggregated at the level of the Geo and Platform. Once we have this, we can generate a blended CPM value at the level of Geo and Platform to multiply against the formulation in Part A (again, since we assume all users see the same network blend of ads, we don’t have to further aggregate the network share by the user’s source network).

In the below code, the trailing three-month impressions are calculated as a share of the total at the level of Geo and Platform. Then, each network’s CPM is averaged over the trailing three months and the sumproduct is returned:

[code]
# Total impressions per row over the trailing three months
history[ 'trailing-3-month-imp' ] = history[ [ 'imp-1', 'imp-2', 'imp-3' ] ].sum( axis=1 )

# Each network's share of impressions within its geo/platform group
history[ 'trailing-3-month-imp-share' ] = history[ 'trailing-3-month-imp' ] / history.groupby( [ 'geo', 'platform' ] )[ 'trailing-3-month-imp' ].transform( 'sum' )

# Average CPM per network over the trailing three months
history[ 'trailing-3-month-cpm' ] = history[ [ 'cpm-1', 'cpm-2', 'cpm-3' ] ].mean( axis=1 )

# Sumproduct of impression share and average CPM, per geo/platform
blended_cpms = ( history[ [ 'trailing-3-month-imp-share', 'trailing-3-month-cpm' ] ].prod( axis=1 )
    .groupby( [ history[ 'geo' ], history[ 'platform' ] ] ).sum( ).reset_index( name='CPM' ) )

print( blended_cpms )
[/code]

Running this snippet of code should output a DataFrame that looks something like this (again, the numbers will be different):

[code]
geo platform CPM
0 CA Android 5.406508
1 CA iOS 4.883667
2 UK Android 4.590680
3 UK iOS 5.265561
4 US Android 4.289083
5 US iOS 4.103224
[/code]

So now what do we have? We have a matrix of blended CPMs broken out at the level of Geo and Platform (e.g. the blended CPM experienced by US iOS users) — this is Part B from the equation above. Part A from that equation — the average number of ad views in a given month that we expect from users matching various profile characteristics pertaining to their source channel, geography, and platform — would be taken from internal attribution data mixed with internal app data, but we can generate some random data to match what it might look like with this function:

[code]
def create_historical_one_month_ad_views( geos, platforms, networks ):
    # One row per geo/platform/source-channel profile, with randomized ad views
    ad_views = pd.DataFrame( list( product( geos, platforms, networks ) ),
        columns=[ 'geo', 'platform', 'source_channel' ] )
    ad_views[ 'ad_views' ] = np.random.randint( 50, 500, size=len( ad_views ) )

    return ad_views

month_1_ad_views = create_historical_one_month_ad_views( geos, platforms, networks )
print( month_1_ad_views )
[/code]

Running the above snippet should output something like the following:

[code]
geo platform source_channel ad_views
0 US iOS Facebook 73
1 US iOS Unity 463
2 US iOS Applovin 52
3 US Android Facebook 60
4 US Android Unity 442
5 US Android Applovin 349
6 CA iOS Facebook 279
7 CA iOS Unity 478
8 CA iOS Applovin 77
9 CA Android Facebook 479
10 CA Android Unity 120
11 CA Android Applovin 417
12 UK iOS Facebook 243
13 UK iOS Unity 306
14 UK iOS Applovin 52
15 UK Android Facebook 243
16 UK Android Unity 106
17 UK Android Applovin 195
[/code]

We can now match the performance data from our user base (gleaned using attribution data) with our projected CPM data to get an estimate of ad revenue for the given month with this code:

[code]
# Join ad views to blended CPMs on geo/platform, then estimate month-1 ARPU
combined = pd.merge( month_1_ad_views, blended_cpms, on=[ 'geo', 'platform' ] )
combined[ 'month_1_ARPU' ] = combined[ 'CPM' ] * ( combined[ 'ad_views' ] / 1000 )

print( combined )
[/code]

Running the above snippet should output something like the following:

[code]
geo platform source_channel ad_views CPM month_1_ARPU
0 US iOS Facebook 73 5.832458 0.425769
1 US iOS Unity 463 5.832458 2.700428
2 US iOS Applovin 52 5.832458 0.303288
3 US Android Facebook 60 5.327445 0.319647
4 US Android Unity 442 5.327445 2.354731
5 US Android Applovin 349 5.327445 1.859278
6 CA iOS Facebook 279 6.547197 1.826668
7 CA iOS Unity 478 6.547197 3.129560
8 CA iOS Applovin 77 6.547197 0.504134
9 CA Android Facebook 479 4.108413 1.967930
10 CA Android Unity 120 4.108413 0.493010
11 CA Android Applovin 417 4.108413 1.713208
12 UK iOS Facebook 243 4.626163 1.124158
13 UK iOS Unity 306 4.626163 1.415606
14 UK iOS Applovin 52 4.626163 0.240560
15 UK Android Facebook 243 5.584462 1.357024
16 UK Android Unity 106 5.584462 0.591953
17 UK Android Applovin 195 5.584462 1.088970
[/code]

That last column — month_1_ARPU — is the amount of ad revenue you might expect from users in their first month, matched to their source channel, their geography, and their platform. In other words, it is their 30-day LTV.

Putting it all together

Hopefully this article has showcased that, while it’s messy and somewhat convoluted, there is a reasonable approach to estimating ads LTV using attribution and ads performance data. Taking this approach further, one might string together more months of ad view performance data to extend the limit of the ads LTV estimate (to month two, three, four, etc.) and then use historical CPM fluctuations to get a more realistic estimate of where CPMs will be at any given point in the future (for example, using a historical blended average doesn’t make sense in the run-up to Christmas, when CPMs spike).
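To sketch that extension (a hypothetical helper, not part of the original walkthrough), assume one ad-views DataFrame per month shaped like month_1_ad_views above; cumulative ads LTV per profile is then just the sum of the monthly ARPU estimates:

```python
import pandas as pd

def cumulative_ltv(monthly_ad_views, blended_cpms):
    # Sum monthly ARPU estimates into a cumulative ads LTV per
    # (geo, platform, source_channel) profile. monthly_ad_views is a
    # list of DataFrames (one per month) with columns
    # geo/platform/source_channel/ad_views; blended_cpms has columns
    # geo/platform/CPM.
    ltv = None
    for month in monthly_ad_views:
        combined = pd.merge(month, blended_cpms, on=['geo', 'platform'])
        combined['ARPU'] = combined['CPM'] * combined['ad_views'] / 1000
        arpu = combined.set_index(['geo', 'platform', 'source_channel'])['ARPU']
        ltv = arpu if ltv is None else ltv.add(arpu, fill_value=0)
    return ltv.reset_index(name='cumulative_LTV')
```

A seasonal CPM adjustment could be layered in by passing a different blended_cpms frame for each month instead of a single trailing average.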

The opportunities and possibilities for making money via rich ads at this point of the mobile cycle are exciting, but they don’t come without new challenges. In general, with the way the mobile advertising ecosystem is progressing towards algorithm-driven and programmatic campaign management, user acquisition teams need to empower themselves with analytical creativity to find novel ways to scale their apps profitably.

. . .

Next: Get the full No-BS Guide to Mobile Attribution, for free, today.

Grow faster: How ‘Dual Integration’ unlocks vastly more value than vanilla mobile attribution

Peanut butter is just peanut butter. And chocolate is just chocolate. But if you have the creativity and insight to combine them, you create a magical mystery confection that makes your mouth cry out for joy and high-five your stomach. You get, perhaps, dual integration.

Imagine the peanut butter is marketing campaign data.

Imagine the chocolate is attribution.

Put them together, and the result is not magical and not mysterious: it’s marketing science that unlocks ever-increasing but previously hidden value. And that’s just one of the secrets revealed in our No Bullsh!t Guide to Mobile Attribution.

But what exactly is dual integration? And how does it work?

Dual integration technology

“Simplistically, dual integration technology is connecting marketing data with outcome data,” says Singular VP of Client Services Victor Savath. “On the marketing side, we’re talking about information on campaigns, publisher, creative, and sub-campaigns. On the outcome or attribution side, we’re talking about user or customer install and event data.”

Ultimately, you’re combining spend data with mobile attribution data.

But … at as granular a level as you implement your marketing spend.

That means every outcome, or attribution, is enriched with campaign information. Now you know not only that you acquired a new customer, or user, from Ad Partner XYZ. You also know what campaign it was from. Where the campaign and the customer intersected. And what specific creative cued the conversion.

When you combine these two datasets, you get true granular ROI, says Savath.

“It’s not about whether or not a network performs, it’s what is performing within a particular network,” Savath told me yesterday. “Sometimes we see that marketers are quick to dismiss performance marketing, or a particular ad network, because the results are all blended. But granularity highlights the pockets of value. For example, in one network … one specific set of creative might work very, very well, while another does not. With granularity, you know.”

Alternatively, some publishers or traffic sources that an ad network uses for your campaigns might be horrible: poor quality or even fraudulent. But other traffic sources are amazing. Seeing this close up means that marketers can optimize for the best-performing publishers within an ad network. That unlocks potential pockets of profitable growth.
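To make that concrete, here is a toy sketch (hypothetical numbers and column names, not Singular’s actual schema) of why granular ROI can rescue a network that looks mediocre in blended form:

```python
import pandas as pd

# Hypothetical network-side spend data, at creative granularity
spend = pd.DataFrame({
    'network':  ['X', 'X', 'X'],
    'creative': ['A', 'B', 'C'],
    'spend':    [100.0, 100.0, 100.0],
})

# Hypothetical attribution-side revenue, at the same granularity
revenue = pd.DataFrame({
    'network':  ['X', 'X', 'X'],
    'creative': ['A', 'B', 'C'],
    'revenue':  [300.0, 90.0, 20.0],
})

# Join the two sides on their shared dimensions and compute ROI per creative
roi = pd.merge(spend, revenue, on=['network', 'creative'])
roi['ROI'] = roi['revenue'] / roi['spend']

# Blended, the network looks mediocre; granularly, creative A is a clear winner
print(roi)
print('blended ROI:', roi['revenue'].sum() / roi['spend'].sum())
```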

The problem?

Most marketers aren’t able to get to that point.

Missing out on magic (or marketing science)

There are many different types of granularity: creative, publisher, network, campaign, region, with metrics from both the network and attribution side. But what matters the most is ROI granularity … which is inherently matched to your ability to tie both sides of the equation together.

The problem is that most marketers don’t have a tool that connects and aligns all the data properly.

And that means they’re making future resource allocation decisions based on limited information.

“For example, if you’re just using vanilla attribution data, you might say that a certain publisher is generating revenue for you,” Savath says. “The problem is, you’re not exactly clear at what specific cost you’ve achieved this revenue.”

Dual integration might show you that A, B, and E campaigns are really working well with a certain ad network, while C and D are not: they’re complete duds. That insight may mean the difference between writing off an ad network as a total loss versus optimizing your efforts with that partner.

And, of course, achieving much better results.

The big aggregated campaign picture alone has its own challenges, of course.

“Alternatively, if you’re just using spend data, you don’t understand your outcomes at all,” says Savath.

Magic isn’t hard. It’s science

The best part is that with Singular, dual integration isn’t any integration at all. At least, not on your part.

Singular does it for you. And it’s not a back-end thing, it’s a built-in thing.

Most attribution solutions provide tools to create tracking links, or make them in bulk, or allow marketers to import them. The problem is that most marketing managers build tracking links in a vacuum, without knowledge of how a partner will report spend back to you. With Singular, there’s no manual link building … Singular removes the whole element of manual creation of tracking URLs from the measurement workflow.

“Instead, Singular creates the links for you and automatically embeds campaign, creative, publisher, ad network, and other data into your tracking links,” says Savath. “Since our marketing data is informing what the link structure should be, you have automatic alignment between marketing data and attribution data. And thanks to Singular’s deep integrations to thousands of ad networks and marketing partners, your URLs will always have the right parameters and the right values.”
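As a purely illustrative sketch (hypothetical parameter names and domain, not Singular’s actual link format), “embedding marketing data in the tracking link” means the link’s parameters mirror the campaign metadata, so attribution rows can later be joined back to spend rows:

```python
from urllib.parse import urlencode

# Hypothetical campaign metadata, as reported by the ad network
campaign = {
    'network': 'ExampleNetwork',
    'campaign': 'summer_sale',
    'creative': 'video_15s_v2',
    'publisher': 'pub_123',
}

# Build a tracking link whose query parameters carry the marketing data
base_url = 'https://tracking.example.com/click'
tracking_link = base_url + '?' + urlencode(campaign)
print(tracking_link)
```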

ROI versus IOR

Thanks to the performance-based nature of much of modern mobile marketing, marketers are not so much calculating return on investment as investment on return. In other words, they get the attributed results of their marketing and determine how spend and marketing activity relates to those results.

While there’s definitely a big place in performance marketing for spending based on results, only being able to look at marketing data this way creates serious challenges.

One of the biggest: data reconciliation problems.

“Singular’s approach is matching conversions to spend versus matching spend to conversions,” says Savath.

Get the full Guide for much more

The full No BullSh!t Guide to Attribution contains much more insight on how to do attribution right, focusing on seven core topics:

  1. Mobile Measurement Partners (MMPs)
  2. Data combining
  3. Granularity
  4. Reporting
  5. Fraud prevention
  6. Data retention & accessibility
  7. Pricing

Why Mobile App Uninstalls are Far More Prevalent in Developing Countries

As a mobile app attribution and data management services provider for a host of countries around the world, we see both commonalities and differences between the mobile app markets in different regions. One such difference is the frequency of uninstalls in developing versus developed economies.

Specifically, reported uninstall rates are higher in developing economies like India, China, Brazil, and Southeast Asia than in the EU or the US. Why? There are likely a variety of factors, including:

Phone Storage Size

The most popular phones in developing markets tend to have much less storage than those in developed economies. Many of the most popular phones, for example, have 1GB of storage, versus 16GB for the smallest iPhone 6. When storage is limited, consumers must choose their apps carefully, or periodically uninstall apps they are not currently using in order to make room for other applications. They may choose to reinstall an app at a later date when the value proposition is more timely and urgent. But there are no guarantees that this will happen, which ultimately limits lifetime value. Further, it means a customer must be won over and over through marketing efforts and the Apple App Store and Google Play.

More Incentivized Mobile App Downloads

Many more app installs come via incentivized download programs in the developing world. App developers tend to use these platforms more in developing economies for different reasons, but it is clear that they have an impact. Free wifi for app download, free virtual goods for app download, and piggyback app downloads are quite common in developing markets. While some incentivized mobile app install programs attract high quality users, others drive installs with people who may ultimately have little interest in an app. Naturally, those installs are much less likely to stick.

Network Issues That Appear to Be Product Issues

In markets where data service is spotty, it’s possible that a consumer will mistake network issues for app product issues. In those instances, uninstalls may be driven up even though the app itself is not faulty.

Lower Percentage of Paid Apps

Owing to greater price sensitivity as well as lower incidence of credit card usage, paid app penetration in the developing world tends to be lower. Paid apps, perhaps not surprisingly, have lower uninstall rates than free apps.

Whatever the reasons, it is clear that app uninstalls tend to be higher in places like India, China, and Latin America than in the US or EU. But uninstalls are an issue for a large proportion of apps across all regions, and marketers would be wise to better understand their uninstall rates, the sources of those uninstalls, and the strategies to combat them.

Download The Singular ROI Index to see the world’s first ranking of ad networks by app ROI.

Mobile App Tracking: Here’s What You Should Track

Most of us understand intuitively that getting customer event data from our mobile apps is important and can help drive improved marketing effectiveness. But what specifically should we measure? How do we turn the good idea of measuring marketing activity into something that is clear, focused, and actionable?

For some, this post may feel a little “in the weeds.” But I am a big believer in helping marketers create a solid data foundation for everything that they do. Without mobile app attribution, many marketers have told us, they felt like they were flying blind. Once people see the value of app attribution, it’s critical to unlock its full power with an event measurement strategy that is robust, comprehensive, and clear.

I hope that by laying out specifics on the data to collect for iPhone and Android app tracking, marketers who are considering app attribution solutions will be able to more fully grasp the importance of this unique set of data and ensure that they take full advantage of the data collection power of their attribution toolset.

Recommendations Based on Hundreds of Implementations

As a big player in marketing attribution across the mobile app ecosystem, Singular has worked with many companies to set up their marketing attribution instances. Singular’s very low account churn rate attests to our success at helping clients create measurement plans that meet real business needs and drive significant improvements in ROI.

Getting started defining what you will measure with your Android and iPhone app tracking can feel like the hardest part of implementing an attribution solution like Singular’s. But trust me, that “difficulty” is more psychological than real. My goal with this post is to get rid of some of that discomfiture, so you can get up and running with your iOS and Android app tracking much faster.

Actions and Events

When you are just getting started with a measurement and attribution offering like the one in Singular’s unified platform, the most important decision you need to make is which consumer actions you want to track. In the iPhone and Android app industry, we call consumer actions “events.” By tracking events, we gather the data necessary to:

  • Understand the effectiveness of your marketing programs
  • Gain a genuine view into what consumers are doing in your apps

That’s mission-critical knowledge!
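In practice, “tracking an event” just means telling your attribution SDK every time a meaningful consumer action occurs. The sketch below illustrates the idea in Python; `AttributionSDK` and its `track` method are stand-ins for illustration, not Singular’s actual SDK API.

```python
# Minimal sketch of in-app event tracking. `AttributionSDK` is a
# stand-in for a real mobile attribution SDK; the class and method
# names here are illustrative assumptions, not Singular's actual API.

class AttributionSDK:
    def __init__(self):
        self.events = []  # collected event records

    def track(self, name, **attributes):
        """Record one consumer action ("event") with optional attributes."""
        self.events.append({"name": name, **attributes})


sdk = AttributionSDK()

# Fire an event each time the consumer takes a meaningful action:
sdk.track("tutorial_completed")
sdk.track("add_to_cart", sku="SHOE-123", price=59.99)

print(len(sdk.events))  # prints 2
```

Each event carries a name plus whatever attributes give it business meaning, which is why a generous per-app event limit matters.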

Different platforms allow their customers to track different numbers of events. Some allow only a handful of trackable event types, while others allow dozens or more. Singular allows advertisers to track up to 400 unique events per app version – an extremely large number for the category, and one that makes us the choice of many of the most sophisticated mcommerce companies globally. These are businesses that understand the importance of complete and granular data.

It’s actually pretty simple to identify the kinds of events you need to track for mobile app tracking and analytics. At Singular, we advocate tracking as many germane events as possible to facilitate a richer understanding of your business and its key drivers. In fact, we created a pricing model that encourages this by not charging marketers based on the number of events they track. That is an important difference for an mcommerce brand because it ensures you don’t pay higher attribution fees for more granular data.

Types of Events That You Should Track

The first step in choosing the in-app actions to track is to identify the different types of events that every marketer should be tracking. Singular categorizes important marketing events into four categories.

Authentication Events – Events that help identify the user (anonymously, of course) so we can attribute an install to a device ID after your app has been installed from an app store.

Engagement Events – Actions a consumer takes that indicate involvement in the app and presage long-term usage.

Intent Events – Actions that indicate the user is considering and preparing for a purchase. In other words, making a purchase is on a metaphorical to-do list.

Purchase Events – Actions and information communicated after a purchase takes place. These conversion tracking metrics include characteristics of what was purchased, and they help us build a richer, more comprehensive understanding of each individual anonymized customer.

Let’s dig into each of these event types:

Authentication Events

Authentication events allow the iOS or Android app marketer to gather user-level information that can be used for a variety of purposes. Your primary goal with authentication events is to better understand the characteristics of the people who are using those specific instances of your app on their devices. These data points also help us – and other third-party solutions providers – connect the actions that a customer takes in a mobile app with actions they perform in other digital environments.

These cross-device connections build a more complete view of the customer and enable us to understand the interplay between different consumer touchpoints.

Some examples here include:

  • DEVICE ADVERTISING ID
  • CUSTOM ADVERTISER ID
  • AGE AND GENDER
  • LATITUDE AND LONGITUDE (IF RELEVANT)
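The cross-device connection described above is essentially a join on a shared identifier. Here is a hedged sketch, assuming hypothetical app-side and web-side event records keyed by a custom advertiser ID (all field names and values are illustrative):

```python
# Sketch: connecting app events to web events via a shared custom
# advertiser ID. Record shapes and field names are assumptions for
# illustration only.

app_events = [
    {"custom_id": "u-42", "event": "app_purchase"},
    {"custom_id": "u-77", "event": "app_install"},
]
web_events = [
    {"custom_id": "u-42", "event": "email_signup"},
]

# Index web activity by the shared identifier...
web_by_id = {}
for e in web_events:
    web_by_id.setdefault(e["custom_id"], []).append(e["event"])

# ...then attach it to each app-side record for a cross-device view.
cross_device = {
    e["custom_id"]: {"app": e["event"], "web": web_by_id.get(e["custom_id"], [])}
    for e in app_events
}

print(cross_device["u-42"])  # the app purchase linked to a web signup
```

The same join logic underlies any cross-device view, whatever identifier the platform uses.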

Engagement Events

Engagement events give the business owner insight into how deeply the user has engaged with the app. They provide initial signals that a person is actively using the app, and they are helpful in identifying quality users, both before and after a conversion takes place.

That matters even more than you might think when it comes to a mobile tracker. In certain mobile app verticals, a purchase might not happen for six to eight weeks after an install. In these cases, tracking a user’s engagement within your app lets the marketer accurately optimize a network’s performance within days instead of waiting weeks: you simply optimize toward engagement events rather than conversion events.
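The early-optimization idea reduces to simple arithmetic: compare engagement-event rates per ad network long before purchase data exists. A minimal sketch with hypothetical numbers:

```python
# Sketch: using engagement events as an early proxy for ad network
# quality before any purchases arrive. Network names and counts are
# hypothetical.

installs = {"network_a": 1000, "network_b": 1000}
tutorial_completions = {"network_a": 420, "network_b": 180}

# Engagement rate per network = engagement events / installs
engagement_rate = {
    net: tutorial_completions[net] / installs[net] for net in installs
}

# Days after launch, network_a already looks far stronger, so budget
# can shift weeks before the first purchases would be measurable.
best = max(engagement_rate, key=engagement_rate.get)
print(best, engagement_rate[best])  # prints: network_a 0.42
```

The same comparison works with any engagement event that correlates with eventual purchases in your vertical.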

Engagement events vary by the category in which you compete. Here are a few examples:

  • TUTORIAL VIEWED
  • SHOPPING PROFILE COMPLETED
  • PRODUCT SEARCHES
  • WISH LIST ADDITIONS
  • PRODUCT VIDEO WATCHED
  • RATED SOMETHING IN THE APP
  • REVIEWED SOMETHING IN THE APP
  • RATED THE APP
  • SENT CONTENT TO A FRIEND/RECOMMENDED THE APP
  • APP UPDATE INSTALLED
Intent Events

Intent events are useful in identifying users who are planning to make purchases (or take other end actions) in the application. Like engagement events, intent events can be used to identify quality users before a KPI event occurs.

Perhaps more importantly, intent events can be used to remarket to users who have performed them but have yet to complete a purchase. Including these events in your measurement plan will greatly increase the effectiveness of specialist remarketing ad networks.

Singular’s ability to send data both to remarketing networks (via postbacks) and to marketing automation tools through our Audiences offering gives the marketer complete discretion over how to use the data for remarketing.

Here are a few examples of intent events:

  • ADD VIRTUAL GOOD TO CART
  • BEGIN CHECKOUT
  • ENTER CREDIT CARD INFO
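The remarketing logic described above is essentially a set difference: users who showed intent minus users who purchased. A minimal sketch, with made-up user IDs:

```python
# Sketch: building a remarketing audience as "showed intent but
# never purchased". User IDs and event choices are illustrative.

intent_users = {"u1", "u2", "u3"}   # e.g. fired a begin-checkout event
purchase_users = {"u2"}             # completed a purchase event

# The remarketing audience is everyone left in the gap between the two.
remarketing_audience = intent_users - purchase_users
print(sorted(remarketing_audience))  # prints: ['u1', 'u3']
```

That audience is then what gets pushed to remarketing networks or marketing automation tools.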

Purchase Events

Purchase or conversion events are obvious events to track, but capturing key attributes of the purchase events is also critical. In other words, you don’t simply want to know that someone bought something, but also what they bought, how much they spent, and what contributed to the transaction. Tracking revenue at a user level allows the marketer to determine key metrics like Average Revenue Per User (ARPU), Average Revenue Per Paying User (ARPPU), or Return on Ad Spend (ROAS) for a specific marketing campaign.

Some examples:

  • TOTAL REVENUE
  • CATEGORY(IES) OF ITEMS
  • TOTAL QUANTITY OF ITEMS
  • NAME OF ITEMS
  • QUANTITY OF EACH ITEM
  • SKU OR OTHER ITEM NUMBERS
  • PRICE PAID PER ITEM
  • TAXES LEVIED
  • COLOR/SIZE OF ITEM
  • DISCOUNT APPLIED
  • SIZE OF DISCOUNT
  • PURCHASED AS A GIFT
  • LONGITUDE/LATITUDE
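The three metrics named above fall straight out of user-level revenue data. A quick sketch with hypothetical figures (the campaign, users, and spend are all made up):

```python
# Sketch: computing ARPU, ARPPU, and ROAS for one campaign from
# user-level revenue. All numbers are hypothetical.

revenue_by_user = {"u1": 0.0, "u2": 19.99, "u3": 4.99, "u4": 0.0}
ad_spend = 10.0  # what this campaign cost

total_revenue = sum(revenue_by_user.values())
users = len(revenue_by_user)
payers = sum(1 for r in revenue_by_user.values() if r > 0)

arpu = total_revenue / users     # Average Revenue Per User
arppu = total_revenue / payers   # Average Revenue Per Paying User
roas = total_revenue / ad_spend  # Return On Ad Spend

print(arpu, arppu, roas)
```

None of these calculations are possible without per-user purchase events attributed back to the campaign, which is the point of tracking revenue at the user level.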

Conclusions

As I mentioned at the outset of this document, the key to taking those important first steps in mobile app measurement and attribution is to use a methodical and strategic approach to identifying the types of data that will help drive better decision-making. As you have seen, identifying events for tracking isn’t so difficult when you begin with a construct like:

  • Authentication Events
  • Engagement Events
  • Intent Events
  • Purchase or Conversion Events

With that in place, identifying the right events for your mobile attribution should be far easier. Naturally, every business is unique, and there is no substitute for expert advice. Singular clients can get additional assistance on defining the right set of events to track from their implementation and account management teams. Our people are product experts who can help speed the implementation process and ensure that you gather the right data – right from the beginning.