Why Singular Is The Only MMP Integrated To Twitter’s Ads API

Intelligent data that drives insights for growth requires three key ingredients:

  1. Accuracy
  2. Granularity
  3. Actionability

To obtain all three, you need reliable API integrations with each of your marketing platforms. This is where you find the Singular difference. Singular is the only measurement partner with two separate API integrations with Twitter, plus integrations with over 1,000 additional marketing platforms, giving you the most comprehensive view of ROI down to the creative level.

This is what we call “dual integration.”

WTH is the Dual Integration approach?

Before you can understand the importance of API integrations (and dual integrations), you first should understand the type of data you need to collect in order to have anything meaningful for your campaign optimization efforts. Simply put, there are two key data sets you need to collect from your marketing platform, whether that is Twitter, Snapchat, Pinterest, Facebook, Google, Vungle, Unity, or Amazon: you name it.

First, you need your campaign analytics data (aka pre-install data) to answer questions like:

  • “How much did I spend on this campaign?”
  • “How many impressions did that creative get?”
  • “How many clicks came from each publisher?”

Second, you need your attribution data (aka post-install data) to answer questions like:

  • “How many installs did that campaign generate?”
  • “What was the revenue on this creative asset?”
  • “How many people went to level two as a result of this keyword?”

Only by combining these two datasets with a robust cost aggregation solution can you really know your ROI by campaign, by creative, by keyword, and by individual ad. This gives you the power to optimize at the most granular as well as aggregate levels, providing your best opportunity to maximize profitability.

To do this manually, you would need to standardize the hierarchies (some sources offer only campaign and ad level, while others go right down to the keyword) and the taxonomies (names and terms differ) across every source, and then calculate your ROI by each dimension … every single time you need it.
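To make that manual work concrete, here is a minimal sketch in Python (with hypothetical source names, column names, and name mappings; real exports have far more dimensions and hierarchy levels) of standardizing two sources and computing campaign ROI:

```python
import pandas as pd

# Hypothetical raw exports: each source uses its own taxonomy.
source_a = pd.DataFrame({
    "Campaign Name": ["summer_promo"], "Spend (USD)": [1200.0],
})
source_b = pd.DataFrame({
    "campaign": ["Summer Promo"], "cost": [800.0],
})

# Step 1: standardize taxonomies (names and terms differ per source).
source_a = source_a.rename(columns={"Campaign Name": "campaign", "Spend (USD)": "cost"})
name_map = {"summer_promo": "Summer Promo"}  # manual mapping table
source_a["campaign"] = source_a["campaign"].map(name_map)

# Step 2: combine spend with attributed revenue and compute ROI.
spend = pd.concat([source_a, source_b]).groupby("campaign", as_index=False)["cost"].sum()
revenue = pd.DataFrame({"campaign": ["Summer Promo"], "revenue": [3000.0]})
roi = spend.merge(revenue, on="campaign")
roi["roi"] = (roi["revenue"] - roi["cost"]) / roi["cost"]
print(roi)
```

Now multiply that by dozens of sources and several hierarchy levels, and repeat it every time the data refreshes.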

Sounds like a pain in the @$$?

Good thing Singular has already done it for you!

This is the dual integration approach

Singular has spent years building API integrations for both sides of the puzzle across over 1,000 marketing platforms, and automatically combines this data to show you ROI at the most granular levels.

Unlike other analytics platforms that are only accountable for your “pre-install data,” or other attribution providers that are only accountable for your “post-install data,” Singular is accountable for both. That is why we are the only Twitter measurement partner with integrations that collect BOTH datasets, just as we do for hundreds of other marketing platforms: so we can do dual integration for you, out of the box.

Inherent flaws with tracking links

You might be asking: So why can’t I just use tracking links to collect this data? My attribution provider uses tracking links and says they can do campaign ROI.

Great question! While the tracking link is the easiest way to collect the necessary macros for a given network, this method has some inherent flaws.

  1. It is not retroactive
    You only receive data at the time of the click, so if the numbers are reconciled after the fact, the adjustment will not be reflected in your reporting.
  2. Not all networks support passing all macros
    For example, you might be able to receive campaign cost and clicks, but you may not get site ID or publisher ID.
  3. No creative assets!
    Singular is the only solution on the market to provide complete reporting of your creative asset ROI across the most visual networks. Creative assets and their performance can only be reported via an API integration.
  4. Data loss and discrepancy is HIGH
    In a recent study, we compared a number of customers who were using Singular along with a third-party attribution provider. Comparing their “campaign data” collected via our API integration against the same data set collected via the tracking link by the third-party attribution provider, we saw a 31% discrepancy … with the numbers from our API integration matching the final bill exactly.

Of course, for those marketing platforms that do not offer an API to collect campaign analytics, we too rely on the tracking link. And in the rare case that we cannot collect data via an API, we also rely on alternate integration methods to ensure the accuracy of the data: for example, a daily email report, or a CSV file upload to an S3 bucket.

We understand every marketer is different, and how you look at your data may be completely different from your competitors. We are flexible and here to ensure the data you see in Singular matches your internal systems.

Heck, we even have a bi-directional API to push and pull data to your source of truth.

To learn more about Singular’s “Dual Integration Approach” and the Singular difference, contact us to request a demo today.

Already a Singular customer and looking to take advantage of our dual integration with Twitter? Check out the help center for details on how to configure your Twitter integration.

Singular ROI Index 2019: The unmissable advertising ROI webinar

Singular’s ROI Index is the largest study that ranks top ad networks globally based on their ability to deliver ROI for advertisers. We’ve already published the Index and made it available to the world, giving you the ability to see which networks deliver the best advertising ROI.

But now it’s time to dig deeper.

This webinar goes beyond the Index to talk about not only where individual media sources rank, but also what some of the key differentiators are.

Meet the experts

To do that, we’re going to bring in the experts: Susan Kuo, Brian Sapp, and Christen Luciano. (Yours truly, John Koetsier, VP of Insights at Singular, will moderate.)

Susan and Christen have deep insight into how various ad partners performed in the Index. Brian has an even deeper insight into what mobile marketers look for, and what they need in terms of advertising ROI from ad networks.

Susan Kuo
COO, Head of Business Development
Susan has an extensive background in mobile ad tech, analytics, and gaming. Prior to Singular, Susan held senior leadership roles at Onavo and InMobi. Susan is an active member of the mobile community and serves on the advisory board for several mobile-focused start-ups.

Brian Sapp
VP, User Acquisition Marketing, Jam City
A mobile veteran with previous roles at Tapjoy and Web Games, Brian manages user acquisition for Jam City, which currently has six of the top 100 highest-grossing games across the App Store and Google Play.

Christen Luciano
Director of Partner Development
Christen oversees Singular’s relationships with key partners. Prior to Singular, she was a product marketing manager with Kenshoo and held multiple additional marketing roles. Her focus is collaborating with top marketing platforms to help advertisers grow reach and maximize performance.

We’ll review the 2019 Singular ROI Index, but also talk about fraud, things marketers need to know about their ad campaigns, some of the biggest surprises, and the role SANs (self-attributing networks like Facebook and Google) should play in marketers’ ad campaigns alongside some of the mid-tier players.

Advertising ROI is critical, of course, but it doesn’t happen in a vacuum.

So we’ll also talk about how to find niches of profitable growth, new innovative players, and what to look out for.

One of the things that the 2019 Singular ROI Index makes very clear is that Snap and Twitter have made significant moves recently in terms of the value they offer to advertisers. We’ve seen that in their recent quarterly reports: Snap grew quarterly revenue almost $100 million year over year, and Twitter had record quarterly earnings.

We’ll talk about what we’re seeing in the platforms that is driving increased advertiser adoption, and we’ll talk about everything else the Index reveals about advertising ROI.

Introducing global-first Cross-Device, Cross-Platform ROI analytics

How do you grow ROI while maintaining CPA and scale?

This is a question marketers face every day. And answering this question has become more complex as they advertise on more platforms across more devices than ever before. When conversions happen, it’s a struggle to connect the dots and understand what caused them.

Back when Singular was founded in 2014, we focused on solving this challenge first for the complex, highly fragmented, mobile ecosystem: providing a single solution that automatically collects and combines spend data and conversion data to expose mobile marketing performance, including ROI, at unrivaled levels of granularity.

That is powerful. And we quickly became the de facto solution for unifying campaign analytics and mobile attribution to expose ROI.

But in 2019, the game is different

Top brands advertise over a wide range of platforms to users on multiple devices. A customer may see an advertisement for a product on her desktop, and later buy that product on her mobile app. With today’s analytics, it’s hard to connect the two experiences and measure the customer journey accurately.

For mobile-first brands, this often leads to two separate teams, one web, one mobile app, using different tools, and even different metrics, to measure the customer journey. For web-first brands, it results in limited investment in mobile apps, preventing them from diversifying their marketing efforts to bring in incremental users and leaving untapped growth potential on the table.

Moreover, inaccurate measurement leads to misguided decision-making. In fact, poor data quality costs brands an average of $15 million annually, according to Gartner. Making investment and creative decisions with inaccurate and incomplete datasets is just plain costly.

In true Singular spirit, we sought to solve this new challenge for our customers so they can drive growth more effectively and efficiently in this multichannel world. And I’m happy to say that we have leveraged our vast experience in attribution and marketing analytics to do just that.

Cross-device, cross-platform attribution

Today, Singular is announcing the first-ever cross-platform and cross-device ROI analytics solution for growth marketers.

With the release of Cross-Device Attribution, Singular’s Marketing Intelligence Platform connects marketing spend data to conversion results across devices and platforms. First, we ingest granular spend and marketing data from thousands of sources. Then we connect it with attribution data from our easy-to-implement in-app and web SDKs as well as direct integrations with customer data platforms, analytics solutions, and internal BI systems, bringing the full customer journey into a single view. Finally, we match the two datasets.
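Conceptually, the matching step can be pictured as a join of the two datasets on their shared campaign dimensions. This is a toy sketch with invented numbers, not Singular’s actual pipeline:

```python
import pandas as pd

# Toy spend data aggregated from ad network APIs (invented numbers).
spend = pd.DataFrame({
    "source":   ["NetworkA", "NetworkA", "NetworkB"],
    "campaign": ["c1", "c2", "c3"],
    "cost":     [500.0, 250.0, 400.0],
})

# Toy attributed conversions collected via in-app and web SDKs,
# spanning devices: the same journey can convert on web or in-app.
conversions = pd.DataFrame({
    "source":   ["NetworkA", "NetworkA", "NetworkB"],
    "campaign": ["c1", "c2", "c3"],
    "device":   ["mobile_app", "web", "mobile_app"],
    "revenue":  [900.0, 300.0, 350.0],
})

# Matching step: join the datasets on shared campaign dimensions,
# then compute ROI per source and campaign across all devices.
matched = spend.merge(
    conversions.groupby(["source", "campaign"], as_index=False)["revenue"].sum(),
    on=["source", "campaign"],
)
matched["roi"] = matched["revenue"] / matched["cost"]
print(matched)
```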

The result is the most accurate cohort ROI and CPA metrics available to marketers, at the deepest levels of granularity including campaign, publisher and even creative.

That’s ground-breaking. It’s revolutionary.

But bringing cross-device and cross-platform ROI into Singular and measuring it accurately, at granular levels, is only the beginning to driving impactful growth.

Granular data for growth

Marketers can now access granular ROI cohort reporting that is more accurate than ever, as you can get clear, combined revenue for users across all devices. This is critical to achieving profitable growth and only possible with Singular – a complete platform that innovates beyond a single attribution solution.

Moreover, marketers can also utilize the wide set of capabilities that Singular’s Marketing Intelligence Platform offers to make smarter decisions and optimize their growth efforts with additional cross-device visibility; plus, they have more visibility into essential context such as the exact creative customers engaged with and the audience segments they belong to.

For example, you may find that a web channel’s impact is much higher than expected for specific types of customers. And now you can analyze the impact of the same creative across mobile and web.

In fact, we won’t be surprised if marketers start shifting investments with this new level of clarity. We are excited to see how growth strategists are going to rise above the crowd using this new solution to become part of the future wave of sophisticated marketers. Gone are the days of attribution feature wars – Marketing Intelligence has arrived.

Launching Cross-Device Attribution is just another step towards achieving our goal: to be every marketer’s indispensable tool in driving growth. We keep working not only to ensure that you can innovate your growth processes and have access to the highest data accuracy but also to ensure that we bring you the right insights at the right time to help you make timely strategic and operational decisions.

Are you ready to take part in the future of growth?

 

Using attribution data to calculate mobile ads LTV

Eric Benjamin Seufert is the owner of Mobile Dev Memo, a popular mobile advertising trade blog. He also runs Platform and Publishing efforts at N3TWORK, a mobile gaming company based in San Francisco, and published Freemium Economics, a book about the freemium business model. You can follow Eric on Twitter.

Note: if you’re looking for ad monetization with perhaps less effort than Eric’s method below, talk to your Singular customer service representative (and stay tuned for additional announcements).

Various macro market forces have aligned over the past two years to create the commercial opportunity for app developers to generate significant revenue from in-app advertising. New genres like hypercasual games and even legacy gaming genres and non-gaming genres have created large businesses out of serving rich media video and playable ads to their users by building deep, sophisticated monetization loops that enrich the user experience and produce far less usability friction than some in-app purchases.

But unfortunately, while talented, analytical product designers are able to increase ad revenues with in-game data by deconstructing player behavior and optimizing the placement of ads, user acquisition managers have less data at their disposal when optimizing the acquisition funnel for this type of monetization. Building an acquisition pipeline around in-app ads monetization is challenging because many of the inputs needed to create an LTV model for in-app ads are unavailable or obfuscated. This is evidenced by the fact that a Google search for “mobile app LTV model” yields hundreds of results across a broad range of statistical rigor, but a search for “mobile app ads LTV model” yields almost nothing helpful.

Why is mobile ads LTV so difficult to calculate?

For one, the immediate revenue impact of an ad click within an app isn’t knowable on the part of the developer and is largely outside of their control. Developers get eCPM data from their ad network partners on a monthly basis when they are paid by them, but they can’t really know what any given click is worth because of the way eCPMs are derived (ad networks usually get paid for app installs, not for impressions, so eCPM is a synthetic metric).

Secondly, app developers can’t track ad clicks within their apps, only impressions. So while a developer might understand which users see the most ads in their app and can aggregate that data into average ad views per day (potentially split by source), since most ad revenue is driven by the subsequent installs that happen after a user clicks on an ad, ad view counts alone don’t contribute to an understanding of ads LTV.

Thirdly, for most developers, to borrow conceptually from IAP monetization, there are multiple “stores” from which ad viewing (and hopefully, clicking) users can “purchase” from: each of the networks that an app developer is running ads from, versus the single App Store or Google Play Store from which the developer gathers information. So not only is it more onerous to consolidate revenue data for ads, it also further muddies the monetization waters because even if CPMs for various networks can be cast forward to impute revenue, there’s no certainty around what the impression makeup will look like in an app in a given country on a go-forward basis (in other words: just because Network X served 50% of my ads in the US this month, I have no idea if it will serve 50% of my ads in the US next month).

For digging into problems that contain multiple unknown, variable inputs, I often start from the standpoint of: If I knew everything, how would I solve this? For building an ads LTV model, a very broad, conceptual calculation might look like:
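In rough notation (dividing by 1,000 because CPM is priced per thousand impressions), for a user acquired via Channel A, on Platform B, in Geography C:

```latex
\text{Lifetime Ad Revenue}_{A,B,C} \approx \sum_{m=1}^{M}
\underbrace{\text{Monthly Ad Views}_{A,B,C}(m)}_{\text{Part A}}
\times
\underbrace{\frac{\text{Blended CPM}_{B,C}(m)}{1000}}_{\text{Part B}}
```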

What this means is: for a given user who was acquired via Channel A, is using Platform B, and lives in Geography C, the lifetime ad revenue they are expected to generate is the sum of the Monthly Ad Views we estimate for users of that profile (e.g., Channel A, Platform B, Geography C) times the monthly blended CPM of ad impressions served to users of that profile.

In this equation, using user attribution data of the form that Singular provides alongside internal behavioral data, we can come up with Lifetime Ad Views broken down by acquisition channel, platform, and geography pretty easily: this is more or less a simple dimensionalized cumulative ad views curve over time that’d be derived in the same way as a cumulative IAP revenue curve.
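As a sketch of that Part A derivation (the numbers below are invented; real inputs would come from attribution data joined to internal ad-view events), a dimensionalized cumulative ad views curve is just a cumulative sum per channel/platform/geo cohort:

```python
import pandas as pd

# Hypothetical daily ad views for one cohort, dimensioned by
# acquisition channel, platform, and geo (the Part A inputs).
daily = pd.DataFrame({
    "channel":  ["Facebook"] * 3 + ["Unity"] * 3,
    "platform": ["iOS"] * 6,
    "geo":      ["US"] * 6,
    "day":      [1, 2, 3, 1, 2, 3],
    "ad_views": [10, 8, 6, 14, 12, 9],
})

# Cumulative ad views per cohort over time: the ads analogue of a
# cumulative IAP revenue curve.
daily["cum_ad_views"] = (
    daily.sort_values("day")
         .groupby(["channel", "platform", "geo"])["ad_views"]
         .cumsum()
)
print(daily)
```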

But the Blended CPM component of this equation is very messy. This is because:

  • Ad networks don’t communicate CPMs by user, only at the geo level; [Editorial note: there is some significant change happening here; we will keep you posted on new developments.]
  • Most developers run many networks in their mediation mix, and that mix changes month-over-month;
  • Impression, click, and video completion counts can be calculated at the user level via mediation services like Tapdaq and ironSource, but as of now those counts don’t come with revenue data.

Note that in the medium-term future, many of the above issues with data availability and transparency will be ameliorated by in-app header bidding (for a good read on that topic, see this article by Dom Bracher of Tapdaq). In the meantime, there are some steps we can take to back into reasonable estimates of blended CPMs for the level of granularity that our attribution data gives us and which is valuable for the purposes of user acquisition (read: provides an LTV that can be bid against on user acquisition channels).

But until that manifests, user acquisition managers are left with some gaps in the data they can use to construct ads LTV estimates. The first glaring gap is the network composition of the impression pool: assuming a diverse mediation pool, there’s no way to know which networks will be filling what percentage of overall impressions in the next month. And the second is the CPMs that will be achieved across those networks on a forward-looking basis, since that’s almost entirely dependent on whether users install apps from the ads they view.

The only way to get around these two gaps is to lean on historical data as a hint at what the future will look like (which violates a key rule of value investing but is nonetheless helpful in forming a view of what’s to come). In this case, we want to look at past CPM performance and past network impression composition for guidance on what to expect on any given future month.

Estimating mobile ads LTV in Python

To showcase how to do that, we can build a simple script in Python, starting with the generation of some random sample data. This data considers an app that is only serving ads to users from Facebook, Unity, and AppLovin in the US, Canada, and the UK:

[code]
import pandas as pd
import numpy as np
from itertools import product

geos = [ 'US', 'CA', 'UK' ]
platforms = [ 'iOS', 'Android' ]
networks = [ 'Facebook', 'Unity', 'Applovin' ]

def create_historical_ad_network_data( geos, networks ):
    # One row per geo / platform / network combination
    history = pd.DataFrame( list( product( geos, platforms, networks ) ),
                            columns=[ 'geo', 'platform', 'network' ] )

    # Three months of randomized CPMs, impression counts, and impression shares
    for i in range( 1, 4 ):
        history[ 'cpm-' + str( i ) ] = np.random.randint( 1, 10, size=len( history ) )
        history[ 'imp-' + str( i ) ] = np.random.randint( 100, 1000, size=len( history ) )
        history[ 'imp-share-' + str( i ) ] = history[ 'imp-' + str( i ) ] / history[ 'imp-' + str( i ) ].sum()

    return history

history = create_historical_ad_network_data( geos, networks )
print( history )
[/code]

Running this code generates a Pandas DataFrame that looks something like this (your numbers will vary as they’re randomly generated):

[code / table]
geo platform network cpm-1 imp-1 imp-share-1 cpm-2 imp-2 \
0 US iOS Facebook 2 729 0.070374 9 549 
1 US iOS Unity 7 914 0.088232 3 203 
2 US iOS Applovin 7 826 0.079737 4 100 
3 US Android Facebook 2 271 0.026161 2 128 
4 US Android Unity 5 121 0.011681 9 240 
5 US Android Applovin 6 922 0.089005 9 784 
6 CA iOS Facebook 2 831 0.080220 9 889 
7 CA iOS Unity 8 483 0.046626 5 876 
8 CA iOS Applovin 7 236 0.022782 9 642 
9 CA Android Facebook 8 486 0.046916 4 523 
10 CA Android Unity 1 371 0.035814 5 639 
11 CA Android Applovin 8 588 0.056762 7 339 
12 UK iOS Facebook 2 850 0.082054 8 680 
13 UK iOS Unity 7 409 0.039483 3 310 
14 UK iOS Applovin 1 291 0.028092 5 471 
15 UK Android Facebook 7 370 0.035718 6 381 
16 UK Android Unity 3 707 0.068250 6 117 
17 UK Android Applovin 3 954 0.092094 3 581

imp-share-2 cpm-3 imp-3 imp-share-3 
0 0.064955 8 980 0.104433 
1 0.024018 4 417 0.044437 
2 0.011832 3 157 0.016731 
3 0.015144 7 686 0.073103 
4 0.028396 3 550 0.058610 
5 0.092759 8 103 0.010976 
6 0.105182 1 539 0.057438 
7 0.103644 6 679 0.072357 
8 0.075958 5 883 0.094096 
9 0.061879 1 212 0.022592 
10 0.075603 8 775 0.082587 
11 0.040109 6 378 0.040281 
12 0.080454 6 622 0.066283 
13 0.036678 8 402 0.042839 
14 0.055726 7 182 0.019395 
15 0.045078 2 623 0.066390 
16 0.013843 2 842 0.089727 
17 0.068741 1 354 0.037724
[/code]

One thing to consider at this point is that we have to assume, on a month-to-month basis, that any user in any given country will be exposed to the same network composition as any other user on the same platform (that is, the ratio of AppLovin ads being served to users in the US on iOS is the same for all users of an app in a given month). This almost certainly isn’t strictly true, as, for any given impression, the type of device a user is on (e.g., iPhone XS Max vs. iPhone 6) and other user-specific information will influence which network fills an impression. But in general, this assumption is probably safe enough to employ in the model.

Another thing to point out is that retention is captured in the Monthly Ad Views estimate that is tied to the source channel. One common confusion in building an ads LTV model is that there are ad networks involved on both sides of the funnel: the network a user is acquired from and the networks a user monetizes with via ads served in the app. In the construction of our model, we capture “user quality” in the Monthly Ad Views component from Part A, which encompasses retention in the same way that a traditional IAP-based LTV curve does. So there’s no reason to include “user quality” in Part B of the equation, since it’s already used to inform Part A.

Given this, the next step in approximating Part B is to get a historical share of each network, aggregated at the level of the Geo and Platform. Once we have this, we can generate a blended CPM value at the level of Geo and Platform to multiply against the formulation in Part A (again, since we assume all users see the same network blend of ads, we don’t have to further aggregate the network share by the user’s source network).

In the below code, the trailing three-month impressions are calculated as a share of the total at the level of Geo and Platform. Then, each network’s CPM is averaged over the trailing three months and the sumproduct is returned:

[code]
# Total trailing impressions per row, then each row's share within its geo/platform
history[ 'trailing-3-month-imp' ] = history[ 'imp-1' ] + history[ 'imp-2' ] + history[ 'imp-3' ]

history[ 'trailing-3-month-imp-share' ] = history[ 'trailing-3-month-imp' ] / history.groupby( [ 'geo', 'platform' ] )[ 'trailing-3-month-imp' ].transform( 'sum' )

# Average each network's CPM over the trailing three months
history[ 'trailing-3-month-cpm' ] = history[ [ 'cpm-1', 'cpm-2', 'cpm-3' ] ].mean( axis=1 )

# Sumproduct of impression share and CPM = blended CPM per geo/platform
blended_cpms = ( history[ [ 'trailing-3-month-imp-share', 'trailing-3-month-cpm' ] ].prod( axis=1 )
    .groupby( [ history[ 'geo' ], history[ 'platform' ] ] ).sum( ).reset_index( ) )

blended_cpms.rename( columns={ blended_cpms.columns[ -1 ]: 'CPM' }, inplace=True )

print( blended_cpms )
[/code]

Running this snippet of code should output a DataFrame that looks something like this (again, the numbers will be different):

[code]
geo platform CPM
0 CA Android 5.406508
1 CA iOS 4.883667
2 UK Android 4.590680
3 UK iOS 5.265561
4 US Android 4.289083
5 US iOS 4.103224
[/code]

So now what do we have? We have a matrix of blended CPMs broken out at the level of Geo and Platform (e.g., the blended CPM across all networks serving US, iOS users) — this is Part B from the equation above. Part A from that equation — the average number of ad views in a given month that we expect from users that match various profile characteristics pertaining to their source channel, geography, and platform — would be taken from internal attribution data mixed with internal app data, but we can generate some random data to match what it might look like with this function:

[code]
def create_historical_one_month_ad_views( geos, networks ):
    # One row per geo / platform / source-channel combination
    ad_views = pd.DataFrame( list( product( geos, platforms, networks ) ),
                             columns=[ 'geo', 'platform', 'source_channel' ] )
    ad_views[ 'ad_views' ] = np.random.randint( 50, 500, size=len( ad_views ) )

    return ad_views

month_1_ad_views = create_historical_one_month_ad_views( geos, networks )
print( month_1_ad_views )
[/code]

Running the above snippet should output something like the following:

[code]
geo platform source_channel ad_views
0 US iOS Facebook 73
1 US iOS Unity 463
2 US iOS Applovin 52
3 US Android Facebook 60
4 US Android Unity 442
5 US Android Applovin 349
6 CA iOS Facebook 279
7 CA iOS Unity 478
8 CA iOS Applovin 77
9 CA Android Facebook 479
10 CA Android Unity 120
11 CA Android Applovin 417
12 UK iOS Facebook 243
13 UK iOS Unity 306
14 UK iOS Applovin 52
15 UK Android Facebook 243
16 UK Android Unity 106
17 UK Android Applovin 195
[/code]

We can now match the performance data from our user base (gleaned using attribution data) with our projected CPM data to get an estimate of ad revenue for the given month with this code:

[code]
combined = pd.merge( month_1_ad_views, blended_cpms, on=[ 'geo', 'platform' ] )
combined[ 'month_1_ARPU' ] = combined[ 'CPM' ] * ( combined[ 'ad_views' ] / 1000 )

print( combined )
[/code]

Running the above snippet should output something like the following:

[code]
geo platform source_channel ad_views CPM month_1_ARPU
0 US iOS Facebook 73 5.832458 0.425769
1 US iOS Unity 463 5.832458 2.700428
2 US iOS Applovin 52 5.832458 0.303288
3 US Android Facebook 60 5.327445 0.319647
4 US Android Unity 442 5.327445 2.354731
5 US Android Applovin 349 5.327445 1.859278
6 CA iOS Facebook 279 6.547197 1.826668
7 CA iOS Unity 478 6.547197 3.129560
8 CA iOS Applovin 77 6.547197 0.504134
9 CA Android Facebook 479 4.108413 1.967930
10 CA Android Unity 120 4.108413 0.493010
11 CA Android Applovin 417 4.108413 1.713208
12 UK iOS Facebook 243 4.626163 1.124158
13 UK iOS Unity 306 4.626163 1.415606
14 UK iOS Applovin 52 4.626163 0.240560
15 UK Android Facebook 243 5.584462 1.357024
16 UK Android Unity 106 5.584462 0.591953
17 UK Android Applovin 195 5.584462 1.088970
[/code]

That last column — month_1_ARPU — is the amount of ad revenue you might expect from users in their first month, matched to their source channel, their geography, and their platform. In other words, it is an estimate of their 30-day ads LTV.

Putting it all together

Hopefully this article has shown that, while it’s messy and somewhat convoluted, there is a reasonable approach to estimating ads LTV using attribution and ads performance data. Taking this approach further, one might string together more months of ad view performance data to extend the horizon of the ads LTV estimate (to month two, three, four, etc.) and then use historical CPM fluctuations to get a more realistic estimate of where CPMs will be at any given point in the future (for example, using a historical blended average doesn’t make sense in the run-up to Christmas, when CPMs spike).
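As one illustration of that seasonal adjustment (the index values below are invented placeholders, not fitted to any real CPM history), you might scale the trailing blended CPM by a month-of-year factor:

```python
# Hypothetical seasonality indices relative to an annual average CPM;
# e.g. the Q4 run-up to Christmas pushes CPMs well above trend.
seasonality = {10: 1.10, 11: 1.25, 12: 1.45, 1: 0.80}

def project_cpm(trailing_blended_cpm: float, month: int) -> float:
    """Scale a trailing blended CPM by a month-of-year index."""
    return trailing_blended_cpm * seasonality.get(month, 1.0)

print(project_cpm(4.50, 12))  # December estimate, above the trailing average
print(project_cpm(4.50, 6))   # off-peak month, no adjustment applied
```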

The opportunities and possibilities for making money via rich ads at this point of the mobile cycle are exciting, but they don’t come without new challenges. In general, with the way the mobile advertising ecosystem is progressing towards algorithm-driven and programmatic campaign management, user acquisition teams need to empower themselves with analytical creativity to find novel ways to scale their apps profitably.

. . .


Next: Get the full No-BS Guide to Mobile Attribution, for free, today.