Industry leaders’ growth marketer resolutions for 2020: Game Hive

If you’re like most people, you dumped your New Year’s resolutions on January 19th. But if you’re a growth marketer, you’re just getting started. This year has a lot in store for our customers, and we’re right there with them as they try new things and look to scale.

We touched base with marketing leaders from the industry to find out which new tactics are on the horizon and which ones they discarded with the old year.

First, we checked in with Mary Kim, Head of Growth at Game Hive located in Toronto, Canada. Mary currently leads the growth team in UA and Ads Monetization, working on titles such as Tap Titans, Beat the Boss, and Tap Tycoon. Mary entered the mobile gaming industry working at one of Europe’s largest gaming studios, acquiring users for desktop and mobile.


What marketing strategies or tactics will you start to test in 2020?

One of the biggest struggles we’ve had for a long time is attributing impression-level ad revenue in our reporting. Now we’re finally getting to a point where we’ll be able to do this. Our mediation partner, AdMob, is enabling this for us and we’re in the process of kicking things off. So now, whatever impressions we get on our ad side will be tracked more accurately at the user level. Along with the cost data from Singular, we’ll be able to get the whole 360° picture.

Are there any new ad channels you’ll start testing?

This is something we’re constantly doing. In fact, I’d say it’s really difficult to not test, given that the duopolies take the majority share of ad spend. We wouldn’t want to be in a position where something happens on the back end with the algorithms and performance suffers. So, we always like to allocate part of our budget to testing. 

Last year we dipped our toes into more DSPs. Now, we’re getting much more comfortable sharing data with our new networks. With some partners, it’s been going extremely well. For instance, we’ve found that with the DSPs we’ve had the most success with—although we’re getting the same inventory as we would with, for example, a rewarded video network—performance is better because they use different machine learning algorithms that optimize for high-value users, rather than optimizing for ad placements.

We’ll continue to do this in 2020 for sure!

We’re finding that the marketers leading the way in full-funnel performance marketing are doing exactly what you’re doing. They’re setting aside budget specifically for testing, on a quarterly if not monthly basis. 

Are there any other ad channels you’ve been testing?

TikTok’s one that we’re trying out. In the past, we thought the audience might be too young. But we realized that these days, the platform has evolved and the audience is not all that young. Since many of our users are male and there are more females on TikTok, we were also worried that it might not be a good fit if we couldn’t reach our target demographic. Interestingly enough, we’ve discovered that there are more female content creators but more men watching the content. So we thought, okay—we sure don’t have a problem with that!

Interesting. One of our industry analysts looked at aggregate data across industries to see what’s working and what’s not, so that we can provide that information to our customers. One thing he discovered was that ad spend on TikTok tripled from May to November 2019. And obviously, marketers allocate budget like that when something’s working. Our ROI Index speaks to that as well. What about creatives? Are there any new tactics you’re trying out, like creative optimization, using new ad formats, or something else?

Yes! At least for us, 2019 was a huge year for creatives. We were able to crank out so many, so I guess you could say it was more a matter of quantity over quality in 2019. This year, we want to focus on quality over quantity. That includes getting more 3D assets, and coming up with faster video creation but with quality in mind. 


Everyone’s been talking about video, video, video for the last 5-7 years. Obviously it’s the most engaging ad format. But I think for a very long time teams struggled with the production aspect of it because it’s costly and time-consuming. It feels like you guys are figuring out ways to streamline production and get your 3D templates ready, not only to raise the quality but also to get more high-quality assets out there.

Are there any new incentives you’re thinking of testing? Things like referral codes—we know you already do rewarded video. Which ones are you already doing that you’d like to buckle down on more? Or new ones you’d like to try out?

We’ve really started to invest more in re-engagement, especially because Tap Titans 2 has developed a wide user base. Keeping that in mind—and since re-engagement works so well—for those players that are coming back, we asked ourselves, what can we do to make them continue to do so? Right now we have deeplinks, but they don’t actually pinpoint a specific moment in the game or give players an actual incentive to come back. So, that’s been something we really want to highlight and focus on: giving them an undeniable reason to return, like some diamonds or some rare legendary equipment, so that they come back and enjoy the game even more.

Is there any new KPI you’d like to start measuring and optimizing against?

Yes—we’ve been trying to test this already, but you know how many advertisers like to optimize towards purchases and transactions? We feel that even before users get to that point, one signal that they’re going to be high-value is whether they’re using hard currency in our game. So, a first-time hard currency user, or even someone coming in on a transaction-by-user basis, will be higher value for us. Hard currency is the main one we’re trying to test now.

You’re obviously automating your reporting with Singular. Are there other automations, like bidding optimizations, that you’re either already doing or plan to do?

Yes—at the moment we’re leveraging other partners to help us do this, but right now it’s only based on rules that we set. For example: here are my KPIs and the benchmarks I want to reach; if a campaign doesn’t reach them, then decrease the bid by X, or lower the budget caps. However, rules only go so far, and they can be dangerous if there are extreme outliers. This goes alongside a bigger project we’re working on, which is rebuilding our pLTV models to leverage user-level data to detect behavioral trends. This will allow us to predict which users will become high-LTV players by understanding what it means to have a particular session length or to unlock certain achievements. By doing this we can increase our accuracy and feed our own algorithms more data. Ultimately, identifying this on a user level will really help our marketing automation.

Are there any marketing strategies or tactics that you’re doubling down on in 2020, such as creative optimization, or anything else?

I think we’ll be doing much more partner testing. It’s interesting how a partner can surprise you once you give them a lot of love and care. And then, they tend to be one of the highest performers when some of your larger partners don’t perform well. Also, we’ll be testing more aggressively, and “failing fast” to learn quickly.


Are there any strategies/tactics that you’re going to leave behind in the last decade?

I think we’ll be focusing less on optimizing towards CPI. That was somewhat the case during 2019, but every user is worth a different amount in terms of LTV. Every channel is different, too, as well as every environment. So, we do want to get away from thinking we have to have a low CPI because the risk is so high, and just be a little bit more lenient on that. Even if the ROAS isn’t there yet, if users are hitting the in-game event, we know that’s a signal that they’ll eventually be better users. So, what we want to avoid is looking too short-term on a CPI basis, and we’ll use that less as a strategy in 2020.

This is a bit controversial, but are there any channels you want to stop testing?

I won’t mention which partner, but there are certain networks you hear in the industry are performing really well, and sometimes you have to accept that your game or app isn’t the right fit for that audience or inventory and move on. We’ve also tried really hard and spent a lot of budget on some networks where we realize it’s just not going to work. In that case, we learn to let go and instead, invest in other platforms. 

Have there been ad networks that you’ve decided to stop working with because you found that they have a lot of fraud?

Actually, we’re pretty good in terms of fraud detection. The networks we’ve been working with have also been pretty understanding whenever there’s a fraud claim. I wouldn’t say that’s the reason why we’ve stopped using an ad network. We’re lucky to say that! But we did work with some networks in 2019 where fraud was a big issue, where up to 30% of the total traffic was fraudulent. In that case, you really have to try to get that money back. But as long as the network is willing to accept those claims and hear you out, I don’t think there’s any reason to pause them if they’re otherwise providing good traffic.

That makes complete sense. And I think a lot of the ad networks are interested in fraud prevention and in what other industry leaders are doing around prevention as well, because they don’t want to carry fraudulent traffic. For the most part, these ad networks know that fraud can impact their reputation, and they want help to stop it. Fraudsters are constantly evolving. We have to be willing to share our solutions with ad networks—teaching them, presenting data. That’s the best bet for everyone in the industry, especially marketers.

To add to that, Singular has been amazing, because we’ve detected so much fraud, APK fraud especially. We have so many APK downloads out there, in aggregate. Being able to categorize those as a separate source—to see all the installs that aren’t from Google Play—has let us detect fraud down to the app version. So, if your app version is at 4.0 but you’re still seeing installs coming from 3.0, then you know there’s something really, really fishy going on. To be able to customize that? Oh my gosh—it’s helped a lot. I really, really like that feature.

Love it! That’s music to our ears. Mary, this has been a great conversation. Thank you so much.

About Game Hive: Founded in 2009, Game Hive is a pioneer in creating the best game experiences across multiple mobile platforms. With hit titles generating 200+ million downloads, Game Hive strives to simply make games the way they dreamed about when they were kids.

Singular’s 365-day cohort reporting: better data science equals better marketing

Cohort reporting is not limited to a month, three months, or even six months any more. Singular now supports a full year: 365-day cohort reporting periods.

In other words, if you’re doing cohort tracking or cohort analyses in verticals that need more data than a D30 retention rate report, you’re in luck. And if your marketing campaign time period is three to nine months, you can now get extra margin and extra insight in your cohort statistics.

Web-based marketers recognize cohort reporting from Google Analytics, where you can see your retention rate and the impact of your marketing efforts on the web. Mobile marketers need the same — in fact, more detailed — analytics in their mobile marketing reports.

A class of students is a cohort

More data makes you smarter. More data means your marketing campaigns bring in more revenue.

In mobile marketing, more data tells you invaluable information such as your customer life cycle. Your average revenue per group of acquired users. Your average sessions per user, by cohort. This goes far beyond vanity metrics and gets to the most important behavioral analytics that define user lifetime value (LTV).

That’s why cohort analysis tools are so vital, and how marketers looking at a cohort table can see trends and create opportunities that, like product improvements, can have cumulative benefits.

I recently took some time to talk about the change with Singular VP of Product Alon Nafta.

John Koetsier: Singular recently updated its cohort lengths to 365 days. But before we talk about why, let’s talk about cohorts. What are the primary reasons to do cohort analysis, and what do marketers learn from them?

Nafta: When we say cohorts it’s important to define first what these are in the context of marketing and user acquisition.

By definition a cohort is a group with shared characteristics. In the context of user acquisition, a cohort often refers to users acquired in similar manner. At the most basic level this could be the date, but it can extend to the marketing channel, the campaign, and even the creative. (Imagine what knowing which creative a cohort responded to could tell you about that group of people.)

A cohort report takes these groups of users, and looks at how they behave over time, say after one day, one week, one month and so on.

By “behave” we often refer to retention or any other important KPI you want to measure your product by. That gives you a very clean view since for example different dates can correspond to different marketing activities or product releases. And different channels, campaigns, or publishers may result in an acquired user profile that can be dramatically different from each other.
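The kind of cohort report Nafta describes can be sketched in a few lines of Python. This is a simplified, hypothetical illustration (the event log, dates, and function names are all invented for the example), not how any particular product implements it:

```python
from collections import defaultdict
from datetime import date

# Hypothetical activity log: (user_id, install_date, activity_date).
events = [
    ("u1", date(2020, 1, 1), date(2020, 1, 1)),
    ("u1", date(2020, 1, 1), date(2020, 1, 2)),
    ("u2", date(2020, 1, 1), date(2020, 1, 1)),
    ("u3", date(2020, 1, 2), date(2020, 1, 3)),
]

def retention_table(events):
    """Group users by install date (the cohort), then by day offset."""
    cohorts = defaultdict(lambda: defaultdict(set))
    for user, installed, active in events:
        offset = (active - installed).days
        cohorts[installed][offset].add(user)
    return cohorts

def day_n_retention(cohorts, cohort_date, n):
    """Share of a cohort seen again exactly n days after install."""
    base = cohorts[cohort_date][0]
    if not base:
        return 0.0
    return len(cohorts[cohort_date][n]) / len(base)

table = retention_table(events)
print(day_n_retention(table, date(2020, 1, 1), 1))  # u1 of {u1, u2} → 0.5
```

The same grouping could be keyed by channel, campaign, or creative instead of install date, which is exactly the breakdown logic behind a cohort report.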

So a cohort analysis also helps me establish my baseline — for example, how my organic users are behaving over time —  and benchmark acquired users for different campaigns to these profiles.

As a marketer, that can ultimately teach me whether my paid acquisition is exhibiting the right results, and point me to what I should focus on when trying to improve. Equally important, it also gives me insight into how fast I’m earning back my acquisition costs, which ultimately is one of the most important things for effective paid marketing.

There’s just so much you can do with cohort analysis. It really is a fundamental tool for the mobile marketer.

John Koetsier: What are the primary ways to define a cohort, and when would you use each? Time of acquisition is of course one … what else is interesting?

Nafta: Interestingly enough, even time of acquisition is not a fully strict definition since acquisition is not just one singular point in time.

For mobile marketing, a common way is to look at the time (or rather, date) of install. But you can also define the starting point of a cohort by the timestamp of the attributed click or impression (AKA critical touchpoint), which tends to be the case for web marketers.

Some marketers, especially in the digital commerce space, may want to look at a different event such as registration or first purchase. They may consider that a much more meaningful and significant starting point in defining a cohort.

Once you define how the cohort is calculated, a good cohort analysis tool or report should give you as many breakdowns as possible to differentiate between important characteristics of these groups of users.

That includes data in a table or in visualizations around:

  1. How they were acquired
  2. Key properties of the acquisition campaign … the customer journey
  3. Whether they were new users or retargeted former users
  4. Acquisition costs
  5. What types of post-acquisition activities are they engaging in, including the active user rate
  6. Conversion rate to purchase or other value-creation activity
  7. And more, depending on your app, your vertical, and your KPIs

Lastly, you also define how many time units — commonly days — you’re looking at after the defined start time of the cohort, and whether you’re measuring cumulatively.

Note, this may differ between different types of activities. For example a 30-day retention cohort would commonly mean how many users came back on the thirtieth day after install (or re-install). But a 30-day purchase sum cohort can either refer to the total number of purchases made in the first thirty days (which is more common), or just the sum on the thirtieth day.

Both are applicable. Which one you’re using needs to be clear, both so you understand what you’re looking at and to avoid confusion with coworkers.
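The two readings differ only in how the window is applied. A toy sketch (the purchase log and function names are invented for illustration):

```python
# Hypothetical purchase log for one cohort:
# day offset since install -> purchase value on that day.
purchases = {0: 3.0, 7: 5.0, 30: 2.0}

def purchases_cumulative(purchases, n):
    """Total purchase value in the first n days (the more common reading)."""
    return sum(value for day, value in purchases.items() if day <= n)

def purchases_on_day(purchases, n):
    """Purchase value on day n only (the stricter reading)."""
    return purchases.get(n, 0.0)

print(purchases_cumulative(purchases, 30))  # 3.0 + 5.0 + 2.0 = 10.0
print(purchases_on_day(purchases, 30))      # 2.0
```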

John Koetsier: Does Singular make cohorts available based on re-engaged and re-attributed users?

Nafta: Absolutely.

Looking at cohorts for re-engaged or re-attributed users is just as important as it is for newly acquired users. In fact, for some verticals, acquired users, in the sense of users who have just installed the app, are almost not interesting at all for paid marketing, since almost everything focuses on re-engagement.

John Koetsier: OK, let’s get to the big story. Singular updated cohort periods to enable 365-day cohorts. Why?

Nafta: Well, as explained earlier, you’re trying to look at how different groups of users behave over time and draw conclusions accordingly.

In some cases you can draw conclusions or make some predictions based on a relatively short timeframe. For example, you might be able to predict the life-time value (LTV) of a user in a mobile game based on the first seven days of in-app purchases.

However, for some companies and products, the window of interesting activity that is important for prediction may take a much longer time.

For example, if I do a monthly subscription for my fitness app, I’ll probably need to review at least three to six months to understand how users are engaging, retaining, and upgrading their subscriptions. Or, if it’s a digital commerce product where customers are buying more expensive items that you typically don’t buy daily or weekly, marketers likely need to look at several months, half a year, or even a full year of data to be able to produce high-quality conclusions and predictions.

These numbers are extremely important for data science teams, who are often tasked with modeling LTV as well as LTV prediction. More time allows them to improve their models, test them better against reality, and iterate accordingly.

John Koetsier: What business types or app verticals typically benefit most from longer cohort periods?

Nafta: It really varies but I think it ultimately comes down to the expected activity profile beyond retention for your product. If users are taking actions on a monthly or longer basis, one-year cohorts — and even longer — are extremely important.

We see this for subscription services, digital commerce, fintech, and even gaming (with varying impact from hyper-casual to mid-core to hard-core games). It also depends on the level of sophistication and effort companies can invest.

Longer cohort periods produce more data and can allow better models, if you have the resources in place to take advantage of it.

John Koetsier: Often we look at cohorts individually over time to see return on ad spend (ROAS) for a group of acquired users. What else can you learn by tracking individual cohorts?

Nafta: ROAS is only one metric. It’s very meaningful of course to marketers who are working on paid sources. But cohorts are also meaningful to product managers, since by looking at retention against product release dates I can learn quite a bit about how my releases affect retention and adoption.

It’s also important for understanding seasonality, and many more insights.

A different creative asset, for example, which shows a different item to be purchased, uses a different coupon or offer, or just uses different design — such as more straightforward versus more artful — can say a lot about the types of users you’ve just acquired.

This is important data collection for mobile marketers, and an analysis report with insights here often leads straight to increased revenue.

John Koetsier: As you’ve said, cohort analysis is pretty important for LTV analyses. Does Singular automatically surface LTV and ROI for cohorts?

Nafta: Yes. By default we surface three important KPIs in our cohort report: LTV, ROI, and CPE (cost per event) for every event a marketer has defined as interesting.

Of course, a lot can be customized to meet individual needs. And specifically for retention we have a designated retention report which shows the same cohorts.

John Koetsier: Anything else?

Nafta: At Singular, cohort reporting is at the core. Our philosophy is to ensure that everything can be reported in a cohorted manner.

While reporting by date — which we sometimes refer to as actuals — can give you insight, especially in real time, reporting against cohorts is what truly uncovers the outcome of your marketing. This includes being able to attach the cost of acquisition, the type of campaign, the ad set and ad, the bid, and the strategy.

And of course … what all of these are generating in terms of business results.

John Koetsier: Thank you for your time!

Cohort reporting: next steps

Interested in learning more?

Can click validation solve the problem of app install fraud? (Big hint: no!)

Recently, we’ve seen some noise about click validation, claiming that this will solve the mobile app install industry’s problem with fake users. Sadly, this is far from true.

In fact, there are far better tools to fight fake users, and click validation is one of the worst. Read on to learn why.

TL;DR

  1. Click validation requires an almost-impossible level of adoption to be effective across all channels.
  2. There are better tools to fight click injection/spamming, ad fraud, and fake users.
  3. Click validation is not useless, but there are more effective options and better ways to spend time and energy.

Click validation: what it is and how it works

In a recent white paper, a competitor claimed that a leading cause of fraud (I would call it an enabler, not a cause) in the industry is that there’s no proof of actual user engagement when a click is reported to an MMP. The white paper also claims that server-to-server clicks remove some of the data and make it harder to spot fraud.

Fraud is a significant problem. Get Singular’s recent fraud prevention report.

These things are definitely true. And their purpose with click validation is to solve these problems by introducing a new mechanism.

Here’s how it would work:

  • When an ad network serves an ad, it would also fire an “Impression Proof” callback to the mobile measurement partner (MMP). Each impression would get a unique ID, and the proof would be authenticated by the MMP in some way.
  • When the user clicks the ad, the click callback would also include that impression ID.
  • Validation would be performed by doing the following:
    • Make sure an impression with matching ID and parameters (for example, device ID) was provided beforehand.
    • Make sure the impression didn’t cause too many clicks.
    • Optionally, do further statistical analysis to look at the time between the impression and the click, and potentially other parameters.

The claim: Click validation done this way would make app install fraud impossible … or at least very costly to do.

Let’s analyze different properties of the proposed click validation mechanism to spot its strengths and weaknesses.

Let’s start with the big one: Adoption

First, let’s look at ad networks. Pretty clearly, effective click validation requires support on the ad-serving side.

Let’s assume for a moment that click validation is in fact a silver bullet for fighting fraud in the mobile advertising space. Even if that’s true, it would still require all networks and affiliates to implement the feature.

Unfortunately, that’s next to impossible, unless this becomes a standard in the industry.

Findings from Singular’s 2019 mobile attribution fraud prevention report

Fact is, any given MMP’s weight and reach are limited, and it would likely only be able to enlist a few quality ad networks that are already actively fighting fraud to work with it. That’s great … except it means that customers are left open to fraud from all of the other networks in the industry. And if that’s the case, then all MMPs would need to work together to create a standard around click validation.

That alone would force the majority of clean ad networks to implement the solution.

Since this is not currently the case, marketers need to ask themselves what they stand to gain by leveraging click validation. Currently, it probably only means that they can trust the networks they already trust a little bit more. That’s not huge value.

Secondly, what about affiliates?

One big hole that click validation fails to address is affiliates and sub ad networks. There’s no solution for cases where the party serving the ads is not the actual network integrated with the MMP. And that would create a unique — and potentially dangerous — situation where these affiliates would actually be trusted just like the network itself (given keys and data).

Or, their impressions would not be verifiable at all, creating a data black hole.

What about click spamming? Can click validation stop it?

The white paper claims that click spamming would now be impossible as “the attackers cannot control the ad serving process.”

This is a big assumption about the parts some networks and affiliates play in some app install fraud. Singular has found more than one case where the ad-serving entity’s SDK was the one to include click injection and spamming code. In addition, these fraudsters had API endpoints supplying tracking links upon request to specific devices, apps, and other parameters.

Additionally, fraudsters do not need to control the actual ad-serving process. Often, they can just rely on the existing process by creating clicks for ads that are served but are not shown or not clicked.

The white paper then also claims that “Click spamming would be made impossible by the same principle, requiring an unviable amount of corresponding impressions with realistic click-through-rates (CTR).”

Here, there are two possible cases:

  1. The attacker does NOT control the ad-serving process
    In this case, creating a realistic CTR would indeed be harder for the fraudsters. But in such a case it would actually be easier to just compare the number of impressions reported by the network and the number of clicks reported by the tracker.
  2. The attacker DOES control the ad-serving process
    In this case it would be pretty easy for the attacker to create a realistic CTR as they would just game the impressions.

Realistically, there are better tools for the job.

Click spamming is usually used to poach organic users, claiming credit for installs that would have happened anyways. Singular provides Organic Poaching Prevention on Android which renders click spamming campaigns ineffective. (Plus many other fraud prevention tools on both Android and iOS.)

Conversion rate (CVR) is also a great tool for spotting click spamming at scale. Marketers don’t really need super-fancy mechanisms for stopping high-scale click spamming attacks. A simple look at conversion rates can and should be enough. Any source with less than 1% CVR should probably not be trusted.
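That sanity check is simple enough to sketch (a hypothetical example; the source names and numbers are invented):

```python
def suspicious_sources(stats, min_cvr=0.01):
    """Flag sources whose click-to-install conversion rate falls below min_cvr."""
    flagged = []
    for source, (clicks, installs) in stats.items():
        cvr = installs / clicks if clicks else 0.0
        if cvr < min_cvr:
            flagged.append(source)
    return flagged

stats = {
    "network_a": (10_000, 250),   # 2.5% CVR: plausible
    "network_b": (500_000, 400),  # 0.08% CVR: the click-spamming signature
}
print(suspicious_sources(stats))  # ['network_b']
```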

Finally, as stated previously, comparing network data to MMP data can also help in spotting these cases. (And, by the way, Singular makes this extremely easy by putting all of the data in one place!)

And click injection … can click validation stop that?

Well, let’s start this section by assuming that click injection is still a problem. (It’s solved.)

The white paper claims that click injection is done by listening to “app install broadcasts and firing the click in between the broadcast and install completion.” It continues by claiming that “logically it’s impossible that the user was served a matching ad and clicked it within the same second that the app install was broadcasted.”

This assumption about how click injection works has been invalid for more than two years now.

Fraudsters have long ago found better methods than relying on the install broadcast and can now spot installs before they happen or at the same time the user presses the install button and the app starts to download. In both cases, that means that most of the time, attackers will have ample time to “serve” a new fake impression and then report a click.

Again, there are better tools for the job. Click injection is a problem that is present only on Android, where it’s solved by utilizing other mechanisms such as Google Play Referrer and its timestamps. (And yes, Singular does so very, very effectively.)
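A simplified sketch of that kind of timestamp check, using the click and install-begin timestamps the Play referrer mechanism exposes (the 60-second tolerance and the function itself are hypothetical illustrations, not Singular’s actual detection logic):

```python
def is_injected_click(network_click_ts, referrer_click_ts, install_begin_ts):
    """Flag a click that claims credit but fired after the download started,
    or that disagrees with the click timestamp Google Play itself recorded.
    All timestamps are Unix seconds."""
    if network_click_ts > install_begin_ts:
        return True  # click "happened" after the user pressed install
    if referrer_click_ts and abs(network_click_ts - referrer_click_ts) > 60:
        return True  # network-reported click doesn't match Play's record
    return False

print(is_injected_click(network_click_ts=1_000_050,
                        referrer_click_ts=1_000_000,
                        install_begin_ts=1_000_020))  # True: fired mid-download
```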

So, can click validation solve the problem of fake installs and fake users?

The white paper says that “spoofed users would require spoofed impressions for their clicks, either dramatically increasing the amount of data to be fabricated, or making it downright impossible to perform – as the attackers cannot control the ad serving process.”

Sadly, this is not true even if you assume that the fraudsters are not controlling the ad serving.

In fact, fraudsters can just serve ads and fake installs when they see an ad matching their faked “inventory.” In addition, they can also learn how to query ad-serving entities and make them serve matching ads.

I’ve personally seen this happen before … even on the biggest Tier 1 networks.

So no, click validation cannot stop fake user attacks. And, to be clear, it’s pretentious to claim that it can. Once again, there are better tools for the job. Many tools, in fact.

Ready to go deeper?

Here are three ways to go deeper and get started on the process of eliminating fraud — and fake users — from your app install campaigns.

1. To learn more about those better tools, please refer to our recent blog post to read about the different methods and their pros/cons.

2. And, check out a short report with data from customers using Singular’s Fraud Prevention solution.

3. Even better? Get a demo of the solution first-hand.

DraftKings unifies siloed marketing data to uncover deep insights for growth

We sat down with DraftKings Senior Director of Growth Marketing, Jayne Pimentel, to discuss how her team leveraged Singular to unify their siloed marketing data and uncover deep insights for superior optimizations.

Video

Transcription

Introduction

My name is Jayne Pimentel and I’m the Senior Director of Growth Marketing at DraftKings. DraftKings is a sports media technology company. Most people know us for daily fantasy sports. We’ve recently entered the sportsbook category, which has been around outside the US for centuries.

When I joined DraftKings, we had one product. We now have three. We brought everything in-house. We got rid of our ad agencies. To scale and to grow a business like that requires a lot of infrastructure: a lot of disciplines that aren’t really core competencies for DraftKings, but things where we need to invest in third parties to allow my team to stay agile.

Why Singular?

I remember being a consultant when I joined DraftKings and my first call actually was a Singular call. And then before that when I was at Cognant, even at Machine Zone, over half our clients used Singular. So I was familiar with all of these great brand names that were out there, people like Lyft that we were working with, that also leverage Singular.

Establishing a single source of truth for marketing performance

Singular has helped us become fluent between different kinds of siloed teams. The fragmentation of data is something that is achievable to overcome, but it requires a lot of ingestion of data, as well as leveraging something like Singular if you can’t adjust that data yourself.

So if you don’t want to pay for a ton of servers to ingest impression-level data, click data, I mean that’s also just one piece. Then you also have user data within our apps. Then you also have revenue data and how we monetize. And the fragmentation even on the monetization of a user, how much they’re valued, needs to be also tied to how much we’re willing to pay for that user. And so that true lifecycle value of that user is something that requires data coming from email, S3 buckets, garbage Excel files, whatever it is. But you have to be able to have some sort of system to make sense of all that and to ingest it and unify it.

Democratizing creative reporting & optimization

It’s also been helpful with our creative team. We actually use the creative tool within Singular often because our creative team, they’re visual people, they’re talented, and they like to see performance in a more visual way. Having the creative and the image actually associated with the performance has been really helpful to start conversations, to help with testing agendas, and to make everyone accountable across teams now that we have a baseline around the data we’re bringing in.

Ready to take your growth marketing to the next level? Let’s connect!

Personal Capital tackles cross-platform measurement

We sat down with Rachel Chanco, Director of Digital Marketing & Mobile Growth at Personal Capital, to discuss how they’re connecting cross-platform user journeys.

Video

Transcription

Introduction

I’m Rachel Chanco. I’m with Personal Capital. I lead all of the Digital Marketing and Mobile Growth initiatives.

Personal Capital is a digital wealth management company. How we differentiate ourselves from other FinTech advisors in the space is that we are a hybrid model. We leverage toolset technology but we connect you with a personal advisor that can actually really help you plan things out.

Personal Capital currently uses Singular as its mobile measurement partner.

Connecting users’ cross-platform journeys

The user journey is pretty unique. A lot of times people will come from the desktop and then download the app. A lot of times people come from the app and then convert on a desktop.

One of the things I really love about working with Singular is not only am I able to understand data from the mobile side but because of the custom integrations we can do with Singular, I am able to understand a user journey from mobile app install to a conversion that may occur on desktop.

So rather than just sticking to standard mobile measurement events, I’m able to leverage the platform to connect if an event is actually happening on desktop, even though the user came from mobile. I can say that this user was actually valuable even though on a standard analysis they would not appear to be valuable.

So we talk a lot about cross-platform being a real problem within the industry and Singular is helping me solve for that.


How Singular’s mobile attribution is saving app developers up to $500,000/month

Mobile attribution is a commodity, right? You can get it from anyone, correct?

Well, sure, if you don’t want elite-level marketing success. And, if you don’t mind paying fraudsters to funnel all your ad dollars into Lambos, vacations on the French Riviera, and sipping pina coladas on the beach.

That’s become incredibly clear in the last few weeks since Singular added deterministic Android install validation to our Fraud Prevention suite. Fraud prevention is included, for free, in Singular’s mobile attribution solution.

Findings from the 2019 mobile attribution fraud prevention report

One product release, 3 fraud-fighting solutions

The recent product update actually included significant updates to two additional fraud-fighting technologies: Android Organic Poaching Prevention and Android Click Injection Prevention. Android Organic Poaching Prevention stops fraudsters from claiming credit for app installs that are normal, natural user behavior. Android Click Injection Prevention stops fraudsters from claiming credit for installs that other ad networks drove.

Customers and app developers are saying that collectively, this is having a huge impact:

“Singular’s new progressive anti-fraud solution detected more ad fraud than competing solutions,” says Ronak Jain, Mobile Marketing Manager at Cleartrip, the top travel technology platform for emerging markets. “This is a game-changer and will play a key role in making growth decisions.”

Some clients are saving more than $100,000 a week with the solution. Other app developers discovered that more than 90% of the app installs they had been paying for from a particular network were fake.

Another client in an on-demand services industry discovered that almost 50% of their paid installs suffered from mobile attribution manipulation. The wrong ad partner was getting paid … showing that fraud cheats ethical ad networks as well as advertisers.

Findings from the 2019 mobile attribution fraud prevention report by Singular

Getting that clarity — and then being able to kill the fraud — returns app marketing to where it should have been all along: advertisers maximizing their hard-earned ad dollars to drive growth.

“Singular’s new fraud-fighting technology helps our User Acquisition team focus on legitimate campaigns and significantly boost return on ad spend,” says John Parides, Senior Director of User Acquisition at Glu, maker of the iconic Deer Hunter as well as Kim Kardashian: Hollywood.

But how does this work? What’s the philosophy behind Singular’s mobile attribution fraud prevention?

App developers: 4 key fraud prevention principles

It’s easy to say that you fight ad fraud, or catch mobile fraud. It’s another thing to do it effectively.

Singular’s cyber security team is constantly monitoring anomalies and abnormal behavior, checking a wide array of signals. Once we find something that is abnormal, we dig deep to find the root cause and find a deterministic way to fight that fraud methodology. Essentially, what we’re doing is emulating the way fraudsters think and then reverse engineering their schemes.

Here are the four key principles behind Singular’s Fraud Prevention product.

Deterministic
Singular strives to have no false positives. We want to clearly identify fraud at a granular level. So Singular’s fraud results apply to actual individual installs, devices, and users, not blanket-level sources or publishers.

Proactive
Finding fraud after it has already occurred is too late. Advertisers have already paid for traffic or users or customers, and then they’ll have to engage in time-consuming and difficult cost reconciliation conversations with partners.

A potentially bigger problem when you let fake users in: marketers get fraudulent engagement and purchase data along with the fake users, muddying your analytics and making it hard to decide where to re-invest. And even worse, legitimate ad networks’ algorithms can adapt to the fraud in real time, de-prioritizing campaigns and sources that are actually working because they are getting fewer installs attributed, thanks to theft by the fraudsters.

So it is absolutely critical to eliminate fake installs BEFORE attribution.

Transparent
Both advertisers and ad networks need to know what constitutes fraud, and they need transparent reasons why traffic, installs, or other activity has been classified as fraud. So Singular provides user-level decision logic for every single install, click, and impression.

Customizable
No marketer wants fraud. But marketers do want to personalize their fraud prevention strategies and define how aggressive they want to be. A marketer using largely self-attributing networks like Facebook, Google, Apple, Snap, and Twitter prefers a different strategy to one who is using many different niche ad networks, for example.

So Singular lets customers decide both the fraud rules they’ll use and what actions they’ll take upon finding suspicious activity.

Then, add scale

At Singular, we’re applying that philosophy while also harnessing the power of big data: analyzing more signals in higher volume. In June 2019 alone, Singular measured 70 billion ad impressions, almost 11 billion clicks, almost 6 billion app installs, and almost $350 million in ad spend.

And it’s not just big data. We’re also digging deeper, analyzing detailed signals from individual impressions, clicks, and app installs at greater depth to uncover suspicious activity.

That volume — and depth — are just two of the reasons Singular was recently able to unveil three new fraud-fighting technologies that collectively have become part of our already industry-leading Singular Fraud Prevention suite.

Get all the mobile attribution fraud prevention details

We compiled a data-driven report on the results our beta-test clients got when they used Singular’s latest Fraud Prevention suite.

Check out what they found by getting The Death of Install Fraud on Android for yourself.

Fixing a $13B problem: How Singular is killing app install fraud

You probably saw the news that we released last week: deterministic Android app install validation. This, along with a number of other improvements we’ve recently made, is a massive industry breakthrough that is completely game-changing for many of our clients.

Some of them are now saving massive amounts of money:

“Singular’s updated Fraud Prevention suite is the most powerful mobile app install fraud prevention I’ve seen,” says Channy Lim, Head of BI Department at Com2uS, maker of the hit mobile game Summoners War. “This will save us literally hundreds of thousands of dollars every month, and lead us to make more effective marketing decisions.”

The news is exciting, but I wanted to dive a little deeper.

I would like to share a little more detail about how app install fraud works, the problems with existing methods of finding it, and what we’re doing differently at Singular.

How app install fraud works

One of the ways fraudsters steal billions of advertisers’ dollars annually is app install fraud. Or, to put it another way: fake installs.

App install fraud is a collection of fraud methods that create fake mobile users and app installs. As opposed to attribution manipulation fraud, which steals credit for existing legitimate app installs, app install fraudsters take matters into their own hands and create app installs out of thin air.

There are multiple ways to perform fake install fraud, and naturally, some are better than others.

The simplest and most low-tech way is a device farm. You get a bunch of devices, click a lot of tracking links, install a lot of apps, then open them, delete them, and reset each device’s Advertising ID (Android) or IDFA (iOS). Rinse and repeat regularly, and you’re collecting ad dollars.

But there are far more complex and advanced ways to perform fake installs that generate a lot more money far quicker.

One of the other ways fraudsters scale up their device farm operation is to use emulators and bots instead of real devices and real human beings who use the devices. This can be done in the cloud, and potentially on multiple servers in multiple locations, to try to look authentic.

One of the most notable techniques leveraged by smarter fraudsters is SDK spoofing.

Mobile marketers place software (an SDK) from a Mobile Measurement Partner (MMP) in their apps to monitor and measure the results of their marketing. In SDK spoofing, no app is ever actually installed … but an install is being reported to the MMP and potentially other analytics providers by faking the SDK’s traffic. This can be done by technically advanced fraudsters who understand how communication with the measurement service works and how to emulate that communication.

This is far more scalable than running a device farm, because once they have done the initial work, they can create a script to run on servers around the globe. That creates fake installs on fake devices. Alternatively, they can write code that can run on legitimate users’ devices anywhere, reporting installations of apps that have never been installed: fake installs on real devices.

Another example comes in the form of malware, where malicious apps install and run legitimate apps on real users’ devices. This happened for example with the Viking Horde malware. In such cases the user is real and the app is real but the install itself is fraudulent.

As fraudsters become more advanced, they tap more and more into high-tech fake install techniques, and for good reason: these attacks are highly scalable and hard to detect, netting the fraudsters huge amounts of money.

Detecting and preventing fake installs is hard

There are multiple ways to detect fake installs. The problem is that many are unreliable, inaccurate, and most importantly, ineffective.

SDK Message Hashing
Since SDK spoofing aims to fake an MMP’s SDK traffic, MMPs (including Singular) protect each message sent from the SDK. That’s typically done via hashing: taking the data from the message and a secret key that is different for each app, and combining them to create a blob of data that can be verified on the MMP’s backend.

The problem is that the secret is not so secret: apps that run on users’ devices must create these hashes, so SDK fraudsters can extract the secret and the algorithm from the publicly available app binary. At times they don’t even need to reverse engineer the algorithm, since the SDK is open source.
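As an illustrative sketch (not Singular’s actual implementation), here is how an MMP-style message signature and its inherent weakness might look, with HMAC-SHA256 standing in for whatever scheme a given SDK uses:

```python
import hashlib
import hmac
import json

# Hypothetical per-app secret; in practice it ships inside the app binary,
# which is exactly why a determined fraudster can extract it.
APP_SECRET = b"per-app-secret"

def sign_message(payload: dict, secret: bytes) -> str:
    """Sign an SDK message body the way an MMP SDK might: HMAC over
    a canonical serialization of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_message(payload: dict, signature: str, secret: bytes) -> bool:
    """Backend check: recompute the hash and compare in constant time."""
    return hmac.compare_digest(sign_message(payload, secret), signature)

install = {"event": "install", "device_id": "abc-123"}
sig = sign_message(install, APP_SECRET)
print(verify_message(install, sig, APP_SECRET))  # True: genuine message verifies
# A spoofer who has pulled APP_SECRET out of the binary can forge `sig`
# for any payload, so this check alone cannot stop SDK spoofing.
```

The verification succeeds for any sender who knows the secret, which is the crux of the problem described above.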

Abnormal numbers of new devices
One interesting statistical technique to fight fake install fraud is to look for a high percentage of brand-new or never-before-seen devices coming from specific ad networks or publishers. When you see abnormally high ratios, it’s generally clear that something fishy is happening.

The problem, however, is that fraudsters sometimes leverage existing devices or mingle their fake traffic with traffic from real devices, making it harder to spot anomalies.
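A minimal sketch of the new-device heuristic, using a hypothetical installs feed and an arbitrary 90% threshold:

```python
from collections import defaultdict

def new_device_ratios(installs, known_devices):
    """Per-publisher share of installs from never-before-seen devices.
    `installs` is an iterable of (publisher_id, device_id) pairs."""
    total, new = defaultdict(int), defaultdict(int)
    for publisher, device in installs:
        total[publisher] += 1
        if device not in known_devices:
            new[publisher] += 1
    return {pub: new[pub] / total[pub] for pub in total}

# Hypothetical data: pub_a sends only brand-new devices, pub_b only known ones.
installs = [("pub_a", f"dev-{i}") for i in range(100)] + \
           [("pub_b", f"old-{i % 10}") for i in range(100)]
known = {f"old-{i}" for i in range(10)}

ratios = new_device_ratios(installs, known)
flagged = [pub for pub, ratio in ratios.items() if ratio > 0.9]
print(flagged)  # ['pub_a']
```

In practice the threshold would be tuned per market, since new-device rates vary with device sales cycles.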

Abnormal retention rate or other KPIs
Marketers can sometimes identify fraud by seeing abnormal rates of retention, in-app purchases, or other KPIs. For example, if your average retention is 15% on D14, but installs from a particular campaign, publisher, or network show a 1% retention rate, it’s clear that there’s something that deserves further investigation.
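In code, that comparison is a one-liner per source; the cohort numbers below are hypothetical:

```python
def d14_retention(cohorts):
    """D14 retention per source: retained users / installed users.
    `cohorts` maps source -> (installs, users still active on day 14)."""
    return {src: retained / installs
            for src, (installs, retained) in cohorts.items()}

# Hypothetical cohorts.
cohorts = {"network_x": (10_000, 1_500),  # 15%: in line with the average
           "network_y": (8_000, 80)}      # 1%: deserves investigation
rates = d14_retention(cohorts)
baseline = 0.15
suspicious = [src for src, rate in rates.items() if rate < baseline / 5]
print(suspicious)  # ['network_y']
```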

But Singular research shows that fraudsters have learned to fake retention and post install events/purchases.

For example, Singular uncovered an extremely sophisticated SDK spoofing campaign on iOS that fooled most fraud prevention solutions in the industry. The fraudsters not only generated seemingly legitimate app installs, but also continued to send post-install events, in essence faking real users’ activity. They even tried reporting in-app purchases, submitting revenue receipts for these fake purchases as they did so.

Sensor data and user behavioral analysis
Sensor data based solutions take post-install fake user detection one step further. These solutions try to detect abnormal devices or users by looking at non-marketing data points such as device movements (via a smartphone’s accelerometer and/or gyroscope), battery data, and user-screen interaction.

How?

Simple: sensor data from real devices should look different from that of emulators, which don’t move.

The challenge is that this can be faked as well, as shown in the huge “We Purchase Apps” scandal revealed in October 2018. In this massive ad fraud campaign, the perpetrators bought real apps, studied the usage patterns of their real users, and then created fake users coming from those same apps.

One of the biggest targets of this campaign was none other than Google itself, the company that has probably put the most effort into profiling real user activity and protecting advertisers from fake user emulation.

And more …
There are multiple other methods, each of which has its strengths and weaknesses.

The problem with post-install fraud determination

While post-install methods do an important job of raising the bar against fraud they have some inherent caveats that stop them from being effective fraud prevention tools.

1: Statistical (in)significance
Post-install methods are statistical tools that work by looking at groups of installs and checking if one or more of these groups exhibit anomalous activities. Usually these groups would be installs coming from the same publisher. For example, when looking for new devices it’s unsurprising to see a legitimate user with a new device, as new devices are constantly being sold to consumers.

However, for a publisher driving thousands of installs, seeing 95% of those installs from new devices should be highly suspicious. Fraudsters have figured out that they can’t be so blatant, and so they take action and hide. Some drive their traffic from many different publisher IDs and even networks to keep numbers low; some mix their fraudulent installs with legitimate installs to make the anomaly less apparent.

Such techniques allow fraudsters to avoid detection by making the anomalies statistically less significant. That makes it much harder to distinguish legitimate traffic from fake traffic, and so harder to stop the fraudulent activity without incurring high false-positive rates.

2: Post-postback friction
As the name suggests, post-install methods only come into effect after an install has happened, and might run days or weeks after it. That also means they are evaluated after an install postback is sent to the media source, which in CPI campaigns means after the conversion and billing notification.

The result is that the media source will charge for the now-known-to-be fraudulent conversion … unless a process of reconciliation is done. This process is often manual, messy, and a cause of great friction between ad networks and advertisers.

3: Non-optimized optimization
Ad networks often perform real-time optimizations based on initial success analytics: evidence of conversions such as app installs. Now, however, those optimizations will be skewed by fraudulent activities.

In effect, having been rewarded by fraud, they will now optimize for MORE fraud.

As an example, if publisher A drives more installs than publisher B for some advertisers, the network might prefer to prioritize publisher A over publisher B and send more ads its way. Now imagine publisher A is actually driving fake installs which are not prevented in real time (as happens in post-install detection). The network will funnel more budget to A over B.

Even if those fraudulent installs are detected post-install and reimbursed, the damage has already been done and goals will not be met because of the optimization changes and budget shift.

Singular’s solution: deterministic pre-attribution fraud decisions

Singular strives to have no false positives. We want to clearly identify fraud at a granular level. So Singular’s fraud results apply to actual individual installs, devices, and users, not blanket-level sources or publishers (although we can – and do – block those too).

We also want to find fraud as conversions or installs happen.

Anything less will suffer from the problems outlined above.

When we took time out earlier this year to consider everything, it was clear that we needed a different approach here. We needed something that would work in real time — install-time — and have an extremely low false-positive rate while still maintaining effectiveness.

To meet these requirements, we decided to disregard everything we thought we knew about ad fraud and look for something new. As we reported publicly last week, after an exhaustive search we found what we believed would be a high-quality deterministic fake install detection method that works at install time.

The new method we discovered depends on signals from the install device that allow us to verify that a user exists, they truly installed the app from the store, and they haven’t installed the app an unreasonable number of times (sorry-not-sorry, fraudsters who “install” an app on a phone hundreds or thousands of times).

Of course, once we found this method, we knew we needed to validate that it works as expected at scale, in the real world, on thousands of ad networks. To do so we tested with some of the most successful mobile publishers on the planet. And we validated our results against post-install metrics.

The actual implementation of our new fraud prevention method proved to have a tremendous effect on some of our customers, eliminating their fake install problem. (Find more about it in our report.)

In a later blog post we will share some more details about our findings, but it’s safe to say that we were blown away by the scale of the fraudulent activity we’ve found, and as more and more customers utilize the feature, the numbers are only going to grow.

Interested in learning more? Schedule a demo to go even deeper.

CEO insights: Why creative fatigue isn’t as simple as it sounds

CEO Insights is a new column by Singular CEO Gadi Eliashiv focusing on some of the most challenging issues in scientific marketing.

Most of the sophisticated growth organizations we’re working with place enormous importance on creatives. These companies usually have in-house design teams dedicated to making creatives, plus processes and metrics around the production and launch process.

All of it is designed to ensure optimized results.

These companies understand the power of creative optimization, and distribute shared responsibility for amazing creative throughout the organization. Designers have been educated about performance metrics, and they’re savvy enough to combine their art with science in the form of cold, hard metrics.

These top brands also have periodic meetings (bi-weekly or more) where the design team sits down with the marketing team. Together they carefully examine the performance of various assets, and find a balance between introducing new winning concepts, sustaining proven concepts, and eliminating bad ones.

More advanced marketers also apply particular conventions to how assets are managed and tagged, so that tens of thousands of creative variations can be grouped by a handful of key concepts, which helps identify key trends.

All of these workflows and analysis capabilities are available out of the box for our customers through Singular’s creative optimization suite, and it gives our customers an enormous edge. Click here if you want to learn more about that, or email me if you’d like to see a demo.

So: what is the right process?

One area that was of interest to me was the pace at which companies swap out creative assets.

When I asked various companies, I got a range of answers, from “we don’t have bandwidth for that at all” to “we have a constant refresh rate.” Some companies update on a fixed schedule (every two weeks or a month), while others update their creative “whenever design creates a new one.”

Obviously, not all creative costs the same to produce; some, like playables and videos, is super expensive in both time and money. Other assets, however, can be produced quickly and efficiently, and when infused with time-specific context (such as a big concert, or a particular live event in a game), they can produce great results.

A common theme I’ve heard is the following way to run analysis on your creatives:

  • Cadence
    • Weekly or bi-weekly
  • Data input
    • Creative asset performance from all channels (Singular does that out of the box: check out our API)
    • Campaign targeting option data, particularly around the major self-attributing networks, to identify targeting methodology (value optimization, bid optimization, etc. …)
    • Channel, country, region, plus any other breakdowns that make sense to you
    • Four weeks of data
      • Period A: first 2 weeks of data
      • Period B: second 2 weeks of data
  • Two simple data outputs
    • Check the trend of currently running creatives to detect big drops that might suggest these creatives should be cycled.
      • The drops could be in clicks, installs, eCPM, or any other metrics that make sense
      • For customers using Singular’s attribution, we enable ROI granularity all the way down to the creative level, so you can check for a drop in your main KPI (which is often what the ad engines optimize against)
    • Isolate the creatives that did not exist in Period A, but existed in Period B, and identify how they are trending. Learn from new concepts that are succeeding, and from those that are failing to ramp up.
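The steps above can be sketched roughly as follows; the field names and the two-week split are illustrative, not a prescribed schema:

```python
from datetime import date, timedelta

def period_analysis(rows, start):
    """Split four weeks of per-creative stats into Period A (first two weeks)
    and Period B, then compute install trends and surface new creatives.
    `rows` is an iterable of (day, creative, clicks, installs)."""
    mid = start + timedelta(days=14)
    periods = {"A": {}, "B": {}}
    for day, creative, clicks, installs in rows:
        bucket = periods["A" if day < mid else "B"]
        prev_clicks, prev_installs = bucket.get(creative, (0, 0))
        bucket[creative] = (prev_clicks + clicks, prev_installs + installs)

    a, b = periods["A"], periods["B"]
    # Relative change in installs for creatives that ran in both periods.
    trends = {c: b[c][1] / a[c][1] - 1 for c in a if c in b and a[c][1]}
    new_creatives = sorted(set(b) - set(a))
    return trends, new_creatives

# Hypothetical data: creative_1 is fading; creative_2 appears only in Period B.
rows = [(date(2024, 1, 3), "creative_1", 1_000, 100),
        (date(2024, 1, 20), "creative_1", 900, 40),
        (date(2024, 1, 21), "creative_2", 500, 60)]
trends, fresh = period_analysis(rows, start=date(2024, 1, 1))
print(trends)  # creative_1's installs dropped 60%: a candidate for cycling
print(fresh)   # ['creative_2']
```

The same structure extends to eCPM, ROI, or any other KPI by summing those columns per period instead of (or alongside) installs.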

One example:

Creative        Period A                       Period B
                CTR    Conversions   eCPM      CTR    Conversions   eCPM
Creative 1      3%     7,500         $9.50     1.5%   3,300         $11.75
Creative 2      n/a    n/a           n/a       3.5%   15,000        $11
Creative 3      n/a    n/a           n/a       1.5%   3,400         $9
Creative 4      1%     2,200         $3.40     2.3%   4,300         $4.23

Creative fatigue and time

As I look at all this data, the questions I keep asking myself are:

  • When is the right time to swap creatives?
  • Do companies know those times?
  • Can they even figure them out?

The answers to those questions, as I found out, are very complex. After dozens of talks with top-tier marketers I got literally dozens of answers, and none of them was the silver bullet I was hoping for.

(Most likely, there isn’t any single silver bullet. The techniques that work for one app are different from those that work for another brand.)

The one common thread in all these conversations was everyone’s favorite topic: creative fatigue detection. The formal definition of creative fatigue is that consumers/users/customers no longer even see your ad. They’ve become so used to it that it is now just part of the default background for them.

Traditionally, the first assumption people make about fatigue is that CTRs drop over time, because people have seen your ad again and again, and those who wanted to click have already done so.

But when I started researching the data, that naive assumption quickly proved incorrect.

Optimizing algorithms like Facebook’s track the number of exposures each user has seen (frequency) and cap it at a certain point, because the algorithm understands that further impressions would be wasted and would also lead to a bad user experience.

So FB simply chooses another ad to show.

You can quickly see this phenomenon in the chart below.

In the first chart, CTR does not drop appreciably throughout the campaign. A campaign manager who looks only at this probably thinks that all is well with her ads.

CTR over time: no creative fatigue?

But there is actually a significant problem.

What’s actually happening behind the scenes is that Facebook knows that it has exhausted your chosen audience, and the number of people it is showing the ad to has dropped precipitously:

Creative fatigue … sometimes, Facebook is smarter than you

It’s important to say ads will not always behave that way. That’s why when analyzing fatigue you need to not only know what assets you’re using, but also what ad channels you’re running on, what bidding methodology is being used, and what their algorithms do.

(For example: due to saturation, the algorithm could also start increasing the CPM bid to generate more impressions, which will decrease your ROAS).

In general, even if these algorithms are smart enough to avoid audience fatigue, it is still the responsibility of the marketer to identify it and remedy the situation. You can find new audiences, add new creatives, and so on.
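A toy illustration of why CTR alone misses this: with the hypothetical daily numbers below, CTR holds perfectly steady while reach collapses, which is the signature of a capped, exhausted audience:

```python
def fatigue_signals(daily):
    """Per-day (CTR, reach) pairs from (impressions, clicks, reach) rows.
    The numbers fed in below are invented for illustration."""
    return [(clicks / impressions, reach)
            for impressions, clicks, reach in daily]

daily = [(100_000, 2_000, 80_000),
         (60_000, 1_200, 40_000),
         (20_000, 400, 9_000)]

signals = fatigue_signals(daily)
ctrs = [round(ctr, 3) for ctr, _ in signals]
print(ctrs)  # CTR looks flat and healthy every day
print(signals[0][1], signals[-1][1])  # ...but reach has fallen off a cliff
```

A dashboard showing only the first series would say all is well; only the second reveals the saturation.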

But there can be more going on

Sometimes when you’re looking for creative fatigue you’ll see data that doesn’t make sense at first. For instance, you might have a click-through rate chart like this one, which shows a creative gaining strength over time:

Creative fatigue: can ads gain in CTR and conversions over time?

All looks well at first glance. But … if you check impressions, there’s clearly something else going on. The number of impressions is skyrocketing:

Creative fatigue: Oops, impressions are skyrocketing

Something very different is going on here.

Hint: this behavior can be related to changes in bids and budgets, another key thing to consider when testing for creative fatigue. Changing the bid (even a CPI/CPA bid) directly impacts how much you’re willing to spend on a given impression, opening up impressions that were not accessible at your previous bid.

In short: creative fatigue is one of those concepts that seems easy to understand and easy to diagnose … but actually isn’t. To find out if creative fatigue is actually happening, you need to dig deeper into the data than most can or will.

Fortunately, that’s where Singular can help.

What’s next

That’s it for this post. In the next post, I’ll look more at how bids and budgets impact click-through rate, impressions, and conversions.

 

3 critical things CGOs (and CMOs) absolutely need to drive growth campaigns

In the simplest possible terms, a chief marketing officer’s role is to implement strategy that ultimately increases sales. A chief growth officer’s role is even simpler and more explicit: grow the company.

But how?

And what tools do they need to achieve those goals?

Singular is privileged to work with growth marketers at companies like Lyft, LinkedIn, Rovio, Wish, AirBnB, DraftKings, StitchFix, plus many more. We’ve seen what the best growth marketers on the planet do, and we know what technology they use.

We also know how much data they have.

In a recent survey, 200 CMOs told us that their biggest challenge isn’t marketing data. Quite the opposite, in fact — they have plenty of data. They have avalanches of data.

And that’s the core challenge.


Drowning in data

“Marketers are drowning in data,” says Jo Ann Sanders, a VP at Optimizely.

That’s the problem.

“With the exponential growth of data over the past decade … it’s becoming harder daily to turn information into action,” says SurveyMonkey CMO Leela Srinivasan.

Marketers are drowning in data thanks to the unprecedented data exhaust of our digital lives.

We browse the web, we install apps, we watch four million videos on YouTube every minute, we search on Google 40,000 times a second. The world will soon have almost six billion mobile subscribers, and American adults now spend more than 3.5 hours a day on their phones in branded apps, sponsored media, and ad-supported sites.

At the same time, marketers are dealing with an exponential rise in tech tools, more digital channels than ever before, and more billion-user platforms every year.

Add in global competition, and 76% of CMOs say they can’t measure marketing performance accurately enough to make truly informed decisions.


Marketing intelligence platform

What marketers need most is actionable insights for growth. So CMOs’ (and CGOs’) biggest challenge is simply mining nuggets of gold from all that data. That requires real-time measurement and analysis at scale across potentially hundreds of platforms, partners, and channels.

That’s why Singular built what we call a Marketing Intelligence Platform.

The new marketers are different. They speak data and write code. They form hypotheses and run experiments, then measure results and optimize. These new marketers are marketing scientists, and they need the tools of their trade.

With a Marketing Intelligence Platform, marketers achieve three critical things:

  1. Unprecedented visibility at scale
  2. On-demand flexible reporting
  3. Full customer journey insights

That’s seeing not just your data, but your ROI on every activity. It’s slicing and dicing not just by campaign, but getting CAC per creative asset. And it’s measuring not just conversions, but cross-device and cross-platform journeys that led to customer action.

This requires at least nine components, combined into a single platform, grouped in three sections. We’ll take a very brief look at each. For a full in-depth overview, however, check out our complete Marketing Intelligence Platform report.

The three things that CGOs and CMOs need to drive and accelerate growth are …


One: Unified marketing data

You can’t get the golden nuggets of actionable insights without mining your data, and that starts by unifying it.

Unifying marketing data includes:

  • Data governance
  • Data ingestion
  • Data processing
  • Attribution
  • Dimensional data combining/synthesis

Data governance ensures clean data from every source, and enables processing, enriching, and combining later on.

Ingestion is getting all your relevant data from every source, and it’s not easy. Processing is essential to standardize and normalize it, at which point you can connect conversion outputs to marketing inputs. Combining and synthesizing top-funnel and low-funnel data reveals deeper trends and granular results.
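To make the processing step concrete, here’s a minimal sketch of field-name and currency normalization across two ad networks. The network names, schemas, and exchange rate are all invented for illustration, not any real partner’s report format:

```python
# Minimal sketch of the processing/normalization step. Network names,
# field names, and the EUR→USD rate are invented for illustration.

def normalize(row: dict, source: str) -> dict:
    """Map each network's report fields onto one common schema."""
    if source == "network_a":
        return {
            "campaign": row["campaign_name"],
            "spend_usd": row["cost"],              # already reported in USD
            "installs": row["conversions"],
        }
    if source == "network_b":
        return {
            "campaign": row["cmp"],
            "spend_usd": row["spend_eur"] * 1.10,  # assumed exchange rate
            "installs": row["installs"],
        }
    raise ValueError(f"unknown source: {source}")

rows = [
    ({"campaign_name": "summer_promo", "cost": 500.0, "conversions": 250}, "network_a"),
    ({"cmp": "summer_promo", "spend_eur": 400.0, "installs": 180}, "network_b"),
]
unified = [normalize(r, s) for r, s in rows]
total_spend = sum(r["spend_usd"] for r in unified)
total_installs = sum(r["installs"] for r in unified)
print(f"blended CPI: ${total_spend / total_installs:.2f}")
```

Once every source speaks the same schema, blended metrics like CPI fall out of a simple aggregation.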


Two: Intelligent insights at scale

At a high level, marketers need to know the score: across all their campaigns, are they winning or losing? At more granular levels, they need to know if a specific campaign, partner, publisher, or creative is performing.

Generating intelligent insights includes:

  • Reporting and visualization
  • Actionable insights

Reporting and visualization shows marketers what’s happening, and actionable insights provide clues for future profitable growth. Some of those insights are pull, but some need to be push: alerts about out-of-scope campaigns, click-through rate drops, poorly performing ad partners, and so on.
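A push insight like a click-through-rate alert can be sketched as a simple rule over baseline and current metrics. The threshold and field names below are illustrative only, not Singular’s actual alerting logic:

```python
# Hypothetical push-alert rule: flag campaigns whose click-through rate
# has dropped sharply versus a trailing baseline. Threshold is illustrative.

def ctr_alerts(campaigns: list, drop_threshold: float = 0.5) -> list:
    """Return messages for campaigns whose CTR fell by more than
    drop_threshold (0.5 = a 50% drop) versus their baseline."""
    alerts = []
    for c in campaigns:
        baseline, today = c["baseline_ctr"], c["today_ctr"]
        if baseline > 0 and (baseline - today) / baseline > drop_threshold:
            alerts.append(f"{c['name']}: CTR fell from {baseline:.2%} to {today:.2%}")
    return alerts

campaigns = [
    {"name": "video_us", "baseline_ctr": 0.040, "today_ctr": 0.012},   # -70%: alert
    {"name": "banner_uk", "baseline_ctr": 0.020, "today_ctr": 0.019},  # -5%: fine
]
for alert in ctr_alerts(campaigns):
    print(alert)
```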

Three: Automation

The volume of data flooding marketers’ dashboards, reports, and spreadsheets cannot be handled manually at scale. Automation is required, and it includes:

  • Data transport
  • Alerts, fraud, audiences
  • And much more

It is not useful to have a system that only ingests data. Marketing data needs to move from systems of deployment to systems of analysis to systems of engagement, and sometimes in multiple directions. So building in the ability to do that via API, exports, or S3 to internal BI systems and hundreds if not thousands of external partner systems is critical.
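As a hedged illustration of the transport step, the sketch below serializes a unified report to CSV — the kind of flat payload a platform might push to an S3 bucket or an internal BI system on a schedule. The schema is invented for the example:

```python
# Sketch of data transport: serialize a unified report to a CSV payload
# suitable for pushing to S3 or an internal BI system. Schema is invented.
import csv
import io

def export_report(rows: list) -> str:
    """Serialize report rows to a CSV string for downstream transport."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

report = [
    {"campaign": "summer_promo", "spend_usd": 940.0, "installs": 430},
    {"campaign": "winter_push", "spend_usd": 310.5, "installs": 120},
]
csv_payload = export_report(report)
print(csv_payload)
```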

And while modern scientific marketing is not a set-it-and-forget-it activity, marketers increasingly need to be able to automate actions within set parameters.

That includes automated creation and distribution of audiences for retargeting, look-alike campaigns, or suppression lists. It also includes built-in on-by-default configurable mitigation of fraud, along with both whitelisting and blacklisting of sources and publishers in paid media campaigns.
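That audience automation can be sketched as a simple split into retargeting and suppression lists. The fields, LTV floor, and logic here are hypothetical, not any real platform’s implementation:

```python
# Hypothetical audience automation: split users into a retargeting list
# and a suppression list. Fields and the LTV floor are invented.

def build_audiences(users: list, ltv_floor: float = 10.0):
    """Suppress users who already purchased (no need to re-acquire them);
    retarget promising non-payers above a predicted-LTV floor."""
    retarget, suppress = [], []
    for u in users:
        if u["purchased"]:
            suppress.append(u["id"])
        elif u["ltv_prediction"] >= ltv_floor:
            retarget.append(u["id"])
    return retarget, suppress

users = [
    {"id": "u1", "purchased": True,  "ltv_prediction": 25.0},
    {"id": "u2", "purchased": False, "ltv_prediction": 14.0},
    {"id": "u3", "purchased": False, "ltv_prediction": 2.0},
]
retarget, suppress = build_audiences(users)
print("retarget:", retarget, "suppress:", suppress)
```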

And at higher levels, it includes automation of bids and buys for ad campaigns at scale.

Results: what a marketing intelligence platform delivers

What does a marketing intelligence platform deliver?

Find out soon in part two of this blog post, coming next week.

Or, click here to access Singular’s entire Marketing Intelligence Platform report right now.

Mobile user acquisition in India: How the most efficient advertisers grow faster

The following is an in-depth overview of the growth strategies of a very well respected and profitable online business in India. This is a Singular customer who runs a transaction-oriented business, and the data is used with permission.

In the last 15 months this advertiser has accrued approximately 13 million installs across Android and iOS. What is impressive is that they have focused on checking fraud from the get-go, paying partners only on a Cost Per Transaction (CPT) basis.

Here are some key areas this advertiser focuses on for user growth.

Focusing on Android for mobile user acquisition

It’s a no-brainer: Android is the winner in the Indian market, and it is therefore this advertiser’s key focus area.

iOS marketing efforts were light until the advertiser had accrued a respectable user base on Android.

Paid beats organic

Organic ranking is good, but paid marketing gets you there faster. The advertiser has been very comfortable with a lower rate of organic installs.

When they started on this journey, they had approximately 35% organic installs on Android and 99% organic installs on iOS. Over the last 15 months, they have experimented with various sources and found the best ones for their vertical. After 15 months the advertiser finds they now have approximately 10% organic installs on Android and 60% organic installs on iOS.

Reducing the organic installs percentage can sometimes be perceived as cannibalization, but because the advertiser pays on a cost-per-transaction basis, they ensure that paid installs come with linked transactions, which translates directly into revenue per user.

For mobile user acquisition, in experimentation you must trust

The advertiser started with 20 sources on Android and two sources on iOS. (For the purpose of this case study, a source is considered valid only if it delivered more than 100 installs in the calendar month.)

As the paid user acquisition program saw success the advertiser has scaled to 40 sources for Android and 15 sources on iOS.

On average, the advertiser tests configurations with over 180 sources. Once the tests have been successful, the advertiser goes live with the chosen few sources every month. This eye for detail and diligence in evaluating sources gives the advertiser an edge in optimizing spend and driving growth.


Scaling media sources is critical to growth. See how smart marketers pay 30% less and get 60% more.


Singular helps the customer test new sources with the following:

  1. Custom postbacks
    The ability to add additional information in postbacks lets advertisers know what works and further segment acquired traffic.
  2. 1,600+ network integrations
    Singular is integrated with 1,600 plus networks. Chances are that a network the advertiser wants to evaluate is already integrated without any custom work, which boosts speed of execution.
  3. Dedicated support
    Singular has a dedicated customer success and support team that helps with configuration. This team is governed by an SLA, meaning that the customer is not alone in this never-ending effort. Having Singular’s team available to action changes makes the UA manager a winner.
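To illustrate what a custom postback might look like, the sketch below composes a postback URL that forwards extra, advertiser-defined fields to a partner. The endpoint and parameter names are invented for the example, not Singular’s actual postback format:

```python
# Illustrative custom postback: the endpoint and parameter names are
# invented, not Singular's actual postback format.
from urllib.parse import urlencode

def build_postback(base_url: str, install: dict) -> str:
    """Compose a postback URL carrying extra fields so the partner
    can segment the traffic it acquired."""
    params = {
        "click_id": install["click_id"],
        "event": "install",
        "sub_campaign": install["sub_campaign"],  # custom field
        "creative_id": install["creative_id"],    # custom field
    }
    return f"{base_url}?{urlencode(params)}"

url = build_postback(
    "https://partner.example.com/postback",
    {"click_id": "abc123", "sub_campaign": "diwali_sale", "creative_id": "v2"},
)
print(url)
```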

The following is a cumulative growth graph of the app installs that the advertiser has driven in the last 15 months.

Detailed logs and their usage

Singular’s platform enables the customer to extract detailed Click, Install, Postback, and Fraud logs via the interface, API, and firehose methods.

These logs are used for log-by-log validation in case of discrepancies. The API and firehose methods are used to maintain a complete repository of the data. This repository is processed in the customer’s internal BI system to manage payouts and make-goods with the advertising partners.
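That log-by-log validation workflow can be sketched as a set reconciliation between a partner’s billed installs and the attribution logs. The IDs and field names below are illustrative:

```python
# Illustrative log-by-log reconciliation between a partner's billed
# installs and the attribution provider's install logs. IDs are invented.

def reconcile(partner_ids: set, attribution_ids: set) -> dict:
    """Classify install IDs to find where each side disagrees — the
    starting point for resolving payout discrepancies."""
    return {
        "matched": partner_ids & attribution_ids,
        "partner_only": partner_ids - attribution_ids,      # dispute candidates
        "attribution_only": attribution_ids - partner_ids,  # unbilled installs
    }

partner_billed = {"i1", "i2", "i3"}
attribution_logs = {"i2", "i3", "i4"}
result = reconcile(partner_billed, attribution_logs)
print(result)
```

Installs that appear only on the partner side are the candidates for a payout dispute; those that appear only in the attribution logs may warrant a make-good.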

Singular has world-class fraud protection, but if they wish, customers can also work with a third-party install fraud detection service by using the attribution logs. This gives them complete control over how advertising spend decisions are made.

Take these tools for a test drive

Interested in experimenting with Singular’s marketing intelligence platform to see if you can drive similar results?

Talk to a Singular representative in your market for more details on how you can build a solid user base and drive transaction volumes and not just installs. Or get a demo now.