iOS ad efficiency dropped up to 75% post-ATT: How DSPs use AI to target and optimize ads in the age of privacy

By John Koetsier June 23, 2023

Assume iOS ad efficiency pre-ATT, with almost unfettered IDFA access, was 1. What is it now, after App Tracking Transparency, and in the age of SKAdNetwork? This was just one of the things I recently chatted about with Joseph Iris, whose demand-side platform focuses hard on machine learning and AI to find profitable advertising opportunities, mostly for user acquisition and re-engagement. And they do it at scale: 3 million queries per second.

Our main topic: how DSPs function in the age of privacy-reduced signal. 

But the relative iOS ad efficiency change since ATT has been a hard puzzle to solve, and I’ve asked multiple mobile experts without getting a straightforward answer. What I mean, of course, is: how efficiently can the adtech ecosystem target an ad at the right person, at the right time, in the right context, and stimulate action?

Iris accepted the challenge.

iOS ad efficiency drop under ATT

There was a significant efficiency decrease, Iris said.

But, there’s an important caveat.

“So if you look at sheer numbers, that’s like 25% efficiency versus what we had before. But prices decreased by even more than that. So again, this kind of balance … proves there’s an opportunity for smarter buyers.”

That’s super-interesting because an ad environment that is much cheaper but only 25% as efficient at pairing the right people at the right time with the right offer is going to be a significantly worse ad environment in every way besides privacy. You’ll see more ads because they’re cheaper and marketers have to show more to achieve the same results, and the ads you do see will be less relevant because marketers know less about you and can’t target as well.

Iris was specifically referencing casual games, and there are other factors at play, so don’t take this percentage as a general, across-the-board guideline. Apps that are super-popular, published by well-known brands, and appealing to a wide audience are probably much less affected. Apps that are very niche, monetize on a very small slice of their active users, and don’t have a big brand could be impacted more.

The good news: SKAN 4 will provide more marketing signal. The bad news: SKAdNetwork and App Tracking Transparency still don’t offer anything like Privacy Sandbox from Google, where there’s a privacy-safe on-device mechanism for targeting. It’s not perfect, though it’s improving, but at least it offers something to marketers beyond context, and early guesstimates of ad efficiency drops are in the 10-20% range.

Signals DSPs use to target ads

So how do DSPs manage in such a challenging environment? By using every signal they can, and by remembering yesterday in extremely accurate detail and applying it to today, Iris says.

The signals include:

  • Time
  • Date
  • Major local/global events (e.g., Super Bowl)
  • Session signals, including length of session
  • IP
  • ISP (probably derived from the IP)
  • Device type (particularly important on Android where there’s more diversity)
    • Pixel density
    • CPU cores
    • RAM
  • App descriptions (believe it or not: keep reading!)
  • Device identifiers, where available
  • And pretty much anything else possible (more is available on Android than iOS, for example)

The core of the machine learning adtech companies apply is not all that complicated and shouldn’t be over-glorified. It’s essentially applying memory of the past to the likelihood of future events.

“What we’re effectively trying to do is remember yesterday very accurately at extremely high scale,” Iris says. “The assumption of any machine learning based prediction is that reality didn’t change dramatically from yesterday.”

App store descriptions for context (much more reliable than app categories)

One of the key factors is app store descriptions, believe it or not. That’s a key source of context, largely because context in-app is a very different animal than content on the web, where pages can be spidered, consumed, and categorized.

ASO specialists have perfected the art of moving to a category in which you can be a big dog, and that’s why categories are almost useless for contextual targeting purposes.

“One thing we’re lucky about is that when you’re building your App Store page, the description kind of has to reflect what’s inside the app, otherwise the users are going to be very upset very quickly,” Iris says. “So in order to not be contaminated by ASO, we take the store descriptions themselves.”

App Store and Google Play descriptions have to accurately describe the app’s features and capabilities, so the DSP ingests all those descriptions and parses them for contextual relevance. The eventual output is a mapping of which apps are contextually relevant to each other, and therefore which apps (that are advertising) might be appealing to users in a similar or related app. That sounds simplistic, but I’m sure there are all kinds of non-linear connections that enable the DSP to know that ads in a hiking trails app shouldn’t necessarily only be for other hiking-related apps, but also accommodations, food, maps, points of interest, shoes or boots, camping gear, and so on, all with varying degrees of calculated contextual closeness.
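To make the idea concrete, here’s a minimal sketch: represent each tokenized app description as a TF-IDF weighted bag of words and measure cosine similarity between apps. This is my own toy illustration of the general technique, not the DSP’s actual pipeline, which presumably uses far richer embeddings.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector for each tokenized app description."""
    n = len(docs)
    df = Counter()                      # document frequency per word
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({w: (c / len(doc)) * math.log(n / df[w]) for w, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts of word -> weight)."""
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy descriptions: a hiking-trails app is "closer" to camping gear than to a casino.
apps = [
    ["hiking", "trails", "maps", "outdoor", "gear"],
    ["camping", "gear", "outdoor", "tents"],
    ["slots", "casino", "coins", "jackpot"],
]
vecs = tfidf_vectors(apps)
```

With vectors in hand, `cosine(vecs[0], vecs[1])` scores hiking-vs-camping well above hiking-vs-casino, which scores zero here since those descriptions share no words.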

Unicorn creatives and the testing tax: 10%

A common question from marketers around optimizing creative and offers: how much should I spend on known winners, and how much should I spend on testing for new heroes?

For Iris, it’s about 10% of your budget.

“Over time as the champion or champions become more statistically significant they get more of the weight,” says Iris. “They get, let’s say, up to 85%, 90% of the traffic. And the other remaining 10% is still left there for exploration for the option of new champions to emerge.”
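That champion/exploration split can be sketched as a simple epsilon-greedy rule. The code below is an illustration under that assumption; the real system is surely more sophisticated, with statistical-significance tests governing the shift of weight.

```python
import random

def choose_creative(stats, explore=0.10):
    """Send ~90% of traffic to the current champion (best observed CTR) and
    spread the remaining ~10% across the challengers for exploration."""
    champion = max(stats, key=lambda c: stats[c]["clicks"] / max(stats[c]["impressions"], 1))
    challengers = [c for c in stats if c != champion]
    if challengers and random.random() < explore:
        return random.choice(challengers)
    return champion

stats = {
    "A": {"impressions": 1000, "clicks": 50},   # 5% CTR -> current champion
    "B": {"impressions": 1000, "clicks": 10},
    "C": {"impressions": 1000, "clicks": 5},
}
```

Over many requests, creative A ends up with roughly 90% of the traffic while B and C keep collecting observations, so a new champion can still emerge.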

I’ve often chatted with marketers about unicorn creatives: those ads that for some reason just continue to perform month after month, even quarter after quarter. It’s a good situation: you have a great ad unit that just continues to perform and essentially print money. But it’s also super-frustrating, because you literally can’t seem to beat the ad with a better one.

Iris has seen the same thing:

“​In some cases, we can have social casino apps that have the same champion for a year. That just happens sometimes. The stars align, something about how the coins are dropping from the skies, just getting people to install, and you’ll see situations where the table of champions doesn’t really change.”

Metrics deathmatch: CTR versus CVR 

There’s an odd inverse connection between clickthrough rate and conversion rate, Iris says, that most mobile marketers have likely noticed on occasion: high CTRs equal low CVRs.

Not always, but often.

“The predictions are always diametrically opposed to one another, right?” says Iris. “So if a user has a high probability of clicking or installing, he’s gonna have a low probability of becoming a high value user in most cases. It’s very rare that the stars align and all the probabilities are high.”

Stars aligning, of course, is really nice. But it’s rare, even when you’re buying high-quality traffic from respected ad networks and supply side platforms. 

Especially when you’re trying to optimize for a low CPI.

Which is why smart marketers pay attention to CTR, but don’t give it too much weight. The key metric is CTI, the click-to-install ratio. That’s more challenging on iOS in the era of SKAdNetwork attribution, because the signal is delayed, but it’s still possible to use, Iris says.
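The funnel math behind that is simple. A sketch with made-up numbers showing why CTR alone misleads:

```python
def funnel_metrics(impressions, clicks, installs):
    """CTR (clicks per impression), CTI (installs per click),
    and IPM (installs per mille impressions)."""
    ctr = clicks / impressions if impressions else 0.0
    cti = installs / clicks if clicks else 0.0
    ipm = 1000.0 * installs / impressions if impressions else 0.0
    return ctr, cti, ipm

# Placement A: lots of (possibly accidental) clicks, few installs.
# Placement B: fewer clicks, but far more of them convert.
ctr_a, cti_a, ipm_a = funnel_metrics(100_000, 5_000, 50)
ctr_b, cti_b, ipm_b = funnel_metrics(100_000, 1_000, 100)
```

Placement A has 5x the CTR, but B’s CTI is 10x higher and it delivers double the installs per mille, which is why CTI gets the weight.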

So much more: watch, subscribe, listen

There’s so much more depth in our conversation. Watch the video above and subscribe to our YouTube channel.

Also, subscribe to our Growth Masterminds podcast on your favorite platform to get the audio interviews we do with leading experts in growth, marketing, and adtech.

Plus … a full transcript of our conversation

If you read faster than you watch or listen … here’s a full transcript of my conversation with Joseph Iris, who leads machine learning products at the DSP.

Note that it’s largely machine-generated, so may contain errors.

John Koetsier:

What actually happens when a demand-side platform engages machine learning to boost your bids? 

Hello and welcome to Growth Masterminds. My name is John Koetsier. 

Using machine learning, of course, in mobile advertising and to drive bidding is super interesting. There’s less data to drive decisions than ever before. Makes it more and more critical to use each possible piece of data you can get and use it well. So how do DSPs do it? 

Here to chat is Joseph Iris. He’s a director of ML products at a DSP. Welcome, Joseph.

Joseph Iris:

Thanks John, nice meeting you.

John Koetsier:

Great to meet you as well. 

Let’s start with the signals. What signals feed into machine learning for bidding?

Joseph Iris:

So it all starts in what we get in the requests from the exchanges, right? Obviously up until recently with everything with iOS and privacy, the device ID was like the biggest factor in it. 

But over time, as we go into a more privacy-oriented ecosystem, then that signal is becoming less and less significant and you can’t really rely on it anymore. 

So other than that, you have signals regarding the user’s connection, which ISP it’s coming from. Those things, even though they sound very … not necessarily related to the user’s actual engagement with ads, sometimes those things are useful as well. You get additional contextual signals from each exchange. 

Sometimes it, back in the day, used to even include the level of battery still left, but those things are kind of not there anymore. But basically they try to give you additional signals into what sort of mindset the user is in. One problem there is that it’s not uniform across the board; each exchange tries its own different stuff. 

So with the things that you can rely on, you can use machine learning to manufacture features around what that publisher means, its relationship with the app you’re promoting, its prior performance, and basically you can learn a lot from that. 

There’s device enrichment, which is significant. On iOS it’s not really relevant because you have a very limited list of devices, but on Android it’s insane, right? We have tons of vendors, tons of models. As time progressed, you were able to, let’s say, differentiate a high-value phone from a feature phone pretty easily, but it’s becoming more difficult as manufacturing costs decrease and you can get really strong non-brand phones. 

So with device enrichment, we take the device UA string, which is obviously not connected to anyone’s identity, so that’s going to stick around. And we can enrich it with the pixel density of the device, the number of cores, the RAM, and everything to create a profile of the device. And this way we can create different device segmentations, again, to differentiate between high-value users and also connect it to the context. 
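As an editorial aside, the device-enrichment idea Iris describes can be sketched as a scoring rule over those hardware signals. The thresholds below are invented for illustration; the real segmentation is learned from data, not hand-coded.

```python
def device_tier(pixel_density_ppi, cpu_cores, ram_gb):
    """Coarse device segmentation from enriched hardware specs
    (illustrative, hand-picked thresholds)."""
    score = 0
    score += 2 if pixel_density_ppi >= 400 else (1 if pixel_density_ppi >= 300 else 0)
    score += 2 if cpu_cores >= 8 else (1 if cpu_cores >= 4 else 0)
    score += 2 if ram_gb >= 8 else (1 if ram_gb >= 4 else 0)
    if score >= 5:
        return "high"
    return "mid" if score >= 3 else "low"
```

A flagship-grade device scores "high", a budget non-brand phone "low", with a middle band where strong non-brand hardware blurs the line, exactly the difficulty Iris mentions.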

Other than this, there are session signals that are coming from the exchanges as well because they do know where the user is placed inside the session so that is very informative in regards to the probability of clicking, installing and everything like that …

John Koetsier:

So when you say session, are you talking about how long somebody’s been in a particular app or how active they are?

Joseph Iris:

Yeah, I know that some exchanges try to do this at a multiple app level, but again, in the future it’s not gonna be that way, it’s gonna be just for the current publisher, it is what it is.

I do wanna set the tone of the usage of machine learning for our use case, right? 

Because … marketers and me in my past life tend to over glorify any tool that you would actually use to do anything. And with machine learning it’s easy because it sounds like it’s from the future and stuff like that. Especially with all the buzz around it with ChatGPT and everything. 

And a layperson, not necessarily a layperson, even technical people using this tool would be amazed by its capabilities. The way I like to describe it to prospects and people that don’t really understand the industry, like my wife, no offense, but obviously she’s not really connected, she’s a dog trainer, you know. 

There’s a big gap. So what we’re effectively trying to do is remember yesterday very accurately at extremely high scale. The assumption of any machine learning based prediction is that reality didn’t change dramatically from yesterday. And when it comes to at least our use case, there weren’t any significant breakthroughs in terms of the underlying math. Under all of this is compute, and how we can now scale training much faster; it’s like a click to set up a huge cluster of devices on the cloud or on whatever infrastructure you use. It’s all the same concepts that most people know around statistics. 

So this should really reduce the entry barrier to at least discussing it, because when you frame it that way, it’s no longer this magical black box that you can’t understand. It’s basically a tool trying to remember yesterday. That’s much more approachable, and that’s the reality.

John Koetsier:

Is there any other data that you use that is maybe contextual data or other data that isn’t necessarily confined to what you’re getting from an exchange? You know, there’s time of day, there’s seasonality, there’s other things like that. There’s also, and you mentioned this, your assumption is that today is pretty much like yesterday. 

Well, if today is a Super Bowl, today is not like yesterday, right? So are you feeding in things like that?

Joseph Iris:

Yeah, so you need to factor in really dramatic events that change stuff. You have features for that. You enrich your data with whether today is a holiday, or whether there’s a dramatic event in some market, and that way the model can expect it. 

But the way to be able to adapt to those things quickly, instead of retraining on your data every day or every hour, is to basically stream your learning, and we operate in that way. So if reality is starting to change, imagine the beginning of COVID, stuff changed, right?

So as long as your pipeline is adaptable and it understands that the latest current trends are more important, and that is always done with weights that you apply to the more recent samples, then you’re pretty much fine. 
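That recency weighting can be sketched as an exponentially decayed streaming estimate: each new sample slightly discounts everything that came before, so the estimate drifts toward current reality without full retraining. A toy illustration, not the production trainer:

```python
class StreamingRate:
    """Exponentially decayed running rate estimate: recent samples outweigh
    old ones, so the estimate adapts when reality shifts (COVID, a Super Bowl,
    a holiday) without retraining from scratch."""
    def __init__(self, half_life=10_000):
        # decay chosen so a sample's weight halves after `half_life` newer samples
        self.decay = 0.5 ** (1.0 / half_life)
        self.num = 0.0
        self.den = 0.0

    def update(self, outcome):          # outcome: 1.0 = click/install, 0.0 = not
        self.num = self.num * self.decay + outcome
        self.den = self.den * self.decay + 1.0

    @property
    def value(self):
        return self.num / self.den if self.den else 0.0
```

Feed it a stream with a 1% positive rate and the estimate settles near 0.01; shift the stream to 5% and it climbs toward 0.05, weighting the latest samples most.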

But yeah, other than this, a lot of the stuff that we do with context comes from us preparing for the future of, let’s call it, interest groups, the same way Google calls it in their plans for the Privacy Sandbox: creating cohorts based on their engagements with … with segments of categories. I can’t really call it categories, because store categories are filled with lies, and you kind of need to create your own if you really want …

John Koetsier:

They really are filled with lies.

Joseph Iris:

Yeah, that’s the ASO people. I mean, I’m a fan, don’t get me wrong, but it makes our job more difficult. We need to win.

John Koetsier:

I am not a fan. I am definitely not a fan. I’m, I’m almost like, you know, where’s, where’s Apple or Google as a dictator saying, this is your category, stick in your category. It’s like people picking new categories all the time.

Joseph Iris:

One thing we’re lucky about is that when you’re building your App Store page, the description kind of has to reflect what’s inside the app, otherwise the users are going to be very upset very quickly. You remember the days of the fake ads.

That doesn’t end well eventually, right?

John Koetsier:

Are those days over? I’m not sure they are.

Joseph Iris:

Kind of, kind of, kind of, kind of. They’re not fully over, but it’s not, you know, there were a few months where it was like basically everything. 

So in order to not be contaminated by ASO, we take the store descriptions themselves. And this is where we can actually use robust models that we don’t necessarily understand what happens under the hood, to understand the context. So when a couple years ago, you don’t know this and time goes by so quickly, I think it was even three years ago, when the whole scan conversation started, we understood, okay, it’s time to adapt to a reality where the user no longer exists.

[We created a] solution around using the store descriptors because we said, okay, you’re not going to change that into something that doesn’t make sense. It always has to consider the features that you’re offering, the theme, and what makes you different. 

One of the most important things in machine learning is this sentence, which is amazing if you think about it. It’s trash in and trash out. If you input the wrong data into your model, you’re going to get something completely useless. The technologies are all established, as I said before. There are no groundbreaking things really happening. You’re just computing faster and getting more accurate. But again, the concepts are the same.

You load up your input, you make sure it’s relevant, you help the machine learning to stay relevant, and you’re gonna get good outputs out of it. So what we did, and I’d be happy to demo it, this is around how we use context to the best of our ability. I’ll describe it first and then I’ll use stuff to show you. We take the store description of the promoted app, and we also scrape the store descriptions of all the apps in the wild where we can actually have access to the inventory. We then create a vector representation. We embed it into something mathematical that we can use. That represents its context. And then when you have two vectors, you can measure the distance between them. So this allows us to say, for example, if you’re promoting a boxing app, anything that has the words boxing or fighting is going to be very, very close, and it’s going to have a high score. 

It’s a score between zero and one. So it’s going to be really near one. 

Then you go further away, you get to other sports; further away, sports news; until you reach stuff that’s completely unrelated. So when we designed it this way, we thought about, A, not wanting to tag stuff ourselves, because that sounds like a nightmare to maintain. A lot of companies do that, not just in our industry. 

Tagging is like a huge … it’s becoming an industry. Tagging and annotation … I think that’s what we would call this whole field. 

We don’t want to do that stuff. We’re too lazy, I guess. We want something that’s automatic. So we basically build this process that constantly scrapes the stores for any changes or new apps. We feed this into this already established data set. And for each new app, we can say, okay, yeah, this is its context. Then we had a case study …

John Koetsier:

There’s probably a product right there actually, which is a new way of categorizing apps, which is just: I’m not categorizing them how they say they’re categorized, I’m categorizing them how they’re actually categorized.

Joseph Iris:

Yeah. Yeah. There’s definitely room to add the insights into the industry there. We didn’t go that far. It’s basically proprietary … so yeah, I can show you. Let me find the button to do that stuff. 

This is accessible as a website; just Google “context distance calculator” and it’s the top result, because apparently people don’t search for that. So yeah, inside the app description you have keywords, and we actually detect the keywords by their frequency. There’s a term in machine learning for managing text in natural language processing.

That’s called TF-IDF. Again, sounds crazy, but it’s very simple: term frequency, inverse document frequency. It’s basically the rarity of a word against the entire corpus. So if a word is more rare, it gets a higher weight, because it expresses the context more. So we use a model called ELMo. A lot of the models in this field are named after Muppets: it started with ELMo, then there was BERT, so it’s kind of funny. I don’t know why they use that. Because they teach words, I guess; that kind of makes sense if I think about it. 

So we use this already established model to create the embedding, and we apply the weights according to the rarity of the words, because those express the context. As you can see in this example, words that appear less often get higher weights, and this way we get an app representation and we can measure distances. 
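As an illustration of that rarity-weighted embedding idea: average word vectors, weighted by an IDF-style rarity score, so rare context-bearing words dominate the app's representation. The tiny hand-made vectors and weights below stand in for a real pretrained model like ELMo; everything here is invented for demonstration.

```python
import math

# Toy word vectors standing in for a pretrained embedding model, plus
# rarity weights standing in for IDF. All values are made up.
word_vecs = {
    "boxing":   [1.0, 0.0, 0.0],
    "fighting": [0.9, 0.1, 0.0],
    "news":     [0.0, 1.0, 0.0],
    "recipes":  [0.0, 0.0, 1.0],
}
idf = {"boxing": 3.0, "fighting": 3.0, "news": 1.0, "recipes": 2.0}

def app_vector(tokens):
    """IDF-weighted average of word vectors: rare words dominate the context."""
    total_w = 0.0
    acc = [0.0, 0.0, 0.0]
    for t in tokens:
        if t in word_vecs:
            w = idf[t]
            acc = [a + w * v for a, v in zip(acc, word_vecs[t])]
            total_w += w
    return [a / total_w for a in acc] if total_w else acc

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Here a boxing app scores much closer to a fighting app than to a cooking app, even though the boxing and cooking descriptions share the common (low-weight) word "news".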

And if I take a couple of demos that I have here, if we take Homescapes for example, you can see what it’s going to find (we show only the top 20, usually you only select the top 20 closest ones): obviously some of the direct competitors, and some stuff that uses similar features but not necessarily the exact same theme. 

If you go for dating, and I chose Bumble, I never used a dating app because I’m old, but I think that’s a popular context, right? The keywords are very upfront and they tell you exactly what they are. And when it comes to things that are more complex, this can of course be hit or miss, but the right thing to do here when you design a campaign using this sort of tool for either SKAN or probabilistic attribution is to just build your campaign structure around this and use it as a feature in the model. 

So eventually it understands, okay, if this is a close context, is it good or is it bad for my performance? Because sometimes it doesn’t necessarily have to be good, but just accordingly, a bit less, bit more. But yeah, this was our way of making it like future proof and not needing to keep doing it manually. 

John Koetsier:

It’s super interesting to hear this understanding of context in the app world. Because of course, there’s always been context on the web, right? And context on the web is pretty easy because as you’re deciding what ad to put on a page, you know a lot about that domain. You know a lot about the content on that page. It’s easily scrapable. It’s easily understandable. And so you have a lot of contextual data. 

But in-app doesn’t have those kinds of realities, doesn’t have those kinds of pages accessible to a scraper or something like that. So it’s a super interesting way to look at context and how it works. Love it. 

Talk about creative. How does creative come into your models?

Joseph Iris:

Yeah, so when we originally designed the system, we started just with A-B testing, but very quickly we understood that it’s not … I mean, you can build all sorts of automations around it to make it effective and make sure it’s statistically sound, and you have people who make careers out of it. 

And yeah, sometimes that can be the right tool, but in our use case, where everything moves really quickly, we understood we have to leverage more advanced technologies. So the way we approached it, and by the way, we look at it very differently from a UA manager: for us, the creative is a tool. For UA managers, it’s a tool as well, but there’s a lot more thought going into what you’re putting inside it. There are huge art teams.

John Koetsier:

Brand … does this look like our app … all that stuff.

Joseph Iris:

So that makes sense, right? I mean, that needs to happen. But for us, we need to focus on taking what we get and just making it the most useful tool for us to collect observations and samples faster and more effectively. 

So in order to train our models for new partners, where we kind of have a cold start problem, we have to collect samples quickly. Otherwise they’ll be like, I’m not gonna spend $50,000 exploring with you because I don’t see anything happening. So we built a solution. There’s a different, more advanced field in machine learning called reinforcement learning. That’s the field that’s used for training bots that you’d play against in video games and stuff like that, because it has mechanisms to give the machine rewards and punishments based on its actions. We use a technique with a doubly bombastic name, multi-armed bandits, which actually comes from the one-armed bandit analogy of a slot machine …

John Koetsier:

Yes …

Joseph Iris:

… because the theoretical problem they were trying to solve was which slot machine you play to increase your odds. So in effect, what it is, again, is much more simplistic. We have the capability to know each creative’s CTR and IPM in real time. It’s not simple to store this data, serve it very quickly, and update it with each new observation that you get, each new impression, click, etc. We figured that part out and we were able to scale it. 

And this way you start off without knowing anything, but as soon as you get the first signal you have a champion and you can start giving it more of your traffic. So you’re effectively A-B testing in real time with much more than two variations, and you can adjust very quickly. 

So over time as the champion or champions become more statistically significant they get more of the weight. They get, let’s say, up to 85%, 90% of the traffic. And the other remaining 10% are still left there for exploration for the option of new champions to emerge. In some cases, we can have social casino apps that have the same champion for a year. That just happens sometimes. The stars align, something about how the coins are dropping from the skies, just getting people to install, and you’ll see situations where the table of champions doesn’t really change, like number one is number one. 
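Keeping each creative's CTR and IPM current on every observation is, at toy scale, just a set of counters; the hard part Iris alludes to is serving and updating them at millions of queries per second. A minimal sketch of the bookkeeping, not the scaled system:

```python
from collections import defaultdict

class CreativeStats:
    """Per-creative counters updated on every observation, so CTR and IPM
    (installs per mille impressions) are always current."""
    def __init__(self):
        self.counts = defaultdict(lambda: {"impressions": 0, "clicks": 0, "installs": 0})

    def record(self, creative, event):
        """event is one of: 'impression', 'click', 'install'."""
        self.counts[creative][event + "s"] += 1

    def ctr(self, creative):
        c = self.counts[creative]
        return c["clicks"] / c["impressions"] if c["impressions"] else 0.0

    def ipm(self, creative):
        c = self.counts[creative]
        return 1000.0 * c["installs"] / c["impressions"] if c["impressions"] else 0.0
```

The bandit layer then reads these live rates to decide which champion gets the next slice of traffic.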

But in other cases where you introduce new, you know, the creative teams that are a bit more, how do you call it, adventurous, you can have things shifting all the time. So we had to build something that can always explore. This is a pretty robust solution as is. I mean, as far as we can tell, it really fits the use case. 

One thing that’s missing for me, ironically, based on all of the conversation so far, is context. Because this solution is designed for one champion at a given time. But of course, when you’re targeting users, you have much more than one persona. So the next iteration we’re working on, and I’m hoping to release this quarter, is basically adding context into this. So when you’re selecting the champion, you’ll have different cohorts that get different champions based on their features, and you’ll get the additional boost.

John Koetsier:

Super, super interesting. It’s funny you talked about those creatives that are just winning for like a year or something like that. I’ve called those unicorn creatives.

And I’ve seen that from marketers in the past where there’s been just this one unicorn creative they can’t beat, they just can’t beat it. They try, they’re beating their heads against a wall and they can’t win against this one creative.

It’s a good problem to have.

It means that something is just working really, really well … but it can be frustrating for marketers. It’s also interesting that using about 10% of the budget for testing is kind of the testing tax, right? You need to do that, you need to find your next unicorn creative, your next one, that works really well. 

I guess the big question is, what are the signals that mean success to you as a DSP? Is it a click? Cause that’s pretty easy to game, right? Somebody just shoots up an SKOverlay when you look at this playable ad; you didn’t do anything, and boom, you’re almost in the App Store, right? You know, and other things like that …

Joseph Iris:

Happens on Android as well.

It’s not just iOS now. It’s not just SKOverlay. Like you said, clicks are, I mean, for years, clicks haven’t been what they used to be. You know what I mean? A click doesn’t necessarily mean intent.

John Koetsier:

Did they ever?

Joseph Iris:

I mean, yeah, I know what you mean. At some period in time, they probably were. But when we came into this field, this was like after like six, seven years of tech experience already. 

But building this programmatic tool at this crazy scale, when you’re processing three million queries a second and you still have room to grow, it teaches you things quickly. Reality hits you hard … life comes at you fast, as you say … so very quickly we assumed, okay, let’s assume we got rid of all the BS in adtech, right? 

We’re not buying anything fraudulent, we’re directly integrated with all the major SSPs, you know, Unity, AppLovin, and all those good guys, right? So we said, okay, it would be enough just to get a good CPI and from there, everything’s gonna work itself out, right? 

These are real humans, the apps convert at like 5% from install to purchase, we gotta be fine. 

No. Definitely not, especially when you’re trying to optimize towards a lower CPI. 

The predictions are always, I think you say diametrically opposed to one another, right? So if a user has a high probability of clicking or installing, he’s gonna have a low probability of becoming a high value user in most cases. It’s very rare that the stars align and all the probabilities are high. 

And we were like those marketers from before, breaking our heads against the wall of why these users that are clicking are not really installing or paying. We figured out that, yeah, you can’t rely on those signals. Not [completely], because obviously they affect attribution. But you need to treat them with a grain of salt. And with all sorts of tools in machine learning, give them less weight. 

So the full string of predictions we do for a single ad request consists of predicting the auction price for bid shading. We can get into that later. That’s really interesting in and of itself. 

Predicting the probability of a click, an install given a click, a post-install event, or even an LTV or something about quality and the value it reflects for the advertiser depending on his KPI, and the probability of a view through attribution. 

But that’s a lesser part of this, because you multiply the first three and viewthrough comes in at the end. So when you’re looking at this, the things that matter most are the CTI, that conversion, the click to install, and the post-install one. So those are given significantly higher weight in anything that we consider. Everything else is a tool in order to get those targets so we can train effectively.
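That chain of predictions multiplies into an expected value per impression, which is (roughly) what gets bid. A simplified editorial sketch, with an invented margin parameter, not the DSP's actual pricing logic:

```python
def expected_value_bid(p_click, p_install_given_click, value_given_install, margin=0.8):
    """Chain the funnel predictions into an expected value per impression,
    then shade it by a margin to form a bid (illustrative only)."""
    ev = p_click * p_install_given_click * value_given_install
    return ev * margin

# e.g. 2% click probability, 25% install-given-click, $4.00 expected value
# per install -> $0.02 expected value per impression, shaded to a $0.016 bid
bid = expected_value_bid(0.02, 0.25, 4.00)
```

Because the probabilities multiply, an error in any one stage (say, an inflated click prediction from accidental taps) corrupts the whole bid, which is why the later-funnel predictions get the most careful treatment.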

John Koetsier:

It’s funny because as you’re talking about that, as you’re talking about the signals that you could potentially look at, and then the one that you really care about, this click to install, and your machine learning model is trying to compute all that in a couple hundred milliseconds.

And I’m thinking you’re boxing in the dark while you have a blindfold on, while your hands are tied behind your back.

How many other hurdles can I put in your way?

While you’re standing on a ladder over a thousand-meter fall. Because you don’t know anything about that person who’s viewing that ad, or potentially viewing that ad, because of course we’re in the era of privacy and SKAdNetwork. You just don’t know; you have to go on context. And that’s soon gonna be largely the case on Android as well, as Privacy Sandbox comes in there. 

Are you using any SKAdNetwork data? Does that impact anything that you’re doing in real time in those couple hundred milliseconds?

Joseph Iris:

So right now it’s still limited. 

John Koetsier:

So that’s a no, right?

Joseph Iris:

It’s not a no at all, because obviously we have to prepare for the future. I mean, yeah, we still live in a different reality, but we have to prepare for tomorrow, or a rainy day, however you wanna call it.

The same concept of weighting … it works in the same way now. So because those signals are not specific at all, even with SKAN 4, like the lowest level, it’s very different from anything deterministic or probabilistic where you can still tie it to a transaction. So you have to treat it with a grain of salt and build your scheme around it. And that’s what we do.

So with this new structure of SKAN 4, you do get more detailed information. So again, we weight the signal according to how specific it is. They say coarse, and what’s the other word …

John Koetsier:

Fine?

Joseph Iris:

Yeah, yeah, the fine ones. So those get higher weights; coarse ones still get some reward, but they’re not close to the significant ones. And we can use it.

This shift towards less signal was scary for us at first, but then you saw the dynamics of the market changing so rapidly. What happened first, when this whole ATT thing came into play, was that all the budgets went to Android, right? And the prices on iOS plummeted.

John Koetsier:

Yes, we know that.

Joseph Iris:

That means that even if you can’t classify the same way you could before, as long as you can still classify, you can play the game.

John Koetsier:

You know, the funny thing was that the smart money stayed on iOS. The smart money stayed on iOS because just because you couldn’t measure success didn’t mean you didn’t have success.

Joseph Iris:

It was just very scary.

John Koetsier:

So if you a) had some faith, or b) had alternate means of measurement, like MMM, media mix modeling, or other things like that, or c) figured out SKAdNetwork really, really quickly (because you can make it performant if you know how to do it and have the right tools), you had a huge advantage for a couple of months there. Maybe it even persists to a certain extent today, because there are still marketers who have stayed away from it, who still haven’t figured out SKAdNetwork.

But it’s hard for me to understand how SKAN data can make it into your machine learning models because not only is it aggregate, and therefore not tied to a specific device or anything like that … it’s also delayed … and the delays in SKAN 4 are significant. We’re talking easily 35 days in some cases.

Joseph Iris:

So with adtech and machine learning, you kind of have to be ready for delayed feedback and what’s called censored data out of the box, because that’s the way attribution works. When you have a click attribution window open for seven days and you want to train frequently, you have to have tools built in that assume the impressions you’re getting now can turn into installs later. So adjusting to that is not so difficult. Adjusting to the reality that you can’t connect an install to an impression obviously is much harder.
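The delayed-feedback handling Iris describes could be sketched roughly like this. It's a minimal, hypothetical example (the names, the exponential delay model, and the constants are assumptions, not the company's actual implementation): impressions that haven't converted yet are treated as provisional negatives whose training weight grows as they age toward the end of the attribution window.

```python
import math
from dataclasses import dataclass

ATTRIBUTION_WINDOW_DAYS = 7  # click-through attribution window

@dataclass
class Impression:
    age_days: float  # time since the impression was served
    converted: bool  # has an install been attributed yet?

def label_weight(imp: Impression, mean_delay_days: float = 1.5):
    """Return a (label, weight) pair for training.

    Converted impressions are firm positives. Unconverted ones are only
    provisional negatives: the younger the impression, the less we trust
    the negative label, because the install may still arrive inside the
    attribution window.
    """
    if imp.converted:
        return 1, 1.0
    if imp.age_days >= ATTRIBUTION_WINDOW_DAYS:
        return 0, 1.0  # window closed: a real negative
    # Probability that a conversion, had it been coming, would already
    # have been observed (simple exponential delay model).
    return 0, 1.0 - math.exp(-imp.age_days / mean_delay_days)
```

A day-old non-converter then contributes only a fraction of a full negative example, while one past the window counts in full.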

But as long as you map it to the lowest granularity you can, based on how you get the data back from Apple, it’s workable. When it was 100 campaign IDs, we had a case study with Tilting Point where we were actually able to leverage that and get better performance than normal iOS traffic at the time. But that was mostly because it was the savvy users not getting ads, not just everyone, because they knew how to opt out behind, like, eight screens inside the Settings on iOS.

So, you factor it in this way: you use weights to give higher importance where you actually know the publisher app, or the ad set and the deeper levels, and hope for the best.

Just kidding. 
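The weighting scheme Iris is gesturing at could look something like the sketch below. The signal categories and every weight value here are invented for illustration; the real system would tune these, but the shape (deterministic signal counts fully, fine SKAN postbacks less, coarse ones least, and unknown sources discounted further) matches what he describes.

```python
# Hypothetical training weights keyed to signal specificity.
# All values are illustrative only.
SIGNAL_WEIGHTS = {
    "deterministic": 1.00,  # e.g. device-level / first-party match
    "skan_fine": 0.60,      # fine-grained conversion value received
    "skan_coarse": 0.25,    # only a low/medium/high coarse value
    "skan_null": 0.05,      # postback arrived with no conversion value
}

def sample_weight(signal_type: str, source_known: bool) -> float:
    """Weight a training example by how specific its attribution
    signal is; knowing the publisher app / ad set keeps full weight,
    otherwise the example is discounted further."""
    base = SIGNAL_WEIGHTS.get(signal_type, 0.0)
    return base if source_known else base * 0.5
```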

But I mean, you still need to design the campaigns in a way where you get enough signal to keep this going. But again, the prices always adjust to your capability, to our capability, and to any performance-based buyer that actually has machine learning capabilities, because we set the tone, right?

Performance buyers are the only ones able to bid crazy-high CPMs on UA. Retargeting is a different story, but on UA we’re the only companies that can say, okay, this impression hides a hundred IPM under it somehow, right?

So, you asked before about the decrease; I can actually give you some numbers about what the impact was.

John Koetsier:

Yeah, and here’s some context, because we talked about this before we started recording.

I’ve been wondering for some time, you know, where we are in terms of ad efficiency on iOS specifically. And we’ll talk about Android in a year or two or something like that.

But you know, if our level with the IDFA, pre-ATT, was one. Right? Let’s say our efficiency was one. What is our efficiency now with SKAN 3, and maybe thinking about SKAN 4? Is it 0.5? Is it 0.3? Is it 0.7? Is it a range, depending on how well we understand SKAN and how to advertise in this reality? I’ll throw it over to you.

Joseph Iris:

So obviously there are a lot of factors that go into it, one being the prominence of the promoted app: the more popular it is, the easier it is to actually work with SKAN. But generally speaking, I can give you an example I know the numbers for, because I’ve looked at it a lot.

Casual games, of course, are a big part of our mix with our UA partners. So when we look at our ability to classify and discriminate, to bid higher and lower based on the predicted IPM or post-install event probability, the IPM range we would see for casual games, from the lowest probability to the highest, would be between basically no installs, 0 IPM, and around 20. This was pre-ATT, and this is still the reality on Android to some degree.

Post-ATT, that range decreased to 0 to 5. So if you look at sheer numbers, that’s like 25% efficiency versus what we had before. But prices decreased by even more than that. So again, this kind of balance is what proves there’s an opportunity for smarter buyers.
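The 25% figure falls straight out of those two IPM ranges, treating the spread of predicted IPM as the model's ability to tell good impressions from bad ones. A quick sanity check:

```python
# Ability to discriminate = spread of predicted IPM between the
# worst and best impressions the model can tell apart.
pre_att_spread = 20 - 0   # casual games, IDFA era
post_att_spread = 5 - 0   # same inventory under ATT / SKAN

efficiency = post_att_spread / pre_att_spread
print(f"{efficiency:.0%}")  # 25%
```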

John Koetsier:

Wow. So interesting, so interesting. 

So you think about the broader impacts of that, right? There are obviously the specific impacts in terms of adtech, in terms of publishers and advertisers: OK, we’re 25% as efficient, but our costs dropped more, so in the end we don’t really care about that.

What are the bigger environmental impacts? Ads are cheaper, so you’re going to see more ads, right? Ads are less effective, so are you going to see worse ads? We don’t have time to get into all that right now, but it’s absolutely fascinating. It’s something I’ll probably dive into in a blog post or something like that.

We have to bring this to an end. This has been super interesting and super informative, and I absolutely love the things you’ve been talking about. I want to end here: what signals are most predictive? Is it context, in terms of the app description, the app listing? Is it something else? What signals do you find are most predictive?

Joseph Iris:

So in reality, most of them by themselves are not meaningless, but they’re not enough to get you what you need.

So, the classic example, like one of the first tasks you would run as a machine learning practitioner, is to predict the price of a house. Right? That’s the classic use case. And the features for that are things like the number of rooms, how many stairs are in the house, whether there’s a basement, stuff like that. And you create an equation that says, based on each of these different things and on prior knowledge, what the price is going to be. The reality eventually comes from all of the interactions between these features.
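The toy regression Joseph is describing might look like this. The coefficients are made up purely to show the "equation over features" idea; a real model would learn them from historical sales data.

```python
def house_price(rooms: int, floors: int, has_basement: bool) -> float:
    """Toy linear model: price as a weighted sum of features.
    Coefficients are invented for illustration only."""
    base = 50_000.0
    return (base
            + 30_000 * rooms       # each room adds value
            + 10_000 * floors      # so does each storey
            + (20_000 if has_basement else 0))

print(house_price(rooms=3, floors=2, has_basement=True))  # 180000.0
```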

So at face value, let’s say a user is just at the start of a session. That means nothing. But if he’s at the start of a session, and this is a similar context to the one he’s playing right now, and it’s 8:00 PM and the Super Bowl was yesterday, for example, then at this point you have a very specific reality that can suddenly bring you that 20, 30, 40 IPM.
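That interaction effect can be illustrated with a toy scorer. All the feature names and multipliers below are invented; the point is only that each condition alone barely moves the predicted IPM, while their combination isolates the rare high-value pocket Joseph describes.

```python
# Toy illustration of why feature interactions matter in IPM prediction.
# (Feature names and multipliers are invented for illustration.)
def predicted_ipm(session_position: str, context_match: bool,
                  hour: int, day_after_big_event: bool) -> float:
    ipm = 1.0  # baseline: a roughly meaningless impression
    if session_position == "start":
        ipm *= 2.0   # fresh attention span
    if context_match:
        ipm *= 3.0   # ad context similar to the app being played
    if 19 <= hour <= 22:
        ipm *= 2.0   # evening leisure window
    if day_after_big_event:
        ipm *= 2.0   # e.g. the day after the Super Bowl
    return ipm

print(predicted_ipm("start", True, 20, True))  # all aligned: 24.0
```

Any one condition by itself only doubles or triples a tiny baseline; it is the multiplicative pile-up of all of them that produces the outlier worth bidding up for.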

It’s never a single feature by itself; it’s always a combination of a few things, and they usually come around the session. The session is very strong, as long as you know how to use it, because users’ attention spans are short. So in most contexts the beginning of a session is better, because they’re more open to learning about new things. But it has to come in conjunction with the context they’re in, their overall context, or the context of the cohort, while still taking into account things that sound meaningless, like the ISP, as I said before. Again, when you combine it with all of this and you have enough information, it can get you to this very specific case where, yeah, the stars align.

I use that sentence a lot, but you kind of need that when you’re designing. So as long as …

John Koetsier:

Essentially, what you’re saying is that machine learning and finding the right ad for the right person at the right time is basically astrology, because the stars have to align. And all the factors …

Joseph Iris:

Yeah, but I mean … I used to do it when I was younger, you know, just open the newspaper and read the predictions. And yeah, the ones you get there are very generic, right? You’re going to have a bad time, you’re going to lose something, you know …

John Koetsier:

You will meet somebody new today!

Joseph Iris:

So with adtech, yeah, in our case you need to be more specific. But kind of, yeah, because if you think about what astrologists are doing, it’s just: last week, you probably, you know, these things happened, you were frustrated. The usual things that happen in everyday life are going to happen tomorrow. So, kind of. But yeah, it is very mathematical and very scientific-method oriented.

I mean, it doesn’t matter how much signal you have: if you set up the correct tools, you clean up the data correctly, you work with the reality that you have, and you adapt quickly (which is the most important thing in the world right now, I think, with everything changing so rapidly), you can compete. And that’s why I work such long hours, because I believe we can do this even more effectively.

John Koetsier:

Excellent, excellent, excellent. Well, we’ve gone from science to astrology to fortune cookies to, you know what, do your homework and good things can happen.

John Koetsier:

Joseph, this has been super informative and also quite a bit of fun. Thank you so much.

Joseph Iris:

Thanks John, happy to be here.
