Use Creative Clustering to Optimize Your Ad Creatives
Here at Singular, we aim to be the best at what we do: utilizing every bit of your mobile marketing data to provide meaningful campaign analytics. In this blog post, we’ll explain what creative clustering is, demonstrate several clustering methods, and show how clustering techniques can be used to improve ad performance.
Creatives are the campaign images, ad text, or videos displayed to the user when an ad is served. Marketers constantly test different visual combinations on ad networks to determine which creatives drive the best performance in terms of revenue and app engagement, and they use that data to optimize their ads.
Often the marketer will use identical images across multiple campaigns (e.g., NY_males_18–35) and ad channels (e.g., Facebook, Twitter). In these instances, marketers need to group creative performance data for images that are exact matches.
However, in some cases, marketers might also want to test campaign image “themes” and aggregate data for similar images. But “Pixel-Perfect” matching algorithms frequently fail to aggregate data under image themes due to small differences in the images.
These differences in similarly-themed images typically arise in two types of scenarios:
- Intentional changes: small tweaks to the ad group creative, such as changing the ad text language or testing a different color scheme
- Unintentional changes: changes made by the ad channel you’re marketing on. Ad channels often resize creatives or change their encoding, so the images no longer match pixel for pixel
For instance, in the example below, one Viking image underwent minor visual changes. While the marketer might want to see how Viking-themed images performed against other themes, “Pixel-Perfect” matching won’t recognize the images as belonging to the same theme:
Small differences between these two ad campaign creatives cause them to be classified under different creative themes.
In another example, images are similar, but the accompanying text below (which is part of the image) is different. One is in English and the other is in Russian:
The above cases make it clear that the marketer may sometimes want to see how a type of image, or campaign image theme, performs, rather than how one exact image performs. It doesn’t matter how the English version of the image above performs against the Russian version; what matters is how this image compares with other images. For that, we can’t use a “Pixel-Perfect” matching algorithm. Instead, we need a different kind of algorithm, called a “Perceptual Hash”.
Perceptual Image Hashing
While you are probably familiar with hash functions like MD5 or SHA1, perceptual image hash functions are quite different. MD5 and SHA1 are cryptographic hash functions, designed to exhibit the avalanche effect: changing a single bit of the input produces a drastically different, seemingly random hash:
Perceptual image hashing, by contrast, lets two similar images, even at different scales or aspect ratios, produce the same resulting hash (or a very close one). Most perceptual image hashing algorithms follow roughly the same steps:
- Reduce to thumbnail size
- Reduce color to grayscale
- Average the resulting pixels
- Calculate the hash
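The first two steps map naturally onto Pillow calls. As a sketch (assuming Pillow is installed; the helper name is illustrative), a function that produces the thumbnail-sized grayscale grid the later steps work on might look like:

```python
from PIL import Image

def to_thumbnail_grid(img: Image.Image, size: int = 8) -> list[list[int]]:
    """Steps 1-2: shrink the image and convert it to grayscale values (0-255)."""
    small = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(small.getdata())
    # Reshape the flat pixel list into a size x size grid of rows.
    return [pixels[i * size:(i + 1) * size] for i in range(size)]
```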
For this demonstration, we’ll use the Average Hash (aHash) method to calculate the perceptual image hash:
1. Reduce size. This first stage allows the following stages to run much faster: we only need to process 64 pixels instead of millions. It also “normalizes” the scale of our image to a thumbnail size. A bigger thumbnail yields a more accurate hash, so choose a size that balances accuracy against processing time.
2. Reduce color. In this stage, we convert our image to grayscale. Removing all the color enables us to process less data much faster and put more emphasis on the structural similarities in the image.
3. Calculate the average pixel color of the 64 grayscale values.
4. Calculate the final image hash. Each pixel becomes one bit: “1” if it is brighter than the average, “0” otherwise, yielding a 64-bit hash.
Now, let’s take the following modified, but similar image and run aHash on it:
As you can see, the hash changed from 0xF8F25F3C9EC4F0F8 to 0xF8F25F3C9EC4F0F0, a change of a single bit. So to compare how similar two images are, we count the number of bits that differ between the resulting hashes. This number is called the Hamming Distance: the lower the distance, the more similar the images are.
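Counting the differing bits is a one-liner: XOR the two hashes and count the set bits. A minimal sketch, using the two hash values from this example:

```python
def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Count the bits that differ between two integer hashes."""
    return bin(hash_a ^ hash_b).count("1")

# The two aHash values from the example above:
original = 0xF8F25F3C9EC4F0F8
modified = 0xF8F25F3C9EC4F0F0

print(hamming_distance(original, modified))  # -> 1
```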
This is our final aHash function:
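A minimal pure-Python sketch of steps 3 and 4, assuming the image has already been reduced to an 8×8 grid of grayscale values (steps 1 and 2 above; the function name is illustrative):

```python
def average_hash(pixels: list[list[int]]) -> int:
    """aHash steps 3-4: compare each pixel with the average brightness.

    `pixels` is assumed to be an 8x8 grid of grayscale values (0-255),
    i.e. the output of the resize and grayscale steps above.
    """
    flat = [p for row in pixels for p in row]
    average = sum(flat) / len(flat)
    # Build the 64-bit hash bit by bit: 1 if the pixel is above average, else 0.
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > average else 0)
    return bits
```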
But instead of implementing aHash ourselves, we can use the amazing imagehash Python package. With imagehash, it’s very simple to calculate the aHash:
Now, let’s look at another hashing algorithm: dHash. While aHash uses the average pixel value to calculate the hash and is very fast, it can generate a lot of false positives. dHash uses gradients instead of the average pixel value, which gives it a huge advantage: it operates at about the same speed as aHash but yields far fewer false positives. This is how dHash operates:
- Reduce Size
- Convert to grayscale
- Calculate the differences between adjacent pixels: if a pixel’s value is smaller than the pixel to its right, mark “1”, else “0”
- Calculate hash
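The steps above can be sketched in pure Python as well. This assumes the image has already been resized to a 9×8 grayscale grid (one extra column, so each row yields 8 left/right comparisons; the function name is illustrative):

```python
def difference_hash(pixels: list[list[int]]) -> int:
    """dHash steps 3-4 on a grid with one extra column per row.

    `pixels` is assumed to be a 9-wide, 8-tall grid of grayscale
    values (0-255), i.e. already resized and converted to grayscale.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            # Mark "1" when the pixel is smaller (darker) than its right neighbor.
            bits = (bits << 1) | (1 if left < right else 0)
    return bits
```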
For example, this is the result of running dHash on the above two images:
We can see that the Hamming Distance between the two hashes is once again a single bit.

Looking ahead, we have several ideas for further developing our ad optimization analytics product, such as providing Creative Clustering for videos and offering even more valuable analytics on top of this framework.
Ultimately, you can use whatever image hashing or matching method seems suitable. While there is no “one size fits all” method, several popular methods are probably going to meet your needs.
Found this interesting? Looking for a career? Singular is hiring!
If you have any questions or feedback, please contact Nir at firstname.lastname@example.org.