Following on from my article on Hyper Local Geo Targeting, I felt it would be equally interesting to use scientific principles to forensically dissect another group of Ad-Tech companies that are often clumped together.
Not knowing where to start I figured that I might as well start at the beginning of Ad-Tech: Demand Side Platforms (DSP).
The goal was to understand the similarities and differences between the various DSPs. At face value they all seem very similar: they all claim (a) to have the best algorithm; (b) to have the best data scientists; and (c) to deliver the best campaign results. So the goal was to put them to the test.
The test needed to be a fair evaluation of these DSPs. There are many types of campaigns that could be tested, but in the end it was easier to test a display / branding campaign than a video or a performance campaign. Performance campaigns make results easier to measure (CTR / CPA / ROI etc…); however, they require a more complex setup, as they need a conversion tracking pixel for each vendor.
The first step was vendor selection. Each DSP that had contacted me and also had a local office in our market was invited to participate in the test. In total, 16 vendors were contacted. If you know the Ad-Tech space then you can guess who was invited. Interestingly, the sub-set of DSPs that specialise in video campaigns decided not to participate. This was surprising, as their sales collateral clearly states that they can support display campaigns. Furthermore, another sub-set of DSPs that specialise in retargeting also decided not to participate. This was just as surprising, as their sales collateral clearly states that they support branding objectives. In the end, only 6 DSPs accepted.
The next step was to find a test campaign: my friends at Multiple Sclerosis Research Australia provided me with a test creative for “Kiss Goodbye to MS“. It was important to be socially responsible and promote a good cause where possible. This campaign reached 400,000 unique users, generated over 600 clicks and delivered over 400 total exposure hours.
“MS Research Australia is really thrilled and grateful for the pro-bono support that GroupM has provided for our Kiss Goodbye to MS campaign. The ability to market our campaign using various media channels is very important to us and something we would not have been able to do without their support. This has created significant awareness about MS” – Matthew Miles, CEO, MS Research Australia
The next step was to find somebody to administer the test: Our ad-serving partner Sizmek were great as they donated free ad-serving and countless hours of their time in trafficking the campaign.
“Ask Wall Street, a journalist, or anyone within the adtech ecosystem, and they’ll tell you that trying to discern the differences between adtech vendors within a category is futile. That’s why it’s so important for agencies like GroupM to employ test campaigns to show how vendors stack up, and who ultimately provides the best results,” said Neil Nguyen, CEO of Sizmek. “We were excited to have the chance to work with GroupM in an open and transparent way to deliver the most impactful campaign possible.”
Lastly, we needed some measurements: our partners at Moat measured viewability for free, and Nielsen measured the % In-Target Demographics at no cost. Big thanks to all of them.
“Moat were thrilled to participate in this GroupM study. The team at GroupM organized a thoughtful apples-to-apples approach that allows marketers to understand differences across DSPs. This type of study can serve as the basis for understanding performance when in all cases viewability matters, demographics matter, brand safety matters, and humanity matters,” said Jonah Goodhart, Co-Founder and CEO of Moat.
The scientific objectives of the test were simple: Each DSP had a maximum of 100,000 impressions to hit three specific objectives.
- Highest Viewability Rate
- Highest In-Target Demographic % (Males 25-54)
- Maximum Unique Reach (lowest frequency)
Note: non-human, non-brand safe and non-domestic impressions were removed from the results.
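The filtering and unique-reach steps above can be sketched as follows. The impression fields (`is_human`, `brand_safe`, `country`, `user_id`) are hypothetical illustrations, not the actual Sizmek or Moat log schema, and the domestic market is assumed to be Australia.

```python
# Sketch only: filter raw impression logs down to the impressions that count,
# then derive unique reach and average frequency. Field names are hypothetical.

def countable(imp):
    """Keep only human, brand-safe, domestic impressions."""
    return imp["is_human"] and imp["brand_safe"] and imp["country"] == "AU"

def unique_reach(impressions):
    """Return (countable impressions, unique users reached)."""
    kept = [imp for imp in impressions if countable(imp)]
    users = {imp["user_id"] for imp in kept}
    return len(kept), len(users)

imps = [
    {"user_id": "a", "is_human": True,  "brand_safe": True,  "country": "AU"},
    {"user_id": "a", "is_human": True,  "brand_safe": True,  "country": "AU"},
    {"user_id": "b", "is_human": True,  "brand_safe": False, "country": "AU"},  # dropped: not brand safe
    {"user_id": "c", "is_human": False, "brand_safe": True,  "country": "AU"},  # dropped: invalid traffic
    {"user_id": "d", "is_human": True,  "brand_safe": True,  "country": "US"},  # dropped: non-domestic
]

kept, reach = unique_reach(imps)
print(kept, reach, kept / reach)  # 2 countable impressions, 1 unique user, frequency 2.0
```

The key relationship is that average frequency is simply countable impressions divided by unique reach, which is why "Maximum Unique Reach" and "lowest frequency" are the same objective.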
The results were fascinating (see table below).
| Vendor | In-View | On-Target | Freq. | Safe % | IVT % | Score |
| --- | --- | --- | --- | --- | --- | --- |
Please note that the last row in the report shows industry averages for a 300×250 display creative. They are cobbled together from various studies by Moat, Nielsen, Grapeshot and AppNexus. The only estimated number is Freq., which is the average across the test itself.
Legend – the metrics explained:
- In View – This is the % of impressions where the whole surface area of the creative was viewable for 1 second or longer.
- On Target – This is the % of the impressions that were delivered to an audience that was believed to be Male between 25 and 54 years old according to the Nielsen data-set.
- Frequency – This is the average frequency of the campaign. The lower the number the better.
- Safe % – This is the Brand Safety of the site as measured by our partners at Grapeshot.
- IVT% – This is the % of the impressions that were delivered to non-human or “invalid traffic”. The lower the number the better.
- Score – This is the final score. It represents the % of the media spend that was delivered to a viewable, brand safe, unique, human, male between 25 and 54.
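One plausible way to combine the metrics above into the final Score, assuming the individual rates are independent, is to multiply them together and divide by frequency so that only unique exposures count. This is my reading of the description above, not the author's published formula, and the vendor numbers below are made up for illustration.

```python
def composite_score(in_view, on_target, safe, ivt, freq):
    """Approximate % of media spend delivering a viewable, brand-safe,
    unique, human, in-target impression. Assumes the rates are
    independent, which is a simplification."""
    return in_view * on_target * safe * (1.0 - ivt) / freq

# Hypothetical vendor: 60% in-view, 50% on-target, 95% brand safe,
# 2% invalid traffic, average frequency of 1.5
score = composite_score(0.60, 0.50, 0.95, 0.02, 1.5)
print(f"{score:.1%}")  # 18.6%
```

Dividing by frequency penalises vendors whose frequency "blows out", which matches the note above that a lower frequency is better.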
Here are my observations for each of the vendors:
- Vendor 1 – They were very organised right from the beginning. Their attitude to this process was “yeah sure, we will just load-up the campaign for you.” They didn’t seem to break a sweat in this whole process.
- Vendor 2 – They performed amazingly well. They took this process very seriously. They manually checked the campaign daily and maximised their viewability and frequency. However, they just didn’t have as much demographic data.
- Vendor 3 – I was blown away by their service. They proactively checked, optimised and re-optimised their campaigns manually. Very high level of service.
- Vendor 4 – They had some solid results. However, they struggled with frequency. I feel this was because their “X-Device” targeting solution was not as strong as that of vendors 1 to 3. I also had some brand safety concerns due to the large % of foreign web sites on their site list.
- Vendor 5 – They did reasonably well with Viewability but once again they didn’t have enough demographic data. Their frequency also blew out which I put down to too little manual checking.
- Vendor 6 – Whilst their numbers may look low compared to the rest, please keep in mind that their viewability and demographic targeting were on par with the market average.
Before you ask: I can’t / won’t name the vendors. Please respect this 🙂
In summary: there was so much data. I have hundreds of megabytes from our partners at Sizmek and Moat; between the two of them, over 100 metrics for each combination of site and placement. However, I want to keep this article short, so here are my key takeaways for advertisers.
- DSP’s core function – When selecting a DSP think about your marketing budget in terms of Display vs. Video and Branding vs. Performance. Build your tech stack accordingly.
- Demographic Data Optimisation – It’s important for you to ask your DSP where they get their demographic data from. There are a number of vendors in market and not all of them are all they’re cracked up to be.
- Site List Optimisation – It’s important that you understand if your DSP runs on a White or Black list and for you to be involved in setting up the Brand Safety % tolerances.
- X-Device Optimisation – It’s important that you ask your DSP what technology they use for X-Device targeting, connecting Mobile and Desktop impressions.
- Manual Campaign Optimisation – If you are an advertiser it’s important to know how often your campaign will be manually checked / optimised.
Finally: please be mindful that not all technology is equal. I often hear about advertisers wanting to build their own “tech stack”, and I strongly recommend that you kick the tires hard and dig deep into the technology before selecting any Ad-Tech vendor.