In this three-part series, Timothy Whitfield, director of technical operations at GroupM, puts four Demand Side Platforms (DSPs) to the test. Read part one here.

The ultimate goal of this test was to review a number of DSPs and understand which DSP was right for a specific brand launching its product in the market. The brand is called Nutrikane and sells a health product that helps maintain blood glucose levels, which can be invaluable if you are diabetic, or even if you are simply worried about keeping your glucose levels in check.

The feedback from the first article in this series was overwhelming, and I can summarise it into three roughly equal categories:

  1. Appreciating the Test – There was a large amount of feedback simply appreciating the fact that we are testing various ad tech companies. This was very pleasing, as the whole goal of the test is to educate and demystify how some of this ad tech works. Summarised by: “Keep on testing!”
  2. Feed the Algorithm – The second group of feedback confirmed that for a good DSP to work, it needs a minimum viable amount of conversion data from the brand’s website, which is used to create LAL (look-a-like) models. Summarised by: “Data is the fuel for the programmatic algorithms.”
  3. Horses for Courses – The last group of feedback confirmed that DSPs should really be classified into two different groups: firstly, Retargeting DSPs (ads which follow you around the internet) and secondly, Prospecting DSPs. Summarised by: “Select the right tech for the job.”

Getting the basics right

In the section below we start to look at the results from the test. Remember that we asked circa 16 different DSPs to participate. We felt that the ideal number to test would be a maximum of six, as it would be hard to juggle more on a single plan. Once six vendors had agreed to take part, we closed the invites and started the test. However, two of the vendors changed their minds at the very last moment. They called up just hours before the test was due to go live and said, “Sorry, we have changed our minds.” This was naturally very disappointing but, once again, underlines the nervousness about having your technology put under the microscope.

The test ran for one entire month, from 1st to 31st March 2017. Each vendor was given 3 x HTML5 digital banner ads (300×250, 728×90 and 160×600) and asked to work within the following goals/limitations.

  1. Delivery Objective – Impression Goal: We wanted to make sure that the test was as fair as possible, so we asked each of the DSPs to deliver as close to 1,000,000 impressions as possible. This extra objective was designed to ensure that they didn’t “game the system” by delivering a small number of impressions and a single acquisition, which would technically yield the lowest CPA.
  2. Optimisation Objective – Lowest CPA: The ultimate goal of the test was to drive the lowest CPA (Cost Per Acquisition) and also the highest number of conversions.
  3. Media Objective – Brand Safety: Whilst we didn’t make this a priority, we wanted to make sure that we were measuring brand safety. We were thrilled that our partners at MOAT were able to donate some free measurement for the impressions in this test.
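The interplay between the first two objectives can be sketched in a few lines of code. This is a hypothetical illustration (all spend and conversion figures are invented, not from the test) of why a CPA-only comparison is easy to game without an accompanying impression goal:

```python
# Why the impression goal matters: on CPA alone, a DSP could "win"
# by delivering very few impressions to a tiny, easy-to-convert
# audience. All figures below are invented for illustration.

def cpa(spend: float, conversions: int) -> float:
    """Cost Per Acquisition: total spend divided by conversions."""
    return spend / conversions if conversions else float("inf")

# Vendor A: full delivery to a broad audience
full_delivery = cpa(spend=10_000.0, conversions=100)  # 100.0 per acquisition

# Vendor B: tiny delivery, one cherry-picked conversion
cherry_picked = cpa(spend=50.0, conversions=1)        # 50.0 per acquisition

# Vendor B "wins" on CPA alone, which is why the ~1,000,000
# impression goal was imposed as a simultaneous second objective.
print(full_delivery, cherry_picked)
```

The point is simply that the two objectives constrain each other: a low CPA only counts if it was achieved at meaningful scale.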

Chart 1 – Delivery Objective – Impression Goal

In the above chart you can see that DSP2 and DSP3 correctly hit their impression goal of circa 1,000,000 impressions. DSP4 only managed to hit 75 per cent of the impression goal, and DSP1 delivered only circa 1.3 per cent of it.

We were very concerned that DSP1 was so far behind pacing and we conferred with them each week but they assured us that their technology was correctly configured and that “the algorithm” was optimising towards the best audience.

Chart 2 – Optimisation Objective – Lowest CPA

In the chart above we can see that DSP2 and DSP4 have a similar click-through rate, whilst DSP3 is slightly lower than its competitors. However, we can clearly see that DSP1 has a click and conversion rate several times higher than its competitors’. The average CTR in the market is circa 0.05 per cent, yet the CTR from DSP1 was 0.6 per cent. Considering the small number of impressions that DSP1 delivered, it started to feel as though something was wrong with DSP1’s targeting.
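The scale of that CTR anomaly is worth making concrete. The market-average and DSP1 figures below come from the text; the alert threshold is an invented example of the kind of sanity check a buyer might run, since a CTR many multiples above the market average is often a red flag for invalid traffic rather than brilliant targeting:

```python
# Sanity-checking the CTR anomaly described above.
MARKET_AVG_CTR = 0.0005   # ~0.05 per cent, the market average cited
dsp1_ctr = 0.006          # ~0.6 per cent, DSP1's reported CTR

multiple = dsp1_ctr / MARKET_AVG_CTR   # how many times above average
suspicious = multiple > 5              # hypothetical alert threshold

print(round(multiple), suspicious)  # 12 True
```

A 12x multiple on its own proves nothing, but it justifies exactly the kind of log-level investigation described later in the article.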

Chart 3 – Media Objective – Brand Safety

In this chart above we looked at what I call BAV:

  • B – Brand Safety – Was the ad displayed on a website that was contextually brand safe? For instance, was the text on the page about terrorism, adult content, illegal downloads etc? Ideally, we wanted to be 100 per cent brand safe, which means that we wanted to have 0 per cent unsafe impressions.
  • A – Ad-Fraud – Was the ad seen by a human who had the opportunity to purchase the product? In other words, avoiding Ad-Fraud means staying away from click farms, data centres, web robots etc. This is collectively called IVT, which stands for Invalid Traffic, and the industry average for this at the moment is about 5 per cent. Anything less than that is good.
  • V – Viewability – Did the ad have the opportunity to be seen (or not). You wouldn’t want to buy a billboard on a highway that was facing into the forest, and in the same way you don’t want to buy an impression that has no opportunity to be seen. We want the number to be as close to 100 per cent as possible. The industry average at the moment for display advertising is about 50 per cent. Anything north of that number is good.
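The three BAV thresholds above can be expressed as a simple scorecard. This is a minimal sketch, assuming the campaign report exposes an unsafe-impression rate, an IVT rate and a viewability rate as fractions; the function name, field names and example figures are all invented for illustration:

```python
# Scoring a campaign against the BAV thresholds described above:
# 0 per cent unsafe impressions, IVT below the ~5 per cent industry
# average, and viewability above the ~50 per cent display average.

INDUSTRY_IVT_AVG = 0.05          # ~5 per cent invalid traffic
INDUSTRY_VIEWABILITY_AVG = 0.50  # ~50 per cent viewable display impressions

def bav_check(unsafe_rate: float, ivt_rate: float, viewability: float) -> dict:
    """Return a pass/fail verdict for each BAV component."""
    return {
        "brand_safe": unsafe_rate == 0.0,          # ideal: zero unsafe impressions
        "low_fraud": ivt_rate < INDUSTRY_IVT_AVG,  # better than the industry average
        "viewable": viewability > INDUSTRY_VIEWABILITY_AVG,
    }

# Invented example figures for a healthy campaign:
print(bav_check(unsafe_rate=0.0, ivt_rate=0.03, viewability=0.62))
```

In practice these thresholds would be written into the contract, as the takeaways below suggest, rather than checked after the fact.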

We can clearly see that DSP2, DSP3 and DSP4 had similar levels of Brand Safety, Ad-Fraud and Viewability. It could be argued that DSP2 had slightly higher Viewability and slightly lower Ad-Fraud than the rest of the DSPs. However, we can clearly see that DSP1 had a significant amount of Ad-Fraud and also a significantly lower Brand Safety score.

Clearly, something very fishy was going on with DSP1. It delivered only 1.3 per cent of the impressions but had a massive amount of Ad-Fraud and low Brand Safety. However, when we started to look at the log-level data and de-dupe the conversions, the results were astonishing…
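For readers unfamiliar with de-duping, the idea is to keep only one log row per unique conversion, since a conversion tag can fire more than once for the same order. This is a minimal sketch; real DSP logs vary widely, and the field names here are invented, not taken from the test:

```python
# De-duplicating conversions in log-level data, assuming each row
# carries a (hypothetical) user ID and conversion/order ID.
from typing import Iterable

def dedupe_conversions(rows: Iterable[dict]) -> list:
    """Keep only the first occurrence of each conversion ID."""
    seen = set()
    unique = []
    for row in rows:
        cid = row["conversion_id"]
        if cid not in seen:
            seen.add(cid)
            unique.append(row)
    return unique

logs = [
    {"user_id": "u1", "conversion_id": "c1"},
    {"user_id": "u1", "conversion_id": "c1"},  # duplicate fire of the same tag
    {"user_id": "u2", "conversion_id": "c2"},
]
print(len(dedupe_conversions(logs)))  # 2
```

Comparing raw versus de-duped conversion counts per vendor is what makes inflated numbers visible at the log level.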

In summary

If you are a CMO thinking about using a DSP to launch a brand from scratch, then there are a number of key takeaways from these findings. I can summarise them as follows:

  • Clear Testing Objectives – It’s important with any head-to-head test to set very clear objectives. In this test we created three objectives designed to work together to find the winner. Brand Safety was mandatory, as all vendors need to keep your brand safe; the Impression Goal was very basic, but not everybody hit it; and the CPA objective really showed how the “algorithm” worked.
  • Always Measure Brand Safety – Brand Safety is “table stakes” for any programmatic buying these days. When you work with a DSP then you need to make sure that there are clauses in your contract that ensure that they hit minimum standards for Brand Safety, Ad-Fraud and Viewability.
  • Be Sceptical – All of these companies were large multinational companies and all of them are supposed experts in their fields. However, it’s good to have a small amount of healthy scepticism. As can be seen from these results, DSP4 wasn’t able to hit the impression goal and DSP1 seemed to have a problem with Ad-Fraud and IVT.

You may be wondering: “Which DSP had the lowest CPA?” and “What’s the story with DSP1, and why are its results so different to everybody else’s?” Read the conclusion here.
