In this three-part series, Timothy Whitfield, director of technical operations at GroupM, puts four Demand Side Platforms (DSPs) to the test. Read part one here and read part two here.

After talking to each of the four DSPs this week, they have all agreed to share their identities with the public! However, they have all requested that their positions in the test remain anonymous. I fully respect that decision, and I’m thrilled that they participated: it shows strength and pride in their technologies, especially when over a dozen companies refused to even enter. The DSPs that participated in the test were as follows:

DSP head-to-head companies

Please note, once again, that this is just the list of participants, in no particular order.

Client Service Feedback

It’s important to point out that we were dealing not only with four technologies but also with four client service teams. At the end of the test it was important to understand how well my operations team worked with each DSP. The feedback below is a summary of what my operations team wrote.

  • DSP1 – We did not receive any commentary regarding campaign insights and performance, and we had no visibility into this. Throughout the test I raised several red flags regarding the low impression count, IVT % and ‘Unsafe imp %’, and initially did not receive a response.
  • DSP2 – This vendor was a pleasure to work with. The programmatic specialist was quick to dissect the brief and ask various questions in order to seek clarification on the campaign objective and test definitions.
  • DSP3 – This vendor had a ‘no fuss’ approach to campaign set-up initially – only a couple of questions were asked, and they were quick to send over pixels and confirm that ad tags were received. However, no regular reporting was sent through and we were not really across any updates, issues or results they were seeing on their end.
  • DSP4 – This vendor had close to no questions regarding the test objectives and requirements, which was a surprise considering the number that came through from the other participants. There was minimal communication during campaign set-up, and I had to follow up several times initially for pixels to be sent and creative tags to be confirmed.

De-Duplicating Acquisitions

It is important to note that this test was designed to find the lowest CPA. That means we needed to find the DSP that could provide the highest number of conversions for the lowest amount of money. There were two problems with this model: (a) what happened if somebody clicked on the ad from multiple DSPs? And (b) how could we independently evaluate the media spend?

Solving the first problem wasn’t too hard; we decided to run a log-level analysis of the conversions and “normalise” the attribution (see the sketch below). That means if a user had seen DSP1 and then DSP2 and then purchased the product, both DSP1 and DSP2 would receive even weighting towards the conversion event. Naturally, there are many finer points that we could raise here, including (1) what about clicks vs impressions, (2) what about time decay of events, (3) what was the lookback window, etc… Yuk! I really didn’t want to get bogged down in the minutiae of the test, so we agreed to use a simple, even-weighted attribution model. I appreciate that the hard-core ad tech people out there would have liked a much more rigorous model, but I hope you can see that it would have been much harder to get all the DSPs to agree to the test if the model was more complicated.
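To make the even-weighted model concrete, here is a minimal sketch in Python. It assumes each log row carries a user identifier, the DSP that served the impression and a conversion flag; the field names are hypothetical and the real pipeline obviously dealt with far messier data.

    from collections import defaultdict

    def even_weighted_attribution(events):
        # events: list of dicts with hypothetical fields user_id, dsp, converted
        exposures = defaultdict(set)   # user_id -> set of DSPs that reached that user
        converters = set()             # user_ids that went on to convert

        for event in events:
            exposures[event["user_id"]].add(event["dsp"])
            if event.get("converted"):
                converters.add(event["user_id"])

        credit = defaultdict(float)
        for user in converters:
            dsps = exposures[user]
            for dsp in dsps:
                credit[dsp] += 1.0 / len(dsps)   # even weighting, no time decay
        return dict(credit)

    # A user who saw DSP1 and then DSP2 before purchasing gives 0.5 to each.
    logs = [
        {"user_id": "u1", "dsp": "DSP1", "converted": False},
        {"user_id": "u1", "dsp": "DSP2", "converted": True},
    ]
    print(even_weighted_attribution(logs))   # {'DSP1': 0.5, 'DSP2': 0.5}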

Solving the second problem (looking at cost) turned out to be much easier than we had anticipated. We received spend trackers from the DSPs at the end of the campaign period and (within reason) each of the DSPs had spent a similar amount of money! (Naturally, that didn’t include DSP1, as they had only delivered 1.3 per cent of the impression budget.) When we backed this spend out to a CPM, it was surprising to see that the CPMs were very similar to each other. My thinking is that the Australian market has very good brand safety guidelines, and therefore most of the “junk inventory” that brings down prices was removed. That only leaves the good, premium display inventory, and the laws of supply and demand apply evenly to each of these vendors, which seemed to make the CPM prices somewhat similar.
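For anyone wanting to repeat the back-out, the arithmetic is simply delivered spend divided by delivered impressions, scaled to a thousand. The figures below are illustrative only, not the actual campaign numbers.

    def effective_cpm(spend, impressions):
        # Back delivered spend out to an effective CPM (cost per thousand impressions).
        return spend / impressions * 1000

    # Illustrative figures only – not the real campaign spend or volumes.
    print(round(effective_cpm(spend=25000.0, impressions=3_200_000), 2))   # 7.81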

Therefore, we only needed to look at the overall number of conversions! Phew!

We found some very interesting results when we looked at the log-level analysis of the conversions. We found that DSP1 had 16 conversions on one specific day right at the end of the test. We were very impressed, as it showed that their “algorithm” was obviously working. However, when we got the logs we were very surprised to find that all 16 of these conversions came from the same IP address and the same browser, all within the space of a few minutes of each other. We then cross-referenced these 16 events against the client’s sales database and found that only one of them actually turned into a sale. Therefore, we notified DSP1 that we would reduce their overall conversion count for that day from 16 to one. Once again, this showed that DSP1 seemed to be trying to game the system.
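For the curious, the check was roughly of this shape. This is a rough sketch only: it assumes each conversion event carries an IP address, a browser string and a timestamp, and uses an arbitrary five-minute window for illustration; none of these field names or thresholds are taken from the actual logs.

    from datetime import datetime, timedelta

    def collapse_duplicates(conversions, window_minutes=5):
        # conversions: list of dicts with hypothetical fields ip, browser, timestamp (datetime)
        window = timedelta(minutes=window_minutes)
        kept = []
        last_seen = {}   # (ip, browser) -> timestamp of the last event from that pair

        for event in sorted(conversions, key=lambda e: e["timestamp"]):
            key = (event["ip"], event["browser"])
            previous = last_seen.get(key)
            if previous is None or event["timestamp"] - previous > window:
                kept.append(event)           # treat as a genuinely new conversion
            last_seen[key] = event["timestamp"]
        return kept

    # 16 events from one IP/browser within a few minutes collapse to a single conversion.
    burst = [
        {"ip": "203.0.113.7", "browser": "UA-1",
         "timestamp": datetime(2017, 3, 30, 14, 0) + timedelta(seconds=20 * i)}
        for i in range(16)
    ]
    print(len(collapse_duplicates(burst)))   # 1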

The final de-duplicated conversion numbers looked like this.

From the graph above you can see that DSP1 was in the lead for the vast majority of the campaign – in fact, right up until the very last day, when DSP2 caught up. DSP3 and DSP4 seemed to battle it out and were neck and neck for the entire campaign.

It was very interesting that the improvements in conversions for DSP2 coincided with the weekly WIPs between their sales team and my operations team. I asked my operations team about this phenomenon and they simply said: “We would give them feedback about reach, frequency, brand safety and CTR, and they took all that feedback on board and guided the algorithm.” In other words: they were listening!

The Winner

At the end of the day we needed to find a winner from this test. However, before I say who “won”, I want to point out that, in my mind, all four technologies that participated were winners: they gave their time and energy to putting this test together, which resulted in stunning insights and will hopefully help a CMO who wants to launch their brand programmatically.

 

From our evaluation we feel that DSP2 won the test. DSP1 and DSP2 finished with the same number of conversions, but DSP1 had far more BAV (Brand Safety, Ad-Fraud and Viewability) concerns.

  • DSP2 won the test!

Key Takeaways

There was so much we learnt from this test that it’s hard to pick just a few specific points. However, I’ve tried to summarise it into a few tangible takeaways.

DSP head-to-head results

Conclusion

In January this year we came up with the idea of this DSP test. In February we designed the test and asked the vendors to participate. In March we ran the test. In April we collected the data and collated the results. In May we started writing the collateral and informed the vendors. In June we started publishing the results. Time to announce the winner. In this test, DSP2 was… AppNexus. There are so many people to thank, including all the DSPs, for participating; however, my big shout-out goes to Niki Banerjee from AppNexus, who was so patient and talented in setting up and optimising the campaign for this test. Well done to everybody that participated, and this now wraps up the test. All finished!
