Inspired by results from retargeting, marketers are scrambling to stitch together customer data, unleash machine learning and deliver personalised experiences in display and video ads, websites, apps, watches and – soon – refrigerators.
But in our rush to hypertarget, marketers ignore the perils of personalisation. Algorithms – or “algos,” as we say – can be intrusive and are prone to overuse. They build a commercial echo chamber and whittle us down to our most obvious features. Every time an optimisation is made, some data is discarded – usually the data that doesn’t fit the model, which is exactly what makes us unique.
Our algo addiction is a hidden threat to advertising’s function as a catalyst of discovery.
Algos are scripts that use rules to make decisions and improve their own performance over time. Marketers have employed them for years. Recommender systems use them to suggest the items we’re most likely to watch or buy. Personalisation engines assemble web pages from the articles, videos and images we’re most likely to engage with. There are algorithms that write emails and uppity bots that want to fix it all.
They are already turning against us. For example, retargeting tech is not good at knowing when you buy an item or lose interest. Obnoxious ads contribute to an ad rejection culture that inspires ad blocking. GroupM calls this “repetitive irrelevance” and notes the irony that the “tracking and targeting intended to make advertising welcome makes it a nuisance.”
Robots On The Rampage
In February, the Pew Research Center issued a report that canvassed dozens of algo experts in many fields, including marketing. While praising the potential for machines to automate tasks and improve decisions, these experts also raised disturbing spectres, such as corporate power creep and robot-driven bread lines.
Even rejecting dark fantasies, we have two reasons for caution. First, the techniques used to build algorithms, such as those used for ad targeting or product recommender systems, have their limits. Second, the human brain is biased in ways that can conspire with these algos to make marketing worse.
Algo perils: Optimisation algos do the best they can with the data they’ve got – but that’s all they’ve got. The limited information available forms what data scientists call a sparse array: a grid of customers and items that is mostly empty. For example, a product recommender may know a few things about you, such as the items you’ve bought, some demographics and your location. While trying to predict what you’ll buy next, it looks for correlations between you and other customers, between your items and other items, or both.
Common techniques applied here are principal component analysis, singular value decomposition and matrix factorisation. These are forms of dimensionality reduction, which – as the name implies – are used to find the strongest signals in a lot of noise. To oversimplify, these methods reduce all customers and items (or ads) down to their most common features and ignore data that don’t fit.
This method of matchmaking is like trying to find a spouse by locating a bunch of guys your age who live in your city, averaging their girlfriends into one woman and marrying her. It might make for an OK date, but it’s no way to find true love.
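To make the oversimplification concrete, here is a minimal sketch of an SVD-based recommender in Python. The ratings matrix, the choice of two latent features and the customer index are all invented for illustration; real systems work at vastly larger scale, but the mechanics are the same:

```python
import numpy as np

# Hypothetical customer-item ratings: rows are customers, columns are
# products; zeros mark items a customer hasn't bought or rated -- the
# "sparse array" described above.
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 0.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

# Truncated singular value decomposition: keep only the k strongest
# "signals" (latent features) and discard the rest -- the step that
# throws away data that doesn't fit the dominant pattern.
k = 2
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The model fills in a customer's zeros with scores inferred from
# similar customers, then recommends the highest-scoring unseen item.
user = 1
unseen = np.where(ratings[user] == 0)[0]
best = unseen[np.argmax(approx[user, unseen])]
```

Everything the rank-2 approximation `approx` cannot express about a customer – the quirky, off-pattern purchases – is simply gone.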
Brains Out Of Balance
People have won Nobel Prizes showing us how biased our brains are. For programmatic marketers, the primary perils are self-seeking and herd behaviour.
Self-seeking: It turns out that we humans also use algos to make decisions. We perform a kind of singular value decomposition on life. Technology makes this worse by giving us more ways to co-curate reality. Eli Pariser, the former executive director of MoveOn.org, argued in his popular book and TED talk, “The Filter Bubble,” that personalisation hurts us:
“The filter bubble has dramatically changed the informational physics that determines which ideas we come in contact with.”
We have a tendency to like things we agree with and ignore things we don’t. This is confirmation bias. We show it in our social networks by “unfriending” people during elections and in our online news reading by ignoring sources we don’t like. In doing so, we train content targeting algos to reinforce our first prejudice.
In programmatic terms, we give digital signals from our comfort zone that label us as the Brooks Brothers man or luxury two-seater millennial. For a minute, these labels improve ad response. But over time, putting people into audiences flattens them and they lose their impact. The result is a narrowing of our digital marketing experience that makes it less interesting.
It’s already happened. If you don’t believe me, borrow someone else’s non-ad-blocked browser. Look at the ads. You will see some you’ve never seen before and won’t miss the usual suspects you’ve already tuned out. You will notice someone else’s ads more because they’re fresh.
Herd behaviour: When we’re not sure what to buy, we look for popularity. There is nothing wrong with herd behaviour, but once again, algos are exploiting it to death. They turn a herd into a stampede. Studies show that item and content voting tools quickly converge on a few top items, silencing the rest. Other studies show that the digital experience itself makes people lazy, focusing attention on fewer things over time.
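The convergence those studies describe can be sketched as a simple rich-get-richer simulation – a toy Pólya-urn model, not the studies’ actual method, with item counts and vote totals chosen arbitrarily:

```python
import random

def popularity_feedback(n_items=5, n_votes=2000, seed=7):
    """Simulate a 'most popular' voting widget: each new vote goes to
    an item with probability proportional to the votes it already has,
    so early leaders attract ever more attention."""
    rng = random.Random(seed)
    votes = [1] * n_items              # every item starts equal
    for _ in range(n_votes):
        # weighted choice: popular items pull in more new votes
        winner = rng.choices(range(n_items), weights=votes)[0]
        votes[winner] += 1
    return votes
```

Run it a few times with different seeds and a handful of items routinely soak up most of the votes – the stampede, in miniature.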
The Serendipity Maker
What’s also lost in algorithms is serendipity. The term was coined by Horace Walpole in 1754, after a fairy tale in which the three princes of Serendip keep stumbling on things they weren’t seeking. Using models that learn from what we’ve done before – and only what we’ve done before – recommender systems, personalisation engines and ad targeting tools are the coded opposite of serendipity.
Yet good marketing is the art of discovery. It is supposed to capture our attention. Repeating messages we’ve already seen and recommending things because they’re popular might be logical, but it hardly fulfils David Ogilvy’s definition of a good ad: “I want you to find it so interesting that you buy the product.”
What can we do? Amid the original surge of panic around filter bubbles in 2010, a programmer at The Guardian created a simple “serendipity maker” for news. It pulled stories at random from a number of sources and presented them in a feed. Adding some randomness back into our marketing models would be a good place to start.
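Adding that randomness back can be a one-line change to a recommender’s selection step. A minimal epsilon-greedy sketch – the item names and the 20% exploration rate are arbitrary, and this is one of several ways to inject serendipity:

```python
import random

def recommend(user_scores, catalogue, epsilon=0.2, rng=random):
    """Pick an item for a user: usually the model's top-scoring pick,
    but with probability `epsilon` pick a random item -- restoring a
    little of the serendipity that pure optimisation removes."""
    if rng.random() < epsilon:
        return rng.choice(catalogue)   # explore: something unexpected
    # exploit: the item the model scores highest (unscored items = 0)
    return max(catalogue, key=lambda item: user_scores.get(item, 0.0))
```

With `epsilon=0.0` this behaves like any other optimiser; nudge it upward and fresh items start slipping through the filter.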
Humans are not machines, and the customer is always right. Too often, the algorithm gets that wrong.
*This article is reprinted from the Gartner Blog Network with permission.