Advertising technology belonging to tech giants Google and Facebook is fuelling the global spread of COVID-19 misinformation, despite both companies’ efforts to stem the fraud.

And the problem again exposes the great weakness of self-regulation: the platforms themselves are among the beneficiaries of the commercial malfeasance, albeit inadvertently, because their algorithms optimise for engagement.

On 11 March the World Health Organisation declared COVID-19 a pandemic: “a crisis that will touch every sector”. As of 5 April, there were 1.2 million confirmed cases of coronavirus and over 64,000 deaths worldwide, although Australia has so far avoided the worst-case outcomes emerging in the US, UK, Italy and Spain.

More than a month before it declared a pandemic, the WHO had already warned that information associated with the virus would cause an “infodemic”.

Since then the amount of COVID-19 related information spreading online has eclipsed that of any other event.

“I’ve never seen anything like it,” says Associate Professor Adam Dunn, Head of Discipline for Biomedical Informatics and Digital Health in the University of Sydney’s Faculty of Medicine and Health, and an expert in digital health research.

Dunn and his colleagues use social media data and machine learning to monitor what people are exposed to online and how the information impacts their behaviour. His recent work has focused on the effect of the online “anti-vax” movement, where groups have sought to discredit the safety and importance of vaccinations.

“When we do that, when we collect those data on Twitter, we have no problem at all. We can do that very easily within the free API endpoints that we have to collect the tweets.”

But the COVID-19 crisis is different. Dunn says neither he nor his global peers have been able to keep up with the sheer volume of tweets about the virus.

Dunn’s analysis of COVID-19 keywords last week showed between 11 and 13 million related tweets every day. But he stresses that’s a conservative estimate based on select keywords on one platform.

“It’s probably ten times what we’re catching now,” Dunn told Which-50.
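
For context on what that collection pipeline looks like: keyword-based tweet gathering of the kind Dunn describes can be run against Twitter’s free standard search API. The sketch below is a hypothetical illustration using the tweepy library (3.x); the query, credentials and tooling are assumptions for illustration, not a description of Dunn’s actual setup.

```python
import tweepy

# Placeholder credentials; register an app at developer.twitter.com for real ones.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)  # pause when rate-limited

# Illustrative keyword query; real studies track much larger keyword sets.
query = '"covid-19" OR covid19 OR coronavirus'

# Pull a sample of recent English-language tweets matching the keywords.
for tweet in tweepy.Cursor(api.search, q=query, lang="en",
                           tweet_mode="extended").items(1000):
    print(tweet.created_at, tweet.full_text[:80])
```

Free-tier rate limits and keyword matching both undercount the real conversation, which is one reason Dunn calls his 11 to 13 million figure conservative.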

Facebook is even harder to monitor, according to Dunn, because it is a much more balkanised platform and the company is wary of sharing data with researchers since the Cambridge Analytica scandal.

Dunn says it could take years for researchers to get a handle on the flow of information online, including the proportion of misinformation. But it’s already at dangerous levels.

“We’ve seen misinformation spreading constantly about alternative cures [and] all sorts of things that can affect people’s health behaviours that are just wrong.” 

The misinformation is coming from the usual places. “You won’t be surprised to know that Alex Jones was trying to sell toothpaste that was going to cure or prevent coronavirus,” Dunn says. 

But an increasingly connected and concerned global population is helping it spread like wildfire, and there’s no shortage of groups taking advantage.

“People are opportunistic,” Dunn says, “And if the public health crisis has shown us anything, it’s that plenty of people are very agile in how quickly they can repurpose what they’re doing to make some more money or to gain reputation or to advance their careers in some form or another.”

And others can be forgiven for taking the bait.

“It’s much harder [to ignore] if you’re currently fearful, anxious — you feel like a loss of control. And you see an advert is about something that you care about, which is COVID-19. And you’re much more likely to click on it.”

Overtime

Both Facebook and Google insist they are working overtime to stem false information on their platforms. The companies are actively removing false or dangerous information regarding COVID-19 and are working with the WHO to update the approach as advice changes.

Google, Facebook and Twitter are also offering free advertisements to health authorities and promoting verified authoritative sources alongside COVID-19 content and at the top of search results.

Sally Hubbard, Director of Enforcement Strategy at Open Markets, an anti-monopoly US think tank, argues that, faced with a global health crisis, the platforms are scrambling to wind back a business model built on keeping people online and serving them whatever information keeps them there.

“When it comes to disinformation, most people think Facebook and YouTube’s main challenge is they can’t take millions of posts down quickly enough,” Hubbard tells Which-50.

“But both platforms actually boost disinformation. Their algorithms prioritise ‘engagement’ — clicks, likes, comments and shares — to keep users on their platforms longer. Content that provokes fear and anger ‘engages’ humans the most and disinformation agents use this to their advantage.

“The more time people spend on Facebook and YouTube’s platforms, the more data the platforms collect, the more ads they show and the more money they make. Giving anger and fear-inducing disinformation top priority best serves Facebook and YouTube’s business models because ‘engagement’ makes them the most money.

“Their amplification of disinformation is not an inevitability of the internet or human nature. It’s just a business decision.”

Off Platform

The platforms are also facing renewed criticism about the use of their advertising technologies outside their walled gardens.

Dr Augustine Fou, a cybersecurity and anti-ad-fraud consultant, says Facebook and Google’s advertising technology is being exploited as part of a multibillion-dollar adtech market. 

Fou describes the technology as the “pipes” or the “rail tracks” between a web site and advertisers, allowing online ads to be bought and sold in a fraction of a second. Google dominates the market, but Facebook also operates in what he warns is an incredibly opaque system regularly exploited by bad actors.
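
To make those “pipes” concrete: when a page loads, each ad slot is auctioned through a real-time bid request that travels from the site, via exchanges, to potential buyers. The snippet below builds a heavily simplified request, loosely modelled on the OpenRTB 2.x protocol that underpins much of programmatic buying; every value is invented for illustration.

```python
import json
import uuid

# A heavily simplified bid request, loosely modelled on OpenRTB 2.x.
# All values are invented; real requests carry far more detail.
bid_request = {
    "id": str(uuid.uuid4()),              # unique auction ID
    "imp": [{                             # the ad slot being auctioned
        "id": "1",
        "banner": {"w": 300, "h": 250},   # slot dimensions in pixels
        "bidfloor": 0.25,                 # minimum price (USD CPM)
    }],
    "site": {                             # self-declared by the seller
        "domain": "example-news-site.com",
        "page": "https://example-news-site.com/article",
    },
    "device": {"ua": "Mozilla/5.0 (example)", "ip": "203.0.113.7"},
}

print(json.dumps(bid_request, indent=2))
```

Crucially, the site identity in the request is declared by the seller’s side of the pipe, which is part of why buyers often cannot verify where their ads actually ran.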

“For those intent on spreading this disinformation, they can set up web sites, add some adtech code and basically make money overnight. So because they’re able to make money [with advertising] they’re able to continue doing what they’re doing and not just survive, but actually proliferate and grow.”

This type of fraud is well established and well known in the industry, Fou says, but there’s little incentive for the adtechs, including Google and Facebook, to stamp it out. 

“I think they know about it, I think they want to do the right thing,” Fou tells Which-50. “But it would represent such an enormous hit on their revenues that unless someone else tells them to, or forces them to, they’re not going to do it.”

Asked how it proactively stops the misuse of its advertising technology by misinformation sites, Google declined to comment and instead pointed to the steps it is taking within its own platforms.

Facebook Australia Head of Communications, Antonia Sanda, told Which-50 the company has strict policies for who accesses its adtech services and will remove sites or apps which severely or repeatedly violate them. Sanda said every ad impression is analysed for patterns of abuse.

Fou says the automation of buying and selling online ads — programmatic advertising — has been a key driver in the growth of fake news and COVID-19 is the latest opportunity for bad actors to cash in.

The Global Disinformation Index, a UK government-backed non-profit tracking disinformation sites, says fake news is big business, especially in a pandemic.

“Globally, the GDI has estimated that nearly a quarter-billion dollars is generated in ad revenue each year by disinformation news sites,” the group’s Programme Director, Craig Fagan, told Which-50.

“The whole reason this can happen is that the online adtech system is notoriously opaque. Advertisers often don’t know where their ads end up.”

Currently, many ads from reputable brands are ending up on known misinformation sites, according to GDI, which places much of the blame on the advertising intermediaries, or adtechs, because they facilitate the transactions.

“All the decisions about which ad we see take place in milliseconds when we load a page. We found in our research that Google provides the lion’s share of ad services to disinformation sites that we have sampled. In the case of the coronavirus conspiracy sites, Google provided ad services to 86 per cent of the nearly 50 sites that we looked at.”
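
GDI has not published its scanning methodology, but attributing ad services to a site can be approximated by fetching its pages and checking the HTML for known vendor tags. A minimal, hypothetical sketch:

```python
import requests

# Illustrative vendor signatures only; not GDI's actual methodology.
ADTECH_SIGNATURES = {
    "Google": ("googlesyndication.com", "doubleclick.net", "adsbygoogle"),
    "Facebook": ("connect.facebook.net", "fbevents.js"),
    "Amazon": ("amazon-adsystem.com",),
}

def detect_ad_services(url):
    """Fetch a page and report which vendors' tags appear in its HTML."""
    html = requests.get(url, timeout=10).text
    return [vendor for vendor, needles in ADTECH_SIGNATURES.items()
            if any(needle in html for needle in needles)]

# e.g. detect_ad_services("https://example-conspiracy-site.com")
```

A real survey would render JavaScript, follow the ad calls and sample many pages per site, but the principle is the same.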

According to Fagan, it’s harder to track Facebook’s role because of its walled garden model, but it’s clear the world’s biggest platform is being used to amplify false information. 

“What we do know is that the sites that are pumping out disinformation use Facebook and other platforms to circulate their narratives and generate increased traffic to their sites. We are able to track these link shares from sites and platforms.”

Fixing fake news

While Fou and Fagan are critical of the adtechs for providing the systems that allow online fraud and fake news to proliferate, they say other players have a responsibility to clean up the ecosystem too.

Fou urges brands and advertisers to simplify the process and test a scenario that effectively removes the advertising middlemen.

“[Advertisers] know which are the big magazines, which are the big news sites. Just buy something direct from them. So that alone will mean that your ads go to those publishers and your budgets go to fund them so they can stay alive.”

The ad buyers could also be confident their advertisements would not be helping to fund misinformation or ad fraud, according to Fou.

GDI’s Fagan says advertisers and brands are slowly catching on to the disinformation risks created by their inventory buys.

“As a result, a growing number of advertisers are now going to direct ad placements on many sites to cut out the middle. This is also happening on the part of respected news outlets that operate online. Many are going with direct sales to secure more value and ensure a clear ad revenue stream.

“This is one of the first ways to force the system to change. But more is needed if we are going to disrupt the disinformation ecosystem that has been built.”

‘Prebunking’

As for stopping the spread of false information on social media, the University of Sydney’s Dunn argues the platforms should be doing more than what he describes as the “band-aid” fixes of adding health authorities to the top of search results.

“The platforms themselves need to be able to do a better job of identifying things and removing them. And I don’t think the solution is necessarily just a tech solution. I think it’s a combination of tech and human resources. Which is hard right now [because of COVID-19 lockdowns].”

Perhaps more importantly, Dunn says, people need to be better educated on spotting and assessing misinformation, as well as the risks of passing it on.

Dunn’s research group is working on a “prebunking” project: a way to flag potential misinformation as users encounter it online, using tools such as browser extensions. The idea, Dunn explains, is to make people think twice about where information comes from before sharing it with friends and family.

“The idea is that it doesn’t stop them from advertising [misinformation], it doesn’t stop the flow of money. It doesn’t stop any of those things. What it does is it helps people to avoid passing it on.

“It stops that spread, just like you would with social isolation.”
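
Dunn’s tool is still in development, but the core of any prebunking check is straightforward: intercept a link before it is shared and compare it against a watchlist. The sketch below shows that logic in Python; the domain list and wording are invented, and a real browser extension would implement the check in JavaScript against curated fact-checking feeds.

```python
from urllib.parse import urlparse

# Invented watchlist; a real tool would draw on curated fact-checking feeds.
FLAGGED_DOMAINS = {"example-miraclecures.com", "example-conspiracy.net"}

def prebunk_warning(link):
    """Return a 'think before you share' prompt for links on the watchlist."""
    domain = urlparse(link).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in FLAGGED_DOMAINS:
        return ("This site has previously published health misinformation. "
                "Consider checking the claim before sharing.")
    return None

print(prebunk_warning("https://www.example-miraclecures.com/cure"))
```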
