The Wall Street Journal wrote in a March 1, 2019 article that the “Need for AI Ethicists Becomes Clearer as Companies Admit Tech’s Flaws”. I’m all for ethics being applied to an uncharted technological domain that could have tremendous consequences. But what’s being described sounds more like “AI business risk mitigation” than “AI ethics” to me.

The start of the article points out the difference:

“The call for artificial intelligence ethics specialists is growing louder as technology leaders publicly acknowledge that their products may be flawed and harmful to employment, privacy and human rights.

Software giants Microsoft Corp. and Salesforce.com Inc. have already hired ethicists to vet data-sorting AI algorithms for racial bias, gender bias and other unintended consequences that could result in a public relations fiasco or a legal headache.” 

So the public call for AI ethics is growing louder since AI may be violating human rights. And the response is to uncover areas where AI can cause PR or legal problems. I sense a disconnect.

I’m glad companies recognise that doing the right thing can have a positive impact on the bottom line. This is a beneficial feature of capitalism in a society with the right to protest and freedom of the press. When buyers and the public care about human values, they can hold suppliers to account.

But that’s still a bit short of ethical standards. Ethics is about good and bad, right and wrong – and the tricky work of debating what those terms mean. The set of issues that will cause PR, legal, or recruiting hassles (millennials are said to care about their employers’ ethical behaviour) doesn’t entirely overlap with what is good or bad for society.

Instead of “ethics”, the model that fits this behaviour more closely is “risk assessment”. Risk assessment weighs the potential costs of a business activity against its potential benefits and passes that analysis to business decision makers. Indeed, Gartner has predicted that by 2023, over 75 per cent of large organisations will hire specialists in AI behaviour forensics, privacy and customer trust to reduce brand and reputation risk.

How do ethics and risk assessment differ? Take these examples:

  1. An AI risk assessment could show that a morally compromised AI model is very unlikely to be unmasked and is a core foundation of an entire division’s profits (the morality of keeping everyone employed!), and is therefore worth continuing.
  2. An AI activity that is morally sound but easy for the public to misunderstand or for competitors to paint as evil; a morality-free risk assessment may show that the potential damage to reputation exceeds the revenue of the product.
  3. A use of AI whose negative impacts are so far away or so difficult to grasp that the public is unlikely to protest. For example, AI applied to “dark UX” (user interfaces designed to be addictive or to trick the user) is unlikely to create a public groundswell that could harm a vendor’s reputation. But most of the public may still consider it wrong.

These cases would draw opposite recommendations from an “AI ethicist” and an “AI risk analyst”.

Many vendors have been realistic about these positions, with job titles such as “Head of Investigations and Machine Learning, Trust and Safety” or “Compliance Analyst, Trust & Safety”. And the article quotes Microsoft’s 2018 annual report as stating “If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”

I applaud that transparency, as long as everyone understands that the loud cry for AI ethicists is really a call for someone inside the companies developing AI to act as an angel on the shoulder, not just a bean counter nearby. To the extent that the risk of reputational harm guides good behaviour, it is worth heeding. But with so much of the future of work and society (not just the vendor) at stake, there should be room for a voice of reason unbound by considerations of the visibility of bad outcomes.

*This article is reprinted from the Gartner Blog Network with permission. 
