Artificial intelligence is not safe to be let loose on customers, according to the AI lead at a global software company which uses the technology in its own platform. Pegasystems' Dr Rob Walker said AI has not yet matured to the point where it can make moral and empathetic decisions, and still risks further marginalising people.
However, AI is useful for augmenting human decisions and automating processes at scale, he said, arguing that Pegasystems' AI-powered platform and new empathy tools can help organisations do exactly that.
“We really cannot leave ethical customer engagement to AI quite yet. Not fully, not unattended [by humans],” Walker said during Pegasystems' annual user conference in Las Vegas today, where he revealed the company's forthcoming AI empathy tools, designed to provide some guardrails for AI-based decisions.
Walker showed several instances of AI-generated images, text and video, nearly impossible to distinguish from real life, to demonstrate the technology's potential, for good and for bad.
AI is particularly good at creating such images because it “mines” millions of other data points. A similar technique can be applied by businesses to automate customer interactions. But, Walker said, AI's shortcomings and its reliance on datasets which typically reflect human bias mean the technology can't yet make appropriate decisions about customers. At least not without human oversight.
Walker pointed to Google's efforts to mine its news platform to train AI on the context of words, so it could “understand” humans, as an example. The result, according to Walker, was an AI that could associate the word “sisters” with women and “brothers” with men. An impressive outcome, he said, considering it required no coding and was largely automated.
But moving beyond basic recognition quickly showed AI’s limitations and potential problems.
“If you ask it, father is to doctor as mother is to what? … It says nurse. It would also say that man is to computer programmer as woman is to homemaker.”
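The analogy test Walker describes is the classic vector-arithmetic trick used with word embeddings such as word2vec: the model answers “a is to b as c is to ?” by computing b − a + c and returning the nearest word vector. A minimal sketch of that arithmetic is below, using tiny made-up toy vectors purely for illustration (real models learn hundreds of dimensions from billions of words, and the vector values here are invented, not from any trained model):

```python
import numpy as np

# Toy 3-dimensional "embeddings" -- invented for illustration only.
# Real embeddings are learned from text; these are hand-picked so the
# analogy arithmetic below is easy to follow.
vectors = {
    "man":        np.array([1.0, 0.0, 0.2]),
    "woman":      np.array([0.0, 1.0, 0.2]),
    "doctor":     np.array([1.0, 0.1, 0.9]),
    "nurse":      np.array([0.1, 1.0, 0.9]),
    "programmer": np.array([0.9, 0.0, 0.8]),
    "homemaker":  np.array([0.0, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    # Return the remaining word whose vector lies closest to the target.
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "doctor", "woman"))  # -> nurse
```

The point of the example is that the arithmetic itself is neutral; the biased association only appears because the learned vectors encode correlations present in the training text, which is exactly Walker's argument.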
Simply throwing AI models at real-world datasets will reproduce our own biases, Walker said.
“It’s not the algorithm or the AI that is wrong. The data is biased. And any AI that is studying real world data will be biased because it’s learning from us. And we’re not always setting great examples.”
These inherent problems do not necessarily mean AI will never be successfully applied to customer interactions, Walker said. But reaching that point will require AI to be explainable and transparent, although he noted that opaque algorithms can still be valuable.
“The algorithms and their predictions need to be trusted. They need to be fair, so no bias, they need to be explainable. And not just because you want to understand the algorithms. It may also be a legal requirement like GDPR in Europe, or CCPA in California.”
Pegasystems uses AI to power its process automation and CRM tools, but it works behind the scenes, and the company does not provide more customisable AI tools like some vendors do. Today it announced a new set of tools providing customer empathy metrics, which Walker said could help companies establish an ethical framework for AI decisions.