In 2004 NASA scientists and engineers embarked on a two-year journey to develop artificial intelligence software that could design a satellite antenna.

They came up with an algorithm modelled on Darwinian evolution that takes specific design parameters, such as size and frequency bands, and generates random antenna designs. These are tested, and the best ones are used to “breed” new designs.

NASA's AI-designed satellite antenna.

“The process of designing antennas using AI involves telling the AI algorithm the ‘what’ and letting it figure out the ‘how’,” says Jason Lohn, a former NASA and Google engineer, who led the team that developed three AI-designed antennas which were sent into space aboard NASA’s Space Technology 5 mission in 2006.

The evolution process repeats itself until it produces a design that solves the problem at hand, Lohn tells Which-50.
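In outline, that loop follows a simple recipe: generate random candidates, score them, keep the best and breed replacements from them. The sketch below illustrates the idea in Python; the wire-segment encoding and the fitness score are placeholders for illustration, not NASA's actual antenna model.

```python
import random

SEGMENTS = 6  # each candidate antenna is a list of (length, bend angle) wire segments

def random_design():
    return [(random.uniform(1, 10), random.uniform(-90, 90)) for _ in range(SEGMENTS)]

def fitness(design):
    # Placeholder score: a real run would call an antenna simulator and rate
    # gain and bandwidth against the required size and frequency bands.
    return -abs(sum(length for length, _ in design) - 30)

def breed(parent_a, parent_b):
    # Crossover: take each segment from one parent or the other, then mutate occasionally.
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]
    if random.random() < 0.2:
        i = random.randrange(SEGMENTS)
        length, angle = child[i]
        child[i] = (length + random.uniform(-0.5, 0.5), angle + random.uniform(-5, 5))
    return child

population = [random_design() for _ in range(100)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]        # the best designs survive
    offspring = [breed(random.choice(parents), random.choice(parents)) for _ in range(80)]
    population = parents + offspring  # and are used to "breed" the next generation

best = max(population, key=fitness)
print(round(fitness(best), 3))
```

In a real run the loop would stop once a design met the mission's specification rather than after a fixed number of generations.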

“The evolved antennas we developed are largely unintelligible to human engineers; they have strange shapes that one cannot find in textbooks or handbooks. So, in a sense their ‘reasoning’ or how they work is opaque,” Lohn says.

As artificial intelligence (systems that change their behaviour based on the data they collect, rather than on explicit programming) becomes more complex, so does the challenge of understanding precisely how these systems reach their conclusions.

Powerful black-box algorithms give rise to a new ethical and regulatory dilemma for businesses: as AI increasingly embeds itself within enterprise software, where is opaque AI safe to use?

“Because antennas can be exhaustively tested — every possible situation the antenna might produce or find itself in can be examined in advance — it is extremely safe to deploy such antennas,” Lohn says.

“The lesson for the CIO, for instance, is that if s/he can sufficiently test the software, deploying the opaque software should be acceptable.”
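Lohn's test-before-you-deploy advice is easiest to picture when the input space is small enough to enumerate. Below is a minimal sketch of that idea, assuming a hypothetical black-box predict function and a finite set of input values to sweep.

```python
from itertools import product

def exhaustively_verify(predict, input_values, is_acceptable):
    """Run a black-box model against every possible input combination.

    predict       -- the opaque function under test (hypothetical)
    input_values  -- dict mapping each input name to its finite set of values
    is_acceptable -- the specification: True if the output is safe for that input
    """
    names = list(input_values)
    failures = []
    for combo in product(*(input_values[n] for n in names)):
        case = dict(zip(names, combo))
        if not is_acceptable(case, predict(case)):
            failures.append(case)
    return failures

# Toy example: two discrete inputs; deploy only if no case fails the specification.
values = {"mode": ["idle", "active"], "level": range(0, 11)}
failures = exhaustively_verify(lambda case: "ok", values, lambda case, out: out == "ok")
print("safe to deploy" if not failures else f"{len(failures)} failing cases")
```

The same logic breaks down when inputs are continuous or high-dimensional, which is exactly why opaque models are harder to certify in open-ended domains.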

Opaque AI, such as deep learning algorithms, neural networks, ensemble models and genetic algorithms, cannot intrinsically explain itself. In other words, there is no ‘why’ to its madness, explains Dr Rob Walker, Vice President of Decision Management and Analytics at Pegasystems.

Walker, whose remit includes Pegasystems’ AI products, is quick to point out there’s no moral judgement behind opaque AI; it isn’t inherently good or bad, but it is risky.

That’s because you don’t know what attitudes or biases it may develop.

“It’s important to recognise that there are real benefits to using opaque AI, in that it can be more powerful and work as a better solution in the right circumstances,” Walker tells Which-50.

“Being transparent is a constraint on an algorithm and somewhat limits its power. So there can be an incentive or even requirement to use opaque AI, but at the cost of not understanding its decisions or predictions.”

In highly regulated industries such as finance and healthcare, opaque AI systems quickly become problematic.

“One of the biggest risks relates to compliance. It’s likely we’re going to see more regulations like [Europe’s] GDPR which are more general in their nature and which will limit AI effectiveness by demanding transparency,” Walker says.

“Besides compliance, businesses are at risk of losing control of what AI does if they don’t understand how it’s doing things — even if it does them well.”

For example, a bank could find its AI automatically making credit risk decisions based on factors such as race or gender, which will likely cause unintended problems. Nor can these systems be controlled simply through the data they ingest: even without being told a subject’s gender, an opaque algorithm may still infer it with a high degree of accuracy from the other data points.
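That proxy effect can be measured directly: train a simple model to predict the withheld attribute from the remaining features, and if it succeeds well above chance, an opaque credit model trained on the same features can effectively "see" that attribute. A rough sketch using scikit-learn, with a hypothetical loan-application dataset and invented column names:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical dataset of past loan applications; file and column names are invented.
data = pd.read_csv("applications.csv")
features = pd.get_dummies(data.drop(columns=["gender", "approved"]))  # gender deliberately withheld
target = (data["gender"] == "female").astype(int)  # encode as 0/1 so the AUC scorer works

# An AUC near 0.5 means the remaining features carry little information about gender;
# an AUC well above 0.5 means they act as a proxy an opaque credit model could exploit.
proxy_auc = cross_val_score(LogisticRegression(max_iter=1000),
                            features, target,
                            cv=5, scoring="roc_auc").mean()
print(f"gender proxy AUC from the 'neutral' features: {proxy_auc:.2f}")
```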

Walker agrees with Lohn on the importance of exhaustively testing opaque AI applications before they go into production. In the world of customer experience, that means testing not only business outcomes and technical performance, but also whatever unintended bias the system may have developed.
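In practice, that extra check can be as simple as comparing outcome rates across groups on a held-out set before go-live. A minimal, framework-free sketch of such a check (the predictions and group labels here are made up):

```python
def positive_rate_by_group(predictions, groups):
    """Share of positive outcomes (e.g. approvals) per group on held-out data."""
    counts = {}
    for pred, group in zip(predictions, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (pred == 1), total + 1)
    return {g: positives / total for g, (positives, total) in counts.items()}

# Hypothetical held-out predictions from the opaque model, plus a group label per case.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["group_a", "group_a", "group_a", "group_a",
          "group_b", "group_b", "group_b", "group_b"]

rates = positive_rate_by_group(preds, groups)
print(rates)                                               # {'group_a': 0.75, 'group_b': 0.25}
print("gap:", max(rates.values()) - min(rates.values()))   # a large gap warrants review before production
```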

“[Opaque AI usage] will depend on the industry and whether or not it can afford the price of transparency. If an opaque AI, like magic, would prove vastly superior to human doctors in diagnosing a medical condition, the price of transparency may be paid in lives,” Walker says.

Ethical questions

In some areas, transparency will be critical or even mandatory, while in other uses, such as recommending products or suggesting what to watch or listen to, opaque AI may be permissible.

“If an algorithm can help direct us home or serve us advertising content, that’s great. But as soon as AI is deciding whether we are approved for a home loan or life insurance, or driving our car, we as a society expect transparency as to how an AI platform makes the decisions that it makes,” Michael O’Keefe, IoT Lead at Microsoft Australia, tells Which-50.

The risk is that we will not necessarily know if the outputs and outcomes adhere to established guidelines or even the laws of the country they operate in.

“As an example, in the US basic machine learning techniques are already being used in courts. If these were to leverage opaque AI systems to make decisions, it may be seen as unfair for justice to be handed down without society as a whole understanding how or why,” O’Keefe says.

“In industries like health, utilities, finance and government, it will be complex to implement opaque AI because constituents and customers will have high expectations of trust in the organisation.”

Microsoft recently formed the Microsoft Research AI team, which will address the most difficult challenges in AI, including establishing an ethical AI design guide for Microsoft developers.

Transparent AI

IBM takes a firm stance on transparent AI. In a letter sent to the US Congress in June 2017, David Kenny, SVP of IBM Watson and Cloud Platform, argued that companies must be able to explain what went into their algorithm’s decision-making process and that citizens have a right to understand how AI technologies work.

“We must know how an AI system comes to one conclusion over another. People have a right to ask how an intelligent system suggests certain decisions and not others, especially when the technology is being applied across industries such as healthcare, banking, and cybersecurity. Our industry has the responsibility to answer,” Kenny wrote.

“We believe that companies must be able to explain what went into their algorithm’s decision-making process. If they can’t, then their systems shouldn’t be on the market,” Francesca Rossi, Distinguished Research Staff Member at IBM Research, tells Which-50.

At the core of the argument is the need for trust, which will be key to the adoption and efficacy of AI technologies.

“Without transparency into how AI systems make decisions, we will be less inclined to trust their output – and trust in AI is critical for mass adoption,” Rossi says.

That trust will be earned through repeated experience of the cognitive system interacting naturally with humans and explaining its behaviour. Rossi argues that explanation capabilities are essential, alongside deep natural language understanding, so the software can communicate naturally with people.

“For example in healthcare, an AI-based decision support system designed to help doctors to identify the best therapy for a patient should be able to explain why it is suggesting a certain therapy over another one,” Rossi says.
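One simple way to surface that kind of explanation is to report, alongside the recommendation, how much each clinical feature contributed to the score. The sketch below uses a small hand-written linear model; the therapy names, features and weights are invented for illustration and are not from any IBM system.

```python
# Hypothetical interpretable scoring model: one weight per clinical feature, per therapy.
WEIGHTS = {
    "therapy_A": {"age": -0.02, "kidney_function": 0.8, "prior_response": 1.5},
    "therapy_B": {"age": 0.01,  "kidney_function": 0.1, "prior_response": 0.4},
}

def recommend_with_explanation(patient):
    # Score each therapy, then break the winning score down into per-feature contributions.
    scores = {t: sum(w * patient[f] for f, w in ws.items()) for t, ws in WEIGHTS.items()}
    best = max(scores, key=scores.get)
    contributions = {f: w * patient[f] for f, w in WEIGHTS[best].items()}
    return best, scores, contributions

patient = {"age": 64, "kidney_function": 0.9, "prior_response": 1.0}
therapy, scores, why = recommend_with_explanation(patient)
print("recommended:", therapy, scores)
print("driven mainly by:", max(why, key=why.get))
```

More complex, opaque models can be paired with post-hoc explanation techniques that produce similar feature-level read-outs, though the explanation itself then also has to be validated.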

So, should transparency trump performance?

“Transparency should be considered as part of the performance, not a separate objective. AI systems that operate in a way that is transparent will always be more effective – because we can trust their output and immediately use them in our work,” Rossi says.
