As society entrusts more of its operations to autonomous systems, companies are increasingly making it a requirement that humans can understand exactly how a machine has reached a conclusion.

The research efforts behind Explainable AI (XAI) are gaining traction as technology giants like Microsoft, Google and IBM agree that AI should be able to explain its decision making.

XAI, sometimes called transparent AI, has the backing of the Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense, which is funding a large program to develop state-of-the-art explainable AI techniques and models.

Dr Brian Ruttenberg was formerly the senior scientist at Charles River Analytics (CRA) in Cambridge, where he was the principal investigator for CRA’s effort on DARPA’s XAI program.

He argues XAI helps to identify bias or errors in algorithms and engenders trust in the technology.

“Doctors aren’t going to tell someone they have cancer because the machine learning box told them they have cancer,” Ruttenberg told Which-50.

“They need to understand the underlying factors and explain them to the patient. It’s about building trust between the doctors and the machines and transferring that trust to patients.”

Over the past two years XAI has emerged as a research field in its own right, but some AI conclusions are easier to explain than others.

Why did the driverless car cross the road?

Ruttenberg, who is now a principal scientist at NextDroid, a Boston-based autonomous vehicle and robotics company, says explainability is “very immature” in the field of driverless cars.

That’s because XAI research is largely concerned with explaining a single neural network rather than a series of systems that decide what a vehicle should do next, he said.

For example, explaining how an image classification system built to identify pedestrians works is “straightforward,” he said. The conclusion the neural network reaches is based on the information contained in the image, ie if an object has wheels instead of legs, then it is not a pedestrian.
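A gradient-based saliency map is one common way to produce that kind of explanation for a single classifier: it highlights the pixels that most influenced the network’s output. The sketch below is illustrative only, using an off-the-shelf pretrained model and a placeholder image file rather than any system Ruttenberg worked on.

```python
# Minimal sketch of a gradient-based saliency explanation for an image
# classifier. The model and input image are illustrative placeholders.
# Requires torchvision >= 0.13 for the weights API.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained classifier stands in for a "pedestrian detector" here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = T.Compose([
    T.Resize(224),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("street_scene.jpg").convert("RGB")   # hypothetical input image
x = preprocess(image).unsqueeze(0).requires_grad_(True)

# Forward pass: the class score we want to explain.
scores = model(x)
top_class = scores.argmax().item()

# Backward pass: gradient of the top score with respect to the input pixels.
scores[0, top_class].backward()

# Pixels with large gradient magnitude contributed most to the decision,
# e.g. wheel-shaped regions for a "car" prediction.
saliency = x.grad.abs().max(dim=1).values.squeeze()
print(saliency.shape)  # a (224, 224) heat map over the input image
```

Techniques like this are what Ruttenberg calls “straightforward”: one network, one static input, one explanation.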

In the world of driverless cars, where multiple systems interact and conditions change over time, explaining why a vehicle turned a corner or hit the brakes is a more complicated problem to solve.

“Autonomous vehicles have many different interacting systems,” Ruttenberg said. “They have perception, prediction and planning that are not just a single piece of software that you are trying to explain.”

Those machine learning systems are also interacting with a changing environment as the vehicle travels through the world.

“In these dynamic applications an explanation for an action could be your own action you took 20 seconds ago, and that is a very difficult problem,” Ruttenberg said.

For example, a car may turn left suddenly because there is a car in front of it, but the reason the vehicle is dangerously close to another car is that it made a horrible lane change 10 seconds ago.

“That chain of understanding of how your system works could go back very far,” Ruttenberg said.
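One way to picture that chain of understanding is a causal decision trace: each action records the earlier decisions that led to it, so an explanation can walk backwards through time. The data structure and names below are hypothetical, a minimal sketch of the idea rather than any real autonomous-vehicle software.

```python
# Hypothetical decision trace for a multi-system pipeline
# (perception -> prediction -> planning). Each stage logs the earlier
# decisions behind its output so an action can be traced back in time.
from dataclasses import dataclass, field


@dataclass
class Decision:
    timestamp: float                            # seconds since the drive began
    system: str                                 # "perception", "prediction" or "planning"
    action: str                                 # e.g. "lane_change_right", "brake"
    causes: list = field(default_factory=list)  # earlier Decisions that led to this one


def explain(decision: Decision, depth: int = 0) -> None:
    """Walk the causal chain backwards, printing each contributing decision."""
    print("  " * depth + f"[t={decision.timestamp:.1f}s] {decision.system}: {decision.action}")
    for cause in decision.causes:
        explain(cause, depth + 1)


# The lane-change example from the article: the sudden left turn at t=30s
# traces back to a bad lane change made ten seconds earlier.
bad_lane_change = Decision(20.0, "planning", "lane_change_right")
too_close = Decision(29.5, "perception", "vehicle_ahead_too_close", [bad_lane_change])
swerve = Decision(30.0, "planning", "turn_left_sharply", [too_close])

explain(swerve)
```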

Regulators, take the wheel

In the race to build a level five autonomous car, Ruttenberg is skeptical that carmakers and tech giants are spending extra resources on XAI.

“The first person that comes up with a level five autonomous system is going to have a tremendous advantage over any competitor. And I don’t think some of these big companies aren’t interested in explainability, or wouldn’t like it, but I don’t think they are going to spend resources trying to make their systems explainable, or more explainable than needed to debug them,” he said.

That means regulation will likely be required to enforce a level of explainability, not just for developers but so that a car’s systems are transparent to buyers, passengers and society more broadly.

“Like in the financial industry where you have to prove to regulators [their lending process] isn’t biased, I think you are eventually going to have to prove to regulators that your system is safe and you’re going to have to do it through explanation,” Ruttenberg said.

Brian Ruttenberg will discuss the ethical implications of explainable AI during the IAPA Advancing Analytics 2018 National Conference in Melbourne on the 18th of October.
