Should artificial intelligence be regulated? The answer is always ‘it depends’.
That’s the view of data expert Ellen Broad, the author of a new book exploring the ethics behind AI and automated decision-making.
Made by Humans: The AI Condition argues in favour of some form of AI regulation, but Broad emphasises any discussion around AI will depend on the context and requires highly specific language.
In an interview with Which-50, Broad compared regulating AI to asking “should flight be regulated?”
“And again, it depends. What do you mean when you say flight? Are you talking about commercial aviation?”
Instead, the book poses narrower questions, such as:
- Should people be able to understand and challenge automated decisions made about them?
- Should system designers be held accountable for statements about the accuracy of their decision-making systems that aren’t true? Should they be held accountable for error and bias?
The discussion of the ethical considerations at play when designing AI, machine learning and automated decision-making systems is moving from academic circles into the mainstream as businesses look to the technology to reduce costs and improve productivity. AI-powered systems, which are susceptible to bias, will increasingly make decisions that affect humans’ lives. For example, HR software designed to help sift through job applications, if trained only on the CVs of Caucasian males, will produce biased hiring decisions.
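To see how a skewed training set produces skewed decisions, consider a deliberately simplified sketch. This is a toy, not any real screening product: the CVs, candidates and scoring rule are all invented for illustration. The "model" simply rewards terms that appeared among past hires, so a candidate who resembles the historical (homogeneous) hiring pool outscores an equally qualified outsider.

```python
from collections import Counter

def train_screener(past_hire_cvs):
    """Count which terms appeared in past successful CVs (the 'training data')."""
    counts = Counter()
    for cv in past_hire_cvs:
        counts.update(set(cv.lower().split()))
    return counts

def score(cv, counts, n_hires):
    """Score a CV by how often its terms appeared among past hires."""
    terms = set(cv.lower().split())
    return sum(counts[t] / n_hires for t in terms) / max(len(terms), 1)

# Historical hires all come from one narrow group (invented example data).
past_hires = [
    "engineering degree oxford rowing club",
    "engineering degree oxford rowing team",
    "engineering degree oxford chess club",
]
counts = train_screener(past_hires)

# Two comparably qualified candidates; only one matches the historical pattern.
insider = "engineering degree oxford rowing club"
outsider = "engineering degree community college robotics club"

# The insider scores higher purely because the training data was narrow.
print(score(insider, counts, 3) > score(outsider, counts, 3))  # True
```

The bias here comes entirely from the data, not from any malicious rule: nothing in the code mentions the candidates' backgrounds, yet the outcome still favours the group the system was trained on.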
Having spent much of her career arguing against regulation of technologies based on the government’s poor track record in the area, Broad shifted her position while writing the book.
“When I started the book I was really just thinking about ethics, as in voluntary frameworks, best practices and principles to guide good decision-making,” Broad said.
“By the end of the book I became concerned that while ethics and ethical frameworks and principles are very useful in driving ethical behaviour, there is a gap in needing some basic standards, some basic rules to prevent seriously harmful behaviour.
“Ethics are great for those that are already ethical, but we are going to need some forms of regulation to ensure that the rest of the industry adheres to certain standards,” Broad said.
Broad said it was up to the technology industry to make sure any rules were “flexible and forward thinking to avoid those historic pitfalls around technology regulation.”
In terms of jurisdiction, Broad anticipates that there will be sector-specific rules relating to how AI is developed, as well as overarching legislation that resembles the broader obligations companies have under the Privacy Act.
Ethics in AI software
Before considering the ethical implications of AI, businesses first need to determine that the system they are implementing actually works, Broad said.
“Usually in a business — before even getting to the ethics — you are just thinking ‘am I getting bang for my buck?’”
To determine if the system can make decisions with the accuracy it claims, Broad recommends understanding how a system works, the data that it’s built on, how it has been tested and what its performance rate is.
“That’s a key consideration for businesses right now, when artificial intelligence and machine learning are such buzzwords. There’s lots of different services coming on the market that all advertise themselves as using artificial intelligence. Digging under the surface of that to find a good quality product or a dodgy one can be really difficult,” Broad said.
In terms of ethical considerations, those same questions still apply, but the focus shifts to how the system was trained and its impact on people.
“Your questions don’t change, but you are asking them in order to understand the impact,” Broad said.
Broad argues the level of scrutiny on AI systems will only increase. That means organisations risk reputational damage or legal action if they are unable to explain their AI’s decision-making process or prove it isn’t discriminatory or negligent.
“This is only increasing in awareness as a public policy issue, which means it could potentially become your issue,” Broad said.