Companies are being urged to consider the ethical implications of their AI ambitions to ensure that as the technology increasingly embeds itself in our lives, it doesn’t take away the values society holds dear.
“Technology may let us do new things quicker, faster and cheaper, and easier, but at the end of it the values we need to preserve are actually pretty long-standing, universal values like fairness and privacy,” said Professor Toby Walsh, Scientia Professor of artificial intelligence at UNSW.
Walsh was speaking at the Ethics of Data Science conference hosted by Sydney University last week.
He argued much of the current discussion around AI and ethics in emerging technology could be addressed by applying existing values and rigorously enforcing existing laws. And technology companies shouldn’t get a free pass.
A common theme throughout the conference keynotes was practical ways to bring ethical considerations into the design and development process, to help mitigate the problems that can arise when machines make decisions and take action autonomously.
Nicolas Hohn, Associate Partner at McKinsey and Chief Data Scientist for QuantumBlack Australia, focused his presentation on strategies to mitigate the risks of unintended, rather than malicious, consequences.
“If you are a CEO of an organisation and you want to use artificial intelligence to drive revenue and profitability, you have to be a little bit careful because deploying artificial intelligence does require very careful management to prevent unintentional, but still significant, harm. That’s not only to your brand reputation but obviously to individuals and society,” Hohn said.
Hohn argued organisations need more systematic ways to embed ethical decisions into their workflow, from coming up with the idea (should we build it?) to which data is used and how the model is built and evaluated.
“The takeaway here is that it is much easier to address those problems as you build the model than as an afterthought later on,” he said.
He suggested adopting an iterative approach, building ongoing monitoring into systems, and drawing on a diverse range of views in the design process.
CommBank’s Flagship ML Project
Later in the conference, Commonwealth Bank of Australia’s Head of Data Science, Dan Jermyn, outlined how the bank approaches ethical considerations in its flagship machine learning implementation.
The bank’s customer engagement engine is a centralised platform that crunches around 200 billion data points quickly enough to suggest the next best action or conversation along the customer’s journey.
“With CEE we are plugged into 19 channels, we have thousands of different combinations of next best conversations that we can have with customers and we are processing 200 billion data points and we are returning an answer in a fraction of a second,” Jermyn said.
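At its core, a next-best-action system of the kind Jermyn describes ranks candidate conversations for a given customer and picks the top one. The sketch below is purely illustrative and not CommBank’s implementation: the feature dictionary, candidate list, and toy scoring function are all assumptions standing in for the bank’s propensity models.

```python
from typing import Callable

def next_best_action(customer_features: dict[str, float],
                     candidates: list[str],
                     score: Callable[[dict[str, float], str], float]) -> str:
    """Pick the highest-scoring candidate conversation for this customer.
    `score` stands in for whatever models rank actions in production."""
    return max(candidates, key=lambda action: score(customer_features, action))

# Toy scorer (hypothetical): favour a savings nudge when the balance
# feature is high, a low-balance alert when it is low.
def toy_score(features: dict[str, float], action: str) -> float:
    balance = features.get("balance", 0.0)
    weights = {"savings_offer": balance,
               "low_balance_alert": 1.0 - balance}
    return weights.get(action, 0.0)

print(next_best_action({"balance": 0.8},
                       ["savings_offer", "low_balance_alert"],
                       toy_score))  # picks the savings nudge for this customer
```

The real system scores thousands of candidate conversations against far richer signals across 19 channels, but the structure — score every candidate, return the argmax — is the same.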
In practical terms, that means the bank can now send smart alerts, triggered by real-time transaction data, to tell a customer if they are running low on cash. The platform was also used to identify which NSW car owners might be entitled to the CTP rebate under a recent change in state government policy.
According to Jermyn, the platform aims to help customers get the most from their products and understand their finances, notifying them of benefits they might be entitled to like the CTP rebate or support packages triggered by natural disasters.
“One of the most exciting things about AI for us is that it can produce better outcomes for our customers, allowing us to serve them better. We couldn’t do this without AI,” he said.
While good intentions are a starting point, hoping to do the right thing isn’t enough to guard against potentially harmful unintended consequences.
“The second stage in the ethical implementation of AI within an organisation is not just to do things that seem like the right thing to do to you. It’s to know you are doing the right thing, to define what the right thing is and then measure that you are having the impact you are hoping to have,” Jermyn said.
Measuring Good Intentions
The bank began by defining the concept of financial wellbeing and creating a metric to measure it as part of a study in cooperation with Melbourne University.
According to Jermyn, the research found the drivers of financial wellbeing had more to do with behaviour than things like salary. In theory this means the bank has the ability to intervene or nudge customers towards better behaviours and materially affect their financial wellbeing.
“We had sets of customers which had precisely the same inputs into that model, the same kind of external factors, the same household factors but exhibiting very different financial wellbeing scores,” he said.
AI also requires new governance structures and ongoing monitoring. Jermyn said the CEE outputs are monitored for biases and fairness every single day.
“We need to be able to test the outputs against groups of customers to make sure we are not introducing micro-biases to subsets of customers. And we have created an explainable AI capability to allow us to do exactly that,” he said.
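One common way to test model outputs against groups of customers is a demographic-parity-style check: compare the rate of positive outcomes (say, receiving an offer) across subgroups and flag the run if the spread exceeds a threshold. The sketch below is an assumed, generic fairness check, not a description of CommBank’s monitoring; the group labels, data shape, and 0.1 threshold are illustrative choices.

```python
from collections import defaultdict

def subgroup_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-outcome rate per subgroup.
    `decisions` holds (group_label, got_offer) pairs from one day's outputs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, got_offer in decisions:
        totals[group] += 1
        if got_offer:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(rates: dict[str, float]) -> float:
    """Spread between the most- and least-favoured subgroups."""
    return max(rates.values()) - min(rates.values())

day = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = subgroup_rates(day)
if parity_gap(rates) > 0.1:  # threshold is a policy choice, assumed here
    print("Fairness review needed:", rates)
```

Run daily over the engine’s outputs, a check like this surfaces the “micro-biases” Jermyn mentions before they compound; the explainability tooling then helps diagnose why a flagged gap appeared.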
Jermyn highlighted the importance of transparency and explainability (the ability to explain how an algorithm reached its decision) in building customers’ trust that the bank is using AI in an ethical way.
The more open and transparent the bank is about why it makes decisions and how that information is presented back to customers, the better, he said.
“The idea of transparency and explainable AI is going to be a big part of [building that trust],” he said.
Ethics by Design
Dr Matthew Beard is an Ethics Centre fellow and co-author of Ethical by Design: Principles for Good Technology, a guide for organisations that want to address ethics in technology design.
The goal of Ethical By Design is to translate “high minded ethical principles” into practical processes that bring ethical considerations into the design process and help mitigate risks before they arise, Beard told conference delegates.
“We aim to build something that would help provide choices, aim direction and help make practical decisions on a design level, policy level and implementation level to do with technology.”
Beard and his co-author Simon Longstaff have developed a set of open ended questions and provocations that can be used within teams and systems in the design process, as a way of imagining some of the ways these things might go awry.
For example, when thinking about fairness, designers are asked to consider: is there a way of challenging the decisions that are made? Do users have the ability to understand how a determination was made about them?
Another strategy is ethical red teaming, “bringing in people who are distant from the design process into the room and having them poke holes in everything you’ve done and see if it stands up.”
According to Beard these strategies can bring some of these ethical concerns to the fore by making them explicit points in the conversation, “rather than a check box that happens at the start or the end of a process, when we either don’t have enough information at the start or we have a whole heap of sunk cost bias by the time it is at the end.”
Compliance, Ethics & Regulation
Beard’s presentation cautioned against turning ethics into a box checking exercise.
“One of the things that makes ethics go awry, and this is not just for technology this is everywhere, is as soon as you turn ethics into compliance, as soon as you turn it into a box ticking exercise, you are not doing ethics any more,” Beard said.
“Because ethics is an imaginative process, it’s a reflective process and it’s a team sport, something that needs to be done in a group and as a society.”
Simple systems can be built in to prevent bad things from happening, but that’s only helpful if you want to be ethical. Here, Beard argued, is where laws and regulation need to catch up.
“This whole ethics process presumes that people actually care and want to achieve ethical outcomes. If you are operating in bad faith, this stuff is very easy to break through. That’s where we need something that has a little bit more teeth, that has a little bit more recourse in order to address some of these problems.”