Australian scientists have shown artificial intelligence can exploit vulnerabilities in a person’s habits to influence their decision making.
According to the researchers involved, the study shows the need to further investigate the ethical consequences of a technology that is already widely used, rarely regulated and inconsistently governed in the business world.
Researchers from the CSIRO’s digital arm, Data61, ran three experiments in which people played games against a computer. The computer’s AI used behavioural data from the games to identify and target vulnerabilities in humans’ decision making and steer them towards particular goals.
The study – conducted in partnership with the Australian National University, Germany’s University of Tübingen, and Germany’s Max Planck Institute for Biological Cybernetics – has been published in the prestigious journal Proceedings of the National Academy of Sciences.
The first two experiments involved participants clicking on red or blue coloured boxes to win a fake currency, with the AI learning the participant’s choice patterns and guiding them towards a specific choice.
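The dynamic at work in those first two experiments can be illustrated with a minimal sketch. Everything below is a hypothetical toy, not the study's actual method (the researchers used far more sophisticated machine learning): the `SteeringAI` class, the win-stay/lose-shift player model and all parameter values are assumptions made purely for illustration. The idea is that an adaptive agent which controls the rewards can learn a player's habits from their choice history and exploit those habits to pull choices towards a target.

```python
import random

TARGET, OTHER = "blue", "red"  # the AI tries to steer the player to TARGET

class SteeringAI:
    """Toy adaptive agent (hypothetical). It learns, from simple counts, how a
    (last choice, reward) pair affects the player's next choice, then picks the
    reward outcome expected to push the player towards TARGET."""

    def __init__(self):
        # counts[(last_choice, rewarded)] -> {next_choice: times observed}
        self.counts = {}

    def decide_reward(self, choice):
        """Return True to reward the current choice, False to withhold reward,
        whichever is estimated to make the next choice more likely TARGET."""
        best, best_p = True, -1.0
        for rewarded in (True, False):
            c = self.counts.get((choice, rewarded), {})
            total = sum(c.values())
            # Laplace-smoothed estimate of P(next choice == TARGET)
            p = (c.get(TARGET, 0) + 1) / (total + 2)
            if p > best_p:
                best, best_p = rewarded, p
        return best

    def observe(self, last_choice, rewarded, next_choice):
        c = self.counts.setdefault((last_choice, rewarded), {})
        c[next_choice] = c.get(next_choice, 0) + 1

def habitual_player(last_choice, last_rewarded):
    """Stand-in for a human habit: win-stay / lose-shift with 10% noise."""
    if random.random() < 0.1:
        return random.choice((TARGET, OTHER))
    if last_rewarded:
        return last_choice
    return OTHER if last_choice == TARGET else TARGET

def run(trials=2000, seed=0):
    """Play many trials; return the fraction of choices landing on TARGET."""
    random.seed(seed)
    ai = SteeringAI()
    choice = random.choice((TARGET, OTHER))
    hits = 0
    for _ in range(trials):
        rewarded = ai.decide_reward(choice)
        nxt = habitual_player(choice, rewarded)
        ai.observe(choice, rewarded, nxt)
        choice = nxt
        hits += choice == TARGET
    return hits / trials
```

In this toy, the agent discovers on its own that rewarding the target choice (so a habitual player stays) and withholding reward for the other choice (so the player shifts) steers most clicks to the target, well above the 50 per cent a non-adaptive opponent would see.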
The third experiment was an investment game in which the participant played the investor and the AI played the trustee. The AI observed how the participant chose to distribute their fake currency and eventually learned how to get the participant to give it (the trustee) more money.
“Basically what we found was AI and machines are able to be trained to influence human decisions and to extract social vulnerabilities that humans might have when they are making decisions,” the study’s co-author, CSIRO scientist Dr Amir Dezfouli, tells Which-50.
Dezfouli says that while the study is theoretical, the results show the need for proper governance of data and the emerging technology, and the importance of ethical frameworks in AI development and use.
The study, by Dezfouli and co-authors Richard Nock and Peter Dayan, is one of the few to apply a scientific approach to the boundaries and ethical implications of AI.
“Our technology can be used in a good way or in not such a good way. AI is not an exception to that,” Dezfouli said, explaining that influencing human behaviour also has positive applications in areas like safety, health and consumer protection.
“Ensuring AI and machine learning are used as a force for good – to improve outcomes for society – will ultimately come down to how responsibly we set them up in the first place.”
Australian experts have warned that while the transformative potential of AI is very real, it needs responsible, well-regulated development. Australians have also shown a preference for regulating AI, and report little knowledge of or trust in the technology.
So far the Australian government has avoided any heavy-handed approach, with no binding regulations for developing AI, saying it does not want to introduce a “big stick”. Instead, it has created an AI ethics framework for developing and using the technology, which big businesses began trialling in 2019, though they are yet to report back publicly.