Most Australians report knowing little about artificial intelligence, few trust it and nearly all want it regulated with government oversight, according to research by the University of Queensland and KPMG.
The consultants partnered with the University of Queensland’s business school to survey a nationally representative sample of 2,500 Australians on their trust and expectations of AI.
Sixty-one per cent said they know little about the technology, which is being pushed as a cornerstone of the fourth industrial revolution but is often a catch-all term for machine learning and automated decision-making.
According to the study, “Artificial Intelligence refers to computer systems that can perform tasks or make predictions, recommendations or decisions that usually require human intelligence. AI systems can perform these tasks and make these decisions based on objectives set by humans but without explicit human instructions.”
The study, like others before it, suggests people’s trust in the emerging technology will be key to its adoption and utility.
But only one in three respondents said they trust AI today, and 45 per cent are unwilling to share their data with an AI system. A similarly small minority said they would trust the results of AI systems.
Australians do appear to tolerate (28 per cent) or accept (42 per cent) AI, but few approve of (16 per cent) or embrace (7 per cent) it. A further 7 per cent said they outright reject AI.
Acceptance of AI skews towards younger, university-educated Australians, particularly those with experience in computer science, according to the report.
Fostering more trust in AI is critical if Australia is to reap the benefits of the technology, according to Professor Nicole Gillespie, KPMG Chair in Organisational Trust and Professor of Management at the University of Queensland Business School.
“If left unaddressed this [lack of trust] is likely to impair societal uptake and the ability of Australia to realise the societal and economic benefits of AI at a time when investment in these new technologies is likely to be critical to our future prosperity.”
Two-thirds of Australians want AI regulated by the government, followed by co-regulation (60 per cent), existing regulators (59 per cent) and industry (42 per cent), according to a survey question that allowed multiple selections. Just 4 per cent of Australians said regulation is not needed.
A wide-ranging study of AI by Australia’s top scientists last year noted that regulating AI does not necessarily require new legal frameworks, regulators or ethical guidelines, because existing mechanisms, properly applied and with some tweaks, could provide effective regulation.
So far the government has taken a hands-off approach to AI funding and regulation in Australia, saying it does not want to introduce a “big stick” approach. Instead it has developed an AI ethics framework for developing and using the technology, which big businesses began trialling last year.
Australians appear much more confident in universities and security agencies to develop and use AI technologies in the public interest. Confidence is lowest in state governments and the private sector, with 37 per cent of respondents saying they had low or no confidence in the latter.
“The lack of confidence in technology companies and commercial organisations is striking given that the majority of the population’s experience of AI is with applications developed and used by such organisations,” the report states.
There is a widespread lack of confidence in organisations of all kinds to develop and use AI, with between 30 and 37 per cent of respondents reporting no trust, depending on the type of entity.