The potential of AI to improve social and economic outcomes in Australia is real, according to the country’s top scientists, who liken its development to the industrial revolution.
However, the technology also poses several serious risks, requiring careful planning and a national strategy, according to a new report commissioned by the country’s chief scientist, Dr Alan Finkel.
The Australian Council of Learned Academies (ACOLA) released a wide-ranging report, The Effective and Ethical Development of Artificial Intelligence – An Opportunity to Improve our Wellbeing, calling for new policies, regulations and a national strategy around artificial intelligence.
The interdisciplinary ACOLA examined the potential of AI across several social and economic measures, finding that Australia needs more AI skills and strategic investment to properly harness the technology. Its recommendations include upgraded digital infrastructure and the establishment of an independently led AI body to unite the various stakeholders.
“What is known is that the future role of AI will be ultimately determined by decisions taken today,” the report said.
“To ensure that AI technologies provide equitable opportunities, foster social inclusion and distribute advantages throughout every sector of society, it will be necessary to develop AI in accordance with broader societal principles centred on improving prosperity, addressing inequity and continued betterment.”
Launching the report, Dr Finkel said Australia now faces important choices on AI.
“What kind of society do we want to be? That is the crucial question for all Australians, and for governments as our elected representatives.”
On one hand, AI presents myriad opportunities and benefits. On the other, it presents global risks, according to the report. The key to positive outcomes is “responsibly developed” AI, including a measured response from government and industry.
The report urges political leaders to guide a national discussion on AI and consider a national body encompassing a broad range of stakeholders. The body could work similarly to current regulators, the report says, using ACMA and its regulation of the communication sector as an example.
“Ensuring that AI continues to be developed safely and appropriately for the wellbeing of society will be dependent on a responsive regulatory system that encourages innovation and engenders confidence in its development,” the report said.
Regulating AI does not necessarily require new legal frameworks or ethical guidelines, according to the report.
“… Existing human rights frameworks, as well as national and international regulations on data security and privacy, can provide ample scope through which to regulate and govern much of the use and development of AI systems and technologies.”
The report notes there is already much disagreement and uncertainty around AI governance and regulation.
“Our actions in these areas will shape the future of AI, so it is important that decisions made in these contexts are not only carefully considered, but that they align with the nation’s vision for an AI-enabled future that is economically and socially sustainable, equitable and accessible for all, strategic in terms of government and industry interests, and places the wellbeing of society in the centre.”
The regulatory body should be supported by a national framework that considers the ethical, legal and social issues around implementing AI, according to the report.