The Australian Human Rights Commission is calling for tighter rules on how AI and emerging technologies are developed, saying it is alarmed by some current uses of the technology.
The Commission wants to ensure accountability and the rule of law are upheld in a world where machines and algorithms make more complex decisions.
The human rights group’s proposals include establishing a new AI Safety Commissioner, which would be an independent statutory body, and a moratorium on any “potentially harmful” use of facial recognition technology in Australia.
This week the Commission launched its latest discussion paper and is asking for feedback, as part of its Human Rights and Technology Project, which uses “a human rights approach and public consultation to address the impacts of new and emerging technologies like Artificial Intelligence”.
The project will culminate in a final report to be delivered to the government in 2020, with its recommendations to be implemented in 2020-21.
“Emerging technologies can bring great societal benefits, but people are starting to realise their personal information can also be used against them,” said Human Rights Commissioner Edward Santow.
“In the last year we’ve seen troubling examples of emerging technology being ‘beta tested’ on vulnerable members of our community, and we’ve seen AI used to make high-stakes decisions that have had serious human rights impacts on individuals both in Australia and overseas.”
The 229-page discussion paper includes a human rights and ethics framework for emerging technology, as well as approaches to regulation, legal protections, and the design of AI.
The new paper also highlights the need to ensure the accessibility of any new technologies and calls for the government to comply with accessibility standards for goods and services.
AI Safety Commissioner
One significant proposal from the Human Rights Commission is a new statutory body to take the lead on AI governance in Australia. It would be known as the "AI Safety Commissioner" and act similarly to the current eSafety Commissioner.
The Commission says the new body should focus on protecting and promoting human rights in AI rather than innovation or economic opportunity as some stakeholders had asked for – although the Commission argues the two goals are not mutually exclusive.
Under the current proposal, which the commission notes is subject to change based on stakeholder feedback, the AI Safety Commissioner would support existing regulators (but not be a regulator itself); monitor the use of AI; build capacity; and assist in policy development.
“The AI Safety Commissioner should be an independent appointment, with core funding from government to help ensure independence and secure the public’s trust,” the paper states.
Facial Recognition Pause
Australia's leading human rights group appears more alarmed by facial recognition technology than ever, calling for the government to effectively ban its use until an appropriate legal and regulatory framework is in place.
After years of criticism of proposed legislation that would hand government and security agencies more facial recognition powers, the Commission is now arguing for a moratorium on the technology when it involves a certain level of risk.
“The Commission is concerned about the risks of using facial recognition technology in decision making that has a legal, or similarly significant, effect for individuals,” the paper states.
“In addition to its privacy impact, the current underlying technology is prone to serious error, and this error is often more likely for people of colour, women and people with disability, among others.”
The Commission says new legislation with robust safeguards is needed for use of facial recognition in Australia. And it is not satisfied with the government’s latest attempt, the Identity-Matching Services Bill, which was sensationally rejected by a parliamentary security inquiry in October.