Australia has joined 41 other nations in backing a set of global principles to ensure artificial intelligence systems are designed to be robust, safe, fair and trustworthy.

The non-binding OECD agreement was signed overnight by all 36 member countries along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania.

The Australian Government is also circulating a discussion paper to help develop its own ethics framework for AI, with submissions closing at the end of this month.

According to the OECD, the principles were created by a group of more than 50 experts drawn from a range of sectors and disciplines. They aim to guide governments, organisations and individuals on the design and use of AI in a way that “puts people’s best interests first and ensuring that designers and operators are held accountable for their proper functioning”.

AI faces mounting criticism over its ability to amplify bias, and over the potential for the technology’s creators to shirk responsibility for its misuse. Several Australian organisations are already moving to embed ethics within their AI.

While the new principles are legally non-binding and therefore unenforceable, the OECD argues they will influence standards and national legislation, much as its earlier guidelines on privacy and corporate governance have done.

The agreement comprises five principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation.

The full principles have been summarised by the OECD as follows:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
  4. AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
  5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

The OECD recommends that governments:

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge.
  • Create a policy environment that will open the way to deployment of trustworthy AI systems.
  • Equip people with the skills for AI and support workers to ensure a fair transition.
  • Co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.