Tesla founder Elon Musk says artificial intelligence is vastly more risky than North Korea, represents a danger to public safety and needs to be regulated.
In a series of tweets Friday evening US time Musk argued that “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”
According to Musk, “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
Of course, as an entrepreneur Musk is not above using platforms like Twitter paired with occasionally inflammatory commentary to his own advantage.
In this case his clickbait-able concerns coincided with a little commentary on the performance of his OpenAI platform in a gaming competition.
Musk is the chairman of OpenAI which describes itself as “a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.”
As he noted in his tweets this evening, “OpenAI first ever to defeat world’s best players in competitive eSports. Vastly more complex than traditional board games like chess & Go.”
And he helpfully included a video, in case you missed the point.
While there is certainly some grandstanding to Musk’s latest announcement, he is at least consistent, having previously called the technology a fundamental existential threat to human civilization.
Nor is he alone in flagging concerns.
As we reported in a recent cover story, the issue of opaque AI — where people do not understand why the machine is doing what it is doing — is a growing area of debate, particularly with regard to issues of ethics and compliance.
In that story, “Cover Story: Opaque AI, Can We Manage The Risk?”, Michael O’Keefe, IoT Lead at Microsoft Australia, told Which-50, “If an algorithm can help direct us home or serve us advertising content, that’s great. But as soon as AI is deciding whether we are approved for a home loan or life insurance or driving our car, we as a society expect transparency as to how an AI platform makes the decisions that it makes.”
The risk is that we will not necessarily know whether the outputs and outcomes adhere to established guidelines, or even to the laws of the countries in which they operate.