Automation and robotics are advancing at an exponential pace, and while most of these advances benefit humanity, they have also spelt disaster where control has been lost, resulting in accidents and harm to humans.

As artificial intelligence (AI) becomes more sophisticated, humans will need to develop new ways to manage it as robots take on crucial roles that allow them to make their own, potentially life-altering decisions, such as driving cars, operating machinery or executing highly complex medical procedures, says innovation strategist Anders Sörman-Nilsson.

He tells Which-50 that, given the increasing use of robots and AI in decision-making, new research commissioned by his think tank, Thinque, found that 79 per cent of Australians believe morals should be programmed into robots.

When probed further on who should be responsible for programming morals into robots, 59 per cent of respondents said the original creator or software programmer, 20 per cent said the government, 12 per cent said the manufacturer and 9 per cent said the company that owns them.


“As AI and its capabilities become more sophisticated, concerns around how we will manage these developments continue to grow. As such, the need for humans to build an ethical code into robots is necessary if they are to take on more key roles in our lives,” says Sörman-Nilsson.

“This code must be instilled into robots and AI taking on important roles, such as machine engineers or military personnel, to prevent adverse situations.

“If this is not executed well, we as humans will be opening ourselves up to inevitably dangerous consequences. If robots are allowed to exist in society without a moral compass, unable to make ethically sound decisions, the danger of them hurting or even fatally harming humans is imminent,” he warns.

Sörman-Nilsson points to developments overseas, where in 2014 the US government, through its Office of Naval Research, awarded university researchers a multimillion-dollar, five-year grant to develop machines that understand moral consequence.

“In order to be able to effectively programme ethics into AI, humans will have to have a collective set of ethical rules universally agreed upon — a far cry from the current state of the human world,” he says.

“Another dilemma that improvements in AI raise is that once robots advance enough to mimic human intelligence, awareness and emotions, we will need to consider if they should then also be granted human-equivalent rights, freedoms and protections,” he adds.

With the future of AI unfolding rapidly and consumer fear evident, Sörman-Nilsson shares his insights into what human citizens can expect to see from robotics companies when it comes to ethics in AI in the near future:

1. They will need to set clear ethical boundaries

Humans, alongside robotics developers, must collectively determine the ethical values that can be coded into and followed by robots. These values will need to cover the full range of potential ethical problems and the correct way to respond in each situation.

For example, driverless cars will need an ethics algorithm to determine whether, in an extreme circumstance, a car carrying two children should swerve and kill two elderly pedestrians rather than kill the children in a head-on collision. Only then will we be able to design robots that can reason through ethical dilemmas the way humans would. Complicating matters, of course, is that humans across cultures do not yet agree on these philosophical thought experiments either.
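To make the idea concrete, here is a deliberately simplified sketch, in Python, of how such an ethics algorithm might weigh candidate manoeuvres. Every name in it (`Action`, `moral_cost`, the weights) is hypothetical rather than drawn from any real vehicle's software:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate manoeuvre and the harm it is predicted to cause."""
    name: str
    passengers_harmed: int
    pedestrians_harmed: int

def moral_cost(action: Action, weights: dict) -> float:
    """Score an action against programmed ethical weights.

    The weights are the hard part: they encode a moral judgement
    (for example, whether passengers and pedestrians count equally)
    that humans would first have to agree on.
    """
    return (weights["passenger"] * action.passengers_harmed
            + weights["pedestrian"] * action.pedestrians_harmed)

# The dilemma from the article: swerve (harming two elderly
# pedestrians) or stay on course (harming the two child passengers).
options = [
    Action("swerve", passengers_harmed=0, pedestrians_harmed=2),
    Action("stay_on_course", passengers_harmed=2, pedestrians_harmed=0),
]
weights = {"passenger": 1.0, "pedestrian": 1.0}  # one possible ethical stance

least_bad = min(options, key=lambda a: moral_cost(a, weights))
print(least_bad.name)
```

Tellingly, with equal weights both options score the same, so the "choice" falls to an arbitrary tie-break; the algorithm can only be as decisive as the moral consensus encoded in its weights.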

2. They will also have to factor in the unexpected

Even after we have set out boundaries to determine ethical behaviour, there will still be numerous ethical ways to handle each situation, as well as unexpected moral dilemmas.

For example, a robot delivering urgent medical supplies to a hospital may encounter an injured person along the way and need to decide whether or not to stop and help. To ensure robots follow a moral code, as humans do, we would be wise to provide AI with different potential solutions to moral dilemmas and train it to evaluate them and make the best moral decision in any given situation.
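A minimal illustration of that idea, again in Python and with all names and scores hypothetical, would be to evaluate several candidate responses to the dilemma and pick the one the moral model rates highest:

```python
# A hypothetical moral model: in practice the scores would come from
# a system trained on many such scenarios, as suggested above.
def evaluate(action: str) -> float:
    scores = {
        "stop_and_help": 0.4,               # delays the urgent delivery
        "continue_delivery": 0.3,           # leaves the person unaided
        "call_for_help_and_continue": 0.8,  # an unanticipated third way
    }
    return scores.get(action, 0.0)

def choose_action(candidates, evaluate) -> str:
    """Pick the candidate action the moral model scores highest."""
    return max(candidates, key=evaluate)

candidates = ["stop_and_help", "continue_delivery", "call_for_help_and_continue"]
print(choose_action(candidates, evaluate))  # -> call_for_help_and_continue
```

The point of listing a third option is the one the article makes: the robot should be able to weigh responses its designers did not explicitly anticipate, not just pick between two hard-coded outcomes.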

3. They will have to constantly monitor AI

As with any technology, programmers will need to constantly monitor and evaluate the ethics built into AI so that it remains up to date and makes the best decisions possible. Mistakes will inevitably be made, yet programmers should do everything possible to prevent them, and to redevelop ethical codes so that AI stays as morally sound as it can be.
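What might that monitoring look like in practice? One plausible, and entirely hypothetical, approach is to log every ethically loaded decision to an audit trail and flag low-scoring outcomes for human review:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ethics_audit")

REVIEW_THRESHOLD = 0.5  # hypothetical cut-off below which a human reviews

def record_decision(scenario: str, action: str, moral_score: float) -> None:
    """Append an ethics decision to the audit trail, flagging low scores.

    Programmers would periodically review this trail to catch mistakes
    and redevelop the ethical code, as the article recommends.
    """
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "scenario": scenario,
        "action": action,
        "moral_score": moral_score,
    }
    log.info(json.dumps(entry))
    if moral_score < REVIEW_THRESHOLD:
        log.warning("Flagged for human review: %s", json.dumps(entry))

record_decision("injured_person_en_route", "continue_delivery", 0.3)
```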
