Ethics

The Wall Street Journal wrote in a March 1, 2019 article that the "Need for AI Ethicists Becomes Clearer as Companies Admit Tech's Flaws". I'm all for ethics being applied to an uncharted technological domain that could have tremendous consequences. But what's being described sounds more like "AI business risk …

Some of Australia's largest enterprises, including the big four banks, will soon have access to customer empathy metrics in their customer engagement and decisioning software. Pegasystems, an enterprise software company best known for its process automation and CRM, will add new empathy tools to its decisioning platform at the end …

Australia has joined 41 other nations in backing a set of global principles to ensure artificial intelligence systems are designed to be robust, safe, fair and trustworthy. The non-binding OECD agreement has …

The CEO of a leading global AI and conversational commerce company has called on Mark Zuckerberg to shut down Facebook until he can "fix it", arguing social media is harming communities and demonstrating what not to do in the current AI race. "Shut it down tomorrow. You will win a …

It will take an organisation-wide effort to recognise and mitigate the potential risks of applying advanced analytics and artificial intelligence to business operations, according to a new report from McKinsey & Company. While the emerging technology could deliver a massive economic boost and improved customer outcomes, trusting machines to make …

Artificial intelligence and machine learning technology will amplify bias and further exclude vulnerable people if not designed correctly, a point underscored by the fact that several AI initiatives have had to be abandoned because their bias could not be corrected. Avoiding …


It's a new day not very far in the future. You wake up; your wristwatch has recorded how long you've slept, and monitored your heartbeat and breathing. You drive to work; car sensors track your speed and braking. You pick up some breakfast on your way, paying electronically; the transaction …

Amazon Web Services is adamant it won't pick and choose what machine learning applications run on its infrastructure, despite growing concerns the emerging technology can create ethical problems and amplify biases. The world's largest public cloud provider will, however, remove any users who violate people's civil liberties or constitutional rights …

What do we want driverless cars to do in unavoidable fatal crashes? Today researchers published a paper, "The Moral Machine Experiment", to address this question. To create data for the study, people from 233 countries and territories used a website to record almost 40 million decisions about who to save and who …

Data and analytics have enormous potential to improve public policy and services by helping governments focus their resources in the areas where they will be most effective. However, the risk of deploying machine learning systems that unfairly impact human lives, because they have inherited biases from their human designers, means a …