Artificial intelligence and machine learning technology will amplify bias and further exclude vulnerable people if not designed correctly, a point underscored by the fact that several AI initiatives have had to be abandoned because their bias could not be corrected.

Avoiding an AI initiative that does more harm than good requires rigorous analysis of the technology, the data feeding it, and the potential outcomes for the most vulnerable users.

That’s according to Fiona Tweedie, data strategy advisor at the University of Melbourne, who explained how the sandstone university approaches the emerging technology during a panel session at the Gartner Data and Analytics Summit in Sydney this week.

“Rather than working from an assumption that everybody is more or less a cishet, able bodied middle class white guy — because that’s where a lot of technology assumptions do start because that’s where historically a lot of technology developers actually come from themselves — [we should be] asking questions about who are the most vulnerable users and what could the impacts be on them?” Tweedie said.

The data feeding AI and machine learning models must also be closely scrutinised, according to Tweedie.

“We know data which is collected from the real world is going to bring in some of the structural imbalances and inequalities which exist within the world. So [by] simply relying uncritically on big data sets, rather than eliminating the problems of human decision making, you will sometimes just reinforce those biases.”

A history of bias

Unfortunately, there have been several “conspicuous examples” of bias amplification and further marginalisation of users already, Tweedie said.

She pointed to a 2015 incident where Google came under fire for its AI-powered image recognition system, which was tagging black people as gorillas. Google said it was “appalled and genuinely sorry” and began working on longer-term fixes. But an investigation by Wired revealed Google’s “fix” was little more than removing the gorilla and chimpanzee categories from tagging, suggesting the bias may be so deeply rooted that the tech giant could not undo it comprehensively.

Other incidents include an algorithm used in the US to predict recidivism being accused of racial bias, which Tweedie said may be the result of poor data selection.

“We know that some communities tend to be over-policed and that information is in the data, so when you simply look at the data and ask it to make a prediction, the inequalities then come through into the decision making.”
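To make that mechanism concrete, here is a minimal, hypothetical sketch (not something presented at the summit): two neighbourhoods offend at the same rate, but one is over-policed, so more of its offences show up as recorded arrests. A model trained on those arrest records, with neighbourhood as a feature, learns to rate the over-policed area as higher risk. The neighbourhood labels, rates and the scikit-learn model are all illustrative assumptions.

```python
# Hypothetical illustration only: how labels shaped by over-policing
# feed a disparity back out as a "risk" prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two neighbourhoods with the SAME underlying rate of offending.
neighbourhood = rng.integers(0, 2, size=n)   # 0 = A, 1 = B
offended = rng.random(n) < 0.10

# Neighbourhood B is over-policed, so its offences are far more likely
# to be recorded as arrests, and arrests are what the labels capture.
detection_rate = np.where(neighbourhood == 1, 0.90, 0.30)
arrested = offended & (rng.random(n) < detection_rate)

# Train on the arrest records and ask for a "risk" score per neighbourhood.
X = neighbourhood.reshape(-1, 1)
model = LogisticRegression().fit(X, arrested)
print("Predicted risk, neighbourhood A:", model.predict_proba([[0]])[0, 1])
print("Predicted risk, neighbourhood B:", model.predict_proba([[1]])[0, 1])
# B scores roughly three times higher, despite identical offending rates.
```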

Even small-scale AI initiatives can have significant impacts when not properly scrutinised. Tweedie shared an example of smart scales that default to congratulating users for any weight loss.

“It could be terrible for someone who is dealing with disease or an eating disorder. Or it is completely inappropriate in the case of a child who should be growing and gaining weight,” Tweedie said.

“These assumptions about what a normal user looks like are also really problematic and really limiting. So thinking about who are the most vulnerable users in your cases, in your user base, and how could your project affect them negatively?”
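As a hypothetical sketch of the kind of default Tweedie describes (the functions and profile fields below are invented for illustration), feedback logic that assumes every user wants to lose weight can be contrasted with logic that only celebrates a change the user has actually asked for:

```python
# Hypothetical sketch: a default that assumes weight loss is always good news,
# versus feedback that checks the user's own goal first.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reading:
    previous_kg: float
    current_kg: float

@dataclass
class Profile:
    is_child: bool = False
    weight_goal: Optional[str] = None   # "lose", "gain", "maintain" or unknown

def naive_feedback(r: Reading) -> str:
    # Bakes in the assumption that every user is trying to lose weight.
    if r.current_kg < r.previous_kg:
        return "Congratulations on losing weight!"
    return "Keep working towards your goal."

def context_aware_feedback(r: Reading, p: Profile) -> str:
    # Neutral by default; only celebrates a change the user has asked for.
    if p.is_child or p.weight_goal is None:
        return "Your weight has been recorded."
    if p.weight_goal == "lose" and r.current_kg < r.previous_kg:
        return "You're moving towards your goal."
    if p.weight_goal == "gain" and r.current_kg > r.previous_kg:
        return "You're moving towards your goal."
    return "Your weight has been recorded."
```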

At her own organisation, the University of Melbourne, Tweedie explained this means considering the most vulnerable students and researchers and ensuring any AI or machine learning initiative won’t marginalise them further.
