The bots are coming… for middle management?

This week, BloombergQuint reported on the systems through which Amazon is ceding more of its corporate functions, such as its human-resources operations, to machines, using software not only to manage workers in its warehouses but also to oversee contract drivers, independent delivery companies and even the performance of other office functions.

This includes a highly automated system for hiring, ongoing support, complaint management and termination of its enormous fleet of flexible drivers, based on minutely recorded performance metrics.

Many workers complain about “unfair” treatment from the automated systems, with algorithms not programmed to take into account circumstances beyond the driver’s control, such as road conditions, weather and other obstacles, like locked delivery destinations or damaged storage lockers.

This is not the first instance of software failing to account for the real-world conditions faced by the people actually subject to it.

Algorithms are already making important decisions about our lives: determining which political advertisements we see, affecting careers by filtering job applications, influencing how police officers assess criminal activity and moving markets at lightning speed.

All of those decision-making systems are crafted by a human at some point, carrying over an inherent bias.

Machine Learning tools are built, or ‘trained’, by consuming vast amounts of data and ‘learning’ to recognise patterns. Through a continuous cycle of testing and refining, the system is supposed to learn what the ‘correct’ outcome looks like and, with that, be trusted to predict future outcomes.

However, all of these elements, from the data fed into the system to the structure of the algorithm itself and the definition of a ‘correct’ outcome, are determined by humans, creating algorithmic bias.
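To make those human choice points concrete, here is a minimal, hypothetical sketch in Python. It is not Amazon’s system; the metrics, labels and threshold are invented for illustration, but they show where people, not machines, decide what the model sees, what counts as ‘underperforming’ and when an automated action is triggered.

```python
# Hypothetical sketch: where human choices enter an automated performance model.
# Feature names, labels and the threshold are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Human choice 1: which metrics get recorded as features.
# Columns: deliveries_per_hour, late_deliveries, customer_complaints
X = np.array([
    [18, 1, 0],
    [22, 0, 0],
    [15, 4, 2],
    [20, 2, 1],
    [12, 6, 3],
    [25, 0, 0],
])

# Human choice 2: the definition of a 'correct' outcome (the label).
# 1 = "underperforming", 0 = "acceptable" -- a judgement made by people,
# with no column for weather, road conditions or a locked delivery locker.
y = np.array([0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Human choice 3: the threshold at which the system takes action.
TERMINATION_THRESHOLD = 0.8

new_driver = np.array([[16, 3, 1]])  # one driver's recorded metrics
risk = model.predict_proba(new_driver)[0, 1]
print(f"Predicted 'underperforming' probability: {risk:.2f}")
print("Flag for termination review" if risk >= TERMINATION_THRESHOLD else "No action")
```

Whatever bias sits in those three choices is carried straight through into every decision the system makes.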

And this bias has real-world consequences. This time last year, several US cities, including major centres such as Boston and San Francisco, banned the use of facial recognition software by police and local agencies due to inaccurate outputs and inherent algorithmic bias.

Many of the Amazon drivers mentioned in the BloombergQuint piece said they were often penalised for conditions outside their control and couldn’t find a human to speak to who could understand and empathise with their situation.

Amazon have said they understand there will be natural attrition in these situations and that a margin of error is calculated into their overall strategies.

Fine for your bottom line, but what is the human cost of that error?
