It will take an organisation-wide effort to recognise and mitigate the potential risks of applying advanced analytics and artificial intelligence to business operations, according to a new report from McKinsey & Company.

While the emerging technology could deliver a massive economic boost and improved customer outcomes, trusting machines to make decisions and automatically trigger actions could have disastrous repercussions, the consultants argue.

In a new online paper, Confronting the risks of artificial intelligence, authors Benjamin Cheatham, Kia Javanmardian, and Hamid Samandari argue that the entire organisation needs to be equipped to recognise risks specific to AI, a level of effort that far exceeds prevailing norms in most organisations today.

“Making real progress demands a multidisciplinary approach involving leaders in the C-suite and across the company; experts in areas ranging from legal and risk to IT, security, and analytics; and managers who can ensure vigilance at the front lines,” the authors write.

What exactly are the risks?

The report highlights the potential risks when algorithms go awry, such as privacy violations, discrimination, accidents, and manipulation of political systems. The consequences of such events range from reputational damage and revenue losses to regulatory backlash, criminal investigation, and diminished public trust.

There are also higher-stakes scenarios, such as a medical AI misdiagnosing a patient, inadvertently causing harm or loss of life.

As more processes are entrusted to machines, companies will be required to answer the question: how do you know your AI isn’t unintentionally causing harm?

“As the costs of risks associated with AI rise, the ability both to assess those risks and to engage workers at all levels in defining and implementing controls will become a new source of competitive advantage,” the authors write.

The emerging technology requires new ways of thinking about risk, compliance and public trust.

“Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organisational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines,” the authors write.

The report features a case study of a European bank working to apply advanced-analytics and AI capabilities to call-centre optimisation, mortgage decision making, relationship management, and treasury-management initiatives.

The European bank introduced a structured risk-identification process to recognise potential pitfalls. One risk identified by a team of leaders from business, IT, security, and risk management was the delivery of poor or biased product recommendations to consumers.

“Flawed recommendations could result in a significant amount of harm and damage, including consumer losses, backlash, and regulatory fines,” the authors write.

Once risks are identified, the consultants argue that company-wide controls should be introduced to guide the development and use of AI systems, ensure proper oversight, and establish strong policies, procedures, worker training, and contingency plans.

For example, to mitigate the risk of poor or biased recommendations, the European bank adopted a robust set of business principles detailing how and where machines could be used to make decisions affecting a customer’s financial health, and when a human being must be in the loop.
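The report does not spell out how such principles translate into code, but one common mechanism is a simple confidence gate: automate only when the model is sufficiently sure, and escalate everything else to a person. A minimal sketch in Python, with an illustrative threshold and function name rather than the bank’s actual rules:

```python
def route_recommendation(model_confidence: float, affects_financial_health: bool) -> str:
    """Decide whether a model recommendation may be actioned automatically.

    Illustrative policy only: decisions touching a customer's financial
    health, or low-confidence predictions, always go to a human reviewer.
    """
    AUTO_THRESHOLD = 0.95  # hypothetical cut-off set by governance, not data science

    if affects_financial_health or model_confidence < AUTO_THRESHOLD:
        return "human-review"
    return "auto-apply"


# A confident recommendation that affects financial health still requires
# a human in the loop under this policy.
print(route_recommendation(0.98, affects_financial_health=True))  # human-review
```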

Organisations can also turn to technology to enforce these principles, for example, tools that ensure data scientists consistently log model code, training data, and parameters chosen throughout the development life cycle.
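The report does not name particular tools, but open-source experiment trackers such as MLflow are one common way to enforce this kind of logging. A minimal sketch, using a synthetic model and an illustrative data path rather than the bank’s real pipeline:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real pipeline would reference the actual dataset snapshot.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=42)

params = {"n_estimators": 100, "max_depth": 5}

with mlflow.start_run(run_name="product-recommendation-v1"):
    # Record the exact hyperparameters chosen for this run.
    mlflow.log_params(params)
    # Record which data snapshot was used (path is illustrative).
    mlflow.log_param("training_data", "s3://bank-data/recommendations/2019-10-01.parquet")

    model = RandomForestClassifier(**params).fit(X_train, y_train)

    # Record validation performance and the fitted model itself for later audit.
    mlflow.log_metric("validation_accuracy", model.score(X_val, y_val))
    mlflow.sklearn.log_model(model, "model")
```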

The report also suggests introducing libraries that record explainability (how the machine arrived at its decision), report on model performance, and monitor data and models in production.
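The report leaves the choice of libraries open; one widely used option for the explainability piece is the open-source SHAP library, which attributes each prediction to the input features that drove it. A minimal sketch on a hypothetical model:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical model standing in for a production recommendation model.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribute each prediction to the input features that drove it, so every
# automated decision leaves an audit trail of how it was reached.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# In production these attributions would be logged alongside each prediction.
print(shap_values)
```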

The governance framework applies both to AI programs developed in-house and to those provided by third parties, for example, a SaaS fraud model the bank had adopted.

Worker training and awareness were also key components of the bank’s approach to AI risk.

“Employees receive comprehensive communications about where AI is being used; what steps the bank is taking to ensure fair and accurate decisions and to protect customer data; and how the bank’s governance framework, automated technology, and development tools work together,” the authors write.

The report notes the bank’s policies now require all stakeholders to conduct scenario planning “in case AI model performance drifts, data inputs shift unexpectedly, or sudden changes, such as a natural disaster, occur in the external environment”.
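The report does not specify how such drift would be detected, but a common building block for these contingency plans is a statistical comparison of live inputs against the training distribution. A minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy, with illustrative feature names, data, and threshold:

```python
import numpy as np
from scipy.stats import ks_2samp

def input_has_drifted(train_feature, live_feature, alpha=0.01):
    """Flag a feature whose live distribution differs significantly
    from the distribution seen at training time."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < alpha

# Synthetic example: live inputs have shifted relative to training data.
rng = np.random.default_rng(0)
train_income = rng.normal(loc=50_000, scale=10_000, size=5_000)
live_income = rng.normal(loc=58_000, scale=10_000, size=1_000)

if input_has_drifted(train_income, live_income):
    # In the bank's framework this would trigger the contingency plan,
    # e.g. alerting the model owner and falling back to human review.
    print("Data drift detected on feature 'income'; escalating for review.")
```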
