There are three things companies can do to mitigate risk when applying artificial intelligence, according to the latest McKinsey research. However, before applying solutions, it is vital to first understand what the risks are and the drivers behind them.

In the report, Confronting the risks of artificial intelligence, the McKinsey Global Institute authors suggest that by 2030, AI could deliver additional global economic output of US$13 trillion per year. While arguing that AI can improve lives and add business value in many ways, they warn that adverse knock-on effects, such as discrimination and privacy violations, warrant caution.

The article suggests there are serious potential consequences for organisations when AI falters, ranging from reputational damage and revenue losses to regulatory backlash, criminal investigation, and diminished public trust.

McKinsey research describes five main pain points that can give rise to AI risks.  The first three — data difficulties, technology troubles, and security snags — are related to what might be termed enablers of AI. The final two are linked with the algorithms and human-machine interactions that are central to the operation of the AI itself. 

Many organisations find themselves inundated by the increasing volume of unstructured data collected from the web, social media, mobile devices, sensors, and the Internet of Things. Data difficulties can arise when sensitive information hidden among anonymised data is revealed. This can happen in a medical context, where a patient’s name, redacted from one section of a medical record used by an AI system, might still appear in the doctor’s notes section of the same record. In this situation, business leaders need to stay in line with privacy rules, such as the European Union’s General Data Protection Regulation (GDPR), and otherwise manage reputational risk.
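To make that leakage pattern concrete, here is a minimal sketch, assuming a simplified record structure and field names that are not from the report, of a check that flags identifiers redacted from structured fields but still present in free-text notes. A production system would rely on proper named-entity recognition rather than simple string matching.

```python
import re

# Hypothetical patient record: the "summary" field has been redacted,
# but the free-text "doctor_notes" field has not.
record = {
    "summary": "Patient [REDACTED] presented with chest pain.",
    "doctor_notes": "Follow-up with Jane Smith scheduled for next week.",
    "known_identifiers": ["Jane Smith"],  # names removed from structured fields
}

def find_residual_identifiers(record):
    """Report identifiers that were redacted elsewhere but still appear in free text."""
    leaks = []
    for name in record["known_identifiers"]:
        # Simple case-insensitive match; a real system would use NER, not regex.
        if re.search(re.escape(name), record["doctor_notes"], re.IGNORECASE):
            leaks.append(name)
    return leaks

print(find_residual_identifiers(record))  # -> ['Jane Smith']
```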

Another risk to organisations is technology and process issues across the entire operating landscape that negatively impact the performance of AI systems. This happens when data inputs fail or are impeded, causing AI systems to produce erroneous outputs. For example, one major financial institution ran into trouble after its compliance software was unable to spot trading issues because the data feeds no longer included all customer trades.
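A basic reconciliation control can catch this class of failure. The sketch below is a simplified illustration, not the institution’s actual system; the feed and booking-system sources are assumed for the example.

```python
# Minimal sketch of a data-feed completeness check: reconcile the trades the
# surveillance feed received against the trades recorded in the booking system.

def check_feed_completeness(feed_trade_ids, booking_system_trade_ids):
    """Return the trades present in the booking system but missing from the feed."""
    missing = set(booking_system_trade_ids) - set(feed_trade_ids)
    if missing:
        # In practice this would raise an alert to the compliance team.
        print(f"ALERT: {len(missing)} trade(s) missing from surveillance feed: {sorted(missing)}")
    return missing

# Example usage with made-up trade IDs
check_feed_completeness(
    feed_trade_ids=["T-1001", "T-1002"],
    booking_system_trade_ids=["T-1001", "T-1002", "T-1003"],
)
```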

Security is another emerging risk: fraudsters can exploit the seemingly non-sensitive marketing, health, and financial data that companies collect to fuel AI systems. McKinsey cautions that when security precautions are insufficient, it is possible to stitch these threads together to create false identities. Although target companies, which may otherwise be highly effective at safeguarding personally identifiable information, are unwitting accomplices, they could still experience consumer backlash and regulatory repercussions.

Two significant risks inherent in the operation of AI itself are incorrectly formulated models and the problems that arise when humans and machines interact.

Firstly, misbehaving AI models can create problems when they deliver biased results, become unstable, or yield conclusions for which there is no actionable recourse for those affected by their decisions. This can happen, for example, if a population is underrepresented in the data used to train the model, potentially resulting in AI models unintentionally discriminating against disadvantaged groups by weaving together postcode and income data to create targeted offerings.
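One simple way to surface this kind of underrepresentation before training is to compare group shares in the training data with group shares in the population the model will serve. The sketch below is illustrative only; the group labels, population shares, and threshold are assumptions, not figures from the report.

```python
from collections import Counter

def representation_gaps(training_labels, population_shares, tolerance=0.5):
    """Flag groups whose share of the training data is less than `tolerance`
    times their share of the target population."""
    counts = Counter(training_labels)
    total = sum(counts.values())
    flagged = {}
    for group, expected_share in population_shares.items():
        observed_share = counts.get(group, 0) / total
        if observed_share < tolerance * expected_share:
            flagged[group] = (observed_share, expected_share)
    return flagged

# Example with made-up postcode groupings
print(representation_gaps(
    training_labels=["urban"] * 830 + ["regional"] * 150 + ["remote"] * 20,
    population_shares={"urban": 0.70, "regional": 0.20, "remote": 0.10},
))  # -> flags "remote" (2% of training data vs 10% of the population)
```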

Secondly, the McKinsey research identified the interface between people and machines as another critical risk area. In the data-analytics organisation, scripting errors, lapses in data management, and misjudgements in model-training data can easily compromise fairness, privacy, security, safety, and compliance.

The research highlights that accidents and injuries are possibilities if operators of heavy equipment, vehicles, or other machinery don’t recognise when systems should be overruled or are slow to override them because the operator’s attention is elsewhere — a distinct possibility in applications such as self-driving cars. 

Moreover, these are just the unintended consequences — without rigorous safeguards, disgruntled employees or external foes may be able to corrupt algorithms or use an AI application in malfeasant ways, McKinsey warns.

AI risk management: Three core principles

Understanding the five risks above is useful for identifying and prioritising them and their root causes. The McKinsey researchers said that if a firm knows where threats may be lurking, ill-understood, or simply unidentified, it will have a higher chance of mitigating them. The report also found that as the costs of risks associated with AI rise, the ability both to assess those risks and to engage workers at all levels in defining and implementing controls will become a new source of competitive advantage. Below are three core risk-mitigation strategies devised by McKinsey.

Clarity: Use a structured identification approach to pinpoint the most critical risks

The McKinsey researchers’ first core principle is to gain clarity by identifying the most critical AI risks within an organisation. They suggest gathering a diverse cross-section of managers focused on pinpointing and tiering problematic scenarios. This is a good way both to stimulate creative energy and to reduce the risk that narrow specialists or blinkered thinking will miss significant vulnerabilities.

This structured risk-identification process can clarify the most worrisome scenarios, and allow a firm to prioritise the risks encompassed, to recognise the controls that are missing, and to marshal time and resources accordingly. 

The report notes that organisations need not start from scratch with this effort. Over the past few years, risk identification has become a well-developed practice, and it can be adapted and deployed directly in the context of AI.

Breadth: Institute robust enterprise-wide controls

Secondly, the McKinsey researchers noted that it is crucial for an organisation to conduct a gap analysis, identifying areas in an existing risk-management framework that need to be deepened, redefined, or extended. This will allow a company to apply company-wide controls to guide the development and use of AI systems, ensure proper oversight, and put into place strong policies, procedures, worker training, and contingency plans. Without broad-based efforts, the odds rise that risk factors such as the five described previously will fall through the cracks.

Nuance: Reinforce specific controls depending on the nature of the risk

Lastly, as crucial as enterprise-wide controls are, according to McKinsey they are rarely sufficient to counteract every possible hazard. Another level of rigour and nuance is often needed, the research said. Organisations will need a mix of risk-specific controls, and they are best served by implementing them through protocols that ensure the controls are in place, and followed, throughout the AI-development process.

The requisite controls will depend on factors such as the complexity of the algorithms, their data requirements, the nature of human-to-machine (or machine-to-machine) interaction, the potential for exploitation by bad actors, and the extent to which AI is embedded into a business process. 

The authors of the McKinsey paper stated that conceptual controls, starting with a use-case charter, are sometimes necessary. So are specific data and analytics controls, including transparency requirements, as well as controls for feedback and monitoring, such as performance analysis to detect degradation or bias.
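As a concrete, if simplified, illustration of the monitoring controls mentioned above, the sketch below tracks a model’s accuracy over successive evaluation windows and flags any window that falls too far below an agreed baseline. The baseline, threshold, and weekly figures are assumptions for the example, not values from the report.

```python
# Minimal sketch of a performance-degradation monitor for a deployed model.

def detect_degradation(window_accuracies, baseline, max_drop=0.05):
    """Return the indices of evaluation windows where accuracy has fallen
    more than `max_drop` below the agreed baseline."""
    return [
        i for i, acc in enumerate(window_accuracies)
        if baseline - acc > max_drop
    ]

# Example with made-up weekly accuracy figures
weekly_accuracy = [0.91, 0.90, 0.89, 0.84, 0.82]
degraded_weeks = detect_degradation(weekly_accuracy, baseline=0.90)
if degraded_weeks:
    print(f"Model performance degraded in weeks: {degraded_weeks}")  # -> [3, 4]
```

The same pattern extends to bias monitoring: compute the metric per group in each window and alert when the gap between groups widens beyond a tolerance.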
