Data privacy and security are viewed as primary barriers to AI implementation, according to a recent Gartner survey (see Survey Analysis: Moving AI Projects From Prototype to Production). Yet few organizations face these issues head-on. Risk management is generally an afterthought in AI projects, much as it is across IT.

AI operates as a “black box” in most organizations. Gaining clarity about AI models is the first step toward the context needed for risk management. AI risk management poses new operational requirements that are not well understood, and conventional controls do not sufficiently ensure AI’s trustworthiness, security, and reliability.

After extensive consultations with practitioners across industry, my colleagues Jeremy D’Hoinne, Anthony Mullen, and I have just published Top 5 Priorities for Managing AI Risk Within Gartner’s MOST Framework.

Our research recommends that organizations adopt Gartner’s MOST framework (see Figure 1 below). First, they must form cross-functional teams with a vested interest in AI outcomes, such as those in legal, compliance, data and analytics, and security and privacy, to work together to:

  1. Capture the extent of exposure by inventorying AI used in the organization and ensure the right level of explainability.
  2. Drive staff awareness across the organization by leading a formal AI risk education campaign.
  3. Eliminate exposures of internal and shared AI data by adopting data protection and privacy programs.
  4. Support model reliability, trustworthiness, and security by incorporating risk management into model operations.
  5. Adopt specific AI security measures against adversarial attacks to ensure resistance and resilience.
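To make priority 1 concrete: an AI inventory can start as nothing more than a structured register of each model, its owner, and its explainability status. The sketch below is purely illustrative; the field names and the `high_risk_models` helper are hypothetical conveniences, not a Gartner-prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    """One entry in a hypothetical AI model inventory."""
    name: str
    owner: str
    business_use: str
    data_sources: list = field(default_factory=list)
    # e.g. "interpretable", "post-hoc", "black box", "unknown"
    explainability: str = "unknown"

def high_risk_models(inventory):
    """Flag models whose explainability is still opaque or undocumented."""
    return [m.name for m in inventory
            if m.explainability in ("black box", "unknown")]

inventory = [
    AIModelRecord("churn-predictor", "marketing", "retention scoring",
                  ["crm"], explainability="post-hoc"),
    AIModelRecord("fraud-detector", "security", "transaction screening",
                  ["payments"], explainability="black box"),
]

print(high_risk_models(inventory))  # ['fraud-detector']
```

Even a register this simple captures the extent of exposure and makes gaps in explainability visible, which is the starting point for the other four priorities.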

We are all left to wonder why more alarms didn’t ring during the long, widespread Russian incursion against U.S. government agencies and enterprises. There are certainly plenty of enterprise security systems installed that use AI to help detect abnormal behavior by users, networks, and endpoints. Perhaps if some of these AI risk management measures had been applied to those existing AI security systems, these organizations would not have been caught off guard.

Just because AI risk management is difficult doesn’t mean it should be delayed. Cooperation and goal setting across business lines is a prerequisite for success, and that is always a difficult proposition given pre-existing mandates and limited bandwidth. The alternative of not moving forward, however, is an even more perilous risk.
