Australian IBM cybersecurity engineers have developed an artificial intelligence (AI) system to analyse network connections and employee communications at an enterprise scale. The model detects changes in users’ behaviour and can automatically trigger investigations even when the changes occur across multiple platforms.

The method addresses one of cybersecurity’s biggest challenges: the insider threat. IBM research found that malicious or criminal attacks, which often rely on methods like phishing and social engineering, were the root cause of 52 per cent of data breaches in Australia.

The new IBM solution, developed in the company’s Gold Coast cybersecurity lab as part of a hackathon, uses AI to monitor changes in employee behaviour and flag indicators of compromise. It debuted to the industry at last week’s Australian Cyber Conference in Melbourne as a demonstration of what is possible, but it is not something that can be bought directly from IBM.

Currently known as “QRadar Insider Threat Detector with Watson”, it uses IBM’s AI platform, Watson, to analyse user-generated content – like emails, Word documents, and Slack messages – to detect both the tone of that content and employees’ typical behaviour, or “personalities”.

“Taking those two [Watson] services maps really nicely to the insider threat space,” Holly Wright, a software engineer and product owner at IBM Security, told Which-50.


IBM Watson’s Tone Analyser can detect when employees’ content is out of alignment with their typical actions or personality, identifying potential accidental or malicious threats, Wright said.

“If you see a drastic change in someone’s personality or the content that they’re outputting that could be a compromised account. Or on the malicious side … if we can detect if someone is all of a sudden writing all these angry messages maybe we want to watch their account more closely.”

The technology needs AI to operate at an enterprise scale because it is infeasible to have a human analyst monitoring content at that volume; nor, Wright says, would it be ethical to use humans for the task.

“You don’t want an analyst sitting there going through people’s individual messages. You just want to know that sort of meta data level [of tone].”

Humans become involved only when a certain threshold is met, Wright says, protecting privacy and alleviating the scale challenges.
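The escalation logic Wright describes (analysts see only tone metadata, and are involved only past a threshold) could be sketched roughly as follows. This is a minimal illustration, not IBM’s actual implementation; the class name, score history, and z-score threshold are all assumptions.

```python
from dataclasses import dataclass, field
from statistics import mean, stdev

@dataclass
class UserToneProfile:
    """Per-user baseline of tone scores (e.g. anger, 0.0-1.0) from an analyser.

    Raw message text is never stored, only the tone metadata,
    mirroring the privacy approach described in the article.
    """
    scores: list = field(default_factory=list)

    def update(self, score: float) -> None:
        self.scores.append(score)

    def is_anomalous(self, score: float, z_threshold: float = 3.0) -> bool:
        # Escalate to a human analyst only when the new score deviates
        # sharply from this user's own baseline.
        if len(self.scores) < 5:
            return False  # not enough history to judge a deviation
        mu, sigma = mean(self.scores), stdev(self.scores)
        if sigma == 0:
            return score != mu
        return (score - mu) / sigma > z_threshold
```

A user whose historical anger scores hover around 0.1 would trip the threshold on a score of 0.9, while an ordinary 0.11 would pass unremarked, so analysts see only the drastic changes Wright mentions.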

In an early demo of the system, IBM engineers were able to track tone and sentiment across several platforms, including Facebook and Gmail. When a sensitive document was uploaded via Gmail, an offense was automatically triggered.

“It was being able to use that [user behaviour analysis] across multiple platforms which was the really beautiful part. You’re not just writing rules for Facebook messages or Gmail or whatever it is, you’re able to analyse user behaviour across any platform.”
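The cross-platform point Wright makes, writing behaviour rules once rather than per platform, amounts to normalising events from every source into a common schema. A hypothetical sketch (the event fields and threshold are assumptions for illustration, not the product’s design):

```python
from dataclasses import dataclass

@dataclass
class UserEvent:
    """A platform-agnostic record of one piece of user-generated content."""
    user: str
    platform: str       # e.g. "gmail", "facebook", "slack"
    anger_score: float  # tone score from an analyser, 0.0-1.0

def flag_events(events: list, threshold: float = 0.8) -> list:
    # One rule, applied uniformly regardless of which platform
    # the content came from.
    return [e for e in events if e.anger_score > threshold]
```

Because Gmail, Facebook, and Slack activity all arrive as the same `UserEvent` shape, a single behaviour rule covers any platform, which is the “really beautiful part” Wright describes.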

Wright says AI is an increasingly important tool in cybersecurity because it can help address skills shortages and operate at scale. The technology can be particularly useful in augmenting analysts’ roles by distilling information and recommending procedures.

“That’s where it can be really useful,” Wright said. “Just being able to give people more efficient insight on what’s going on and what they can do next.”
