Artificial intelligence (AI) generates a lot of buzz in cybersecurity these days as a super-tool for both defense and attack. But gauging whether AI heightens or lowers cyber risk takes a nuanced understanding of the problem – a job for FAIR and Jack Freund, RiskLens Risk Science Director and coauthor, with Jack Jones, of the FAIR book Measuring and Managing Information Risk.
Jack sees two sides to the AI/cyber risk discussion: risks from attackers and risks that fall out of organizations’ own use of AI.
Risk from AI-Powered Attackers
FAIR analysis in an organization often starts with identifying threat communities and assessing their relative strengths as a first step to getting a fix on the organization’s vulnerability to attack – and AI increases both the numbers and the strengths of those communities. “In other words, you can move from a basic cyber criminal attacker group into a nation state level by automating and using artificial intelligence,” he says.
Risks from Deploying AI (or Not)
Jack sees AI as a double-edged sword: When deploying it, “you’re far more likely to make mistakes by automating things that are broken or hard coding biases into the way you think about these things.” What’s especially tricky is that this form of risk “emerges from the code itself in ways that may not be fully comprehensible as the code is being deployed.”
On the other hand, Jack sees the largest risks around AI as related to competitive advantage: “If you don’t use AI in your business or find the right place to employ AI in your digital transformation efforts, then you have risks associated with not meeting business objectives, which is probably the largest one that you need to worry about.”
Jack's bottom-line advice to FAIR practitioners: Stay flexible in your thinking when it comes to AI. Instead of treating AI as a tool, you may want to analyze it as a threat community, from both an external and an internal point of view.
Q: Artificial intelligence (AI) is getting a lot of buzz in cybersecurity these days, but it seems like it’s mostly seen as an arms race: The attackers are working with AI, so the defenders need to arm up with artificial intelligence. What would be a risk-based approach to AI? From a FAIR point of view, what are the risks associated with AI?
A: I think there are two sides to the AI risk discussion.
There are the attackers using AI, and the other side of it is what we as an organization need to do to automate and improve our operations using artificial intelligence.
So, to start with the adversarial point of view. From a threat perspective, I think threat communities using AI definitely sit at a higher threat capability value. In other words, you can move from a basic cyber criminal attacker group into a nation state level by automating and using artificial intelligence.
You can automate attacks, you can scale and attack more things quickly and more intelligently. So that makes them dangerous.
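As an aside for FAIR practitioners, the effect Jack describes can be sketched numerically: in FAIR, vulnerability is the probability that an attacker's threat capability exceeds the target's resistance strength, so shifting a threat community's capability distribution upward raises vulnerability. The Python sketch below is purely illustrative – the ranges are hypothetical calibrated estimates, not figures from the interview.

```python
import random

def vulnerability(tcap_range, rs_range, trials=100_000):
    """Estimate FAIR-style vulnerability: the fraction of simulated
    attacks in which threat capability (TCap) exceeds resistance
    strength (RS). Inputs are (low, high) bounds, sampled uniformly
    here for simplicity (real analyses often use PERT-like curves)."""
    wins = 0
    for _ in range(trials):
        tcap = random.uniform(*tcap_range)
        rs = random.uniform(*rs_range)
        if tcap > rs:
            wins += 1
    return wins / trials

random.seed(7)
# Hypothetical estimates on a 0-100 capability percentile scale.
baseline = vulnerability(tcap_range=(20, 60), rs_range=(50, 80))
ai_boosted = vulnerability(tcap_range=(45, 85), rs_range=(50, 80))
print(f"baseline vulnerability:   {baseline:.2f}")
print(f"AI-boosted vulnerability: {ai_boosted:.2f}")
```

With these assumed ranges, boosting the community's capability band moves vulnerability from a few percent to roughly half – the "basic criminal to nation state" jump expressed quantitatively.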
The other side of it that’s really important to understand is that if you don’t use AI in your business or find the right place to employ AI in your digital transformation efforts, then you have risks associated with not meeting business objectives which is probably the largest one that you need to worry about.
The other thing that’s really important is, when you think about utilizing your own artificial intelligence instances in your company, you’re far more likely to make mistakes by automating things that are broken or hard coding biases into the way you think about these things.
So a lot of mature organizations will manage AI models using their existing model risk management practices as well – asking what are the risks associated with our own AI models, are we managing them the correct way, and are we getting the right outcomes when we look for risk that way.
Q: It’s partly the issue of a new form of insider threat?
A: Yes, and this is hard coded, and sometimes it’s emergent risk, too. So instead of it being risk perpetrated by an explicit attacker or an insider, it emerges from the code itself in ways that may not be fully comprehensible when you look at the code as it’s being deployed.
Q: How would you recommend that FAIR analysts learn more about AI in order to better incorporate it into their analyses?
A: This could be one of those things where you have to employ some of the advanced features of the FAIR methodology. Instead of thinking about AI as a tool, sometimes – because of the emergent behavior you may see in artificial intelligence – it becomes its own threat community. On the adversarial side it could be modeled that way, and on the internal side it could be modeled that way in terms of error, internal bias and problems with the way the code is written. So you may need to define a new threat community, and the variables associated with it, in order to manage that appropriately.
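Jack's suggestion to define a new threat community with its own variables could be captured, for illustration only, as a simple data structure. The names and ranges below are hypothetical assumptions for the sketch, not FAIR Institute definitions, but they show how both the external (AI-augmented attacker) and internal (emergent model error) cases he mentions might be profiled side by side.

```python
from dataclasses import dataclass

@dataclass
class ThreatCommunity:
    """Minimal FAIR-style threat community profile.
    Ranges are (low, high) calibrated estimates."""
    name: str
    threat_event_frequency: tuple  # expected events per year
    threat_capability: tuple       # percentile on a 0-100 scale

# External case: an attacker group whose capability is boosted by AI tooling.
ai_augmented_attacker = ThreatCommunity(
    name="AI-augmented cyber criminal",
    threat_event_frequency=(2, 12),   # hypothetical estimate
    threat_capability=(60, 90),       # hypothetical estimate
)

# Internal case: emergent risk from the organization's own AI models,
# modeled as a non-malicious "threat community" of error and bias.
internal_ai_error = ThreatCommunity(
    name="Internal AI model error/bias",
    threat_event_frequency=(1, 6),    # hypothetical estimate
    threat_capability=(30, 70),       # hypothetical estimate
)
```

Profiles like these would then feed the usual FAIR factors (loss event frequency, vulnerability, loss magnitude) just as any other threat community would.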