A group of cybersecurity application vendors, banded together as Debate Security, has released a white paper, “Cybersecurity Technology Efficacy: Is Cybersecurity the New ‘Market for Lemons’?”, a thoughtful and thorough takedown of the cybersecurity technology market. It’s based on interviews with 100+ CISOs, vendors and other market participants, and its key conclusions are:
Cybersecurity is failing because the technology is not as effective as it needs to be.
Proof point: Spending goes up but so do loss events.
The underlying problem is economics, not technology.
The market is crowded with vendors and there’s no reliable way to tell good from bad. CISOs are too short of time, lacking in information and overly influenced by herd thinking to seriously evaluate products. The end result is a known phenomenon of markets: innovation can’t prevail and only the lemons, the ineffective products, survive.
Independent, transparent assessment of technology efficacy is proposed.
The majority of the interviewees agreed that they’d like to see some sort of rating agency stamp its approval on the “efficacy” of security products. As a model, the report suggests GSMA, the group that certifies cellphone makers for network interoperability, or perhaps a government regulator.
However, the interviewees weren’t clear on what “efficacy” would mean. That clarity “doesn’t exist because most organizations don’t have the capacity to measure it,” the report states. It offers its own definition of efficacy but keeps it high-level, with four dimensions: Capability to deliver its mission, Quality of design, Practicality of use, and Provenance of the vendor (to make sure the vendor isn’t itself a risk).
The Debate Security report is making some waves in the security world – the Wall Street Journal recently covered it under the headline “Security Experts Alarmed by ‘Broken’ Cyber Market” (subscription required). But is their advocacy on the mark?
We asked our go-to guru, Jack Jones, creator of the FAIR™ model for cyber risk quantification and Co-Founder and Chief Risk Scientist for RiskLens, for his take. Jack thinks the group got much of its criticism of the high-hype, low-information market right, and says “I’d like to believe that independent and transparent testing could accomplish the objective” of certifying security products. But…
“I’d use a different definition of efficacy based on my belief that the value proposition of any cybersecurity control is its effect on the frequency and/or magnitude of loss. After all, the only reason for any cybersecurity control to exist is to help organizations manage (directly or indirectly) the frequency/magnitude of loss.
“This seems to align reasonably well with the Debate Security “Capability” factor. Their other factors (Quality, Practicality, and Provenance) are important buyer/user experience considerations, but I think they may muddy the waters regarding efficacy.
“More importantly, I don’t believe cybersecurity is failing because of bad technology. I believe many security technologies are pretty effective when applied appropriately and reliably.
“Instead, I’d submit that cybersecurity is failing because:
>>”First, organizations tend to be ineffective at measuring risk, which means they aren’t good at appropriately prioritizing the problems they focus on, and
>>”Second, until we begin to evaluate the efficacy of our security efforts (whether they be technological, policy related, or procedural) in terms of their effect on the frequency and/or magnitude of loss, we will fail to understand and appropriately apply the solutions at our disposal.”
Risk measurement in financial terms is the core deliverable of FAIR analysis, and it powers the RiskLens platform’s Risk Treatment Analysis capability for determining which controls will deliver the most risk reduction in financial terms. While bringing clarity to the security controls market is a worthy cause, the means for “buyer beware” already exists through the FAIR standard.
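To make Jack’s definition of efficacy concrete, here is a minimal sketch of the idea that a control’s value is its effect on the frequency and/or magnitude of loss. This is an illustrative Monte Carlo toy, not the FAIR model or the RiskLens platform: all numbers (event frequencies, loss ranges) are invented assumptions, and the distributions are deliberately simplistic.

```python
import random

def simulate_ale(freq_mean, loss_min, loss_max, years=10_000, seed=1):
    """Rough Monte Carlo estimate of annualized loss exposure (ALE).

    freq_mean -- average number of loss events per year (assumed)
    loss_min/loss_max -- per-event loss drawn uniformly (illustrative only)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(years):
        # Approximate a Poisson event count with 100 small Bernoulli trials
        events = sum(1 for _ in range(100) if rng.random() < freq_mean / 100)
        total += sum(rng.uniform(loss_min, loss_max) for _ in range(events))
    return total / years

# Baseline scenario: ~2 loss events/year, $50k-$500k per event (assumed)
baseline = simulate_ale(freq_mean=2.0, loss_min=50_000, loss_max=500_000)

# Same scenario with a control assumed to halve event frequency
# and cap loss magnitude at $250k
with_control = simulate_ale(freq_mean=1.0, loss_min=50_000, loss_max=250_000)

risk_reduction = baseline - with_control
print(f"Baseline ALE:     ${baseline:,.0f}")
print(f"ALE with control: ${with_control:,.0f}")
print(f"Risk reduction:   ${risk_reduction:,.0f}")
```

Expressed this way, the control’s “efficacy” is simply the dollar-denominated gap between the two loss estimates, which is also the kind of comparison that lets a buyer rank competing controls by risk reduction rather than by vendor claims.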