Beyond the ENISA Score—When Professional Judgment Wins
The ENISA incident severity methodology is a useful framework. It's systematic, it forces rigor, and it gives you a structured way to explain a difficult decision to stakeholders or regulators. But it's not the whole story.
I built the ENISA Incident Severity Helper because scoring is valuable—it anchors the conversation and prevents gut-feel decisions. Yet after running dozens of incidents through the methodology, I've noticed a consistent pattern: the score is rarely the final word.
Where the Score Breaks Down
Context the Framework Misses
The ENISA methodology operates on data characteristics and technical factors. But real incidents live in organizational and human context that the numbers don't capture.
Consider two scenarios, both scoring "High" (SE ≈ 3.2):
Scenario A: A customer list with email addresses and purchase history leaked for 6 hours before a developer patched the exposure. The data was clearly visible in logs, but there was no evidence it had been accessed. Organization: mature security team, clear incident response playbook, transparent communication to affected parties.
Scenario B: The same data leaked for 48 hours. Less mature organization, delayed incident discovery, affected individuals found out via a third party before the company notified them.
Both hit the same severity score. The regulatory and reputational consequence? Completely different.
Unintelligibility Doesn't Always Help as Much as It Should
The framework gives strong credit for data encryption without key compromise—unintelligible data is treated as substantially lower risk. Technically correct, but...
If your encryption key is available (say, in the same repository), or if the data becomes intelligible after cross-matching with publicly available sources, you've got a serious problem that the "unintelligible" flag won't catch.
Similarly, data can be "somewhat intelligible": partially encrypted, or obfuscated but still meaningful to someone with domain knowledge. A binary intelligible/unintelligible flag can't capture that middle ground.
Mitigating Factors Are Subjective
The DPC adjustment range (-3 to +3) is useful for baking in context like volume, data age, or inaccuracy. But applying these adjustments is where the art comes in. A developer and a privacy lawyer might reasonably disagree on whether an adjustment of -1 or -2 is warranted for "already partially public data."
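To make the stakes of that disagreement concrete, here's a small sketch. The base DPC of 3, EI of 1, and CB of 0 are assumptions for illustration; the point is that a one-point difference in the adjustment can move the incident across a severity band:

```python
# Sketch: a one-point disagreement on the DPC adjustment shifts the band.
# Base values are illustrative assumptions, not prescribed by ENISA.

def severity(dpc_base: float, adjustment: float, ei: float = 1.0, cb: float = 0.0) -> float:
    # The adjustment is applied to DPC before the multiplication.
    return (dpc_base + adjustment) * ei + cb

def band(se: float) -> str:
    """ENISA severity bands: SE < 2 Low, 2-3 Medium, 3-4 High, >= 4 Very High."""
    if se < 2:
        return "Low"
    if se < 3:
        return "Medium"
    return "High" if se < 4 else "Very High"

# "Already partially public data": is -1 or -2 the right adjustment?
developer_view = severity(3.0, adjustment=-2)  # 1.0
lawyer_view = severity(3.0, adjustment=-1)     # 2.0

print(band(developer_view), band(lawyer_view))  # Low Medium
```

Same incident, same facts, two defensible adjustments, two bands. When the band drives decisions like regulator notification, that subjectivity is worth surfacing explicitly rather than burying in a single number.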
So Why Use It At All?
Despite these limitations, the ENISA approach is still worth using because:
1. It forces you to be explicit. Rather than a vague "this feels bad," you've documented why—DPC level, EI factors, CB elements. That clarity is valuable when explaining decisions.
2. It's defensible. If you're ever questioned by a regulator, you can show you applied a recognized methodology systematically. That's stronger than anecdotal judgment.
3. It's consistent. When multiple people run the same incident through the framework, you usually get results in the same ballpark. That consistency helps calibrate your team.
4. It's a starting point, not the ending point. The score is your structured assessment. Then you layer on judgment: organizational context, stakeholder risk tolerance, reputational factors, business continuity, and the specific regulatory environment.
The Real Skill
The real skill in incident management isn't hitting the "right" ENISA score. It's knowing how to interpret the score in light of everything else you know about your organization, your data, and your risk profile.
A severity score of 2.8 (borderline Medium/High) might demand immediate ICO notification in a financial services firm with strict governance policies. The same score at a tech startup might justify a different path—not because the framework is wrong, but because the organizational context and risk appetite are different.
What I'd Add to ENISA
If I were revising the framework, I'd add:
- A "context multiplier" for organizational factors (readiness, transparency, prior incidents)
- A qualitative section for factors the score misses (brand impact, regulatory history, threat actor sophistication)
- Guidance on cross-matching risk: how intelligibility changes when data is combined with public or other leaked sources
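As a thought experiment, the context multiplier might look something like this. The factor names and weights are purely hypothetical, my own illustration rather than anything in the ENISA methodology:

```python
# Hypothetical "context multiplier" sketch: organizational factors scale
# the base ENISA score. Weights are invented for illustration only.

def context_multiplier(readiness: bool, transparent: bool, prior_incidents: bool) -> float:
    m = 1.0
    if not readiness:
        m += 0.15      # immature response capability raises effective risk
    if not transparent:
        m += 0.15      # delayed or opaque communication raises it further
    if prior_incidents:
        m += 0.10      # regulatory history compounds the exposure
    return m

base_se = 3.2  # the same base score as Scenarios A and B

# Scenario A: mature team, transparent comms, clean history.
print(base_se * context_multiplier(readiness=True, transparent=True, prior_incidents=False))

# Scenario B: delayed discovery, third-party disclosure, prior incidents.
print(base_se * context_multiplier(readiness=False, transparent=False, prior_incidents=True))
```

Under these invented weights, Scenario A stays at 3.2 while Scenario B climbs toward 4.5, which finally separates the two scenarios the base formula couldn't tell apart.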
But for now, the existing framework is a solid foundation. Use it. Understand where it falls short. Then apply your judgment.
That combination—structured methodology + professional judgment—is what separates a solid incident response from one that misses the mark.