In the consulting world, the one consistent deliverable across all engagement types is the report, which often becomes a focal point for clients. It not only raises awareness of what was found, but can also serve as the justification clients desperately need to take back to leadership for additional resources, whether people, processes, or technology. Yet despite this uniformity in deliverable expectations, there is a significant pain point: determining the severity of the findings in the report. The process is rife with inconsistencies and challenges that can undermine the credibility of both the report and the consultancy performing the work.
Based on my experience in the industry, I have synthesized my observations over the years into three points:
Arbitrary Severity Ratings
One of the most glaring issues is that severity ratings often seem arbitrary. They appear to depend heavily on who is writing the report and their subjective opinions. Different consultants can evaluate the same vulnerability but assign wildly different severities based on their interpretation and experience. This subjectivity can lead to discrepancies that confuse clients and diminish the perceived reliability of the findings.
Blindly Following Pre-Defined Tool Severities
Another common issue is the practice of directly copying severity ratings from automated tools. These tools might label a vulnerability as critical even if there is no public exploit, no significant potential impact, or other mitigating factors. This approach ignores the nuanced context of each finding, leading to exaggerated or misplaced severity ratings that don’t accurately reflect the actual risk to the client’s environment.
Lack of Context and Transparency
Clients often challenge the designated severities, and rightfully so, especially when they seem misaligned with their understanding of their own environment. When consultants can’t clearly explain or justify how they reached a particular severity conclusion, it undermines trust. This lack of context and transparency can leave consultants stumbling to defend their assessments, creating friction and dissatisfaction.
My recommendation and a potential solution to this problem?
A More Rigorous Approach: Combining CVSS 2.0 and Microsoft DREAD
To address these issues, I have adopted a more structured and defensible method for assigning severity ratings. This approach combines two well-established industry scales: the CVSS 2.0 scoring system and Microsoft DREAD. Together they provide a comprehensive framework for evaluating vulnerabilities, ensuring that both the assessments and the approach used to determine the severity of each finding are consistent and transparent.
Understanding CVSS 2.0 and DREAD
CVSS 2.0 (Common Vulnerability Scoring System) is a standardized framework used to assess the severity of security vulnerabilities. It evaluates vulnerabilities based on several metrics, including the access vector, attack complexity, authentication required, confidentiality impact, integrity impact, and availability impact.
Microsoft DREAD is another widely used model that assesses security risks based on five factors: damage potential, reproducibility, exploitability, affected users, and discoverability. This model helps in understanding the broader impact of a vulnerability beyond just technical metrics.
Integrating Both Models
Combining CVSS 2.0 and Microsoft DREAD allows you to evaluate vulnerabilities across five independent risk factors, each assessed at one of three levels. Where direct measurement is not possible, values can be estimated based on expert opinion and plausible real-world scenarios.
The Five Risk Factors
Access Vector: This measures the network location from which the attack originates, considering the following:
- External – the attack can be accomplished anywhere from the Internet
- Internal/Adjacent – the attack can only be accomplished within the client’s environment
- Local – the attack can only be accomplished on the system itself
Attack Feasibility: This assesses the extent to which the attack has been demonstrated or publicized, considering the following:
- Demonstrated – you carried out the attack and provided evidence
- Not Demonstrated – you have evidence showing the attack may be possible but could not carry it out due to potential negative impacts
- Theoretical – you were unable to gather the evidence needed to determine whether the attack can be demonstrated, but it remains a potential risk
Authentication: This defines the credentials needed for the attack to be successful (if any), considering the following:
- None – no credentials are needed
- User – any valid user credentials can be used
- Privileged – only credentials with elevated privileges can be used
Compromise Impact: This evaluates the level of control an attacker can achieve in a successful attack, considering the following:
- Complete – complete compromise of the environment
- Partial – partial compromise of the environment
- Trivial – only compromises the one system the attack was carried out on and has no additional impacts
Business Value: Lastly, this considers the type of data at risk, or operational impact, from the vulnerability, considering the following:
- Crucial – complete business compromise and/or loss of operations, potentially compromising safety
- System – only the particular system or application is impacted
- Trivial – no real impact to business or operations, informational
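To make the five factors concrete, here is a minimal sketch in Python. The structure and names are my own illustration, not part of either standard; each factor's levels are scored from 3 for the first bullet in each list above down to 1 for the last, anticipating the calculation described next.

```python
# Sketch of the five risk factors and their three levels.
# Names and the 3/2/1 scores are illustrative, with 3 = most severe.
FACTOR_LEVELS = {
    "access_vector":      {"External": 3, "Internal/Adjacent": 2, "Local": 1},
    "attack_feasibility": {"Demonstrated": 3, "Not Demonstrated": 2, "Theoretical": 1},
    "authentication":     {"None": 3, "User": 2, "Privileged": 1},
    "compromise_impact":  {"Complete": 3, "Partial": 2, "Trivial": 1},
    "business_value":     {"Crucial": 3, "System": 2, "Trivial": 1},
}
```

Keeping the levels in a single table like this also makes it easy to document, in the report itself, exactly which level was chosen for each factor and why.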
Calculating the Risk Rating
By scoring each of these factors and summing the results for each identified finding, the overall risk rating can be determined. I use a simple 1-3 scale for each factor (3 for the first bullet in each list above, 1 for the last), so each finding's total falls between 5 and 15. The risk ratings can then be categorized as follows:
- Critical: Total risk factor rating of 15
- High: Total risk factor rating of 14
- Medium: Total risk factor rating of 13
- Low: Total risk factor rating between 10-12
- Informational: Total risk factor rating of 9 or lower
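As a rough illustration, the whole calculation fits in a few lines. The function below is a hypothetical helper, not a standard API: it simply sums the five 1-3 factor scores and maps the total onto the bands above.

```python
# Illustrative sketch: sum the five 1-3 factor scores and map the
# resulting 5-15 total onto the severity bands defined above.
def severity(access_vector, attack_feasibility, authentication,
             compromise_impact, business_value):
    total = (access_vector + attack_feasibility + authentication
             + compromise_impact + business_value)
    if total == 15:
        return "Critical"
    if total == 14:
        return "High"
    if total == 13:
        return "Medium"
    if total >= 10:
        return "Low"
    return "Informational"

# Example: external (3), demonstrated (3), no credentials needed (3),
# partial compromise (2), system-level business value (2) -> total of 13.
print(severity(3, 3, 3, 2, 2))  # Medium
```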
Benefits of This Approach
This structured approach has proven robust and defensible, at least in my experience. Clients have rarely pushed back against severity ratings when they see a clear, evidence-based explanation of how each rating was determined. This transparency and rigor help build trust and ensure that findings are taken seriously and acted upon appropriately.
Conclusion
In the consulting world, adopting a more structured approach to determining severity ratings can significantly improve the consistency and credibility of reports. By combining CVSS 2.0 and Microsoft DREAD, you can provide clear, evidence-based justifications for each severity rating, reducing client pushback and ensuring that vulnerabilities are accurately assessed.