Date Published: June 8, 2022
Author(s)
Peter Mell (NIST), Jonathan Spring (CERT/CC at Carnegie Mellon University), Srividya Ananthakrishna (Huntington Ingalls Industries), Francesco Casotto (Cisco), Dave Dugal (Juniper), Troy Fridley (AcuityBrands), Christopher Ganas (Palo Alto Networks), Arkadeep Kundu (Cisco), Phillip Nordwall (Dell), Vijayamurugan Pushpanathan (Schneider Electric), Daniel Sommerfeld (Microsoft), Matt Tesauro (Open Web Application Security Project), Christopher Turner (NIST)
Announcement
Calculating the severity of information technology vulnerabilities is important for prioritizing remediation and understanding the risk a vulnerability poses. The Common Vulnerability Scoring System (CVSS) is a widely used approach for evaluating the properties that lead to a successful attack and the effects of successful exploitation. CVSS is managed under the auspices of the Forum of Incident Response and Security Teams (FIRST) and is maintained by the CVSS Special Interest Group (SIG). Unfortunately, ground truth upon which to base CVSS measurements has not been available. Thus, the incident response experts of the CVSS SIG maintain the equations by drawing on their own expert opinion.
This work evaluates the accuracy of the CVSS “base score” equations and shows that they represent the CVSS maintainers' expert opinion to the extent described by these measurements. NIST requests feedback on the approach, the significance of the results, and any CVSS measurements that should have been conducted but were not included within the initial scope of this work. Finally, NIST requests comments on sources of data that could provide ground truth for these types of measurements.
NOTE: A call for patent claims is included on page iv of this draft. For additional information, see Information Technology Laboratory (ITL) Patent Policy – Inclusion of Patents in ITL Publications.
Abstract
This work evaluates the validity of the Common Vulnerability Scoring System (CVSS) Version 3 “base score” equation in capturing the expert opinion of its maintainers. CVSS is a widely used industry standard for rating the severity of information technology vulnerabilities; it is based on human expert opinion. This study is important because the equation design has been questioned: it has features that are both non-intuitive and unjustified by the CVSS specification. If one can show that the equation reflects CVSS expert opinion, then that finding justifies the equation, and the security community can treat it as an opaque box that functions as described.
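Since the abstract refers to the structure of the equation, it may help to see it concretely. The short Python sketch below transcribes the CVSS v3.1 base score equation from the FIRST specification; the fixed constants (6.42, 7.52, 8.22, the exponent of 15) and the special Roundup function are examples of the non-intuitive features whose design has been questioned. The numeric metric weights in the usage example come from the specification's tables.

```python
# CVSS v3.1 base score equation, transcribed from the FIRST specification
# (https://www.first.org/cvss/v3.1/specification-document).

def roundup(value: float) -> float:
    # Smallest number, to one decimal place, that is >= value; the spec
    # defines it via integer arithmetic to avoid floating-point artifacts.
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000
    return (scaled // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a, scope_changed):
    # Arguments are the numeric weights of the eight base metrics.
    iss = 1 - (1 - c) * (1 - i) * (1 - a)  # Impact Sub-Score
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    if scope_changed:
        total = 1.08 * total
    return roundup(min(total, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical).
print(base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                 c=0.56, i=0.56, a=0.56, scope_changed=False))
```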
This work shows that the CVSS base score equation closely, though not perfectly, represents the CVSS maintainers' expert opinion. The CVSS specification itself provides a measurement of error called “acceptable deviation” (with a value of 0.5 points). In this work, the distance between the CVSS base scores and the closest consistent scoring systems (ones that completely conform to the recorded expert opinion) is measured. The authors calculate that the mean scoring distance is 0.13 points and the maximum scoring distance is 0.40 points. The acceptable deviation was also measured to be 0.20 points (lower than claimed by the specification). These findings validate that the CVSS base score equation represents the CVSS maintainers' domain knowledge to the extent described by these measurements.
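As a worked illustration of the error metrics above, the following minimal sketch computes a mean and maximum scoring distance from paired scores. The score values here are hypothetical placeholders invented for illustration, not data from the study; the report itself derives the closest consistent scoring systems from the recorded expert opinion.

```python
# Hypothetical illustration of the "scoring distance" summary statistics.
# Both score lists below are invented placeholders, not data from the study.

cvss_base_scores = [9.8, 7.5, 5.3, 4.2, 8.1]   # equation outputs
consistent_scores = [9.6, 7.7, 5.1, 4.5, 8.0]  # closest consistent scoring system

distances = [abs(b - c) for b, c in zip(cvss_base_scores, consistent_scores)]

print(f"mean scoring distance: {sum(distances) / len(distances):.2f}")
print(f"max scoring distance:  {max(distances):.2f}")
```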
Keywords
computer; Common Vulnerability Scoring System; error; expert opinion; measurement; measuring; metrics; network; scoring; security