Date Published: November 2022
Author(s)
Peter Mell (NIST), Jonathan Spring (CERT/CC at Carnegie Mellon University), Dave Dugal (Juniper), Srividya Ananthakrishna (Huntington Ingalls Industries), Francesco Casotto (Cisco), Troy Fridley (AcuityBrands), Christopher Ganas (Palo Alto Networks), Arkadeep Kundu (Cisco), Phillip Nordwall (Dell), Vijayamurugan Pushpanathan (Schneider Electric), Daniel Sommerfeld (Microsoft), Matt Tesauro (Open Web Application Security Project), Christopher Turner (NIST)
This work evaluates the validity of the Common Vulnerability Scoring System (CVSS) Version 3 "base score" equation in capturing the expert opinion of its maintainers. CVSS is a widely used industry standard for rating the severity of information technology vulnerabilities; it is based on human expert opinion across many sectors and industries. This study is important because the equation's design has been questioned: it has features that are both unintuitive and unjustified by the CVSS specification. If one can show that the equation reflects CVSS expert opinion, then the equation is justified, and the security community can treat it as an opaque box that functions as described.
This work shows that the CVSS base score equation closely, though not perfectly, represents the CVSS maintainers' expert opinion. The CVSS specification itself provides a measurement of error called "acceptable deviation" (with a value of 0.5 points). This work measures the distance between the CVSS base scores and the closest consistent scoring systems (ones that completely conform to the recorded expert opinion). The authors calculate that the mean scoring distance is 0.13 points and the maximum scoring distance is 0.40 points. The acceptable deviation was also measured to be 0.20 points (lower than claimed by the specification). These findings validate that the CVSS base score equation represents the CVSS maintainers' domain knowledge to the extent described by these measurements.
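As a rough illustration of the distance measurement described above, the sketch below compares CVSS v3 base scores against scores from a hypothetical "consistent" scoring system and reports the mean and maximum absolute differences per vector. The CVSS vectors and all score values are invented for illustration; they are not data from the study, and the study's actual methodology for constructing consistent scoring systems is more involved than this comparison step.

```python
# Sketch: comparing CVSS base scores against a hypothetical
# consistent scoring system that fully conforms to expert opinion.
# All vectors and score values below are illustrative, not study data.

# CVSS v3 base scores produced by the standard equation (hypothetical).
cvss_base_scores = {
    "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H": 9.8,
    "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N": 6.5,
    "CVSS:3.1/AV:L/AC:H/PR:H/UI:R/S:U/C:L/I:L/A:N": 3.4,
}

# Scores for the same vectors from the closest consistent
# scoring system (hypothetical values).
consistent_scores = {
    "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H": 9.7,
    "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N": 6.3,
    "CVSS:3.1/AV:L/AC:H/PR:H/UI:R/S:U/C:L/I:L/A:N": 3.6,
}

# Scoring distance per vector: absolute difference between the
# base score equation and the consistent scoring system.
distances = [
    abs(cvss_base_scores[v] - consistent_scores[v]) for v in cvss_base_scores
]

mean_distance = sum(distances) / len(distances)
max_distance = max(distances)

print(f"mean scoring distance: {mean_distance:.2f}")  # 0.17 for these values
print(f"max scoring distance:  {max_distance:.2f}")   # 0.20 for these values
```

The study's reported figures (mean 0.13 points, maximum 0.40 points) come from applying this kind of comparison across the full set of expert-scored vectors.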
Keywords
computer; Common Vulnerability Scoring System; error; expert opinion; measurement; measuring; metrics; network; scoring; security