The Center for Internet Security Configuration Assessment Tool (CIS-CAT) is built to support both the consensus security configuration benchmarks distributed by The Center for Internet Security and the configuration content distributed by NIST under the Security Content Automation Protocol (SCAP) program, a U.S. government multi-agency initiative to enable automation and standardization of technical security operations. Currently, XML provided by CIS is available only to CIS members. CIS-CAT reads system configuration guidance documents written in the eXtensible Configuration Checklist Description Format (XCCDF) and the Open Vulnerability and Assessment Language (OVAL), processes their contents, and outputs system compliance reports in HTML, text, and XML formats. The XML output consists of well-formed, valid XCCDF result documents containing SCAP compliance information suitable for submission to NIST, as well as additional detailed information useful for inspecting low-level evaluation check outcomes. The HTML report contains a summary table listing the compliance status of each item, a numeric compliance score for each item and section, and a detailed report on each compliance item, including, in most cases, the desired setting and the setting found on the system. The text report contains the benchmark item number, the pass/fail result status, and the title of each item.
CIS-CAT was previously a validated SCAP 1.0 FDCC Scanner, providing the capability to audit and assess a target system to determine its compliance with FDCC requirements.
To exercise this capability, a user may download the “SCAP 1.0 Content…using OVAL version 5.3” resources from the NIST NVD National Checklist Program repository, or any other source of SCAP 1.0 compliant content, and perform assessments in exactly the same manner as that user would with any other CIS benchmark.
As is required by the SCAP 1.0 specifications, CIS-CAT implements and adheres to the following language and enumeration standards:
CIS-CAT provides the capability to audit and assess a target system using content conforming to the Security Content Automation Protocol, version 1.1 (SCAP 1.1).
To exercise this capability, a user may download the “SCAP 1.1 Content…” resources from the NIST NVD National Checklist Program repository, or any other source of SCAP 1.1 compliant content, and perform assessments in exactly the same manner as that user would with any other CIS benchmark.
As is required by the SCAP 1.1 specifications, CIS-CAT implements and adheres to the following language and enumeration standards:
CIS-CAT conforms to the specifications of the Security Content Automation Protocol, version 1.2 (SCAP 1.2), as outlined in NIST Special Publication (SP) 800-126 rev 2. As part of the SCAP 1.2 protocol, CIS-CAT’s assessment capabilities have been expanded to include the consumption of source data stream collection XML files and the generation of well-formed SCAP result data streams.
To exercise this capability, a user may download the “SCAP 1.2 Content…using OVAL version 5.10” resources from the NIST NVD National Checklist Program repository, or any other source of SCAP 1.2 compliant content, and perform assessments in exactly the same manner as that user would with any other CIS benchmark.
As is required by the SCAP 1.2 specifications, CIS-CAT implements and adheres to the following language and enumeration standards:
CIS-CAT’s assessment capabilities have been validated as an Authenticated Configuration Scanner (ACS), with CVE option on the following operating system platforms:
The following standards are implemented in CIS-CAT:
CIS-CAT’s capabilities include the ability to assess a target system based on rules defined using the eXtensible Configuration Checklist Description Format (XCCDF), versions 1.1.4 and 1.2. XCCDF is used throughout CIS-CAT as the required XML schema for benchmarks, as well as the checklist definition schema within SCAP source data streams. This ensures that outside compliance benchmarks/data streams, such as those provided by the NIST National Checklist Program, the Federal Desktop Core Configuration (FDCC), or the US Government Configuration Baseline (USGCB), can be used alongside custom benchmarks or those published by CIS. The XCCDF format specifies the required tests for one or more profiles. At run time, a user can select any of the profiles specified in an XCCDF document, and CIS-CAT will assess the configuration rules included in the selected profile. With CIS-CAT, an evaluation check can be specified in three ways:
The relevant descriptions, CCE IDs, and other related artifacts entered in the XCCDF are preserved and included in the XML and HTML results produced by a CIS-CAT assessment.
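To illustrate the profile-driven rule selection described above, the following sketch (not CIS-CAT's actual implementation) uses only the Python standard library to resolve which rules a selected XCCDF 1.2 profile enables. The benchmark fragment, profile, and rule identifiers are hypothetical; real benchmarks are far larger.

```python
# Minimal sketch of XCCDF profile resolution: a profile's <select> elements
# override each rule's own "selected" attribute (which defaults to true).
import xml.etree.ElementTree as ET

NS = {"xccdf": "http://checklists.nist.gov/xccdf/1.2"}  # XCCDF 1.2 namespace

# Contrived benchmark fragment for illustration only.
BENCHMARK = """<Benchmark xmlns="http://checklists.nist.gov/xccdf/1.2"
                          id="xccdf_org.example_benchmark_demo">
  <Profile id="xccdf_org.example_profile_level1">
    <select idref="xccdf_org.example_rule_pw-length" selected="true"/>
    <select idref="xccdf_org.example_rule_telnet" selected="false"/>
  </Profile>
  <Rule id="xccdf_org.example_rule_pw-length" selected="false"/>
  <Rule id="xccdf_org.example_rule_telnet" selected="true"/>
</Benchmark>"""

def selected_rules(benchmark_xml: str, profile_id: str) -> list:
    """Return the ids of rules enabled after applying a profile's selects."""
    root = ET.fromstring(benchmark_xml)
    # Start from each rule's own selected attribute (true when absent).
    enabled = {r.get("id"): r.get("selected", "true") == "true"
               for r in root.findall("xccdf:Rule", NS)}
    for prof in root.findall("xccdf:Profile", NS):
        if prof.get("id") == profile_id:
            for sel in prof.findall("xccdf:select", NS):
                enabled[sel.get("idref")] = sel.get("selected") == "true"
    return [rid for rid, on in enabled.items() if on]
```

With the profile applied, only the password-length rule would be assessed; with no matching profile, each rule's own default selection stands.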
The Open Vulnerability and Assessment Language (OVAL) is used to identify vulnerabilities and issues. Common examples of the use of OVAL files are:
The OVAL component contains the definitions and tests, as well as the state a target system is expected to exhibit. When CIS-CAT encounters a reference to an OVAL definition, it parses the relevant OVAL components/files and uses the referenced definition identifiers to look up the appropriate tests to be executed. Each OVAL definition may comprise one to many OVAL tests, the results of which are logically combined to produce an overall definition result. The CIS-CAT evaluation engine is the controller for parsing the required tests, collecting the appropriate system characteristics, evaluating the collected information against the expected state, and recording the success, failure, or any error conditions of a given test. CIS-CAT supports components specified using versions 5.3, 5.8, and 5.10.1 of the OVAL language.
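The logical combination of test results into a definition result can be sketched as follows. This is an illustration of the roll-up concept, not CIS-CAT's actual engine; the OVAL language also defines ONE and XOR criteria operators, which are omitted here for brevity.

```python
# Sketch: combining individual OVAL test results under a criteria operator.
# OVAL criteria nest, so a definition such as (test1 AND (test2 OR test3))
# is evaluated by combining inner criteria first.
def combine(operator: str, results: list) -> bool:
    """Combine boolean OVAL test results under an AND/OR criteria operator."""
    if operator == "AND":
        return all(results)
    if operator == "OR":
        return any(results)
    # OVAL also defines ONE and XOR operators, not modeled in this sketch.
    raise ValueError("unsupported operator: " + operator)

# Hypothetical definition requiring test1 AND (test2 OR test3):
test1, test2, test3 = True, False, True
definition_result = combine("AND", [test1, combine("OR", [test2, test3])])
```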
CIS-CAT supports the following component schema and implements the indicated OVAL tests within each:
CIS-CAT supports the use of the Asset Identification (AI) standard. Utilizing the AI standard, CIS-CAT is capable of reporting the necessary information to uniquely identify assets based on known identifiers and/or known information about the target systems being assessed.
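As a rough illustration of the kind of AI fragment used to identify a scanned target, the sketch below emits a minimal computing-device element with the standard library. The namespace follows the AI 1.1 schema, but the element choices and host names here are illustrative assumptions, not CIS-CAT's actual output.

```python
# Hypothetical sketch: emitting a minimal Asset Identification (AI) fragment
# identifying a target system by fully-qualified domain name and hostname.
import xml.etree.ElementTree as ET

AI_NS = "http://scap.nist.gov/schema/asset-identification/1.1"  # assumed AI 1.1 namespace

def computing_device(fqdn: str, hostname: str) -> ET.Element:
    """Build a bare-bones ai:computing-device element for a known target."""
    dev = ET.Element("{%s}computing-device" % AI_NS)
    ET.SubElement(dev, "{%s}fqdn" % AI_NS).text = fqdn
    ET.SubElement(dev, "{%s}hostname" % AI_NS).text = hostname
    return dev

# Example target names are made up for illustration.
xml_text = ET.tostring(computing_device("ws01.example.org", "ws01"), encoding="unicode")
```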
CIS-CAT supports the use of the Asset Reporting Format (ARF) standard. ARF describes a data model for expressing information about assets and the relationships between assets and reports. When the CIS-CAT evaluation engine completes the assessment of a target system, users have the option to generate an output XML report utilizing the ARF data model. The CIS-CAT ARF report will contain component results (XCCDF, check results), information about the target asset (utilizing the Asset Identification, or AI, data model described above), and the SCAP source data stream collection.
CIS-CAT supports the Trust Model for Security Automation Data (TMSAD) through its support of XML digital signatures on source data streams. CIS-CAT can assess both signed and unsigned data streams, and validates XML digital signatures via the -vs command-line option. When -vs is specified, source data stream content containing invalid XML digital signatures, or lacking XML digital signatures altogether, is rejected and the assessment halted. Note that this is an optional command-line option; digital signature validation will not be attempted by default.
CIS-CAT supports the use of the Common Configuration Enumeration (CCE) standard. CCE identifiers uniquely distinguish entries within a dictionary of security-related software (mis)configuration issues. Source data stream collections and XCCDF benchmark documents may contain CCE references; such references appear in output reports alongside the associated benchmark item as links to the National Vulnerability Database (NVD) CCE database, providing a convenient path to detailed information about a CCE-identified configuration issue. A CCE ID serves as a key for referring to the same configuration recommendation, regardless of its context or the tool used for processing. While minor differences may be necessary depending on the context, this common configuration identifier makes it possible to track the underlying configuration recommendation across multiple systems, for reporting purposes, and for organizing security configuration guidance in a structured manner for efficient data management.
CIS-CAT supports the use of the Common Platform Enumeration (CPE) standard, versions 2.2 and 2.3. CPE is a structured naming scheme for information technology systems, platforms, and applications that is similar to a URI. The advantage of using CPE is that it provides a standard naming convention for operating systems and other applications. CIS-CAT implements support for CPE name matching in XCCDF components of source data streams, as specified in section 4.3.1 of NIST SP800-126r2 (SCAP 1.2 Technical Specifications). The CIS-CAT evaluation engine can determine whether particular XCCDF rules are applicable to the target platform, and is able to skip evaluation of rules that are not applicable, marking them with a status of “Not Applicable”.
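The applicability decision can be sketched with a simplified form of CPE 2.2 URI matching: every component a platform name specifies must equal the target's component, and an unspecified (empty) component matches anything. The full SCAP 1.2 algorithm referenced above (NIST SP 800-126r2 section 4.3.1, CPE 2.3 name matching) handles more cases than this illustration.

```python
# Simplified sketch of CPE 2.2 URI matching for rule applicability.
def cpe_components(cpe_uri: str) -> list:
    # "cpe:/o:microsoft:windows_7::sp1" -> ["o", "microsoft", "windows_7", "", "sp1"]
    return cpe_uri[len("cpe:/"):].split(":")

def cpe_matches(platform: str, target: str) -> bool:
    """True if the platform CPE name matches the target's CPE name."""
    plat = cpe_components(platform)
    targ = cpe_components(target)
    if len(plat) > len(targ):
        # Components absent from the target count as unspecified.
        targ = targ + [""] * (len(plat) - len(targ))
    # An empty platform component is a wildcard; trailing unspecified
    # platform components (zip truncation) also match anything.
    return all(p == "" or p == t for p, t in zip(plat, targ))
```

A rule scoped to "cpe:/o:microsoft:windows_7" would thus apply to a Windows 7 SP1 target, while a rule scoped to a different OS would be skipped as Not Applicable.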
CIS-CAT supports the Common Vulnerabilities and Exposures (CVE) standard. CVE allows users of CIS-CAT to identify known security vulnerabilities and exposures, such as the presence of unpatched software. CIS-CAT assumes that a CVE will be defined in the metadata section of an OVAL definition. The CVE should be defined with a reference node and a source attribute of “CVE”. There can be one or many CVE IDs for a given OVAL definition, because one software patch or issue may be associated with many vulnerabilities.
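The convention described above can be sketched as follows: each CVE appears in the OVAL definition's metadata as a reference node whose source attribute is “CVE”. The definition fragment and its identifiers are contrived for illustration.

```python
# Sketch: extracting CVE IDs from an OVAL definition's metadata, where each
# CVE is a <reference> element with source="CVE".
import xml.etree.ElementTree as ET

NS = {"oval": "http://oval.mitre.org/XMLSchema/oval-definitions-5"}

# Contrived OVAL definition fragment; the CVE and KB identifiers are examples.
OVAL = """<definition xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5"
                      id="oval:org.example:def:1" version="1" class="patch">
  <metadata>
    <title>Example patch check</title>
    <reference source="CVE" ref_id="CVE-2013-0001"/>
    <reference source="CVE" ref_id="CVE-2013-0002"/>
    <reference source="VENDOR" ref_id="KB0000001"/>
  </metadata>
</definition>"""

def cve_ids(definition_xml: str) -> list:
    """Return the CVE IDs referenced by an OVAL definition's metadata."""
    root = ET.fromstring(definition_xml)
    return [ref.get("ref_id")
            for ref in root.findall("oval:metadata/oval:reference", NS)
            if ref.get("source") == "CVE"]
```

Note that a single patch definition carries two CVE IDs here, while the vendor reference is ignored.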
CIS-CAT provides support for the Common Vulnerability Scoring System (CVSS), version 2, one of several scoring mechanisms CIS-CAT supports. CVSS is an industry standard for assessing the weight, or severity, of system security vulnerabilities relative to other vulnerabilities. It establishes a numeric value for a security vulnerability, so that organizations can measure overall risk to their systems and prioritize the correction of system vulnerabilities. The score is based on a series of vulnerability attributes, including: whether the vulnerability can be exploited remotely; the complexity necessary for a successful attack; whether authentication is first necessary for a given exploit; whether the vulnerability could lead to unauthorized access to confidential data; whether system integrity could be damaged via the vulnerability; and whether system availability could be reduced via the vulnerability. CVSS is an evolving standard.
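The attributes listed above map directly onto the CVSS v2 base score equation, sketched below from the published CVSS v2 metric values; this is a worked illustration of the standard formula, not CIS-CAT code.

```python
# CVSS v2 base score equation, per the CVSS v2 scoring guide.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # access vector (local/adjacent/network)
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # access complexity (high/medium/low)
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # authentication (multiple/single/none)
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # confidentiality/integrity/availability impact

def cvss2_base(av, ac, au, c, i, a):
    """Compute the CVSS v2 base score from the six base metrics."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)
```

For example, a remotely exploitable, low-complexity, no-authentication vulnerability with complete C/I/A impact (vector AV:N/AC:L/Au:N/C:C/I:C/A:C) scores the maximum 10.0.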
CIS-CAT provides support for the Common Configuration Scoring System (CCSS), version 1. Whereas CVSS represents a scoring system for software flaw vulnerabilities, CCSS addresses software security configuration issue vulnerabilities. Per NIST SP800-126r2, CCSS data is not directly useful in the same way as CVSS data; it must be considered in the context of each organization’s security policies and of dependencies among vulnerabilities. CIS-CAT supports CCSS scores when the score is used in the @weight attribute within XCCDF rules.
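To illustrate how per-rule @weight values (for example, CCSS-derived scores) influence a compliance score, the sketch below computes a weighted pass percentage in the spirit of XCCDF weighted scoring. Real XCCDF scoring recurses over groups and scoring models; that machinery is omitted here, and the weights are hypothetical.

```python
# Sketch: weighted compliance scoring, where each rule result carries the
# weight taken from its XCCDF @weight attribute (e.g. a CCSS score).
def weighted_score(results):
    """results: list of (passed: bool, weight: float) pairs -> 0..100 score."""
    total = sum(w for _, w in results)
    if total == 0:
        return 0.0
    earned = sum(w for passed, w in results if passed)
    return round(100.0 * earned / total, 2)

# A passed rule weighted 6.5 and a failed rule weighted 3.5 yield a 65% score,
# so failing the higher-weighted (higher-severity) rule costs more.
score = weighted_score([(True, 6.5), (False, 3.5)])
```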