Security Content Automation Protocol, or SCAP (pronounced "ess-cap")
After reading this SCAP webpage, go to the DOILearn portal and take the 16 question quiz to complete the FY11 Role Based Security Training requirement.
In the beginning it was simple. There was one computer and one program. In 1969 two remote computers were connected and communicated. Now there are multiple computers, multiple operating systems, multiple applications on each device and multiple communication options. Dozens of security programs have evolved, each with its own way of handling security functions. Some security programs claim to do things they don't. No two security programs do things the same way. Some perform the same functions but call them different things or, more confusingly, perform different functions with the same name. Some perform a function whose results can't be imported into another program for further analysis, and some overlap in function yet still can't exchange results. Some tools are automated, from minimally to completely, but there is no standard way to identify or compare what is or isn't automated. It is very difficult to determine what reports come out of various security tools, or to write specifications to buy products that actually do what they say they will. It is no longer simple. It's chaos.
An Ideal Solution:
Wouldn't it be nice if a vulnerability scanner exported (expressed) its results in a format that an intrusion detection program, a patch remediation program, a log analysis program or a configuration evaluation program could read and manipulate? Then the patch remediation program could deliver the patches and correlate back to the other programs to identify which patches had fixed which vulnerabilities, were responsible for which log alerts, and changed which configuration settings.
Computers with exactly the same hardware and software could be configured differently depending on whether the system is high, moderate or low impact. There would be an automated way to implement security changes quickly, correctly and consistently across all systems, and to automatically verify the security of each system.
Threats change. Software could be reconfigured or patched to mitigate newly discovered vulnerabilities, and a method of prioritizing the order of remediation would be very helpful. Different tools could recognize when the same vulnerability appears under different names, or when different vulnerabilities share the same name, either of which could leave a flaw unpatched. It would be great if the method of standardization kept up with new and evolving hardware and software technology.
Requirements Traceability — Whether in or out of government, much IT security money is spent on compliance. Automating compliance checking, with automated evidence of compliance, would save a lot of money. It would be very helpful to have low level software configuration matrices that let information security staff correlate, or trace, those configurations to high level compliance standards such as PCI DSS, HIPAA, or the NIST 800-53 controls. This correlation is called requirements traceability.
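As a hedged illustration of requirements traceability, the sketch below maps scanner results keyed by CCE identifier to the high-level requirements each check supports. The CCE identifiers and control mappings are invented for demonstration and are not an authoritative traceability matrix.

```python
# Illustrative only: the CCE identifiers and control mappings below are
# invented placeholders, not an authoritative traceability matrix.
from collections import defaultdict

# Map each low-level configuration check (CCE ID) to the high-level
# requirements it supports.
TRACE_MATRIX = {
    "CCE-0001-0": ["NIST-800-53:AC-7", "PCI-DSS:8.1.6"],  # account lockout threshold
    "CCE-0002-0": ["NIST-800-53:AU-2"],                   # audit logging enabled
    "CCE-0003-0": ["NIST-800-53:IA-5", "PCI-DSS:8.2.3"],  # password complexity
}

def coverage_by_requirement(scan_results):
    """Group scanner pass/fail results ({cce_id: bool}) by requirement."""
    report = defaultdict(list)
    for cce_id, passed in scan_results.items():
        for requirement in TRACE_MATRIX.get(cce_id, []):
            report[requirement].append((cce_id, passed))
    return dict(report)

# A requirement is satisfied only if every check tracing to it passed.
results = coverage_by_requirement({"CCE-0001-0": True, "CCE-0003-0": False})
satisfied = {req: all(p for _, p in checks) for req, checks in results.items()}
```

In a real deployment the matrix would come from published mappings (for example, checklist metadata) rather than a hand-written dictionary.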
Whom it would benefit:
System administrators and IT managers would know exactly which features a new security tool provides in comparison with others. Software developers would be able to verify that their software automatically reads configuration settings rather than relying on manual checks or proprietary checking mechanisms. Knowing exactly what was bought would save money on unfulfilled promises, on discarded software, and on additional purchases to cover what was thought to have been purchased already. Procurements would be easier to draft and evaluate if there were standard language to compare features across products, to write purchase order criteria, or to add to security contracts for independent evaluations, audits, cloud services contracts and other procurement items. That would make the jobs of procurement officers and contracting officers much simpler because they would know exactly what they were asking for, and it would give them concrete standards against which to measure contract or procurement deliverables.
What has been done:
To escape from the chaos, an international community of information security professionals has collaborated since 2006 to build standards for determining vulnerability and configuration issues with computer systems. They have defined standardized terms, standardized configuration elements, standardized priority ratings, standardized flaw descriptions, and a standardized XML format that organizes the data for each of these components into uniformly readable, exchangeable files.
Who is determining the standards:
The Security Content Automation Protocol is designed to organize, express and measure security related information in standardized ways. NIST maintains the SCAP specification, and Mitre, a nonprofit corporation that operates federally funded research and development centers for DOD and other federal sponsors, maintains several of the component standards. NIST has established an SCAP product validation program and a laboratory accreditation program; together the two programs thoroughly test and validate conformity to SCAP requirements. A product may be validated as conforming to one or more of the six SCAP components. SCAP validation expires after one year to assure that the product has remained aligned with evolving technology.
NIST has prepared several documents for developers and vendors detailing how to write SCAP compatible programs. Many vendors announce that their products "will be" SCAP compliant; verify that they are before buying them. SCAP is defined as specifications that standardize formats and names so security tools can exchange similar information, allowing an end user to compare apples to apples when evaluating tools. SCAP functions:
- Organize, express and measure security related information in standard ways
- Standardize related reference data (e.g. identifiers for flaws and for remediation)
- Allow automatic verification of patch installs
- Check security configuration settings
- Detect signs of compromise (for example, in combination with intrusion detection systems)
- Support forensic investigations (in some implementations)
Through community participation, definitions (standards or protocols) for components of each of these functions have been identified. You are invited to become involved and participate in developing SCAP content.
How SCAP is Organized:
SCAP consists of two major elements, components and content.
Components are protocols: suites of specifications that standardize the format and naming used to communicate information about software flaws and configurations. There are six components grouped into three types:
Enumerations
- CPE — Standardized names for IT platforms: hardware, operating systems and applications (Common Platform Enumeration, CPE). It is a structured naming scheme for information technology systems, software and packages. It includes a formal name format, a method of checking names against a system and a description format for binding text and tests to a name. The CPE product dictionary is provided in XML format (current version as of 5/11/2011).
- CCE — Standardized descriptions of configurations (Common Configuration Enumeration, CCE). CCE provides unique identifiers for system configuration issues, enabling fast and accurate correlation of configuration data across multiple information sources and tools. For example, CCE identifiers can be used to associate checks in configuration assessment tools with statements in configuration best practice documents. CCE identifiers are included in the Windows 2008 Server Security Guide and are the main identifiers used in the Federal Desktop Core Configuration (FDCC) downloads.
- CVE — Standardized identifiers for publicly known information security vulnerabilities and exposures (Common Vulnerabilities and Exposures, CVE). Example: CVE-2010-3366. Summary: Mn_Fit 5.13 places a zero-length directory name in the LD_LIBRARY_PATH, which allows local users to gain privileges via a Trojan horse shared library in the current working directory. Published: 10/20/2010; CVSS severity: 6.9 (MEDIUM).
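To make the enumeration formats above concrete, here is a minimal Python sketch that splits a CPE 2.2 URI into its named fields and checks the shape of a CVE identifier. The sample platform name is illustrative; real CPE processing should follow the NIST naming and matching specifications rather than this simplified parser.

```python
# Simplified handling of the enumeration formats; sample names are
# for demonstration only.
import re

# Field order in a CPE 2.2 URI: cpe:/part:vendor:product:version:update:edition:language
CPE_FIELDS = ["part", "vendor", "product", "version", "update", "edition", "language"]
CVE_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def parse_cpe_uri(uri):
    """Split a cpe:/ URI (CPE 2.2 format) into its named components."""
    if not uri.startswith("cpe:/"):
        raise ValueError("not a CPE 2.2 URI: %r" % uri)
    return dict(zip(CPE_FIELDS, uri[len("cpe:/"):].split(":")))

def is_cve_id(text):
    """True if text has the shape of a CVE identifier, e.g. CVE-2010-3366."""
    return bool(CVE_PATTERN.match(text))

name = parse_cpe_uri("cpe:/o:microsoft:windows_7::sp1")
# name["part"] is "o" (operating system); trailing fields may be absent.
```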
Vulnerability Measurement and Scoring
- CVSS — A standard, quantifiable and repeatable measurement of the severity of software vulnerabilities, used to prioritize remediation (Common Vulnerability Scoring System, CVSS). CVSS is designed to provide an open and standardized method for rating IT vulnerabilities. DHS/US-CERT CVSS Scoring Calculator
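As an illustration of how CVSS makes scoring repeatable, the sketch below implements the published CVSS version 2 base score equation. The metric weights come from the v2 specification; the sample vector is one consistent with the 6.9 (MEDIUM) score cited above for CVE-2010-3366.

```python
# CVSS v2 base score equation; metric weights are from the v2 specification.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector: Local/Adjacent/Network
AC = {"H": 0.35, "M": 0.61, "L": 0.71}     # Access Complexity: High/Medium/Low
AU = {"M": 0.45, "S": 0.56, "N": 0.704}    # Authentication: Multiple/Single/None
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}   # C/I/A impact: None/Partial/Complete

def cvss2_base_score(vector):
    """Compute the CVSS v2 base score from a vector like 'AV:L/AC:M/Au:N/C:C/I:C/A:C'."""
    m = dict(item.split(":") for item in vector.split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f_impact = 0.0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Local attack, medium complexity, no authentication, complete C/I/A impact.
score = cvss2_base_score("AV:L/AC:M/Au:N/C:C/I:C/A:C")
```

Because the weights and equation are fixed, any two tools scoring the same vector produce the same number, which is the point of the standard.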
Expression and Checking Languages
- SCAP expressed checklists are written in Extensible Configuration Checklist Description Format (XCCDF). Human readable documents can be generated from XCCDF using automated tools.
- Low level testing procedures used by the checklists are written in Open Vulnerability and Assessment Language (OVAL)
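The fragment below, illustrative only, shows the shape of an XCCDF checklist: a Benchmark holding a Rule whose check points to an OVAL definition. The element names follow the XCCDF 1.1 schema and the check system URI is OVAL's, but the identifiers and file reference are made up.

```python
import xml.etree.ElementTree as ET

XCCDF_NS = "http://checklists.nist.gov/xccdf/1.1"  # XCCDF 1.1 namespace
# Hypothetical checklist fragment; rule and definition IDs are invented.
FRAGMENT = """\
<Benchmark xmlns="http://checklists.nist.gov/xccdf/1.1" id="example-benchmark">
  <Rule id="rule-password-min-length" severity="medium">
    <title>Minimum password length</title>
    <check system="http://oval.mitre.org/XMLSchema/oval-definitions-5">
      <check-content-ref href="example-oval.xml" name="oval:example:def:1"/>
    </check>
  </Rule>
</Benchmark>
"""

root = ET.fromstring(FRAGMENT)
rules = root.findall("{%s}Rule" % XCCDF_NS)
# Pair each rule with the OVAL definition its check references.
refs = [(rule.get("id"),
         rule.find(".//{%s}check-content-ref" % XCCDF_NS).get("name"))
        for rule in rules]
```

This split is the core design: XCCDF carries the human-facing checklist (titles, severities, tailoring), while the low-level test logic lives in the referenced OVAL file.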
Content is the reference data for standardized elements of software flaws and security configurations. Anyone can write SCAP content for organizational needs or commercial products. Expiration of SCAP validation ensures that products continue to incorporate SCAP content on an ongoing basis (e.g. lists of known security related software flaws and configuration issues within selected products) and causes products to be re-tested using new and improved testing methods.
- The National Vulnerability Database (NVD) hosts a dictionary of CPE entries, information on CVE entries
- Mitre hosts an OVAL database and maintains the CCE entries.
Other non-SCAP but Related Efforts:
- CWE (Common Weakness Enumeration): A dictionary of weaknesses in source code and operational systems, supporting better understanding and management of software weaknesses related to architecture and design.
- CAPEC (Common Attack Pattern Enumeration and Classification): Attack patterns are descriptions of common methods for exploiting software, a powerful mechanism to capture and communicate the attacker's perspective. They derive from the concept of design patterns applied in a destructive rather than constructive context, and are generated from in-depth analysis of specific real-world exploit examples.
- CEE (Common Event Expression): Standardizes the way computer events are described, logged, and exchanged. By using CEE's common language and syntax, enterprise-wide log management, correlation, aggregation, auditing, and incident handling can be performed more efficiently and produce better results than was previously possible.
- MAEC (Malware Attribute Enumeration and Characterization): By eliminating the ambiguity and inaccuracy that currently exist in malware descriptions and by reducing reliance on signatures, MAEC aims to improve human-to-human, human-to-tool, tool-to-tool, and tool-to-human communication about malware; reduce duplication of malware analysis efforts by researchers; and speed the development of countermeasures by enabling responses to previously observed malware instances to be leveraged.
Common Uses of SCAP
- Configuration Verification
- Download the appropriate configuration checklist for the SCAP scanner from the National Checklist Program (NCP)
- Customize the checklists as appropriate, tailoring them to specific organizational and operational requirements (for example, requirements covered by other compensating controls).
- Document deviations in the form of exceptions for future review by management/auditors.
- Before systems are deployed, scan them with SCAP validated configuration scanners.
- As part of continuous monitoring, use the SCAP validated scanner to verify that the checklist settings have been maintained.
- As part of testing a new software package, use the checklist to verify the new package didn’t change configuration settings and that the new application functions properly with the checklist settings
- Through SCAP content, auditors can understand the rationale for security configurations
- SCAP can be used to confirm the installation of patches and identify which patches are missing.
- SCAP can be used to identify signs of successful system compromise (e.g. file checksums, the existence of a particular service). Incident response teams can make such check information publicly available so that different SCAP validated tools can import it and identify the same problem.
- Requirements traceability can be verified by correlating elements of configurations with new executive orders, OMB memoranda, revisions to NIST 800-53, or new laws and industry mandates such as SOX or PCI DSS.
- Reporting incidents using SCAP names and enumerations allows precise, fast decision making and promotes consistency among customers, vendors, US-CERT and other bodies. Such reporting means all communications precisely identify relevant details, affected products, enable correlation and integration of reports and other supplemental information.
- For vulnerability remediation, SCAP enables quantitative, repeatable measurement and scoring of software flaws across systems, facilitating consistent and repeatable mitigation procedures throughout the enterprise. Some examples:
- Establish a single procedure requiring severe flaws to be remediated within a shorter time frame
- Separate requirements for critical applications vs non-critical
- Each CVSS score documents the properties of a flaw, allowing users to understand it and plan mitigation strategies
- Detect artifacts or combinations of configurations that cause weaknesses. For example OVAL checks can identify both the existence of a specific vulnerability plus the signs of infection like the existence of a specific registry key.
- SCAP capabilities and content enable data collection, aggregation and summation for helpful decision making metrics.
- SCAP validation applies only to the version of the product that was tested. NIST recommends that organizations buy the most recent version of SCAP validated products. Since there are several versions of SCAP (1.0, 1.1), clear documentation of which of the six SCAP components apply to a product, and when they were SCAP validated, should be researched before buying any product that claims to be SCAP compliant.
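Two of the uses above lend themselves to a short sketch: confirming patch installs by set difference, and combining individual tests under an AND/OR operator in the spirit of an OVAL criteria element. All patch IDs and service names here are invented placeholders, not real SCAP content.

```python
# Illustrative placeholders throughout; not real patch IDs or OVAL content.
import os

def missing_patches(required, installed):
    """Identify which required patches have not been installed."""
    return sorted(set(required) - set(installed))

def criteria(operator, results):
    """Combine child test results the way an OVAL criteria element does."""
    return all(results) if operator == "AND" else any(results)

# Use 1: confirm patch installs and list the gaps.
gaps = missing_patches({"PATCH-001", "PATCH-002", "PATCH-003"},
                       {"PATCH-001", "PATCH-003"})

# Use 2: a compromise signature -- artifact present AND suspicious service running.
running_services = {"sshd", "crond", "evil_svc"}   # pretend service inventory
compromised = criteria("AND", [
    os.path.exists(os.getcwd()),                   # stand-in for a real artifact check
    "evil_svc" in running_services,
])
```

Real SCAP tooling expresses both ideas declaratively in OVAL rather than in ad hoc code, which is what makes the results exchangeable between tools.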
SCAP and Cyberscope
Cyberscope is the new Office of Management and Budget (OMB) mandated monthly reporting program for the largest 24 agencies in the US government, those that fall under the Chief Financial Officers Act of 1990. When fully populated, Cyberscope will contain a hardware inventory of all USG computers, a software inventory of all commercial software in USG use, configuration compliance with appropriate checklists, counts of patched and unpatched vulnerabilities on USG computers, and account and identity management statistics. According to Inspector General reports, just 25% of agencies are compliant with configuration management standards and only 21% conform to account and identity management standards. Additional Cyberscope reporting data are the numbers and types of external connections, compliance with security training requirements, and statistics on identity management and access. OMB has mandated the use of SCAP compliant products to upload this information into Cyberscope. As of 5/2011 there are only eight commercial products which are SCAP compliant and capable of uploading the required data into Cyberscope. Those products are:
- BSA Visibility (Asset Reporting) (none of CFO24 use this; company is a McAfee partner)
- IBM BigFix (VA, EPA, DOJ)
- McAfee Policy Auditor (DHS, Commerce)
- nCircle Suite360 Intelligence Hub (Vulnerability and Asset Reporting) (DOT)
- nCircle Configuration Compliance Manager (Configuration Management and Asset Reporting)
- Symantec Risk Automation Suite (Treasury, HHS)
- Tenable Security Center (Labor)
- Triumfant Resolution Manager (OPM, NARA)
- SPAWAR (DOT)
- In-house products (NASA, DOE)
Government agencies, and the bureaus within them, are striving to acquire products that generate these statistics so they can be rolled up from locations, summarized by bureaus, rolled up to departments (agencies), and entered into Cyberscope automatically.
This has been a preliminary introduction to what SCAP is and how it can be used in IT organizations. The links provide resources for learning more details about the use of SCAP, writing SCAP content, finding SCAP validation tools, and SCAP validated products.
Basic Information: http://scap.nist.gov
SCAP 1.1 SP 800-126 Rev. 1: http://csrc.nist.gov/publications/nistpubs/800-126-rev1/SP800-126r1.pdf
SCAP 1.0 SP 800-126: http://csrc.nist.gov/publications/nistpubs/800-126/sp800-126.pdf
SCAP Validation Requirements document: http://csrc.nist.gov/publications/drafts/nistir-7511/draft-nistir-7511_rev1.pdf
Draft CPE Dictionary Specification v2.3: http://csrc.nist.gov/publications/PubsDrafts.html#NIST-IR-7697
Draft CPE Name Matching Specification v2.3: http://csrc.nist.gov/publications/PubsDrafts.html#NIST-IR-7696
Draft CPE Naming Specification v2.3: http://csrc.nist.gov/publications/PubsDrafts.html#NIST-IR-7695
SCAP validated Tools Web page: http://nvd.nist.gov/scapproducts.cfm
Mitre Benchmark Development Course (Free): http://benchmarkdevelopment.mitre.org/course/class.html
- How to use free tools and industry standards to create security guidance that helps system administrators configure and operate systems securely.
- Why system administrators need clear, easy-to-use security guidance that applies to their enterprise systems before, during, and after deployment.
- Why system administrators must have security guidance that is easy to understand, manage, and apply in time for their planning, installation, configuration, and operation of their systems.
SCAP Enumerations, Languages, Repositories: http://makingsecuritymeasurable.mitre.org/index.html
Powerpoint with more detail: http://cio.energy.gov/documents/Technical_Introduction_to_SCAP_-_Charles_Schmidt.pdf
Common Weakness Enumeration (CWE): https://www.vte.cert.org/vteweb/go/3730.aspx
Common Weakness Scoring System (CWSS): https://www.vte.cert.org/vteweb/go/3722.aspx
Common Attack Pattern Enumeration and Classification (CAPEC): https://www.vte.cert.org/vteweb/go/3729.aspx
Malware Attribute Enumeration and Characterization (MAEC): https://www.vte.cert.org/vteweb/go/3724.aspx