
Vulnerability Management Primer

Before we dive into the vulnerability disclosure trends for the past couple of decades, let me provide you with a quick primer on vulnerability management so that it's easier to understand the data and analysis I provide, and how some vulnerability management teams use such data.

The National Vulnerability Database (NVD) is used to track publicly disclosed vulnerabilities in all sorts of software and hardware products across the entire industry. The NVD is a publicly available database that can be accessed at https://nvd.nist.gov.

In this context, a vulnerability is defined as:

"A weakness in the computational logic (e.g., code) found in software and hardware components that, when exploited, results in a negative impact on confidentiality, integrity, or availability. Mitigation of the vulnerabilities in this context typically involves coding changes, but could also include specification changes or even specification deprecations (e.g., the removal of affected protocols or functionality in their entirety)."

—(NIST, n.d.)

When a vulnerability is discovered in a software or hardware product and reported to the vendor that owns the vulnerable product or service, the vulnerability will ultimately be assigned a Common Vulnerabilities and Exposures (CVE) identifier.

The exact date when a CVE identifier is assigned to a vulnerability is a function of many different factors, to which an entire chapter in this book could be dedicated. In fact, I co-wrote a Microsoft white paper on this topic called Software Vulnerability Management at Microsoft, which described why it could take a relatively long time to release security updates for Microsoft products. It appears that this paper has disappeared from the Microsoft Download Center with the sands of time. However, the following are some of the factors explaining why it can take a long time between a vendor receiving a report of a vulnerability and releasing a security update for it:

  • Identifying the bug: Some bugs only show up under special conditions or in the largest IT environments. It can take time for the vendor to reproduce the bug and triage it. Additionally, the reported vulnerability might exist in other products and services that use the same or similar components. All of these products and services need to be fixed simultaneously so that the vendor doesn't inadvertently produce a zero-day vulnerability in its own product line. I'll discuss zero-day vulnerabilities later in this chapter.
  • Identifying all variants: Fixing the reported bug might be straightforward and easy. However, finding all the variations of the issue and fixing them too is important as it will prevent the need to re-release security updates or to release multiple updates to address vulnerabilities in the same component. This can be the activity that takes the most time when fixing vulnerabilities.
  • Code reviews: Making sure the updated code actually fixes the vulnerability and doesn't introduce more bugs and vulnerabilities is important and sometimes time-consuming.
  • Functional testing: This ensures that the fix doesn't impact the functionality of the product—customers don't appreciate it when this happens.
  • Application compatibility testing: In the case of an operating system or web browser, vendors might need to test thousands of applications, drivers, and other components to ensure they don't break their ecosystem when they release the security update. For example, the integration testing matrix for Windows is huge, including thousands of the most common applications that run on the platform.
  • Release testing: Make sure the distribution and installation of the security update works as expected and doesn't make systems unbootable or unstable.

It is important to realize that the date that a CVE identifier is assigned to a vulnerability isn't necessarily related to the date that the vendor releases an update that addresses the underlying vulnerability; that is, these dates can be different. The allure of notoriety that comes with announcing the discovery of a new vulnerability leads some security researchers to release details publicly before vendors can fix the flaws. The typical best-case scenario is when the public disclosure of a vulnerability occurs on the same date that the vendor releases a security update that addresses the vulnerability. This reduces the window of opportunity for attackers to exploit the vulnerability to the time it takes organizations to test and deploy the update in their IT environments.

An example of a CVE identifier is CVE-2018-8653. As you can tell from the CVE identifier, the number 8653 was assigned to the vulnerability it was associated with in 2018. When we look up this CVE identifier in the NVD, we can get access to a lot more detail about the vulnerability it's associated with. For example, some details include the type of vulnerability, the date the CVE was published, the date the CVE was last updated, the severity score for the vulnerability, whether the vulnerability can be accessed remotely, and its potential impact on confidentiality, integrity, and availability.

It might also contain a summary description of the vulnerability, like this example: "A remote code execution vulnerability exists in the way that the scripting engine handles objects in memory in Internet Explorer, aka "Scripting Engine Memory Corruption Vulnerability." This affects Internet Explorer 9, Internet Explorer 10, and Internet Explorer 11. This CVE ID is unique from CVE-2018-8643."

—(NIST)
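For illustration, here is a minimal sketch of how a record like this could be pulled programmatically from the NVD. It assumes NVD's public REST API; the 2.0 endpoint and JSON field names shown are assumptions based on the documented schema, so confirm the current interface and rate limits at https://nvd.nist.gov/developers before relying on it.

    import requests

    # Hypothetical lookup of CVE-2018-8653 via NVD's public REST API.
    url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    response = requests.get(url, params={"cveId": "CVE-2018-8653"}, timeout=30)
    response.raise_for_status()

    cve = response.json()["vulnerabilities"][0]["cve"]
    print(cve["id"], cve["published"], cve["lastModified"])

    # Print whichever CVSS metric versions NVD has recorded for this CVE
    for version in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
        for metric in cve.get("metrics", {}).get(version, []):
            data = metric["cvssData"]
            print(version, data["baseScore"], data["vectorString"])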

Risk is the combination of probability and impact. In the context of vulnerabilities, risk is the combination of the probability that a vulnerability can be successfully exploited and the impact on the system if it is exploited. A CVE's score represents this risk calculation for the vulnerability. The Common Vulnerability Scoring System (CVSS) is used to estimate the risk for each vulnerability in the NVD. To calculate the risk, the CVSS uses "exploitability metrics", such as the attack vector, attack complexity, privileges required, and user interaction (NIST, n.d.). To calculate an estimate of the impact on a system if a vulnerability is successfully exploited, the CVSS uses "impact metrics", such as the expected impact on confidentiality, integrity, and availability (NIST, n.d.).

Notice that both the exploitability metrics and impact metrics are provided in the CVE details that I mentioned earlier. The CVSS uses these details in some simple mathematical calculations to produce a base score for each vulnerability (Wikipedia).
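To make those calculations concrete, the following is a minimal sketch of the CVSS v3.1 base score arithmetic in Python. The metric weights and formula reflect the published v3.1 specification as I understand it; treat them as assumptions and use the NVD calculators or the FIRST.org specification for authoritative scoring.

    import math

    # Numeric weights from the CVSS v3.1 specification (assumed here).
    ATTACK_VECTOR = {"NETWORK": 0.85, "ADJACENT": 0.62, "LOCAL": 0.55, "PHYSICAL": 0.20}
    ATTACK_COMPLEXITY = {"LOW": 0.77, "HIGH": 0.44}
    # Privileges Required weights depend on whether Scope is (unchanged, changed)
    PRIVILEGES_REQUIRED = {"NONE": (0.85, 0.85), "LOW": (0.62, 0.68), "HIGH": (0.27, 0.50)}
    USER_INTERACTION = {"NONE": 0.85, "REQUIRED": 0.62}
    CIA_IMPACT = {"HIGH": 0.56, "LOW": 0.22, "NONE": 0.0}

    def roundup(value: float) -> float:
        # Simplified version of the spec's "Roundup" (round up to one decimal place)
        return math.ceil(value * 10) / 10

    def base_score(av, ac, pr, ui, scope_changed, c, i, a):
        iss = 1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a])
        if scope_changed:
            impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        else:
            impact = 6.42 * iss
        exploitability = (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                          * PRIVILEGES_REQUIRED[pr][1 if scope_changed else 0]
                          * USER_INTERACTION[ui])
        if impact <= 0:
            return 0.0
        if scope_changed:
            return roundup(min(1.08 * (impact + exploitability), 10))
        return roundup(min(impact + exploitability, 10))

    # Example: network vector, low complexity, no privileges required, user
    # interaction required, scope unchanged, high impact on C/I/A -> 8.8
    print(base_score("NETWORK", "LOW", "NONE", "REQUIRED", False, "HIGH", "HIGH", "HIGH"))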

Vulnerability management professionals can further refine the base scores for vulnerabilities by using metrics in a temporal metric group and an environmental metric group.

The temporal metric group reflects the fact that the base score can change over time as new information becomes available; for example, when proof of concept code for a vulnerability becomes publicly available. Environmental metrics can be used to reduce the score of a CVE because of the existence of mitigating factors or controls in a specific IT environment. For example, the impact of a vulnerability might be blunted because a mitigation for the vulnerability had already been deployed by the organization in their previous efforts to harden their IT environment. The vulnerability disclosure trends that I discuss in this chapter are all based on the base scores for CVEs.

The CVSS has evolved over time—there have been three versions to date. The ratings for the latest version, version 3, are shown in Table 2.1 (NIST, n.d.). NVD CVSS calculators for CVSS v2 and v3 are available to help organizations calculate vulnerability scores using temporal and environmental metrics (NIST, n.d.).

The scores can be converted into ratings such as low, medium, high, and critical, which are easier to manage than granular numeric scores (NIST, n.d.).

Table 2.1: Rating descriptions for ranges of CVSS scores.

Vulnerabilities with higher scores have higher probabilities of exploitation and/or greater impacts on systems when exploited. Put another way, the higher the score, the higher the risk. This is why many vulnerability management teams use these scores and ratings to determine how quickly to test and deploy security updates and/or mitigations for vulnerabilities in their environment, once the vulnerabilities have been publicly disclosed.
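As a small illustration of that conversion, the following helper maps a CVSS v3 base score to NVD's qualitative severity ratings. The ranges are the published v3 severity bands; CVSS v2 uses a different scale with no critical band, so adjust accordingly.

    def severity_rating(score: float) -> str:
        """Map a CVSS v3 base score to NVD's qualitative severity rating."""
        if score == 0.0:
            return "None"
        if score <= 3.9:
            return "Low"
        if score <= 6.9:
            return "Medium"
        if score <= 8.9:
            return "High"
        return "Critical"

    print(severity_rating(8.8))   # High
    print(severity_rating(9.8))   # Critical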

Another important term to understand is "zero-day" vulnerability. A zero-day vulnerability is a vulnerability that has been publicly disclosed before the vendor that is responsible for it has released a security update to address it. These vulnerabilities are the most valuable of all vulnerabilities, with attackers and governments willing to pay relatively large sums for them (potentially a million dollars or more for a working exploit).

The worst-case scenario for vulnerability management teams is a critical rated zero-day vulnerability in software or hardware they have in their environment. This means that the risk of exploitation could be super high and that the security update that could prevent exploitation of the vulnerability is not yet publicly available. Zero-day vulnerabilities aren't as rare as you might think. Data that Microsoft released recently indicates that, of the CVEs in Microsoft products known to have been exploited, 100% of those first exploited in 2017 were zero-day vulnerabilities at the time, as were 83% of those first exploited in 2018 (Matt Miller, 2019).

Here is a fun fact for you. I created a large, sensational news cycle in 2013 when I coined the term "zero day forever" in a blog post I wrote on Microsoft's official security blog. I was referring to any vulnerability found in Windows XP after official support for it ended. In this scenario, any vulnerability found in Windows XP after the end of support would be a zero day forever, as Microsoft would not provide ongoing security updates for it.

Let me explain this in a little more detail. Attackers can wait for new security updates to be released for currently supported versions of Windows, like Windows 10. Then they reverse engineer these updates to find the vulnerability that each update addresses. Then, they check whether those vulnerabilities are also present in Windows XP. If they are, and Microsoft won't release security updates for them, then attackers have zero-day vulnerabilities for Windows XP forever. To this day, you can search for the terms "zero day forever" and find many news articles quoting me. I became the poster boy for the end of life of Windows XP because of that news cycle.

Over the years, I have talked to thousands of CISOs and vulnerability managers about the practices they use to manage vulnerabilities for their organizations. The four most common schools of thought on the best way to manage vulnerabilities in large, complex enterprise environments are as follows:

  • Prioritize critical rated vulnerabilities: When updates or mitigations for critical rated vulnerabilities become available, they are tested and deployed immediately. Lower rated vulnerabilities are tested and deployed during regularly scheduled IT maintenance in order to minimize system reboots and disruption to business. These organizations are mitigating the highest risk vulnerabilities as quickly as possible and are willing to accept significant risk in order to avoid constantly disrupting their environments with security update deployments.
  • Prioritize high and critical rated vulnerabilities: When high and critical rated vulnerabilities are publicly disclosed, their policy dictates that they will patch critical vulnerabilities or deploy available mitigations within 24 hours and high rated vulnerabilities within a month. Vulnerabilities with lower scores will be patched as part of their regular IT maintenance cycle to minimize system reboots and disruption to business.
  • No prioritization – just patch everything: Some organizations have come to the conclusion that given the continuous and growing volume of vulnerability disclosures that they are forced to manage, the effort they put into analyzing CVE scores and prioritizing updates isn't worthwhile. Instead, they simply test and deploy all updates on essentially the same schedule. This schedule might be monthly, quarterly, or, for those organizations with healthy risk appetites, semi-annually. These organizations focus on being really efficient at deploying security updates regardless of their severity ratings.
  • Delay deployment: For organizations that are acutely sensitive to IT disruptions, who have been disrupted by poor quality security updates in the past, delaying the deployment of security updates has become an unfortunate practice. In other words, these organizations accept the risk related to all publicly known, unpatched vulnerabilities in the products they use for a period of months to ensure that security updates from their vendors aren't re-released due to quality issues. These organizations have decided that the cure is potentially worse than the disease; that is, disruption from poor quality security updates poses the same or higher risk to them than all potential attackers in the world. The organizations that subscribe to this school of thought tend to bundle and deploy months' worth of updates. The appetite for risk among these organizations is high, to say the least.

To the uninitiated, these approaches and their trade-offs might not make much sense. The primary pain point that deploying security updates creates, besides the expense, is disruption to the business. For example, historically, most updates for Windows operating systems required reboots. When systems get rebooted, the downtime incurred is counted against the uptime goals that most IT organizations are committed to. Rebooting a single server might not seem material, but the time it takes to reboot hundreds or thousands of servers starts to add up. Keep in mind that organizations trying to maintain 99.999% (5 "9s") uptime can only afford to have 5 minutes and 15 seconds of downtime per year. That's 26.3 seconds of downtime per month. Servers in enterprise data centers, especially database and storage servers, can easily take more than 5 minutes to reboot when they are healthy. Additionally, when a server is rebooted, this is a prime time for issues to surface that require troubleshooting, thereby exacerbating the downtime. The worst-case scenario is when a security update itself causes a problem. The time it takes to uninstall the update and reboot yet again, on hundreds or thousands of systems, leaving them in a vulnerable state, also negatively impacts uptime.
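For reference, here is the quick arithmetic behind those uptime figures, showing the downtime budget that different availability targets allow per year and per month.

    # Back-of-the-envelope downtime budgets for common availability targets.
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60

    for target in (0.999, 0.9999, 0.99999):
        yearly_seconds = SECONDS_PER_YEAR * (1 - target)
        print(f"{target * 100:g}% uptime allows {yearly_seconds / 60:.1f} minutes "
              f"of downtime per year ({yearly_seconds / 12:.1f} seconds per month)")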

Patching and rebooting systems can be expensive, especially for organizations that perform supervised patching in off hours, which can require overtime and weekend wages. The concept of the conventional maintenance window is no longer valid, as many businesses are global and operate across borders, 24 hours per day, seven days per week. A thoughtful approach to scheduled and layered patching, keeping the majority of infrastructure available while patching and rebooting a minority, has become common.

Reboots are the top reason that organizations decide to accept some risk by patching quarterly or semi-annually, so much so that the Microsoft Security Response Center (MSRC), which I worked closely with for over a decade, used to try to limit security updates that required system reboots to every second month. To do this, when possible, they would try to release all the updates that required a reboot one month and then release updates that didn't require reboots the next month. When this plan worked, organizations that were patching systems every month could at least avoid rebooting systems every second month. But "out of band" updates, which were unplanned, frequently seemed to spoil these plans.

When you see how vulnerability disclosures have trended over time, the trade-offs that organizations make between risk of exploitation and uptime might make more sense. Running servers in the cloud can dramatically change this equation—I'll cover this in more detail in Chapter 8, The Cloud – A Modern Approach to Security and Compliance.

There are many other aspects and details of the NVD, CVE, and CVSS that I didn't cover here, but I've provided enough of a primer that you'll be able to appreciate the vulnerability disclosure trends that I provide next.

Vulnerability Disclosure Data Sources

Before we dig into the vulnerability disclosure data, let me tell you where the data comes from and provide some caveats regarding the validity and reliability of the data. There are two primary sources of data that I used for this chapter:

  1. The NVD: https://nvd.nist.gov/vuln/search
  2. CVE Details: https://www.cvedetails.com/

The NVD is the de facto authoritative source of vulnerability disclosures for the industry, but that doesn't mean the data in the NVD is perfect, nor is the CVSS. I attended a session at the Black Hat USA conference in 2013 called "Buying into the Bias: Why Vulnerability Statistics Suck" (Brian Martin, 2013).

This session covered numerous biases in CVE data. This talk is still available online and I recommend watching it so that you understand some of the limitations of the CVE data that I discuss in this chapter. CVE Details is a great website that saved me a lot of time collecting and analyzing CVE data. CVE Details inherits the limitations of the NVD because it uses data from the NVD. It's worth reading how CVE Details works and its limitations (CVE Details). Since the data and analysis that I provide in this chapter is based on the NVD and CVE Details, they inherit these limitations and biases.

Given that the two primary sources of data that I used for the analysis in this chapter have stated limitations, I can state with confidence that my analysis is not entirely accurate or complete. Also, vulnerability data changes over time as the NVD is updated constantly. My analysis is based on a snapshot of the CVE data taken months ago that is no longer up to date or accurate. I'm providing this analysis to illustrate how vulnerability disclosures were trending over time, but I make no warranty about this data – use it at your own risk.

Industry Vulnerability Disclosure Trends

First, let's look at the vulnerability disclosures in each year since the NVD was started in 1999. The total number of vulnerabilities assigned a CVE identifier between 1999 and 2019 was 122,774. As Figure 2.1 illustrates, there was a large increase in disclosures between 2016 and 2018. There was a 128% increase in disclosures between 2016 and 2017, and a 157% increase between 2016 and 2018. Put another way, in 2016, vulnerability management teams were managing 18 new vulnerabilities per day on average. That number increased to 40 vulnerabilities per day in 2017 and 45 per day in 2018, on average.

Figure 2.1: Vulnerabilities disclosed across the industry per year (1999–2019)

You might be wondering what factors contributed to such a large increase in vulnerability disclosures. The primary factor was likely a change made to how CVE identifiers are assigned to vulnerabilities in the NVD. During this time, the CVE program authorized organizations it calls "CVE Numbering Authorities (CNAs)" to assign CVE identifiers to new vulnerabilities (Common Vulnerabilities and Exposures, n.d.). According to MITRE, which manages the CVE process that populates the NVD with data:

"CVE Numbering Authorities (CNAs) are organizations from around the world that are authorized to assign CVE IDs to vulnerabilities affecting products within their distinct, agreed-upon scope, for inclusion in first-time public announcements of new vulnerabilities. These CVE IDs are provided to researchers, vulnerability disclosers, and information technology vendors.

Participation in this program is voluntary, and the benefits of participation include the ability to publicly disclose a vulnerability with an already assigned CVE ID, the ability to control the disclosure of vulnerability information without pre-publishing, and notification of vulnerabilities in products within a CNA's scope by researchers who request a CVE ID from them."

—MITRE

CVE Usage: MITRE hereby grants you a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute Common Vulnerabilities and Exposures (CVE®). Any copy you make for such purposes is authorized provided that you reproduce MITRE's copyright designation and this license in any such copy.

The advent of CNAs means that there are many more organizations assigning CVE identifiers after 2016. As of January 1, 2020, there were 110 organizations from 21 countries participating as CNAs. The names and locations of the CNAs are available at https://cve.mitre.org/cve/cna.html. Clearly, this change has made the process of assigning CVE identifiers more efficient, thus leading to the large increase in vulnerability disclosures in 2017 and 2018. 2019 ended with fewer vulnerabilities than 2018 and 2017, but still significantly more than 2016.

There are other factors that have led to higher volumes of vulnerability disclosures. For example, there are more people and organizations doing vulnerability research than ever before and they have better tools than in the past. Finding new vulnerabilities is big business and a lot of people are eager to get a piece of that pie. Additionally, new types of hardware and software are rapidly joining the computer ecosystem in the form of Internet of Things (IoT) devices. The great gold rush to get meaningful market share in this massive new market space has led the industry to make all the same mistakes that software and hardware manufacturers made over the past 20 years.

I talked to some manufacturers about the security development plans for their IoT product lines several years ago, and it was evident they planned to do very little. Developing IoT devices that lack updating mechanisms takes the industry back in time, to when personal computers couldn't update themselves, but on a much, much larger scale. Consumers simply are not willing to pay more for better security and manufacturers are unwilling to invest the time, budget, and effort into aspects of development that do not drive demand. If the last 3 years are any indication, this increased volume of vulnerability disclosures appears to be the new normal for the industry, leading to much more risk and more work to manage.

The distribution of the severity of these CVEs is illustrated in Figure 2.2. There are more CVEs rated high severity (CVSS scores between 7 and 8) and medium severity (CVSS scores between 4 and 5) than CVEs with other ratings. The weighted average CVSS score is 6.6. More than a third of all vulnerabilities (44,107) are rated critical or high. For organizations that have vulnerability management policies dictating the emergency deployment of all critical rated vulnerabilities and the monthly deployment of CVEs rated high, that's potentially more than 15,000 emergency deployments and over 25,000 monthly patch deployments over a 20-year period. This is one reason why some organizations decide not to prioritize security updates based on severity—there are too many high and critical severity vulnerabilities to make managing them differently than lower-rated vulnerabilities an effective use of time. Many of these organizations focus on becoming really efficient at testing and deploying security updates in their environment so that they can deploy all updates as quickly as possible without disrupting the business, regardless of their severity.

Figure 2.2: CVSS scores by severity (1999–2019)

The vendors and Linux distributions that had the most CVEs according to CVE Details' Top 50 Vendor List (CVE Details, 2020) on January 1, 2020 are listed in Figure 2.3. This list shouldn't be all that surprising, as some vendors on this list are also the top vendors when it comes to the number of products they have had in the market over the last 20 years. The more code you write, the more potential for vulnerabilities there is, especially in the years prior to 2003 when the big worm attacks (SQL Slammer, MS Blaster, and suchlike) were perpetrated.

After 2004, industry leaders like the ones on this list started paying more attention to security vulnerabilities in the wake of those attacks. I'll discuss malware more in Chapter 3, The Evolution of the Threat Landscape – Malware. Additionally, operating system and web browser vendors have had a disproportionate amount of attention and focus on their products because of their ubiquity. A new critical or high rated vulnerability in an operating system or browser is worth considerably more than a vulnerability in an obscure application.

Figure 2.3: Top 10 vendors/distributions with the most CVE counts (1999–2019)

At this point, you might be wondering what type of products these vulnerabilities are in. Categorizing the top 25 products with the most CVEs into operating systems, web browsers, and applications, Figure 2.4 illustrates the breakdown. In the top 25 products with the most CVEs, there are more CVEs impacting operating systems than browsers and applications combined.

But interestingly, as the number of products is expanded from 25 to 50, this distribution starts to shift quickly, with 5 percent of the total CVEs shifting from the operating system category to applications. I suspect that as the number of products included in this analysis increases, applications would eventually have more CVEs than the other categories, if for no other reason than the fact that there are many, many more applications than operating systems or browsers, despite all the focus operating systems have received over the years. Also keep in mind that the impact of a vulnerability in a popular development library, such as JRE or Microsoft .NET, can be magnified because of the millions of applications that use it.

Figure 2.4: Vulnerabilities in the 25 products with the most CVEs categorized by product type (1999–2019)

The specific products that these vulnerabilities were reported in are illustrated in the following list (CVE Details, n.d.). This list will give you an idea of the number of vulnerabilities that many popular software products have and how much effort vulnerability management teams might spend managing them.

Table 2.2: The top 25 products with the most CVEs (1999–2019)

Back in 2003, when the big worm attacks on Microsoft Windows happened, many of the organizations I talked to at the time believed that only Microsoft software had vulnerabilities, and other vendors' software was perfect. This, even though thousands of CVEs were being assigned each year before and after 2003 for software from many vendors.

A decade and a half later, I haven't run into many organizations that still believe this myth, as their vulnerability management teams are dealing with vulnerabilities in all software and hardware. Note that there are only two Microsoft products in the top 10 list.

But this data is not perfect and counting the total number of vulnerabilities in this manner does not necessarily tell us which of these vendors and products have improved over the years or whether the industry has improved its security development practices as a whole. Let's explore these aspects more next.

Reducing Risk and Costs – Measuring Vendor and Product Improvement

How can you reduce the risk and costs associated with security vulnerabilities? By using vendors that have been successful at reducing the number of vulnerabilities in their products, you are potentially reducing the time, effort, and costs related to your vulnerability management program. Additionally, if you choose vendors that have also invested in reducing attackers' return on investment by making exploitation of vulnerabilities in their products hard or impossible, you'll also be reducing your risk and costs. I'll now provide you with a framework that you can use to identify such vendors and products.

In the wake of the big worm attacks in 2003, Microsoft started developing the Microsoft Security Development Lifecycle (SDL) (Microsoft, n.d.). Microsoft continues to use the SDL to this day. I managed marketing communications for the SDL for several years, so I had the opportunity to learn a lot about this approach to development. The stated goals of the SDL are to decrease the number and severity of vulnerabilities in Microsoft software.

The SDL also seeks to make vulnerabilities that are found in software after development harder or impossible to exploit. It became clear that even if Microsoft was somehow able to produce vulnerability-free products, the applications, drivers and third-party components running on Windows or in web browsers would still render systems vulnerable. Subsequently, Microsoft shared some versions of the SDL and some SDL tools with the broader industry for free. It also baked some aspects of the SDL into Visual Studio development tools.

I'm going to use the goals of the SDL as an informal "vulnerability improvement framework" to get an idea of whether the risk (probability and impact) of using a vendor or a specific product has increased or decreased over time. This framework has three criteria:

  1. Is the total number of vulnerabilities trending up or down?
  2. Is the severity of those vulnerabilities trending up or down?
  3. Is the access complexity of those vulnerabilities trending up or down?

Why does this seemingly simple framework make sense? Let's walk through it. Is the total number of vulnerabilities trending up or down? Vendors should be working to reduce the number of vulnerabilities in their products over time. An aspirational goal for all vendors should be to have zero vulnerabilities in their products. But this isn't realistic as humans write code and they make mistakes that lead to vulnerabilities. However, over time, vendors should be able to show their customers that they have found ways to reduce vulnerabilities in their products in order to reduce risk for their customers.

Is the severity of those vulnerabilities trending up or down? Given that there will be some security vulnerabilities in products, vendors should work to reduce the severity of those vulnerabilities. Reducing the severity of vulnerabilities reduces the number of those emergency security update deployments I mentioned earlier in the chapter. It also gives vulnerability management teams more time to test and deploy security updates, which reduces disruptions to the businesses they support. More specifically, the number of critical and high severity CVEs should be minimized as these pose the greatest risk to systems.

Is the access complexity of those vulnerabilities trending up or down? Again, if there are vulnerabilities in products, making those vulnerabilities as hard as possible or impossible to exploit should be something vendors focus on. Access complexity or attack complexity (depending on the version of CVSS being used) is a measure of how easy or hard it is to exploit a vulnerability. CVSS v2 provides an estimate of access complexity as low, medium or high, while CVSS v3 uses attack complexity as either high or low. The concept is the same—the higher the access complexity or attack complexity, the harder it is for the attacker to exploit the vulnerability.

Using these measures, we want to see vendors making the vulnerabilities in their products consistently hard to exploit. We want to see the number of high access complexity CVEs (those with the lowest risk) trending up over time, and the number of low complexity vulnerabilities (those with the highest risk) trending down toward zero. Put another way, we want the share of high complexity CVEs to increase.

To summarize this vulnerability improvement framework, I'm going to measure:

  • CVE count per year
  • The number of critical rated and high rated CVEs per year. These are CVEs with scores of between 7 and 10
  • The number of CVEs per year with low access complexity or attack complexity

When I apply this framework to vendors, who can have hundreds or thousands of products, I'll use the last five years' worth of CVE data. I think 5 years is a long enough period to determine whether a vendor's efforts to manage vulnerabilities for their products have been successful. When I apply this framework to an individual product, such as an operating system or web browser, I'll use the last 3 years (2016–2018) of CVE data so that we see the most recent trend. Note that one limitation of this approach is that it won't be helpful in cases where vendors and/or their products are new and there isn't enough data to evaluate.
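To make these measurements concrete, here is a minimal sketch of how the three framework metrics, and the percentage change across a window, could be computed from a CSV export of CVE records. The file name and column names (year, base_score, access_complexity) are hypothetical; map them to whatever fields your export from CVE Details or the NVD data feeds actually provides.

    import csv
    from collections import Counter

    def framework_metrics(path):
        """Count total, critical/high, and low complexity CVEs per year."""
        totals, crit_high, low_complexity = Counter(), Counter(), Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                year = int(row["year"])
                totals[year] += 1
                if float(row["base_score"]) >= 7.0:
                    crit_high[year] += 1
                if row["access_complexity"].strip().lower() == "low":
                    low_complexity[year] += 1
        return totals, crit_high, low_complexity

    def percent_change(counter, start_year, end_year):
        return 100 * (counter[end_year] - counter[start_year]) / counter[start_year]

    # Example: compare the first and last year of a five-year window
    totals, crit_high, low_cx = framework_metrics("vendor_cves.csv")
    for name, counts in (("all CVEs", totals), ("critical/high", crit_high),
                         ("low complexity", low_cx)):
        print(f"{name}: {percent_change(counts, 2014, 2018):+.0f}% change, 2014-2018")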

Now that we have a framework to measure whether vulnerability disclosures are improving over time, I'll apply this framework to two decades of historical CVE data for some select vendors, operating systems, and web browsers to get a better idea of the state of popular software in the industry. Just to add an element of suspense and tension, like you'd find in a Mark Russinovich cybersecurity thriller novel, I'll reveal Microsoft's CVE data last!

Oracle Vulnerability Trends

Since Oracle is #2 in the top 10 list of vendors with the most CVEs, let's start with them. There are CVEs for Oracle products dating back to 1999. Figure 2.5 illustrates the number of CVEs published each year for Oracle products between 1999 and 2018.

During this period, 5,560 CVEs were assigned, of which 1,062 were rated as critical or high and 3,190 CVEs had low access complexity. There were 489 CVEs disclosed in 2019, making a grand total of 6,112 CVEs in Oracle products between 1999 and 2019 (CVE Details, n.d.).

Note that Oracle acquired numerous technology companies and new technologies during this period, including MySQL and Sun Microsystems. Acquisitions of new technologies can lead to significant changes in CVE numbers for vendors. It can take time for acquiring vendors to get the products they obtain into shape to meet or exceed their standards. In Oracle's case, some of the technologies they acquired turned out to have the most CVEs of any of the products in their large portfolio; these include MySQL, JRE and JDK (CVE Details, n.d.).

Figure 2.5: Number of CVEs, critical and high CVEs, and low complexity CVEs in Oracle products (1999–2018)

Taking a view of just the last five full years, starting at the beginning of 2014 and ending at the end of 2018, the number of CVEs increased by 56%. There was a 54% increase in the number of CVEs with low access complexity or attack complexity. However, the number of critical and high severity CVEs (those with scores between 7 and 10) decreased by 48% during this same period. This is impressive given the big increase in the number of vulnerabilities during this time. This positive change is illustrated by Figure 2.6, which shows the number of critical and high severity CVEs as a percentage of the total CVEs for each year between 1999 and 2018, as well as CVEs with low access complexity as a percentage of all CVEs during the same period.

Figure 2.6: Critical and high severity rated CVEs and low complexity CVEs in Oracle products as a percentage of total (1999–2018)

Long-term trends like this don't happen by accident. Oracle likely implemented some changes in people (such as security development training), processes, and/or technology that helped them reduce the risk for their customers. Older products that reach end of life can also help improve the overall picture. Oracle likely also made progress addressing vulnerabilities in many of the technologies it had acquired over the years. There's still a relatively high volume of vulnerabilities that vulnerability management teams need to deal with, but lower severity vulnerabilities are helpful as I discussed earlier.

According to CVE Details, the Oracle products that contributed the most to the total number of CVEs between 1999 and 2018 included MySQL, JRE, JDK, Database Server, and Solaris.

Apple Vulnerability Trends

Next on the list of vendors with the highest number of CVEs is Apple. Between 1999 and 2018, there were 4,277 CVEs assigned to Apple products; of these CVEs, 1,611 had critical or high scores, and 1,524 had access complexity that was described as low (CVE Details, n.d.). There were 229 CVEs disclosed in Apple products in 2019 for a total of 4,507 CVEs between 1999 and 2019 (CVE Details, n.d.). As you can see from Figure 2.7, there have been big increases and decreases in the number of CVEs in Apple products since 2013.

Looking at just the 5 years between 2014 and the end of 2018, comparing the start and end of this period, there was a 39% reduction in the number of CVEs, a 30% reduction in CVEs with CVSS scores of 7 and higher, and a 65% reduction in CVEs with low access complexity. However, vulnerability management teams had their work cut out for them in 2015 and 2017 when there were the largest increases in CVE numbers in Apple's history.

Figure 2.7: Number of CVEs, critical and high CVEs, and low complexity CVEs in Apple products (1999–2018)

Figure 2.8: Critical and high severity rated CVEs and low complexity CVEs in Apple products as a percentage of total (1999–2018)

The Apple products that contributed the most CVEs to Apple's total, according to CVE Details, include macOS, iOS, Safari, macOS Server, iTunes, and watchOS (CVE Details, n.d.).

IBM Vulnerability Trends

IBM is ranked fourth on the list of vendors with the most vulnerabilities, with 4,224 CVEs between 1999 and 2018 (CVE Details, n.d.), just slightly fewer than Apple; incredibly, that's a difference of only 53 CVEs between these two vendors over a 20-year period. But Big Blue had nearly half the CVEs rated critical or high compared to Apple. However, IBM had significantly more CVEs with low access complexity than Apple.

Figure 2.9: Number of CVEs, critical and high score CVEs and low complexity CVEs in IBM products (1999–2018)

Focusing on just the last 5 years between 2014 and the end of 2018, IBM saw a 32% increase in the number of CVEs. There was a 17% decrease in the number of critical and high score CVEs, while there was an 82% increase in CVEs with low access complexity. That decrease in critical and high rated vulnerabilities during a time when CVEs increased by almost a third is positive and noteworthy.

Figure 2.10: Critical and high severity rated CVEs and low complexity CVEs in IBM products as a percentage of total (1999–2018)

The products that contributed the most to IBM's CVE count were AIX, WebSphere Application Server, DB2, Rational Quality Manager, Maximo Asset Management, Rational Collaborative Lifecycle Management and WebSphere Portal (CVE Details, n.d.).

Google Vulnerability Trends

Rounding out the top five vendors with the most CVEs is Google. Google is different from the other vendors on the top 5 list. The first year that a vulnerability was published in the NVD for a Google product was 2002, not 1999 like the rest of them. Google is a younger company than the others on the list.

During the period between 2002 and 2018, there were 3,959 CVEs attributed to Google products. Of these CVEs, 2,078 were rated critical or high score (CVE Details, n.d.). That's more than double the number of critical and high score vulnerabilities versus IBM and Oracle, and significantly more than Apple. Google has more critical and high severity vulnerabilities than any vendor in the top five list, with the exception of Microsoft. 1,982 of the CVEs assigned to Google products during this period had low access complexity (CVE Details, n.d.).

Figure 2.11: The number of CVEs, critical and high CVEs and low complexity CVEs in Google products (2002–2018)

Looking at the trend in the 5 years between 2014 and the end of 2018, there was a 398% increase in CVEs assigned to Google products; during this same period, there was a 168% increase in CVEs rated critical or high and a 276% increase in low complexity CVEs (CVE Details, n.d.). The number of CVEs in 2017 reached 1,001, according to CVE Details (CVE Details, n.d.), a feat that none of the other vendors in the top five has ever achieved.

Figure 2.12: Critical and high severity rated CVEs and low complexity CVEs in Google products as a percentage of total (2002–2018)

According to CVE Details, the Google products that contributed the most to Google's overall CVE count included Android and Chrome (CVE Details, n.d.).

Microsoft Vulnerability Trends

Now it's time to look at how Microsoft has been managing vulnerabilities in their products. They top the list of vendors with the most CVEs, with 6,075 between 1999 and the end of 2018 (CVE Details, n.d.).

Of the aforementioned 6,075 CVEs, 3,635 were rated critical or high, and 2,326 CVEs had low access/attack complexity (CVE Details, n.d.). Of the 5 vendors we examined, Microsoft had the highest total number of vulnerabilities, the highest number of vulnerabilities with CVSS scores of 7 and higher, and the most CVEs with low access complexity.

Figure 2.13: The number of CVEs, critical and high CVEs and low complexity CVEs in Microsoft products (1999–2018)

Focusing on the 5 years between 2014 and the end of 2018, there was a 90% increase in CVEs assigned to Microsoft products. There was a 14% increase in critical and high score vulnerabilities and a 193% increase in low access complexity CVEs. If there is a silver lining, it's that Microsoft has made it significantly harder to exploit vulnerabilities over the long term. Microsoft released compelling new data recently on the exploitability of their products that is worth a look to get a more complete picture (Matt Miller, 2019).

Figure 2.14: Critical and high severity rated CVEs and low complexity CVEs in Microsoft products as a percentage of total (1999–2018)

The products that contributed the most to Microsoft's overall CVE count include Windows Server 2008, Windows 7, Windows 10, Internet Explorer, Windows Server 2012, Windows 8.1, and Windows Vista (CVE Details, n.d.). Some operating systems on this list were among the most popular operating systems in the world, at one time or another, especially among consumers. This makes Microsoft's efforts to minimize vulnerabilities in these products especially important. I'll discuss vulnerability disclosure trends for operating systems and web browsers later in this chapter.

Vendor Vulnerability Trend Summary

All the vendors we examined in this chapter have seen dramatic increases in the number of vulnerabilities in their products over time. The volume of vulnerability disclosures in the 2003–2004 timeframe seems quaint compared to the volumes we have seen over the past 3 years. Big increases in the number of vulnerabilities can make it more challenging to reduce the severity and increase the access complexity of CVEs.

Figure 2.15: CVE count for the top five vendors (1999–2018)

Figure 2.16: The counts of critical and high rated severity CVEs for the top five vendors (1999–2018)

Only one of the industry leaders we examined has achieved all three of the goals we defined earlier for our informal vulnerability improvement framework. Focusing on the last five full years for which I currently have data (2014–2018), Apple successfully reduced the number of CVEs, the number of critical and high severity CVEs and the number of CVEs with low access complexity. Congratulations Apple!

Table 2.3: The results from applying the vulnerability improvement framework (2014–2018)

It's super challenging to drive these metrics in the right direction across potentially hundreds of products for years at a time. Let's examine how individual products have performed over time. Next, we'll look at select operating systems and web browsers.

Operating System Vulnerability Trends

Operating systems have garnered a lot of attention from security researchers over the past couple of decades. A working exploit for a zero-day vulnerability in a popular desktop or mobile operating system is potentially worth hundreds of thousands of dollars or more. Let's look at the vulnerability disclosure trends for operating systems and look closely at a few of the products that have the highest vulnerability counts.

Figure 2.17 illustrates the operating systems that had the most unique vulnerabilities between 1999 and 2019, according to CVE Details (CVE Details, n.d.). The list contains desktop, server, and mobile operating systems from an array of vendors including Apple, Google, Linux, Microsoft, and others:

Figure 2.17: Operating systems with the most unique vulnerabilities by total number of CVE counts (1999–2019)

Microsoft Operating System Vulnerability Trends

Since we covered Microsoft last in the previous section, I'll start with their operating systems here. After working on that customer-facing incident response team at Microsoft that I mentioned earlier, I had the opportunity to work in the Core Operating System Division at Microsoft. I was a program manager on the Windows Networking team. I helped ship Windows Vista, Windows Server 2008, and some service packs. Believe it or not, shipping Windows was an even harder job than that customer-facing incident response role. But that is a topic for a different book.

Let's look at a subset of both client and server Microsoft operating systems. Figure 2.18 illustrates the number of CVEs per year for Windows XP, Windows Server 2012, Windows 7, Windows Server 2016, and Windows 10.

Figure 2.18: CVE count for select versions of Microsoft Windows (2000–2018)

Figure 2.18 gives us some insight into how things have changed with vulnerability disclosures over time. It shows us how much more aggressively vulnerabilities have been disclosed in the last 4 or 5 years compared with earlier periods. For example, in the 20 years that vulnerability disclosures were reported in Windows XP, a total of 741 CVEs were disclosed (CVE Details, n.d.); that's 37 CVEs per year on average. Windows 10, Microsoft's latest client operating system, exceeded that CVE count with 748 CVEs in just 4 years. That's 187 vulnerability disclosures per year on average. This represents a 405% increase in CVEs disclosed on average per year.

Server operating systems have also seen an increasingly aggressive vulnerability discovery rate. A total of 802 vulnerabilities were disclosed in Windows Server 2012 in the 7 years between 2012, when it was released, and 2018 (CVE Details, n.d.); that's 114 CVEs per year on average. But that average jumps to 177 CVEs per year for Windows Server 2016, which represents a 55% increase.

Given that the newest operating systems, Windows 10 and Windows Server 2016, shouldn't have any of the vulnerabilities that were fixed in previous operating systems before they shipped and they have had the benefit of being developed with newer tools and better trained developers, the pace of disclosures is incredible. However, with other operating systems reaching end of life, and Windows 10 being the only new client version of Windows, it is likely getting more attention from security researchers than any other Windows operating system version ever.

Let's now take a deeper look at some of these versions of Windows and apply our vulnerability improvement framework to them.

Windows XP Vulnerability Trends

Windows XP support ended in April 2014, but there were 3 CVEs disclosed in 2017 and 1 in 2019, which is why the graph in Figure 2.19 has a long tail (CVE Details, n.d.). Although the number of critical and high severity CVEs in Windows XP did drop from their highs in 2011 by the time support ended in early 2014, the number of CVEs with low access complexity remained relatively high. I don't think we can apply our vulnerability improvement framework to the last few years of Windows XP's life since the last year, in particular, was distorted by a gold rush to find and keep new zero-day vulnerabilities that Microsoft would presumably never fix. These vulnerabilities would be very valuable as long as they were kept secret.

Figure 2.19: The number of CVEs, critical and high rated severity CVEs and low complexity CVEs in Microsoft Windows XP (2000–2019)

Why did Microsoft release security updates for Windows XP after it went out of support? It's that "zero day forever" concept I mentioned earlier. Facing new, critical, potentially worm-able vulnerabilities in Windows XP, Microsoft made the decision to offer security updates for Windows XP after the official support lifetime ended.

The alternative was potentially thousands or millions of compromised and infected "zombie" Windows XP systems constantly attacking the rest of the internet. Microsoft made the right decision releasing updates for Windows XP after its end of life given how many enterprises, governments, and consumers still use it.

Figure 2.20 illustrates the critical and high severity CVEs and low complexity CVEs as a percentage of the total number of CVEs in Windows XP. The erratic pattern in 2017 and 2019 is a result of very few CVEs disclosed in those years (3 in 2017 and 1 in 2019) (CVE Details, n.d.).

Figure 2.20: Critical and high severity rated CVEs and low complexity CVEs in Microsoft Windows XP as a percentage of all Microsoft Windows XP CVEs (2000–2019)

Windows 7 Vulnerability Trends

Next, let's examine the data for the very popular Windows 7 operating system. Windows 7 went out of support on January 14, 2020 (Microsoft Corporation, 2020). Windows 7 was released in July 2009, after the poorly received Windows Vista. Everyone loved Windows 7 compared to Windows Vista. Additionally, Windows 7 enjoyed a "honeymoon" from a CVE disclosure perspective when it was released, as it took a couple of years for CVE disclosures to ramp up; in recent years, they have increased significantly.

Windows 7 had 1,031 CVEs disclosed between 2009 and 2018. On average, that's 103 vulnerability disclosures per year (CVE Details, n.d.). That's not as high as Windows 10's average annual CVE disclosure rate, but is nearly 3 times the average number of CVEs disclosed in Windows XP per year. Windows 7 had 57 critical or high rated vulnerabilities per year on average.

Figure 2.21: The number of CVEs, critical and high rated severity CVEs and low complexity CVEs in Microsoft Windows 7 (2009–2018)

If we focus on just the last 3 years between 2016 and 2018 (a period for which we have data for several Windows versions for comparison purposes), the number of CVEs increased by 20% from the beginning of 2016 to the end of 2018, while the number of critical and high severity CVEs decreased by 44%, and the number of low complexity CVEs increased by 8% (CVE Details, n.d.). A significant decrease in vulnerability severity is helpful to vulnerability management teams, but this doesn't achieve the goals of our vulnerability improvement framework for this 3-year period.

Figure 2.22: Critical and high severity rated CVEs and low complexity CVEs in Microsoft Windows 7 as a percentage of all Microsoft Windows 7 CVEs (2009-2018)

Windows Server 2012 and 2016 Vulnerability Trends

Let's now look at a couple of Windows Server SKUs – Windows Server 2012 and 2016. Windows Server 2012 was released in September 2012. Windows Server 2016 was released in September 2016, so we don't have a full year's worth of data for 2016. This will skew the results of our framework because it will appear that our metrics all had large increases compared to 2016.

Figure 2.23: The number of CVEs, critical and high rated severity CVEs and low complexity CVEs in Microsoft Windows Server 2012 (2012–2018)

By the end of 2018, Windows Server 2012 had 802 CVEs in the NVD. Across the 7 years in Figure 2.23, on average, there were 115 CVEs per year, of which 54 CVEs were rated critical or high (CVE Details, n.d.). For the period between 2016 and the end of 2018, Windows Server 2012's CVE count increased by 4%, while critical and high severity CVEs decreased by 47%, and low complexity CVEs decreased by 10%. It comes very close to achieving the goals of our vulnerability improvement framework. So close!

Unfortunately, the story isn't as straightforward for Windows Server 2016. We simply do not have enough full year data to see how vulnerability disclosures are trending. There is a huge increase (518%) in CVE disclosures between 2016 and 2018, but that's only because we only have one quarter's data for 2016. However, the number of disclosures between 2017 and 2018 is essentially the same (251 and 241, respectively).

Windows Server 2012 had 235 disclosures in 2017 and 162 in 2018 (CVE Details, n.d.). That's an average of 199 CVEs per year for those 2 years, whereas Windows Server 2016's average was 246 for its 2 full years. However, 2 years' worth of data simply isn't enough; we need to wait for more data in order to understand how Windows Server 2016 is performing.

Figure 2.24: The number of CVEs, critical and high rated severity CVEs, and low complexity CVEs in Microsoft Windows Server 2016, (2016–2018)

Windows 10 Vulnerability Trends

The final Windows operating system I'll examine here was called "the most secure version of Windows ever" (err…by me (Ribeiro, n.d.)), Windows 10. This version of Windows was released in July 2015. At the time of writing, I had a full three years' worth of data from 2016, 2017 and 2018. By the end of 2018, Windows 10 had a total of 748 CVEs in the NVD; on average, 187 CVEs per year and 76 critical and high severity vulnerabilities per year (CVE Details, n.d.).

During this 3-year period the number of CVEs in Windows 10 increased by 48%, while the number of critical and high score CVEs decreased by 25% and the number of low access complexity CVEs increased by 48%.

Figure 2.25: The number of CVEs, critical and high rated severity CVEs and low complexity CVEs in Microsoft Windows 10 (2015–2018)

Figure 2.26: Critical and high severity rated CVEs and low complexity CVEs in Microsoft Windows 10 as a percentage of all Microsoft Windows 10 CVEs (2015–2018)

2019 ended with 357 CVEs in Windows 10, a 33% increase from 2018, and the highest number of CVEs of any year since it was released (CVE Details, n.d.). One important factor this data doesn't reflect is that Microsoft has become very good at quickly patching hundreds of millions of systems around the world. This is very helpful in reducing risk for their customers. Let's now examine whether some other popular operating systems managed to meet our criteria.

Linux Kernel Vulnerability Trends

According to CVE Details, at the time of writing, Debian Linux and Linux Kernel have the highest numbers of CVEs of all the products they track. Let's examine the CVE trends for Linux Kernel. The cumulative total number of CVEs from 1999 to 2018 is 2,163, or about 108 CVEs per year on average (CVE Details, n.d.). This is 3 times the annual average of Windows XP, just under the annual average for Windows Server 2012 (114), and well under the annual average for Windows Server 2016 (177). There were 37 critical and high rated CVEs in the Linux Kernel per year on average.

Looking at the same three-year period between 2016 and the end of 2018, we can see from Figure 2.28 that there was a large increase in CVE disclosures between 2016 and 2017. This is consistent with the trend we saw for the entire industry that I discussed earlier in the chapter. This appears to be a short-term increase for Linux Kernel. 2019 ended with 170 CVEs in Linux Kernel, down from 177 in 2018 (CVE Details, n.d.).

Figure 2.27: The number of CVEs, critical and high rated severity CVEs and low complexity CVEs in Linux Kernel (1999–2018)

Between 2016 and the end of 2018, the number of CVEs decreased by 18%, while the number of CVEs with scores of 7 and higher decreased by 38%. During the same period, the number of low complexity CVEs decreased by 21%. Linux Kernel appears to have achieved the goals of our vulnerability improvement framework. Wonderful!

Figure 2.28: Critical and high severity rated CVEs and low complexity CVEs in Linux Kernel as a percentage of all Linux Kernel CVEs (1999–2018)

Google Android Vulnerability Trends

Let's look at Android, a mobile operating system developed by Google. Android was initially released in September 2008, and CVEs for Android started showing up in the NVD in 2009. On average, there were 215 CVEs filed for Android per year, with 129 CVEs per year rated critical or high severity, although Android had only 43 CVEs in the 6 years spanning 2009 to 2014 (CVE Details, n.d.). The volume of CVEs in Android started to increase significantly in 2015 and has increased since then.

Figure 2.29: The number of CVEs, critical and high rated severity CVEs and low complexity CVEs in Google Android (2009–2018)

In the 3 years between 2016 and the end of 2018, the number of CVEs in Android increased by 16% and the number of low complexity CVEs increased by 285%, while the number of critical and high severity CVEs decreased by 14%.

The total number of CVEs filed for Android between 2009 and the end of 2018 was 2,147 according to CVE Details (CVE Details, n.d.).

Figure 2.30: Critical and high severity rated CVEs and low complexity CVEs in Google Android as a percentage of all Google Android CVEs (2009–2018)

Apple macOS Vulnerability Trends

The final operating system I'll examine here is Apple's macOS. Between 1999 and 2018, 2,094 CVEs were entered into the NVD for macOS (CVE Details, n.d.). That's 105 CVEs per year on average, with about 43 critical and high severity CVEs per year. This is very similar to Linux Kernel's average of 108 CVEs per year. You can see from Figure 2.31 that there was a large increase in CVEs in 2015.

Figure 2.31: Number of CVEs, critical and high rated severity CVEs, and low complexity CVEs in macOS (1999–2018)

During the period spanning from the start of 2016 to the end of 2018, the number of CVEs for macOS declined by 49%. The number of critical and high severity CVEs decreased by 59%. Low access complexity CVEs decreased by 66%. macOS achieved the objectives of our vulnerability improvement framework. Well done again, Apple!

Figure 2.32: Critical and high severity rated CVEs and low complexity CVEs in macOS as a percentage total of all CVEs (1999–2018)

Operating Systems Vulnerability Trend Summary

The operating systems we examined in this chapter are among the most popular operating systems in history. When I applied our vulnerability improvement framework to the vulnerability disclosure data for these operating systems, the results were mixed.

None of the Microsoft operating systems I examined met the criteria set in our vulnerability improvement framework. Windows Server 2012 came very close, but CVEs for it did increase by 4% during the period I examined. Adjusting the timeframe might lead to a different conclusion, but all the operating systems' CVE trends I examined were for the same period. Microsoft has released exploitation data that shows that the exploitability of vulnerabilities in their products is very low due to all the memory safety features and other mitigations they've implemented in Windows (Matt Miller, 2019). This is bittersweet for vulnerability management teams because although the vast majority of vulnerabilities cannot be successfully exploited, they still need to be patched. However, in terms of mitigating the exploitation of unpatched vulnerabilities, it's good to know Microsoft has layered in so many effective mitigations for their customers.

Google Android did not meet the goals in the vulnerability improvement framework during the 2016–2018 timeframe. There was a small increase in CVEs and a 285% increase in low complexity CVEs during this period (CVE Details, n.d.).

macOS and Linux Kernel did meet the criteria of the vulnerability improvement framework, and these vendors should be congratulated and rewarded for their achievement of reducing risk for their customers.

Table 2.4: Application results for the vulnerability improvement framework (2016–2018)

Table 2.5 provides a summary of the CVE data for the operating systems I have examined. The Linux Kernel and Apple macOS stand out from the others on the list due to their relatively low average number of critical and high severity CVEs per year.

Table 2.5: Operating systems' vital statistics (1999–2018)

Before I examine web browsers, I want to point out one of the limitations of the data I presented in this section. While I was able to split out CVE data for each individual version of Windows, I didn't do that for macOS releases. Similarly, I didn't dig into the granular details of different Linux distributions to examine data for custom kernels and third-party applications. Comparing an individual version of Windows, such as Windows 7, with all macOS releases isn't comparing apples with apples, if you can forgive the pun. More research is required to uncover trends for specific non-Windows operating system releases.

The trend data for individual operating system releases could be quite different from the results for all releases as a group. However, the data I did present still illustrates something more fundamental than trends for specific operating system versions, many of which are out of support. It illustrates how the development and test processes of these operating system vendors have performed over a period of many years. Put another way, it illustrates what vendors' security standards look like and whether they've been able to improve continuously over time. From this, we can draw conclusions about which of these vendors is adept at potentially reducing the costs of vulnerability management for enterprises, while also reducing risks for them.

Next let's look at vulnerability trends in web browsers, which also get a lot of scrutiny from security researchers around the world.

Web Browser Vulnerability Trends

Web browsers attract a lot of attention from security researchers and attackers alike. This is because they are hard to live without. Everyone uses at least one browser on desktops, mobile devices and servers. Operating systems' development teams can bake layers of security features into their products, but web browsers tend to bring threats right through all those host-based firewalls and other security layers. Web browsers have been notoriously difficult to secure and, as you'll see from the data in this section, there has been a steady volume of vulnerabilities over the years in all popular browsers.

One additional warning about the web browser data that I share with you in this section: of all the NVD, CVE, and CVSS data that I analyzed for this chapter, I have the least confidence in the accuracy of this data. This is because, over time, different product names were used for CVEs in the NVD, making it challenging to ensure I have a complete data set. For example, some CVEs for Internet Explorer were labeled as "IE" instead. I did my best to find all the name variations I could, but I can't guarantee that the data is complete and accurate.
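As a rough illustration of the kind of clean-up this requires, the following Python sketch maps a handful of product name variants to a canonical name before counting CVEs per browser. The alias list is hypothetical and far from exhaustive; it simply shows the normalization step.

```python
from collections import Counter

# Hypothetical aliases seen in CVE records mapped to one canonical name.
# A real mapping would need to be built by inspecting the data itself.
ALIASES = {
    "ie": "Internet Explorer",
    "internet explorer": "Internet Explorer",
    "edge": "Microsoft Edge",
    "chrome": "Google Chrome",
    "firefox": "Mozilla Firefox",
}

def canonical(product_name):
    """Return the canonical product name, or the original if unknown."""
    return ALIASES.get(product_name.strip().lower(), product_name)

# Example records with inconsistent product labels.
records = ["IE", "Internet Explorer", "internet explorer", "Chrome", "Edge"]
print(Counter(canonical(r) for r in records))
# Counter({'Internet Explorer': 3, 'Google Chrome': 1, 'Microsoft Edge': 1})
```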

The number of CVEs between 1999 and April 2019 is illustrated in Figure 2.33 (CVE Details, n.d.).

Figure 2.33: Total number of CVEs in popular web browsers (1999–2019)

I'll dig into the data and apply our vulnerability improvement framework to a few of these products to give you an idea of how these vendors have been managing vulnerabilities in some of the world's most popular web browsers.

Internet Explorer Vulnerability Trends

Let's start by examining Microsoft Internet Explorer (IE). IE has been around for many years with different versions getting released for different operating systems. I was able to find 1,597 CVEs for Internet Explorer between 1999 and 2018 (CVE Details, n.d.). This is an average of 80 vulnerabilities per year and 57 critical and high severity CVEs per year.

Figure 2.34 illustrates the number of CVEs, the number of critical and high rated CVEs, and the number of low complexity CVEs for each year between 1999 and 2018. You can see a big increase in the number of CVEs, and the number of critical and high score CVEs during the period 2012–2017.

Figure 2.34: The number of CVEs, critical and high severity CVEs and low complexity CVEs in IE (1999–2018)

A noteworthy data point, illustrated by Figure 2.35, is just how many critical rated CVEs have been found in IE over the years. Remember that many organizations will initiate an emergency update process for every critical rated vulnerability that is disclosed, because the risk is so high. Of the 1,597 CVEs in IE, 768 of them, or 48%, were rated critical (CVE Details, n.d.). The years that saw the largest number of these CVEs were 2013, 2014, and 2015. Microsoft moved to a release model in which they ship cumulative security updates instead of individual updates in order to minimize the disruption all of these CVEs would otherwise cause.

Figure 2.35: Distribution of CVSS scores for CVEs in IE (1999–2018)

Despite the high volume of CVEs and the large number of critical and high rated CVEs, IE fares well when we put this data into our vulnerability improvement framework, focusing on the 3 years between 2016 and the end of 2018. The effort to drive down CVEs from their highs in 2014 and 2015 shows up as a 44% decline in CVEs and a 41% decline in critical and high rated CVEs between 2016 and 2018. Additionally, there were zero low complexity CVEs in 2018. Microsoft has met the criteria in our vulnerability improvement framework and, more importantly, the goals of the SDL. Nice work, Microsoft!

Next, let's examine the Edge browser.

Microsoft Edge Vulnerability Trends

Edge is the web browser that Microsoft released with Windows 10 in 2015. Microsoft made numerous security enhancements to this browser based on the lessons they learned from IE (Microsoft Corporation, n.d.).

According to CVE Details, there were 525 CVEs for Edge between 2015 and the end of 2018 (CVE Details, n.d.). On average, this is 131 vulnerabilities per year and 95 critical and high severity CVEs per year. Figure 2.36 illustrates the volume of these CVEs per year along with the number of critical and high severity vulnerabilities, and the number of low complexity CVEs. The number of CVEs climbed quickly in the first few years as vulnerabilities that weren't fixed before Edge was released were found and disclosed. This means that Edge won't meet the criteria for our vulnerability improvement framework. However, the decline in CVEs in 2018 continued into 2019 with a further 57% reduction. If I included 2019 in my analysis, Edge could potentially meet the criteria.

Figure 2.36: The number of CVEs, critical and high severity CVEs and low complexity CVEs in Microsoft Edge (2015–2018)

This analysis is likely moot, because in December 2018 Microsoft announced that they would be adopting the Chromium open source project for Edge development (Microsoft Corporation, n.d.). We'll have to wait for a few years to see how this change is reflected in the CVE data.

Let's examine Google Chrome next.

Google Chrome Vulnerability Trends

The Google Chrome browser was released in 2008, first on Windows and later on other operating systems. There were 1,680 CVEs for Chrome between 2008 and the end of 2018, an average of 153 vulnerabilities per year; 68 vulnerabilities per year, on average, were rated critical or high severity (CVE Details, n.d.). As illustrated in Figure 2.37, there was a dramatic increase in CVEs for Chrome between 2010 and 2012. In the 3 years between 2016 and the end of 2018, there was a 44% reduction in CVEs, and 100% reductions in low complexity CVEs, as well as critical and high severity CVEs.

Figure 2.37: The number of CVEs, critical and high severity CVEs and low complexity CVEs in Google Chrome (2008–2018)

Figure 2.38: Critical and high severity rated CVEs and low complexity CVEs as a percentage total of all Google Chrome CVEs (2008–2018)

Chrome satisfies the criteria we have in our vulnerability improvement framework. Excellent work Google!

Mozilla Firefox Vulnerability Trends

Mozilla Firefox is a popular web browser that was initially released in 2002; CVEs started showing up in the NVD for it in 2003. Between 2003 and the end of 2018, there were 1,767 CVEs for Firefox, edging out Google Chrome as the browser with the most CVEs. Firefox had, on average, 110 CVEs per year during this period, 51 of which were rated critical or high severity (CVE Details, n.d.).

As illustrated by Figure 2.39, Firefox almost accomplished the aspirational goal of zero CVEs in 2017 when only a single CVE was filed in the NVD for it. Unfortunately, this didn't become a trend as 333 CVEs were filed in the NVD in 2018, an all-time high for Firefox in a single year. In the 3 years between 2016 and the end of 2018, CVEs increased by 150%, critical and high severity vulnerabilities increased by 326%, while low complexity CVEs increased by 841%. The number of CVEs decreased from 333 to a more typical 105 in 2019 (CVE Details, n.d.).

Figure 2.39: The number of CVEs, critical and high severity CVEs and low complexity CVEs in Firefox (2003–2018)

Had Mozilla been able to continue the trend in vulnerability disclosures that started in 2015, Firefox would have met the criteria for our vulnerability improvement framework. The spike in Figure 2.40 in 2017 is a result of having a single CVE that year that was rated high severity with low access complexity (CVE Details, n.d.).

Figure 2.40: Critical and high severity rated CVEs and low complexity CVEs as a percentage total of all Firefox CVEs (2003–2018)

Apple Safari Vulnerability Trends

The last web browser I'll examine is Apple Safari. Apple initially released Safari in January 2003. Between 2003 and the end of 2018, a total of 961 CVEs were disclosed in Safari, an average of 60 vulnerabilities per year, with 17 CVEs per year, on average, rated critical or high severity.

Figure 2.41: The number of CVEs, critical and high severity CVEs and low complexity CVEs in Apple Safari (2003–2018)

As illustrated by Figure 2.41, there were relatively large increases in CVEs in Safari in 2015 and 2017. Between 2016 and the end of 2018, there was an 11% decline in CVEs, a 100% decline in critical and high rated CVEs, and an 80% decline in low complexity vulnerabilities (CVE Details, n.d.). Apple once again meets the criteria of our vulnerability improvement framework.

Figure 2.42: Critical and high severity rated CVEs, and low complexity CVEs as a percentage total of all Apple Safari CVEs (2003–2018)

Web Browser Vulnerability Trend Summary

Three of the web browsers that I examined met the goals of our vulnerability improvement framework. Microsoft Internet Explorer, Google Chrome, and Apple Safari all made the grade.

Table 2.6: Results of applying the vulnerability improvement framework for the period (2016–2018)

Table 2.7: Web browser vital statistics (1999–2018)

Table 2.7 provides a summary of some interesting CVE data for the web browsers we examined (CVE Details, n.d.). Apple Safari stands out based on the low number of average CVEs per year and an average number of critical and high severity CVEs that is well below the others.

When I present this type of data and analysis on web browsers to people who are really passionate about their favorite browser, they are typically in disbelief, sometimes even angry, that their favorite browser could have so many vulnerabilities. Questions about the validity of the data and analysis usually quickly follow. Some people I've shared this type of data with also feel that the number of vulnerabilities in their least favorite browser has somehow been under-reported. It's like arguing about our favorite make of car! But remember that this data is imperfect in several respects, and there certainly is an opportunity to dive deeper into the data and analyze CVE trends for specific versions, service packs, and releases to get a more granular view of differences between browsers. You can do this using the vulnerability improvement framework that I've provided in this chapter. But perhaps more importantly, remember that this data illustrates how the development and test processes of these vendors have performed over many years and whether they have been continuously improving.

After all, every version of IE was developed by Microsoft, and every version of Safari was developed by Apple, and so on. Their customers don't just use a version of their browsers; they use the outputs of their vendors' development, test, and incident response processes. The key question to answer is which of these vendors has managed their vulnerabilities in a way that lowers the costs to your organization while reducing risk. Let me finish this chapter by providing some general guidance on vulnerability management.

Vulnerability Management Guidance

A well-run vulnerability management program is critical for all organizations. As you've seen from the data and analysis in this chapter, there have been lots of vulnerabilities disclosed across the industry and the volumes have been increasing, not decreasing. At the end of 2019, there were over 122,000 CVEs in the NVD. Attackers know this and understand how challenging it is for organizations to keep up with the volume and complexity of patching the various hardware and software products they have in their environments. Defenders have to be perfect while attackers just have to be good or lucky once. Let me provide you with some recommendations regarding vulnerability management programs.

First, one objective of a vulnerability management program is to understand the risk that vulnerabilities present in your IT environment. This risk is not static or slow moving. Vulnerabilities are constantly being disclosed in all hardware and software, so data on the vulnerabilities in your environment gets stale quickly. The organizations I have met that decided they would deploy security updates once per quarter, or every six months, have an unusually high appetite for risk, although, paradoxically, some of these same organizations tell me they have no appetite for risk. It is always interesting to meet people who believe their highest priority risks are their vendors, instead of the cadre of attackers who are actively looking for ways to take advantage of them. Given the chance, those attackers will gladly encrypt all their data and demand a ransom for the decryption keys.

When I meet an organization with this type of policy, I wonder whether they really do have a data-driven view of the risk and whether the most senior layer of management really understands the risk that they are accepting on behalf of the entire organization.

Do they know that, on average, 33.4 new vulnerabilities were disclosed per day in 2019, and 45.4 per day in 2018? If they are patching quarterly, that is equivalent to 4,082 vulnerabilities potentially unpatched for up to 90 days in 2018 and 3,006 in 2019. Double those figures for organizations that patch semi-annually. On average, more than a third of those vulnerabilities are rated critical or high severity. Attackers only require one exploitable vulnerability in the right system to gain an initial foothold in an environment. Instead of avoiding patching and rebooting systems to minimize disruption to their business, most of these organizations need to focus on building very efficient vulnerability management programs with the goal of reducing risk in a more reasonable amount of time. Attackers have a huge advantage in environments that are both brittle and unpatched for long periods.
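The back-of-the-envelope arithmetic behind those backlog figures can be sketched in a few lines of Python. The disclosure rates are the averages quoted above and the patch windows are assumptions about cadence; using the rounded daily averages produces figures that differ slightly from those in the text.

```python
# Average daily CVE disclosure rates quoted above (CVE Details data).
daily_disclosure_rate = {2018: 45.4, 2019: 33.4}

# Assumed patch cadences: the longest an unpatched vulnerability could
# wait before the next deployment window.
patch_windows_days = {"quarterly": 90, "semi-annually": 180}

for year, rate in daily_disclosure_rate.items():
    for cadence, days in patch_windows_days.items():
        backlog = round(rate * days)
        print(f"{year}, patching {cadence}: up to {backlog:,} vulnerabilities "
              f"potentially unpatched for as long as {days} days")
```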

For most organizations, my recommendation is that vulnerability management teams scan everything, every day. Read that line again if you have to. Remember the submarine analogy I used in the preface section of this book. Your vulnerability management program is one of the ways in which you look for defects in the hull of your submarine. Scanning every asset you have in your environment for vulnerabilities every day will help identify cracks and imperfections in the hull that, if exploited, would sink the boat. Scanning everything every day for vulnerabilities and misconfigurations provides the organization with important data that will inform their risk decisions. Without up-to-date data, they are managing risk in an uninformed way.

However, it's important to note that mobile devices, especially of the BYOD variety, pose a significant challenge to vulnerability management teams. Most organizations simply can't scan these devices the same way they scan other assets. This is one reason why many cyber security professionals refer to BYOD as "bring your own disaster". Instead, limiting mobile devices' access to sensitive information and HVAs is more common. Requiring newer operating system versions and minimum patch levels in order to connect to corporate networks is also common. To this end, most of the enterprises I've met with over the years leverage Mobile Device Management (MDM) or Mobile Application Management (MAM) solutions.

For some organizations, scanning everything every day will require more resources than they currently have. For example, they might need more vulnerability scanning engines than they currently have in order to scan 100% of their IT assets every day. They might also want to scan in off hours to avoid generating extra network traffic during regular work hours, which could mean scanning everything every night during a defined window of hours. To accomplish this, they'll need a sufficient number of vulnerability scanning engines and staff to manage them. Once they have up-to-date data on the state of the environment, that data can be used to make risk-based decisions; for example, deciding when newly discovered vulnerabilities and misconfigurations should be addressed. Without up-to-date data on the state of the environment, hope will play a continual and central role in their vulnerability management strategy.
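To give a sense of the capacity planning involved, here is a hedged sketch of the engine-count estimate. The asset count, scan window, and per-engine throughput are invented parameters that you would replace with measurements from your own environment and scanning product.

```python
import math

# Invented environment parameters; replace with your own measurements.
total_assets = 25_000                 # assets to scan every night
scan_window_hours = 6                 # off-hours window available
assets_per_engine_per_hour = 200      # observed throughput of one engine

assets_per_engine_per_night = assets_per_engine_per_hour * scan_window_hours
engines_needed = math.ceil(total_assets / assets_per_engine_per_night)

print(f"Each engine covers {assets_per_engine_per_night:,} assets per night")
print(f"Engines required to scan everything nightly: {engines_needed}")
```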

The data generated by all this vulnerability scanning is gold dust for CISOs, especially for security programs that are relatively immature. Providing the C-suite and Board of Directors with data from this program can help CISOs get the resources they need and communicate the progress they are making with their security program. Providing a breakdown of the number of assets in inventory, how many of them they can actually manage vulnerabilities on, the number of critical and high severity vulnerabilities present, and an estimate of how long it will take to address all these vulnerabilities, can help build an effective business case for more investment in the vulnerability management program. Providing senior management with quantitative data like this helps them understand reality versus opinion. Without this data, it can be much more difficult to make a compelling business case and communicate progress against goals for the security program.
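A roll-up like the one described above takes very little code once the scan data is available. The following sketch is illustrative only: the field names and figures are invented, and the remediation estimate assumes a fixed, made-up team throughput.

```python
# Invented summary figures for a CISO-level vulnerability report.
total_assets = 10_000
assets_under_management = 8_200        # assets we can actually scan and patch
open_critical_high_findings = 3_400
findings_remediated_per_week = 250     # assumed team throughput

coverage = assets_under_management / total_assets
weeks_to_clear = open_critical_high_findings / findings_remediated_per_week

print(f"Assets in inventory:          {total_assets:,}")
print(f"Assets under management:      {assets_under_management:,} ({coverage:.0%})")
print(f"Open critical/high findings:  {open_critical_high_findings:,}")
print(f"Estimated weeks to remediate: {weeks_to_clear:.0f}")
```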

The cloud can change the costs and effort related to vulnerability management in a dramatically positive way. I'll discuss this in Chapter 8, The Cloud – A Modern Approach to Security and Compliance.