Verizon Breach Report Criticized
Experts: Top 10 Vulnerabilities List Could Mislead Administrators
Verizon's annual report into data breaches has triggered an avalanche of criticism that the company made critical errors when studying the most frequently exploited software vulnerabilities.
The 2016 Data Breach Investigations Report, released on April 27, is considered one of the most comprehensive annual guides on data breach trends, compiling data contributed by a wide range of computer security companies, law enforcement and government agencies. It also draws on more than 3,100 confirmed data breaches, an impressive sampling of attacks (see Verizon's Latest Breach Report: Same Attacks, More Damage).
But since the release of the report this year, computer security experts have taken issue with a top 10 list of vulnerabilities that Verizon claims were responsible for 85 percent of successful exploit traffic throughout 2015.
They assert that the list of vulnerabilities could mislead administrators into devoting remediation efforts toward long-known flaws that don't reflect the real attack landscape.
Eight of the vulnerabilities on the list were reported in 2003 or earlier. Oddly, the list did not contain any vulnerabilities in Adobe Systems applications such as Flash Player, which is one of the world's most frequently targeted pieces of software.
Kasper Lindgaard, director of research and security for Flexera Software, says the list is likely the result of flawed methodology.
"When discussing traffic of successfully exploited vulnerabilities, I would definitely expect that the vulnerabilities on the top 10 list would be younger than 13 years, and in more commonly used products," Lindgaard says.
In contrast, the U.S. Computer Emergency Readiness Team published a list of the 30 most commonly exploited vulnerabilities just three days after Verizon's report. None of the top 10 vulnerabilities listed by Verizon are in US-CERT's list.
One of the more recently discovered vulnerabilities that did make Verizon's list is FREAK, a flaw in SSL/TLS implementations that could force connections to fall back to weak, export-grade encryption keys.
But experts contend that it is almost impossible for FREAK to be one of the most frequently exploited vulnerabilities.
"It's a man-in-the-middle attack," wrote Robert Graham, CEO of Errata Security in a blog post. "In other words, you can't attack a web server remotely by sending bad data at it. Instead, you have to break into a network somewhere and install a man-in-the-middle computer. This fact alone means it cannot be the most widely exploited attack."
Verizon: 'Open to Feedback'
In an email statement, Verizon didn't directly address the controversy, saying "we welcome and are open to feedback from the security community, which we continually evaluate in order to make each successive DBIR better than the next."
Michael Roytman, the author of the section on vulnerabilities, offered an explanation shortly after the criticism surfaced. Roytman is chief data scientist for Kenna Security, which was one of Verizon's partners for the report.
Two data sets were used for the study of vulnerabilities, Roytman wrote in his blog post. One set was composed of vulnerabilities in more than 2.4 million devices and another of 3.6 million "successful" exploitation events. Kenna Security along with Beyond Trust, Qualys and Tripwire contributed vulnerability scanning data.
A successful exploitation was counted if a machine was found to have an unpatched vulnerability, an attack was recorded for that vulnerability by an intrusion detection system and an indicator of compromise was also detected.
"It is not necessarily a loss of a data, or even root on a machine," Roytman wrote. "It is just the successful use of whatever condition is outlined in a CVE" number, referring to Common Vulnerabilities and Exposures, a system for indexing software flaws.
One of the first critiques of Verizon's report came from Brian Martin, director of vulnerability intelligence with Risk Based Security. Martin pointed out that CVEs only cover about half of all disclosed vulnerabilities. In addition, detection signatures for intrusion detection systems are often flawed, which leads to both false positives and false negatives, he added.
Roytman acknowledges Martin's points: "On the whole, Brian is correct: IDS alerts generate a ton of false positives, vulnerability scanners often don't revisit signatures, CVE is not a complete list of vulnerability definitions."
Roytman contends, however, that vulnerability scans and logs are what most administrators have at their disposal to analyze attacks and vulnerabilities.
Further muddying the waters, Roytman then published a new list of the top 10 vulnerabilities based solely on data from customers of Kenna Security. Five of those top 10 vulnerabilities were from 2001 and 2002, once again triggering doubt.
False Positive Warning
Experts question Roytman's methodology, contending that matching the results of a vulnerability scan against events recorded by intrusion detection systems is an inaccurate way to determine whether an organization was actually exploited, or even how.
Dan Guido, CEO of the security company Trail of Bits, writes that vulnerability scanners are notorious for detecting flaws where there are none. Further, he says intrusion detection systems are "often triggered by benign security scans and network discovery tools."
Guido points out that two vulnerabilities in Roytman's new list, CVE-2001-0877 and CVE-2001-0876, are likely "false positives from vulnerability scanning data."
Both of the vulnerabilities involve the Universal Plug and Play protocol on Windows computers running XP and earlier operating systems.
"It is highly unlikely that a 14-year old denial-of-service attack would be one of the most exploited vulnerabilities across corporate networks," Guido writes.
The Devil Is in the Details
An intrusion detection system is a good tool if administrators understand why it sends alerts and that every alert doesn't necessarily mean an intrusion has occurred, Graham writes.
"Verizon didn't pay attention to the details," Graham says. "They simply dumped the output of an IDS inappropriately into some sort of analysis. Since the input data was garbage, no amount of manipulation and analysis would ever produce a valid result."
Guido is even more scathing: "They skipped important steps while collecting data over the past year, jumped to assumptions based on scanners and IDS devices, and appeared to hope that their conclusions would align with what security professionals see on the ground.
"Above all, this incident demonstrates the folly of applying data science without sufficient input from practitioners. The resulting analysis and recommendations should not be taken seriously," he writes.