The Expert's View with Jeremy Kirk

Application Security, Governance & Risk Management, Next-Generation Technologies & Secure Development

Zero-Day Facts of Life Revealed in RAND Study

Lifespan of Zero Day Averages 6.9 Years, Groundbreaking Report Reveals
RAND headquarters in Santa Monica, Calif.

The average zero-day vulnerability - a rare and often highly prized flaw that attackers can exploit to hack computers and other digital devices - works for nearly seven years before losing effectiveness.


That's just one of many surprising findings contained in a groundbreaking new study from RAND Corporation, a nonprofit institution that aims to help improve policy and decision-making through research and analysis.

The study, "Zero Days, Thousands of Nights," has been welcomed by computer security analysts due to the previous paucity of data relating to "zero days," which are flaws that have not been patched by a software vendor. But such flaws can remain useful long after patches get issued, if manufacturers fail to offer the patches to users of their products, or if those users fail to upgrade.

The study is also timely, as it arrives on the heels of WikiLeaks' release of purported network exploitation techniques used by the CIA. That data dump has revived an ethical debate over how the U.S. government discloses - or chooses not to disclose - to vendors the software flaws it learns about.

"To those dismissing this RAND report: ignore it at your own peril. This is the best data ever released on real exploit development, period," writes Dan Guido, CEO of security firm Trail of Bits, whose researchers have unearthed more than a few zero-day flaws in the past, for example in Apple's iOS mobile operating system.

The impetus for RAND's zero-day study was to better frame the risks associated with the debate over whether the U.S. government should promptly disclose software flaws to vendors or hold onto the information for intelligence-gathering reasons.

Either decision relies in part on making murky public-safety judgment calls - and Lillian Ablon, an information scientist with RAND who co-authored the study, tells me that "our hope and goal was our findings could be used by those who are making these tough decisions."

It's a difficult area to analyze. Zero-day vulnerabilities are sometimes publicly disclosed, and software vendors distribute patches. But many active flaws lurk in the underground, where they're bought, sold or traded, and remain unknown to vendors or the general public. There's also a thriving but opaque market in zero days being developed and sold by private research groups.

Unprecedented Access

RAND studied a set of 200 zero-day vulnerabilities discovered from 2002 through 2016. Some 40 percent of the vulnerabilities still aren't public, which makes this an unprecedented tranche of data.

The information came from an unnamed vulnerability group, nicknamed "Busby" by RAND, after a military hat popular in the early 20th century. Busby researchers are highly skilled and have supplied vulnerabilities and exploits to nation states.

A pressing question RAND wanted to answer was how long zero days remained effective before a patch was distributed. Also, it sought to quantify the likelihood that another group would discover the same flaw and begin exploiting it. Answers to those questions would help guide the ethical debates around "stockpiling" vulnerabilities and balancing risks to the public.

RAND found that a zero-day vulnerability has an extremely long average life span: 6.9 years. It also found that only 5.7 percent of any given stockpile of vulnerabilities will be found by someone else after one year - a statistic referred to as the "collision rate."

A quarter of exploits, however, will not be viable for longer than 18 months, while another quarter will survive for more than 9.5 years, the report says.
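To make the arithmetic behind those figures concrete, here is a minimal sketch in Python showing how summary statistics of this kind - a mean lifespan, quartiles and a one-year collision rate - can be computed from a list of observed exploit lifetimes. The lifespans and rediscovery count below are hypothetical placeholders, not RAND's Busby data, and the snippet illustrates only the arithmetic, not the report's methodology.

# Illustrative sketch only: hypothetical lifespans (in years), not RAND's data.
import statistics

lifespans = [0.8, 1.2, 2.5, 4.0, 6.9, 7.3, 9.6, 11.0, 12.4, 13.1]

mean_life = statistics.mean(lifespans)                 # average survival time
q1, median, q3 = statistics.quantiles(lifespans, n=4)  # quartile cut points

# "Collision rate" in this sense: the share of a stockpile that someone else
# independently rediscovers within a year (a made-up count of 1 out of 10 here).
rediscovered_within_a_year = 1
collision_rate = rediscovered_within_a_year / len(lifespans)

print(f"Mean lifespan: {mean_life:.1f} years")
print(f"A quarter die within {q1:.1f} years; a quarter survive past {q3:.1f} years")
print(f"One-year collision rate: {collision_rate:.0%}")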

Chart: Vulnerabilities found by Busby researchers each year, by current life status

"Some data - an estimated 20 to 30 exploits - are omitted in the latter years (and all data for those found in 2016 are omitted) because of sensitivities of current operations," according to RAND.

RAND acknowledges a gap in its research: it lacks visibility into every zero-day flaw discovered in the world. In particular, a more accurate study would also have encompassed zero days discovered by adversarial groups, rather than just by Busby. But that is a gap that is virtually impossible to overcome.

Still, the findings already have policy implications. Notably, if two opposing groups are shown to have the same caches of vulnerabilities, "then some argue that there is little point to keeping [vulnerabilities] private - whereas a smaller overlap might justify retention," write Ablon and co-author Andy Bogart of RAND.

That's because a smaller overlap would mean that one group retains an advantage over another, without incurring the risk that the same vulnerability is also being used against them.

Disclosure is Complicated

The study provides fuel for both sides of the "prompt disclosure versus stockpiling" debate, with RAND staying in the middle about either option. "The decision of whether or not to stockpile is quite complicated," Ablon says.

If a zero-day vulnerability is hard to find, that argues for keeping it under wraps, the study says. Furthermore, some vulnerabilities are related and found in clumps. Hence, one line of thinking goes, if a group finds one flaw but thinks another group might find a related one, it might be best not to disclose the first flaw, since disclosure wouldn't meaningfully diminish consumer risk.

Conversely, if there's a good chance others will find the same zero-day vulnerability, it might be better to publicly disclose it. "In this line of thought, the best decision may be to stockpile only if one is confident that no one else will find the zero-day," the paper reads.

Impact on Offense and Defense

RAND's study is especially timely given the purported CIA files released this week by WikiLeaks. Some of the documents appear to confirm that the CIA and other U.S. intelligence organizations either developed or purchased zero-day vulnerabilities for use in offensive operations, which has been long suspected (see WikiLeaks Dumps Alleged CIA Malware and Hacking Trove).

Many of the exploits and vulnerabilities in the documents have been patched. WikiLeaks has not yet released the code for vulnerabilities and exploits. But on March 9, the group's founder Julian Assange said the information would be shared with software vendors to help them develop patches.

WikiLeaks claims the source for the CIA documents was motivated by a desire to prompt discussion about offensive cyber weapons. However, the group's claims have been increasingly viewed with suspicion. U.S. intelligence agencies believe Russia used WikiLeaks as a means of promoting material its operatives stole amidst the turbulent 2016 U.S. presidential campaign.

The Vulnerabilities Equities Process Debate

The U.S. government has a program called the Vulnerabilities Equities Process, under which it is supposed to inform private companies of software vulnerabilities. But the program has often been criticized as being broken and not meaningfully improving security. The VEP allows the government to withhold vulnerability information if it is useful for intelligence purposes.

But Robert Graham, CEO of security research firm Errata Security, says in a blog post that it doesn't make sense for the government to stockpile vulnerabilities. It either needs a flaw or it doesn't, and the high price that it will pay - for example for a new zero-day flaw in iOS - makes the question of disclosure a financially moot point.

"When activists claim the government should disclose the zero day they acquire, they are ignoring the price the zero day was acquired for," Graham says. "It's an absurd argument to make that the government should then immediately discard that money, to pay 'use value' prices for 'patch' results."

If the government's purchase of zero days was determined to be unethical, it would simply stop buying the information. That would put the U.S. at a severe disadvantage against countries with no qualms about employing such exploits, Graham says.

"Either the government buys zero day to use, or it stops buying zero day," he says. "In neither case does patching happen."

Please Worry About Bugs

What are the RAND study's implications for software developers and vendors? Simply put, they're going to have to take the hard road, RAND's Ablon says, and get smarter about every aspect of their information security and their bug discovery and remediation practices.

Simply trying to spot flaws in your own code before bug hunters come calling isn't a good strategy, Ablon says. Indeed, the study finds that there's a low chance that once someone finds a flaw, someone else - be they a vendor or independent bug hunter - will find it too within the same year. And sometimes individual flaws remain at large for years before they're found by a second researcher.
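As a rough back-of-the-envelope illustration of why even a low annual rediscovery chance matters over a flaw's multiyear life, the sketch below compounds a 5.7 percent yearly collision rate, assuming each year is independent - a simplification of this column's making, not a model the RAND report uses.

# Back-of-the-envelope only: treats each year's rediscovery chance as
# independent, a simplifying assumption the RAND report does not make.
annual_collision_rate = 0.057   # RAND's reported one-year collision rate

for years in (1, 3, 5, 10):
    p_rediscovered = 1 - (1 - annual_collision_rate) ** years
    print(f"After {years:2} years: ~{p_rediscovered:.0%} chance someone else has found the flaw")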

Unsurprisingly, fallback bromides such as "patch and pray" and "just bolt on security" aren't great strategies either, Ablon says. Instead, organizations must assume they're compromised and investigate ways to improve the overall architecture of their systems.

"Companies might not want to hear that," she says. "That's because it's costly to have to start from scratch and build infrastructure from the ground up, thinking about security every step of the way."

But that's the zero-day vulnerability world in which we now live.



About the Author

Jeremy Kirk


Executive Editor, Security and Technology, ISMG

Kirk was executive editor for security and technology for Information Security Media Group. Reporting from Sydney, Australia, he created "The Ransomware Files" podcast, which tells the harrowing stories of IT pros who have fought back against ransomware.



