RSAC 2019: As Vulnerability Disclosure Matures, New Challenges Rise Up

Despite broader adoption, vulnerability disclosure remains messy, muddying what ought to be a best practice

When it comes to vulnerability disclosure policies, if your company doesn’t have one, you’re doing security wrong. But even as more businesses, in cyber security and beyond, adopt such policies, the road toward broader adoption remains mined with complications. Symantec, for example, publishes its own vulnerability disclosure policy.

That’s according to a panel of experts representing different stakeholders in the cyber security community here at RSAC 2019. Ideally, an independent researcher finds a bug on her own and notifies the vendor of the affected product, which responds to her, fixes the bug, and, as Art Manion, senior vulnerability analyst at CERT, put it from the podium, “everybody’s happy.”

The reality is rarely as clear-cut as Manion would like. One of the most recent examples of a messy disclosure came on January 3, 2018, when the Meltdown and Spectre hardware vulnerabilities were detailed by several independent researchers. Some researchers published their reports six days before the agreed-upon disclosure date, time that could have been used to finish software patches.

A year later, and the response to the Spectre and Meltdown vulnerabilities remains a case study in how even coordinated vulnerability disclosures with multiple stakeholders can go astray. Panel moderator Paul Kocher, the president and chief scientist of Cryptography Research, Inc. and one of the original researchers who discovered Meltdown and Spectre, noted that some experts feel that vulnerabilities should not be disclosed to the public because they can only serve to enable “black hat” hackers. Instead, they want the vulnerabilities fixed behind the scenes.

That dissent, Manion said, muddies what should be a best practice: Even though knowledge of a vulnerability does increase the risk of its being exploited, public disclosure is “the least bad” option compared with hiding a vulnerability from the public, and potentially from vendors that use the affected software or hardware.

“I’d rather know than not,” he said.

After Netscape introduced what’s considered the first vulnerability disclosure policy in 1995, the practice of announcing vulnerabilities to the public slowly took root in the tech world, finally gaining wider adoption as Google, Facebook, Microsoft, and other tech giants embraced it. But now that vulnerability disclosures, such as those that Spectre and Meltdown required, can involve thousands of companies, the processes that had been accepted as best practices are being questioned again.

The Spectre and Meltdown situation raises difficult questions for the security community, said Kocher: not only how to handle disclosure for hardware vulnerabilities, which are more difficult to fix than software flaws, but also how to coordinate hundreds or thousands of vendors. Who should coordinate the disclosure? Who suffers the consequences of the unpatched zero-day vulnerability? And what happens when there’s no clean, rapid fix?

There are no easy answers to those questions, said panelist Alex Rice, CTO and co-founder of bug bounty and vulnerability disclosure company HackerOne. He described it as “earning trust, coordinating trust, and not doing anything to destroy that trust.”

At Qualcomm, which deals with both software and hardware bugs, “the most important part of the process is making sure that the vulnerability report reaches the right researchers,” said the company’s vice president of product security, Alex Gantman.

“You want to be able to fix the issue as sufficiently as possible. You have to have a mechanism, once the issue is identified, for feeding it into this overall RCA [root cause analysis] process, to the development team that develops the fix, to the security team that assesses the fix and makes sure it’s correct, and then to propagate it to the product and notify the customers,” Gantman said.

The disclosure process can become even more fraught when the vendor or owner of the vulnerable product is a government agency. In those cases, adhering to the ideal workflow isn’t always possible. To help address those cases and foster trust, Manion said he wants to see the adoption of a minimum 24-hour notice before a vulnerability is publicly dropped.

“It’s probably a case-by-case basis. The point is not that they can fix it, but there’s probably a public safety, critical infrastructure protection element that some parts of some governments do,” Manion said. “Reducing their surprise by 24 hours, by one working day, it’s a much softer landing for your disclosure.”

About the Author

Seth Rosenblatt

Journalist

Seth Rosenblatt is a security writer who has worked in online journalism since 1999. He is also the founding editor of The Parallax.
