In the simplest terms, a vulnerability is a flaw in the technical design or implementation of information technology products or systems, whether hardware or software, that could be exploited to penetrate or compromise them (see the complete CERT definition here). Vulnerabilities of either type are common and will always exist; what matters is how we respond when we discover them.
Several large-scale attacks over the past few years, such as the devastating WannaCry attack, were conducted by leveraging vulnerabilities that governments already knew about but kept secret, stockpiling them for use against enemies. We can help stem the tide of attacks like this, but only if governments make a stronger commitment to a vulnerability disclosure process that appropriately prioritises defense of our digital security, vital systems, and infrastructure.
At Access Now, we advocate that governments not only have a vulnerability disclosure process for when they find or become aware of technology flaws, but also that governments facilitate coordinated vulnerability disclosure (CVD). The latter approach is systemic, responding to the issues that create a less secure environment for everyone. It includes making changes that support disclosure and patching of vulnerabilities, such as avoiding criminalizing security research (as Argentina tried to do) and instead giving prosecutors leeway in related cases (as the Netherlands has).
Last week the Center for European Policy Studies (CEPS) hosted a roundtable at the European Parliament for the multi-stakeholder Task Force on Software Vulnerability Research in Europe, of which Access Now is a member. There, the group discussed its research and presented its preliminary conclusions on vulnerability disclosure policies in Europe.
The expert group’s initial findings define guidelines to harmonise the process of CVD in Europe, outlining specific principles for government vulnerability disclosure programmes. If properly implemented, the recommendations could help improve the state of security research at the national and EU level, and enhance cybersecurity in Europe and around the world.
By keeping vulnerabilities secret, governments create an incentive to put people at risk
When governments find software or hardware flaws and keep them secret to exploit for offensive hacking operations, they put our global digital security at risk. What is deliberately kept secret is not likely to be patched, leaving us vulnerable to attack so that governments can undertake surveillance or other offensive activities. The impact extends beyond the devastation of attacks like WannaCry. When governments develop exploits that can be inserted into user-facing technology or internet infrastructure through updates, or even built into products, it undermines user trust in the internet, chipping away at the foundations for global communications, commerce, and the digital economy.
Furthermore, the stockpiling of vulnerabilities, both by governments and black market actors, increases the market price for exploits, making it harder for “bug bounty” programmes — the programmes through which companies offer incentives for people to report vulnerabilities to them — to compete. In other words, not only are governments prioritising offense over defense, they are actively adding incentives to keep the vulnerabilities secret and sell them at a high price. They are creating an environment that rewards leaving us exposed and at risk.
To decrease these incentives, governments that discover or become aware of vulnerabilities should promptly disclose them to the developer or vendor. Any delay in the disclosure of a vulnerability should be time-limited and extraordinary, and permitted only where immediate disclosure would directly harm users. Routine public reports should identify the number of vulnerabilities withheld and explain why they were withheld.
A better path forward: coordinated vulnerability disclosure
In July 2017, Hungary launched a prosecution of a young hacker who reported a vulnerability in the new e-ticket system of BKK, Budapest's main public transport provider. This ignited a firestorm on social media. The hacker had essentially followed the existing best practices for CVD in Europe: as soon as he discovered the vulnerability, he reported it to the company. Yet he was immediately questioned by the police and accused of carrying out a cyber attack on the company's infrastructure. Ultimately the charges were dropped, thanks to public outrage and legal representation for the hacker provided by the Hungarian Civil Liberties Union.
This chain of events should not have taken place. Europe does not have a uniform reporting process for vulnerabilities, nor is there sufficient leniency in the legal system to protect those who report vulnerabilities. That means we do not yet have the legal protections in place to reduce the incentives for keeping flaws secret and support a healthy security research environment in the EU — one that better protects us.
Fending off cyber attacks requires a systematic approach: stakeholders need to establish high standards for CVD that support an environment where companies quickly and effectively patch reported vulnerabilities and users routinely install the latest security updates. The Dutch model is now the gold standard for the EU, but bodies such as ENISA have also advanced recommendations for European states to follow.
What’s next for the EU? The task force final report
The CEPS task force is continuing its work, and the full report will be published in spring of 2018. It will include an overview of current approaches to both CVD and government disclosure in EU member states, as well as recommendations for national and EU-level legislation to improve the security research environment. We will keep you updated on that work. Stay tuned!