You wouldn’t leave your backdoor unlocked: the danger of intentional vulnerabilities

Among the many revelations to come out of this summer, The New York Times recently reported that the NSA has been conducting a systematic, well-funded effort, known as “Project Bullrun,” to install “backdoors” in consumer electronic devices. To better understand the history of backdoors, how they work, and the risks they pose, here are three things you ought to know:

1) The NSA has been working on backdoors for years, with varying degrees of public knowledge. 

The NSA publicly proposed its first massive backdoor effort, the “Clipper chip,” in 1993. Had it succeeded, every device that encrypts communications — your phone, your laptop, the ATM — would have carried a government-registered Clipper chip. The chip offered a range of encryption functions to the end user, but with one important catch. Hard-coded into each Clipper chip was a special ‘device key,’ unique to that chip and held in escrow by the U.S. federal government. Whenever an encrypted communication session began, each Clipper-compliant device would broadcast a “Law Enforcement Access Field” (LEAF) containing the current session key encrypted under the device key. If a law enforcement agency needed to wiretap the communication, it could simply scoop up the LEAF, obtain a warrant for the device key (in theory), decrypt the session key, and then decrypt the entire channel of communication.
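
To make the key-escrow flow concrete, here is a minimal sketch of the LEAF mechanism in Python, using the Fernet cipher from the `cryptography` package as a stand-in for Skipjack, the chip’s classified cipher. The function names and the escrow database are hypothetical, and the real LEAF also carried a device serial number and a checksum, wrapped under a shared “family key,” all omitted here:

```python
from cryptography.fernet import Fernet  # pip install cryptography

escrow_db = {}   # the government's key-escrow database: serial -> device key

def manufacture_chip(serial):
    """Each chip gets a unique device key, deposited in escrow at the factory."""
    device_key = Fernet.generate_key()
    escrow_db[serial] = device_key
    return device_key

def encrypt_call(device_key, plaintext):
    """Encrypt a message under a fresh session key and emit the LEAF."""
    session_key = Fernet.generate_key()              # new key per conversation
    ciphertext = Fernet(session_key).encrypt(plaintext)
    # The LEAF: the session key, wrapped under the escrowed device key,
    # broadcast alongside the traffic.
    leaf = Fernet(device_key).encrypt(session_key)
    return ciphertext, leaf

def wiretap(serial, ciphertext, leaf):
    """Law enforcement: fetch the escrowed key (with a warrant, in theory),
    unwrap the LEAF, and read the traffic."""
    device_key = escrow_db[serial]
    session_key = Fernet(device_key).decrypt(leaf)
    return Fernet(session_key).decrypt(ciphertext)

key = manufacture_chip("chip-0001")
ct, leaf = encrypt_call(key, b"meet at noon")
print(wiretap("chip-0001", ct, leaf))                # b'meet at noon'
```

A real chip did all of this in hardware, with the device key burned in at fabrication and split between two escrow agencies rather than sitting in a single database.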

The LEAF acted as a backdoor into encrypted communications for U.S. law enforcement, and its proposal spawned widespread opposition. To make matters worse, the protocol proved easy to circumvent: in 1994, cryptographer Matt Blaze showed that the LEAF was protected by only a 16-bit checksum, short enough that a rogue user could brute-force a forged LEAF that a device would accept but that contained no usable session key. Following the public backlash and these weaknesses, U.S. law enforcement saw its effort to solve its wiretapping problem fail on the national stage — from then on, the NSA’s and law enforcement’s backdoor efforts have been far less public.
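
Here is a rough sketch of why a 16-bit checksum is fatal. The checksum16 function below is a made-up stand-in for the classified one (in Blaze’s actual attack, the rogue sender used its own chip as the validation oracle); the essential detail is the size, since random garbage passes a 16-bit check after about 2^16 tries on average:

```python
import hashlib
import os

def checksum16(blob: bytes) -> int:
    """Hypothetical 16-bit checksum standing in for the classified one."""
    return int.from_bytes(hashlib.sha256(blob).digest()[:2], "big")

def device_accepts(leaf: bytes) -> bool:
    """Model of the receiving chip: a LEAF is a payload plus its checksum."""
    payload, check = leaf[:-2], int.from_bytes(leaf[-2:], "big")
    return checksum16(payload) == check

# A rogue sender generates random blobs until one happens to validate.
# The forged LEAF passes the chip's check but hides no real session key,
# so a wiretap that relies on it recovers nothing.
tries = 0
while True:
    tries += 1
    candidate = os.urandom(18)       # 16 bytes of "LEAF" + 2 checksum bytes
    if device_accepts(candidate):
        break
print(f"forged a valid-looking LEAF after {tries} tries")
```

Blaze found this search feasible even on the hardware of the day, which meant the escrow mechanism could be stripped out of the very devices built to enforce it.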

2) There are many ways to implement a backdoor. 

The Clipper chip shows how a backdoor can be implemented at the protocol level, but there’s more than one way to skin a surveillance cat. A backdoor can also be what we might call a “pure” backdoor, one that exploits the fundamental math of an encryption algorithm, giving someone with special knowledge of its design the ability to defeat it.

For several years, many believed that the NSA had somehow influenced the National Institute of Standards and Technology (NIST) — the agency responsible for many of the U.S. government’s most widely used encryption standards — to introduce this sort of backdoor into a cryptographic algorithm it developed. The algorithm in question was the Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG), a pseudorandom number generator whose output should be unpredictable to anyone who does not know its internal state. Its security relies on that output being impossible to reverse-engineer, but the algorithm can be designed in a way that lets its designer predict the output and break that security.

The algorithm relies on two special values, points on an elliptic curve called P and Q, which are supposed to be randomly generated. Any two such points are related: there is some value ‘a’ for which Q = aP. The security of the algorithm rests on the fact that the math of elliptic curves makes it extremely difficult to solve for ‘a’ given only P and Q (this is the discrete logarithm problem). But if an adversary like the NSA gets to design the algorithm with advance knowledge of ‘a’ and P, it can easily calculate Q, hand P and Q to NIST, and use the ‘magic’ value a as a backdoor into the algorithm.
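
To make the trapdoor concrete, here is a toy demonstration in Python. Real Dual EC DRBG uses elliptic-curve points and truncates each output; in this sketch, modular exponentiation stands in for elliptic-curve scalar multiplication (the hidden discrete-log relationship between the two public constants works the same way), and every constant is a made-up toy value:

```python
# Toy Dual-EC-style trapdoor. pow(g, k, p) plays the role of "x(k*P)":
# easy to compute, hard to invert without the secret.
p = 2039          # small safe prime: p = 2*q + 1
q = 1019          # prime order of the subgroup we work in
g = 4             # generator of the order-q subgroup ("the point P")

# The designer secretly picks 'a' and publishes Q alongside g.
a = 777                       # the 'magic' value, known only to the designer
Q = pow(g, a, p)              # published constant ("the point Q")

def drbg_round(state):
    """One round of the toy generator: returns (output, next_state)."""
    r = pow(g, state, p)      # r      ~ "x(s*P)"
    out = pow(Q, r, p)        # output ~ "x(r*Q)", what the user sees
    nxt = pow(g, r, p)        # s'     ~ "x(r*P)", hidden internal state
    return out, nxt

# An honest user seeds the generator and draws some "random" numbers.
state = 123456 % q
out1, state = drbg_round(state)
out2, state = drbg_round(state)

# The attacker sees only out1, but knows 'a'. Raising out1 to a^-1 (mod q)
# turns "x(r*Q)" into "x(r*P)": the generator's next internal state.
d = pow(a, -1, q)                    # modular inverse (Python 3.8+)
recovered_state = pow(out1, d, p)    # equals the state after round 1

predicted_out, _ = drbg_round(recovered_state)
assert predicted_out == out2         # every future output is now predictable
print("attacker predicted the next output:", predicted_out == out2)
```

In the real algorithm, the truncation of each output means the attacker must also brute-force about 2^16 candidate points per block, which is trivial; Dan Shumow and Niels Ferguson demonstrated in 2007 that roughly 32 bytes of raw output suffice to pin down the internal state.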

So when NIST published the Dual EC DRBG standard with two fixed constants for P and Q, rather than fresh random points for every use, it seemed a little suspicious. Fixing constants is itself a common practice in encryption standards, as it protects implementers from unluckily choosing points an attacker could exploit. In this case, however, people suspected — and the New York Times recently confirmed — that the NSA had influenced NIST’s choice of those values, meaning the NSA may have held the magic value ‘a’ all along. Even using the most trusted hardware in the world, on an open-source operating system, a user could still fall victim to this attack — a good reminder that backdoors can take any form and be found anywhere.

3) Backdoors intrinsically weaken a product’s security.

One of the major problems with backdoors is that they can be exploited by anyone — not only the agency installing them in the name of law enforcement. In the case of Dual EC DRBG, if an outside party compromised the NSA systems storing the ‘magic’ value, the attacker could break the encryption of any product using the algorithm. The failure would be catastrophic and widespread, hitting every user at once.

The backdoor in the Clipper chip’s protocol, though it didn’t exploit the math of the underlying algorithm, could have been abused through similar means if the government agency responsible for holding the escrowed keys were compromised, or simply mismanaged them. This is to say nothing of Skipjack, the classified algorithm the Clipper chip used for encryption: within days of its declassification in 1998, academic cryptographers published an attack on a reduced-round version of the cipher.

At the most fundamental level, a backdoor introduces a hole into secure communications where none existed before; whether it is purely algorithmic or protocol-based, exploiting it takes only a dedicated attacker and time.

The recent disclosures of “Project Bullrun” have revealed the NSA’s systematic effort to install backdoors and “covertly influence” the tech industry into shipping vulnerabilities the NSA can exploit, ostensibly to advance its national security goals. These vulnerabilities can be even less sophisticated than the design of the Clipper chip or the math behind Dual EC DRBG. They could be something as simple as getting a popular website to use encryption with known implementation flaws.

Mandating a deliberately weakened product highlights the contradiction at the heart of backdoors: they give users an insecure product in the name of the users’ security. The public rejected that contradiction twenty years ago, and it’s a problem we must still contend with today.