A New Threat to Generativity
posted by Steven Bellovin
The symposium is over, but when I saw an important news item on a major threat to generativity, Danielle graciously urged me to post one last message to this blog.
A big player — one of the very biggest, Intel — has embarked on a new strategy, including a major corporate acquisition, that poses major threats to generativity. Specifically, according to a news report on Ars Technica, Intel is planning to add hardware support for “known good only” execution. That is, instead of today’s model of anti-virus software, which relies on a database of known-bad patterns, Intel wants to move to a hardware model where only software from known-good sources will be trusted. For a number of reasons, including the fact that it won’t work very well, this could be a very dangerous development. More below the fold.
When only known-good sources can produce software that people can run, the first question to ask is who selects those sources. I personally doubt that Intel itself will do it directly — they’ve had enough antitrust problems with the FTC and the EU without the accusation that they now control the entire software market — but I’ll let those with more expertise in antitrust law discuss that. The next obvious answer is the operating system vendors: Microsoft, Apple, all the myriad Linux vendors, etc. Some organizations may wish to install their own; I discuss this below.
Let me back up and explain exactly what appears to be going on. (I say “appears to be” because we don’t have technical details yet.) One or more privileged entities, known generically as trust anchors and referred to more specifically as certificate authorities, issue cryptographic certificates to as many parties as they wish. These certificate owners in turn can digitally sign code — i.e., make the cryptographically-verifiable assertion that only they could have produced that code — or (if permitted by their own certificates) issue subcertificates to other parties, ad infinitum. I’ll give a concrete example. Suppose that there is one ultimate trust anchor, Intel. Intel issues a digitally-signed code-signing certificate to Microsoft, which in turn issues a certificate to Xyzzy Corp. Xyzzy in turn issues certificates to its browser division and to its hardware device driver division. When you, on your desktop, try to run their browser, your system goes through a recursive validation process. First, it checks whether the browser is properly signed by some certificate; if the code has been tampered with, that check will fail. If that check succeeds, the system verifies the signature on the certificate itself. That certificate was issued by Xyzzy. Xyzzy’s certificate was signed by Microsoft, which in turn has a certificate signed by Intel. And how does the PC know that Intel’s certificate is valid? Because it’s embedded in the hardware, courtesy of Intel’s new strategy.
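To make the recursive check concrete, here is a minimal sketch of that validation logic in Python, using Ed25519 signatures from the third-party cryptography package. The Cert structure, the function names, and the Intel/Microsoft/Xyzzy chain are my own illustration of the general idea; remember, we have no technical details yet, so treat this as a toy model, not a description of Intel’s hardware.

```python
# Toy model of signed-code validation: check the signature on the binary,
# then walk the certificate chain up to the hardware-embedded trust anchor.
# All names and structures here are illustrative, not Intel's actual design.
from dataclasses import dataclass, replace
from typing import Optional

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519


def raw(pub: ed25519.Ed25519PublicKey) -> bytes:
    """Raw public-key bytes; part of the payload an issuer signs."""
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)


@dataclass
class Cert:
    subject: str                          # who this certificate names
    pubkey: ed25519.Ed25519PublicKey      # the key being certified
    issuer: Optional["Cert"]              # None: signed directly by the anchor
    signature: bytes                      # issuer's signature over subject + key


def issue(issuer_priv: ed25519.Ed25519PrivateKey, subject: str,
          subject_pub: ed25519.Ed25519PublicKey,
          issuer_cert: Optional[Cert] = None) -> Cert:
    """The issuer vouches for someone else's key by signing it."""
    payload = subject.encode() + raw(subject_pub)
    return Cert(subject, subject_pub, issuer_cert, issuer_priv.sign(payload))


def chain_valid(cert: Optional[Cert],
                anchor_pub: ed25519.Ed25519PublicKey) -> bool:
    """Walk upward, certificate by certificate, to the trust anchor."""
    while cert is not None:
        verifier = cert.issuer.pubkey if cert.issuer else anchor_pub
        try:
            verifier.verify(cert.signature,
                            cert.subject.encode() + raw(cert.pubkey))
        except InvalidSignature:
            return False
        cert = cert.issuer
    return True


def code_valid(binary: bytes, code_sig: bytes, cert: Cert,
               anchor_pub: ed25519.Ed25519PublicKey) -> bool:
    """First check the signature on the code, then the chain behind it."""
    try:
        cert.pubkey.verify(code_sig, binary)
    except InvalidSignature:
        return False
    return chain_valid(cert, anchor_pub)


# The hypothetical chain: Intel (anchor) -> Microsoft -> Xyzzy -> its browser division.
intel = ed25519.Ed25519PrivateKey.generate()
ms_key = ed25519.Ed25519PrivateKey.generate()
xyzzy_key = ed25519.Ed25519PrivateKey.generate()
browser_key = ed25519.Ed25519PrivateKey.generate()

ms_cert = issue(intel, "Microsoft", ms_key.public_key())
xyzzy_cert = issue(ms_key, "Xyzzy Corp", xyzzy_key.public_key(), ms_cert)
browser_cert = issue(xyzzy_key, "Xyzzy browser division",
                     browser_key.public_key(), xyzzy_cert)

browser = b"\x7fELF...the browser binary..."
browser_sig = browser_key.sign(browser)

print(code_valid(browser, browser_sig, browser_cert, intel.public_key()))        # True
print(code_valid(browser + b"!", browser_sig, browser_cert, intel.public_key())) # False: code tampered
print(chain_valid(replace(xyzzy_cert, signature=b"\x00" * 64),
                  intel.public_key()))                                           # False: cert tampered
```

Tampering with the binary or with any certificate in the chain makes the corresponding check fail, which is exactly the protection described next.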
If everyone is playing honestly, there’s some protection here: no code can run unless it is authorized by a chain leading up to Intel. If anyone tampers with a legitimate program or certificate, the digital signature check will fail; similarly, it is believed to be impossible to forge a signature without breaking the cryptography or stealing someone’s key. But it all depends on who the trust anchors are. If they are few and tightly controlled, there is too much central control; if there are too many, who’s to stop EvilHackerDudez.com from getting a code-signing certificate?
A crucial question, then, is how many trust anchors will exist. Verifying a signature, whether of a program or of a certificate, is an expensive operation, though hardware assists can help tremendously. It’s possible to cache signature verifications, so that if you’ve recently verified the certificate of the Xyzzy browser division all the way to the trust anchor you don’t have to do so again. All of that works better, though, if there’s a reasonably small number of certificates (or programs) to verify. Perhaps, depending on Intel’s design decisions, there would be room for only a very few trust anchors.
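For illustration, here is one way such a cache might look, building on the toy chain above; whether Intel’s hardware would actually cache verifications, and how, is pure speculation on my part.

```python
# Remember a fingerprint of every certificate already validated back to the
# anchor, so repeated launches can skip the expensive signature checks.
# Purely an illustration of the caching idea, not Intel's design.
import hashlib

_already_verified = set()


def fingerprint(cert: Cert) -> bytes:
    return hashlib.sha256(cert.subject.encode() + raw(cert.pubkey)
                          + cert.signature).digest()


def chain_valid_cached(cert: Cert, anchor_pub) -> bool:
    fp = fingerprint(cert)
    if fp in _already_verified:
        return True                      # validated recently; skip the crypto
    ok = chain_valid(cert, anchor_pub)
    if ok:
        _already_verified.add(fp)
    return ok
```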
It’s instructive to look at how browsers handle the same problem. Every mainstream browser ships with a built-in set of trust anchors; for Firefox, there are about 180, selected by Mozilla. These are used to authenticate secure web sites, web sites to which you set up encrypted connections. Trying to override this list is painful, difficult, and accompanied by blood-thirsty warning messages. It is fair to say that most consumers and small businesses will never change this list. Large companies may add their own, either for code developed in-house or for trusted vendors. Conversely, they may delete trust anchors, to prevent unauthorized code from running on corporate machines. This is a dream of many IT managers, but would likely impede corporate agility; most interesting new software developments, including the web itself, were pushed from the bottom up.
Suppose that we get the same set of about 180 trust anchors. What does this mean?
We probably won’t get much real security. Matt Blaze observed a long time ago that “commercial certificate authorities protect you from anyone from whom they are unwilling to take money”. If they don’t vet their clientele for anything other than their corporate names, there’s no protection; EvilHackerDudez can easily create a subsidiary named Advanced Integrated Software Security Research Corporation and let it get the certificate. In the web model, any trust anchor can issue a fake certificate with any given corporate name. This has become an issue in the Web world, since Firefox now includes a Chinese company on its trust anchor list; this company, perhaps at the behest of the Chinese government, could do things like issue fake Gmail certificates to help capture dissidents’ email passwords.
Will governments get their own code-signing certificates? Which ones? Years ago, there was the accusation — valid, in my opinion — that Microsoft added a certificate at the behest of the NSA. Which governments do you trust?
There’s another danger: cryptographic code-signing keys can be stolen. This has already been happening; in at least two recent cases, the Stuxnet worm and a very recent exploit against Adobe’s PDF viewer, the perpetrators were able to sign their malware so that it appeared to come from perfectly legitimate sources. I should add that both of these attacks were extremely sophisticated and dangerous.
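Continuing the toy chain from above, a stolen key defeats the whole scheme without any cryptography being broken; the attacker simply signs the malware with the stolen key and presents the victim’s perfectly valid certificate alongside it. (The payload below is, of course, made up.)

```python
# With the browser division's stolen private key, an attacker can sign
# arbitrary malware; the hardware check passes because the certificate
# chain is genuine and the chain cannot tell who is holding the key.
malware = b"\x7fELF...malicious payload..."
stolen_sig = browser_key.sign(malware)    # the attacker now holds browser_key
print(code_valid(malware, stolen_sig, browser_cert, intel.public_key()))  # True
```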
To put this all in legal terms, signed code no more protects against malware than a signed contract guarantees certain performance. At most, both provide accountability. In the event of malware or non-performance, you can seek recourse — if you can afford it, and if they can pay up, and if the signature on the code or contract wasn’t forged in the first place. But here, we have a considerable downside: our computers will only execute code from someone on the approved list. This will likely pose a substantial hurdle to legitimate but small firms, and will certainly inhibit experimentation.
There’s one more potential danger I want to point out. The exact format of an executable file is a complex matter and strongly tied to the particular operating system it runs on. The more the signature verification hardware knows about the format, the better job it can do of blocking malware. (A detailed explanation of why this is so is highly technical, and well beyond the scope of even this post.) But this may mean that Intel will favor its biggest partners, Microsoft and Apple. Will this act to discourage new OS vendors that have a very different model of what an executable file looks like?
To sum up: this new scheme will provide minimal protection, but will deter innovation and generativity. Worse yet, the degree of protection provided is directly proportional to the damage done: the scheme only delivers real security when the list of trust anchors is short and tightly controlled, and that is exactly the configuration that does the most harm to generativity.