
A New Threat to Generativity


8 Responses

  1. Abed BenBrahim says:

    I fail to see how this will provide any security, real or otherwise. Everyone who develops software (IT departments, shareware authors, consultants, or anyone who claims to do so) will obtain a code signing certificate, as is the case today. The fact that a program is signed by “Acme Software Solutions” or “John Smith Software” will not tell me whether the “cool” screensaver I downloaded is a trojan, in the same way that Windows asking whether I want to allow a program I use every day to modify the system does nothing to increase the security of my system.

  2. A.J. Sutter says:

    Sounds like the revenge of “financial innovation” — the rating agency paradigm is backwashing into tech. There have been some less-than-successful attempts to analogize IP and finance recently, but this seems like the killer app of ironic symmetry.

  3. Steven Bellovin says:

    Abed: it can provide real security, but only under certain conditions. If a corporation configures its machines to run only software from, say, Microsoft, Adobe, and its favorite hardware vendor, by installing just those three trust anchors, code from John Smith Software won’t install. Of course, that assumes that the list is short enough to be manageable (it probably won’t be) and that code from trusted vendors is safe (which it isn’t — Adobe’s software has gotten a lot of very bad press of late, to name just one vendor among very many). But as a general model — you’re right; no chance. See http://www.mail-archive.com/cryptography@metzdowd.com/msg11801.html for a very recent technical rant on why the very concept of these certificate chains — known as PKI, or public key infrastructure — doesn’t work very well.
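
    To make the trust-anchor model concrete, here is a minimal sketch (in Python, using the pyca/cryptography package) of a loader that runs a binary only if a detached signature on it verifies against one of a handful of locally installed vendor keys. The file names and the signature scheme (RSA-PKCS#1 v1.5 over SHA-256) are illustrative assumptions, not any real OS mechanism.

        # Minimal sketch: execute a binary only if it is signed by one of a
        # short list of locally installed trust anchors. All file names and
        # the signature scheme are assumptions for illustration.
        from pathlib import Path

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding
        from cryptography.hazmat.primitives.serialization import load_pem_public_key

        # The corporate trust-anchor list: just these vendors' RSA keys.
        TRUST_ANCHORS = [
            load_pem_public_key(Path(p).read_bytes())
            for p in ("microsoft.pem", "adobe.pem", "hw_vendor.pem")
        ]

        def may_execute(binary_path: str, sig_path: str) -> bool:
            """True only if some installed anchor signed this exact binary."""
            code = Path(binary_path).read_bytes()
            sig = Path(sig_path).read_bytes()
            for anchor in TRUST_ANCHORS:
                try:
                    anchor.verify(sig, code, padding.PKCS1v15(), hashes.SHA256())
                    return True   # signed by a trusted vendor
                except InvalidSignature:
                    continue      # try the next anchor
            return False          # John Smith Software's key isn't installed

    Even in this toy form the weak points are visible: the scheme only helps if the anchor list stays short and the trusted vendors' keys stay uncompromised and are used only on safe code.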

  4. Andy Steingruebl says:

    Steve,

    One feature these schemes do provide in terms of accountability is revocation.

    Many of the exploits that target vulnerable software still require subsequently running their own executables. Depending on the nature of the attack, code signing does offer a significant hurdle to some forms of malware, because many of their side effects on the system cause either an existing binary to fail verification or the newly installed binary to be unable to execute.

    Also, whether or not they are perfect, they represent one control in the ecosystem. Perfect security – no. Nothing is, though, so criticizing this for not preventing all attacks isn’t really fair.
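
    To illustrate the hurdle in miniature, here is a sketch of a signed-manifest check in which a binary modified after installation no longer matches its recorded digest. The manifest format is invented for illustration; real schemes such as Authenticode are far more involved.

        # Toy tamper check: record SHA-256 digests at install time, refuse to
        # run anything whose digest no longer matches. In a real scheme the
        # manifest itself would carry the vendor's signature.
        import hashlib
        import json
        from pathlib import Path

        def digest(path: str) -> str:
            return hashlib.sha256(Path(path).read_bytes()).hexdigest()

        def build_manifest(paths: list[str]) -> str:
            return json.dumps({p: digest(p) for p in paths})

        def verify_before_exec(path: str, manifest: str) -> bool:
            expected = json.loads(manifest).get(path)
            return expected is not None and expected == digest(path)

        # An attacker who patches a binary changes its SHA-256, so the check
        # fails -- unless the attacker can also re-sign the manifest, which is
        # exactly where stolen signing keys come in.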

  5. Steve Tate says:

    Steve,

    Maybe I’m missing something, and the linked news story is pretty much worthless, but what exactly is the big deal here? How is this different from Microsoft’s Authenticode? Since the OS is responsible for anything that loads, this seems like an OS/software issue, not a hardware issue (unlike Trusted Computing Group-style measurements, which really do require hardware support).

    Is there an implication that blocks of binary code will have to be authenticated and verified at the hardware level? Seems like a bit of an overreach to me, but here I’m doing just what annoyed me about the Ars Technica article: making wild speculation with no basis in actual facts.

  6. Steven Bellovin says:

    Andy: thanks for the comments.

    It’s unclear that revocation actually works properly; in http://www.mail-archive.com/cryptography@metzdowd.com/msg11324.html, Peter Gutmann discusses some of the philosophical issues. Among them is the fact that the compromised key was used to sign a lot of legitimate, crucial pieces of code; revoking it would “brick” a lot of systems that weren’t at risk from Stuxnet. This seems hard to fix, even in principle; while one could have, say, a separate key per legitimate product, we’re dealing here with a stolen key, one that was used for some legitimate purpose as well as for the worm. The author notes ‘So alongside “too big to fail” we now have “too widely-used to revoke”.’
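
    As a back-of-the-envelope sketch of that problem (key and package names are hypothetical), note how revoking the one stolen key invalidates everything it ever signed, good and bad alike:

        # "Too widely-used to revoke" in miniature: revocation is per key, not
        # per package, so one compromised key takes the legitimate code with it.
        signed_by = {
            "vendor-key-1": ["audio_driver.sys", "net_driver.sys",
                             "worm_payload.sys"],
        }
        revoked = {"vendor-key-1"}   # the stolen key

        def still_trusted(package: str) -> bool:
            return all(key not in revoked
                       for key, pkgs in signed_by.items() if package in pkgs)

        assert not still_trusted("worm_payload.sys")   # revocation blocks the worm
        assert not still_trusted("audio_driver.sys")   # ...and "bricks" the drivers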

    You are certainly correct that in some cases, signed code can raise the bar. I alluded to the Stuxnet worm; a recent news story (http://www.computerworld.com/s/article/9185919/Is_Stuxnet_the_best_malware_ever_?taxonomyId=17) quotes researchers as speculating that a nation state was behind it. According to the story, it is the most sophisticated attack ever found; by contrast, the attack that Google linked to the Chinese government was “child’s play”. It is not the standard against which we should judge other attacks. The new attack against Adobe seems less sophisticated, though.

    I use two metrics when evaluating a proposed security solution: how does the cost of the mechanism compare to the harm it prevents, and how does the cost of the mechanism compare to what it will cost the attackers to counter it? Here, the cost in terms of lost generativity is, I think, quite high. And countering it? That depends on the details, and in particular just how the set of acceptable certificates is defined. If it’s like the web security model, it fails trivially, as Abed points out above. More restricted lists? The question then turns on how easy it is to steal keys. Gutmann points out that there are many pieces of malware already in existence that steal keys. So — are we fighting the last war? (I think we are, but that’s the subject of an entirely different post — article, more likely — that currently exists only as a slide deck on my web site.)

  7. Steven Bellovin says:

    Steve: Yes, I think it was talking about hardware verification of executables. Apart from the fact that Intel is a chip company, not a software company, the article spoke of making changes to the “x86 ISA”. “x86” is geek-speak for the series of Intel chips that started with the original IBM PC (the 8088) and continued through the 80286, 80486, Pentium, and later variants. “ISA” is “instruction set architecture”; when the article says “Otellini went on to briefly describe the shift in a way that sounded innocuous enough–current A/V efforts focus on building up a library of known threats against which they protect a user, but Intel would love to move to a world where only code from known and trusted parties runs on x86 systems”, to me it means changing the chips to enforce it.

    You’re certainly correct that one can do this just with software. That is, after all, what Apple does for the iPhone. But hardware restrictions are more difficult to evade, which is presumably Intel’s goal.

  8. Bill Cheswick says:

    Your arguments against the signed code are not convincing: the fact that we don’t edit down the trust entries in Firefox doesn’t mean we couldn’t or shouldn’t. In fact, I would like to see certificate usage information and reporting (perhaps there is already a plugin) to help those who care edit the list down.

    A shorter list should be a clear goal for those running a corporate intranet. Also, the weekend sysadmins taking care of grandma ought to be able to find and install recommended trust lists. She doesn’t need generativity.
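
    As a sketch of what editing the list down might look like, here is a filter over a browser-style PEM bundle of root certificates that keeps only an administrator’s allow-list. The file names and allow-list are assumptions for illustration, and the code relies on the pyca/cryptography package (load_pem_x509_certificates requires version 39 or later).

        # Trim a PEM bundle of roots down to an allow-list of organizations,
        # writing the shortened "recommended trust list" back out as PEM.
        from pathlib import Path

        from cryptography import x509
        from cryptography.hazmat.primitives.serialization import Encoding
        from cryptography.x509.oid import NameOID

        ALLOWED_ORGS = {"Microsoft Corporation", "Adobe Inc."}  # grandma's short list

        def trim_bundle(bundle_path: str, out_path: str) -> int:
            certs = x509.load_pem_x509_certificates(Path(bundle_path).read_bytes())
            kept = [c for c in certs
                    if any(a.value in ALLOWED_ORGS
                           for a in c.subject.get_attributes_for_oid(
                               NameOID.ORGANIZATION_NAME))]
            Path(out_path).write_bytes(
                b"".join(c.public_bytes(Encoding.PEM) for c in kept))
            return len(kept)  # how short did the list get?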

    I agree that there are problems with implementation (I support static binaries, which should help), revocation, and lost keys.

    Your downsides are a concern, but you don’t mention the other side: virus protection is a mug’s game, eventually doomed to failure in theory, and already failing in practice.
