Most people have probably heard of Citrix’s GoToMeeting. It’s collaboration software that allows you to remotely view the screen of another person. Its principal competitors are WebEx from Cisco and free services like DimDim. These products completely undermine the notion of security on the web by telling their users to “click yes if prompted.” They’re not alone, though. Microsoft, Cisco, and others all do this.
The problem, of course, is that Java programs can’t do much without being granted permissions. And because these vendors want you to trust their code, they sign it. But unlike the SSL certificates used by web sites, their code-signing certificates don’t chain back to some authority you already trust. Thus, they ask you to trust their self-signed code.
Now, signatures were invented to solve the problem of someone on the Internet saying “trust me.” Of course you ultimately have to trust someone. We solved that (for some definition of “solved”) in SSL by pre-loading a bunch of trusted certificates into web browsers. But here in the code-signing world, lots of sites choose not to go that route. Instead, they put out self-signed code.
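To see why self-signed code buys you nothing, here’s a minimal sketch using the `openssl` command line (the filenames and subject name are made up for illustration). Anyone can mint a self-signed certificate in seconds, and when you try to validate it the way a browser validates an SSL certificate—by chaining it back to a pre-loaded trusted root—validation fails, because nothing in the trust store vouches for it:

```shell
# Mint a self-signed certificate. No CA is involved; anyone can do this.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=Example Self-Signed Code" \
  -keyout key.pem -out cert.pem

# Now try to verify it against the system's pre-loaded trust anchors,
# the same check a browser does for SSL. It fails with a
# "self-signed certificate" error: nothing vouches for this key.
openssl verify cert.pem
```

The signature itself is cryptographically valid—it just proves the code was signed by whoever holds that key, which could be anybody.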
Microsoft makes a different choice. During probably the most security-significant operation you do—operating system updates—they tell the user to make sure the ActiveX control is signed by Microsoft before letting it run. Now, as a lay person (heck, even as a highly-experienced security person) how do I do that? There’s a reason why this advice from Microsoft is so short: It’s either impossible or entirely too complicated. Some will argue from a theoretical point of view that it is impossible. Others will point out that if you allow for an approximate notion of authentic code, you can get some good confidence, but only by following some extremely complex procedures. Not the kind of everyday thing that a user will do when updating Windows.
The problem with self-signed code (or code whose certificate is not trusted) is that anyone can produce a signature that looks just as legitimate as the real one. And then they can say “make sure you click Yes if prompted.” They can post a web page that says “make sure the software is authentic from Microsoft” and then send you a load of rubbish. You won’t be able to tell their signatures from the real ones.
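The names inside a self-signed certificate are free-form text, so an attacker can put any trusted-sounding organization in them. A quick sketch with `openssl` (the subject fields here are fabricated for illustration, not anything Microsoft issues):

```shell
# An attacker mints a self-signed certificate claiming to be anyone at all.
# The subject string below is whatever the attacker types in.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/O=Microsoft Corporation/CN=Windows Update" \
  -keyout attacker-key.pem -out attacker-cert.pem

# The signer details read exactly as official-looking as the real thing.
openssl x509 -in attacker-cert.pem -noout -subject
```

Without a chain back to a trusted root, the “Publisher: Microsoft Corporation” line in a signing prompt is just a string the signer chose for themselves.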
This dialog points at a broken security model. Any time you use an out-of-band technique (text on the web page where you download the “secure” code) to tell the user to ignore the in-band security controls, you are dealing with a broken security model. This security theatre creates a false sense of security, and it trains users to believe that this kind of security is acceptable, normal, and doing something for them. It’s not. It also “teaches” the enterprises that consume software, and the ISVs that provide it, that this counts as security when it doesn’t.
One of the most frustrating aspects of “click yes for security” is how it violates a fundamental principle at the outset. We in the security community have lots of attacks against SSL and know lots of weaknesses in the whole chaining of certificates and trust. So even if code signers were doing it right, we’d still have difficulty making really strong statements about the security of that code. But to use self-signed or untrusted signatures and then tell users to “click yes” makes a mockery of the whole principle of trying to trust mobile code.