Hey Ron...

I think you've hit upon a fundamental issue here. Let me tell you what we're currently investigating in Spoon.

First off... yes, it's a very valid concern. Some of the fundamental security objectives we're working towards in Spoon are:
1. Sensitive data is to remain sensitive.
2. Operations on sensitive data are to be reserved for privileged entities.
3. Requests to perform sensitive operations are to be honored only if there is a high degree of assurance in the request's origin (and the originating entity is privileged to make the request).

All of these objectives are affected if the image isn't protected somehow. There are a couple of different ways to protect the image from malicious modification (or to protect projects, or even class files and changesets).

An attacker might want to replace data or code in an image file to complete his nefarious plans. One way to protect against this is for a trusted source to create image files for distribution on the Squeak.Org site. I don't know whose job it is to do this right now, but it's a simple thing to have them generate a key pair used to sign and verify the complete image. The VM would include code to verify the image at load time, generating a failure if signature verification didn't work out.
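
To make that concrete, here's a rough sketch (in Python rather than Squeak, just because it's compact) of the builder-side signing and the load-time check the VM would do. It leans on the "cryptography" package's Ed25519 support, and the file names are made up:

    # Sketch only: a trusted builder signs the raw image bytes, and the
    # VM refuses to load an image whose signature doesn't verify.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def sign_image(builder_key: Ed25519PrivateKey, image_path: str) -> bytes:
        # The builder ships this signature alongside the image,
        # e.g. as squeak.image.sig (a made-up name).
        with open(image_path, 'rb') as f:
            return builder_key.sign(f.read())

    def vm_load_check(builder_pub: Ed25519PublicKey, image_path: str,
                      signature: bytes) -> None:
        # What the VM would do at image load time.
        with open(image_path, 'rb') as f:
            image_bytes = f.read()
        try:
            builder_pub.verify(signature, image_bytes)
        except InvalidSignature:
            raise SystemExit('image signature verification failed')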

But the moment the user modifies something in the image and saves it, you've invalidated the signature. So signing and verifying the whole image probably won't work.

The approach we're experimenting with in Spoon is to define a set of classes that define "the core" of the language. These classes are then signed by a trusted third party (like the person who builds the image). That person's public key is included in the VM (or, if you want to get fancy, you can add a collection of trusted root certificates from which the trusted third party's certificate was issued). At boot time, the integrity of these classes is checked by the VM. So you now have a situation where users can modify any non-core class without invalidating the signature on the core part of the image. As long as you put sensitive things (like network and file access and the crypto implementation) inside the "core," you can be assured that it hasn't been modified to do bad things.
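
Here's a sketch of how that core-only check might work, again in Python for brevity: hash the compiled code of each core class into a manifest, have the trusted party sign the manifest, and have the VM check the signature and then the per-class hashes at boot. The class names and data layout are invented for illustration:

    # Sketch only: sign a manifest of per-class digests instead of the
    # whole image, so non-core edits don't invalidate the signature.
    import hashlib
    import json
    from cryptography.exceptions import InvalidSignature

    def build_core_manifest(core_class_bytes: dict) -> bytes:
        # core_class_bytes maps a core class name (e.g. 'Socket') to the
        # bytes of its compiled methods; the trusted party signs this blob.
        digests = {name: hashlib.sha256(code).hexdigest()
                   for name, code in core_class_bytes.items()}
        return json.dumps(digests, sort_keys=True).encode()

    def vm_boot_check(trusted_pub, manifest: bytes, signature: bytes,
                      core_class_bytes: dict) -> bool:
        # First authenticate the manifest, then check the live classes.
        try:
            trusted_pub.verify(signature, manifest)
        except InvalidSignature:
            return False
        digests = json.loads(manifest)
        return all(
            hashlib.sha256(core_class_bytes.get(name, b'')).hexdigest() == d
            for name, d in digests.items())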

There are drawbacks here, of course. One of the things that some people really love about Smalltalk is the ability to modify anything and everything, including core classes. The system should be flexible enough to allow an end user to generate a key pair and have their public key certified as being allowed to modify core classes _FOR THEIR INSTANCE_. In other words, you'll want to set it up so that there is... well... I don't want to use the words PKI or CA... let's just say an image signing authority: someone in the Squeak project who's responsible for identifying which members of the community can create images that are distributed on Squeak.Org. This person maintains a certificate signing key, and the certificate containing the public key associated with that signing key is distributed along with the VM. If a user wants to modify core classes, they can do this, but they have to generate a key pair, create a self-signed cert, and put the cert in the list of trusted core signers.
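
For the "modify your own core" path, the user-side steps might look something like this sketch, using the "cryptography" package's X.509 builder; the file name for the trusted signer list is hypothetical:

    # Sketch only: a user generates a key pair, self-signs a cert, and
    # appends it to the VM's list of trusted core signers.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    def make_self_signed_core_signer(common_name: str):
        key = ec.generate_private_key(ec.SECP256R1())
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
        now = datetime.datetime.now(datetime.timezone.utc)
        cert = (x509.CertificateBuilder()
                .subject_name(name)
                .issuer_name(name)          # self-signed: subject == issuer
                .public_key(key.public_key())
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(days=365))
                .sign(key, hashes.SHA256()))
        return key, cert

    key, cert = make_self_signed_core_signer('local-core-modifier')
    with open('trusted-core-signers.pem', 'ab') as f:  # hypothetical file
        f.write(cert.public_bytes(serialization.Encoding.PEM))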

There are several implementation challenges here, not the least of which is how to verify all these classes in a tractable amount of time. And you still have the problem that people who want to distribute classes that depend on changes they've made to the core are going to have to get their recipients to add the distributor's certificate to the list of trusted core signers, so that the modified core classes don't break the signature check. And if everyone does this without examining code changes that come their way from anonymous sources on the internet, what's the point of signing things in the first place?

One of the things we're exploring in Spoon is the idea of attestation. If you create a class whose instances require changes in 'the core', you can tell the system something to the effect of: "allow such-and-such a class, signed by some person, to make a call to this core class, but disallow everyone else." The idea here is that if you have a sensitive operation in the core, it might be nice to limit who can perform it. This way, you could distribute signed code that may have elevated privileges based on the identity of the signer. End users would then be able to say things like, "Oh... I trust Ian P. and Dan I. (and hopefully Matt H.), but I don't trust this random guy over here."
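
Here's roughly the shape of the policy check I have in mind, with all the names invented for illustration; the real question is how the core learns, reliably, who signed the calling class:

    # Sketch only: the core keeps a table mapping each sensitive
    # operation to the signer identities allowed to invoke it.
    from typing import Optional

    ATTESTATION_POLICY = {
        'Socket>>open':      {'Ian P.', 'Dan I.'},   # illustrative entries
        'FileStream>>write': {'Ian P.'},
    }

    class AttestationError(Exception):
        pass

    def check_attestation(operation: str, caller_signer: Optional[str]) -> None:
        # caller_signer is the verified identity on the calling class's
        # signature, or None if the calling class is unsigned.
        allowed = ATTESTATION_POLICY.get(operation, set())
        if caller_signer not in allowed:
            raise AttestationError(
                '%r may not invoke %s' % (caller_signer, operation))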

I'm working on a security overview for Spoon that will hopefully come out sometime before the RSA Conference this year; more info and analysis should be available then.

But here are a few things to think about:

1. You're probably going to want to have some kind of image authentication in the VM. If you depend on code in the image to authenticate code in the image, then yes, as you pointed out, there's nothing to stop a malicious attacker from modifying the image so that it doesn't actually perform the authentication. One of the objectives of "trusted computing" (if you can ignore all the DRM and spyware the big guys are trying to enable under this name) is to provide authentication, access control, and privacy mechanisms by trusting and reviewing a small code core, where code outside the core depends on features inside the core to work properly. This is why the Java VM verifies signatures on JAR files, why the Windoze OS verifies Authenticode signatures applied to executables, and why the NGSCB guys at MS are trying to get the computing hardware to authenticate signatures on the OS. The idea is that if you get it right in the hardware, you can use the trust provided by the hardware to authenticate the OS, which then authenticates the VM, which then authenticates code in the image, which then makes decisions about access control, authentication of entities involved in application-layer protocols, and privacy.
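
In toy form, that chain looks like each layer holding the key it trusts and verifying the next layer's payload before handing over control; everything here is illustrative:

    # Sketch only: hardware verifies the OS, the OS verifies the VM,
    # the VM verifies the image core. keys[name] is the verification
    # key that layer uses to check the next layer's payload.
    from cryptography.exceptions import InvalidSignature

    LAYERS = ['hardware', 'os', 'vm', 'image-core']

    def boot_chain(keys, payloads, signatures):
        for i in range(len(LAYERS) - 1):
            nxt = LAYERS[i + 1]
            try:
                keys[LAYERS[i]].verify(signatures[nxt], payloads[nxt])
            except InvalidSignature:
                raise RuntimeError('%s refused to hand off to %s'
                                   % (LAYERS[i], nxt))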

2. You probably want to have a flexible code authentication system. In other words, you don't want to hardwire a verification key inside the image or inside the VM. Whit Diffie has a quote that I love: "A secret that is hard to change is a vulnerability." I think the idea here is that if your system depends on the obscurity of a key's location for security, then when attackers learn where that key lives, they'll be able to access it, which obviates the trust given to that key. And if your admin staff or your users can't easily change that key, you've locked them into using a version of your software that attackers know how to hack.

So what's the practical guidance I'm going for here? Browsers and OSes that have lists of trusted root keys generally allow admins to add or remove root keys and modify the trust of root keys in the root key database. The guidance is: use a key database that allows privileged users to add, remove, or edit trust settings for verification keys. Implementing such a database as a simple file is probably doable, as long as the file is protected from illicit access by the underlying operating system. If the underlying operating system doesn't provide file access protections, then you've got all sorts of other things to worry about.
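
A minimal version of such a database might look like this sketch, assuming the OS protects the file's permissions; the field names and trust levels are invented:

    # Sketch only: an editable trust database kept as a JSON file.
    import json
    import os

    DB_PATH = 'trusted-keys.json'  # hypothetical location

    def load_db() -> dict:
        if not os.path.exists(DB_PATH):
            return {}
        with open(DB_PATH) as f:
            return json.load(f)

    def save_db(db: dict) -> None:
        with open(DB_PATH, 'w') as f:
            json.dump(db, f, indent=2)

    def set_trust(key_id: str, pem: str, trust: str) -> None:
        # trust might be 'always', 'ask', or 'never'.
        db = load_db()
        db[key_id] = {'pem': pem, 'trust': trust}
        save_db(db)

    def remove_key(key_id: str) -> None:
        db = load_db()
        db.pop(key_id, None)
        save_db(db)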

3. You probably want the VM's authentication capabilities to be a subset of those available to the runtime. The idea here is that you might at some point want to use tools in the image to produce the core trusted root database. As an example, Mac OS X uses a "keychain" format and splits keys among three primary classes of keychains: the X.509 root database, system-global verification keys, and user-specific keys.
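
The split might be modeled like this sketch, searching the most specific store first; the scopes and lookup order here are my invention, not the keychain's actual behavior:

    # Sketch only: verification keys split into three scopes, loosely
    # echoing the keychain split described above.
    from enum import Enum

    class KeyScope(Enum):
        X509_ROOTS = 'x509-roots'  # shipped root CA certificates
        SYSTEM = 'system'          # image-wide verification keys
        USER = 'user'              # per-user keys

    def lookup_key(key_id, stores):
        # stores maps each KeyScope to a dict of key_id -> key.
        for scope in (KeyScope.USER, KeyScope.SYSTEM, KeyScope.X509_ROOTS):
            if key_id in stores.get(scope, {}):
                return stores[scope][key_id]
        return None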

Okay... I think there's "more to it," but I've run out of time. So I'd be interested to hear anyone else's comments.

-Cheers!
-Matt H.

On Jan 11, 2006, at 7:24 AM, Ron Teitelbaum wrote:

All,

I’ve been thinking through some of the problems with cryptography.  I have a question.  How do we protect the image?  It seems to me that the easiest way to circumvent the cryptography features of a system would be to change the code to send off plain text after it was decrypted.  So the question is how do we protect the image from being tampered with?  Is this even a valid concern, or do we assume that once someone has access to the machine, and therefore the image, that the system is compromised in general?

Ron

_______________________________________________
Cryptography mailing list
Cryptography@lists.squeakfoundation.org
http://lists.squeakfoundation.org/cgi-bin/mailman/listinfo/cryptography