Reflective Support in Spoon

Matthew S. Hamrick matthew.hamrick at gibson.com
Wed Mar 23 17:42:49 UTC 2005


Okay... I did misunderstand the question.

The revised question is a very good one and cuts to the heart of how to 
implement system security. If you're interested in a little background, 
read the section in the <ignore> tag. Otherwise just skip it.

<ignore>
Allow me to reflect upon the nature of software security.

It's probably a good idea to talk about the different types of 
"software security." The term is used by a bunch of different 
communities, and sometimes people will use terms that have slightly 
different meanings. I've been going to the Crypto, NDSS, BlackHat, and 
RSA conferences for over a decade, and I think I've got the terminology 
down. But it is true that cryptologists, system security experts, and 
marketing people all use the same words to mean sometimes radically 
different things.

When you say "security" to the IT community: system administrators and 
the like, you generally mean hardware and software security solutions 
that you can buy out of the box. This community is generally concerned 
with setting up systems to provide strong authentication to enforce 
access control provisions. Privacy and confidentiality are also 
important, but it's surprising how quickly I've seen corporate users 
ditch privacy if there's anything resembling cost associated with the 
solution. These guys really like to secure communications at the link 
layer or network layer. They like to work with Cylink link-encryptors, 
SSL, and IPSec. And for good reason. They generally assume that 
everyone inside the corporation is trustworthy, and everyone outside 
the corporation is a threat. (This isn't especially true, but it's the 
working assumption they start with.) Encrypting link-layer and 
network-layer traffic solves privacy and authentication problems for a 
number of applications: email passwords, web passwords, CVS pservers, 
etc. Access control is usually at a coarse grained level, and generally 
used to enforce who has access to resources like files, execute 
permissions on programs, and tables in databases. More complex models 
have yet to really take off in the corporate IT world because of a 
perceived difficulty in administering rights for large groups.

On occasion, someone in this community will remember that laptops, 
PDAs, and mobile phones get stolen, sometimes with sensitive data 
on-board. The solution is usually to license a commercial product that 
"automagically" encrypts everything on the hard drive (or flash memory) 
or what have you. System components from Microsoft and Apple and 
products from WinMagic and Certicom do a respectable job at this, by 
the way, until you start talking about key management, and then it 
becomes a bit of a nightmare. But it's a manageable nightmare, and even 
though these products aren't completely secure, they're secure enough 
to deter low-tech or less-knowledgeable attackers, and you can tell your 
corporate overlords that there's a policy in place for protecting 
critical corporate information assets.

When you say "security" to Cryptographers, they think you're talking 
about how easy it is to break crypto algorithms. Breaking a crypto 
algorithm for a cryptographer is not always the same thing as breaking 
it for a system attacker. A good example is the recent SHA1 attack 
devised in part by my former co-worker Lisa Yin. The team developed a 
theoretical attack on SHA1 that demonstrated an attacker could find a 
hash collision in less time than it would take to "brute force" the 
input space, looking for two messages that would hash to the same 
value. Statistically, someone should be able to, on average, find a 
hash collision in SHA1 after calculating 2^80 hashes. In the attack 
they recently published, they showed how clever selection of input 
messages would require an attacker to only look at something like (and 
this is from memory, so don't crucify me if I'm wrong) 2^68 messages. 
2^68 is still a pretty big number, so from a practical perspective, you 
don't have to worry about the Verisign root certificates that are 
signed with RSA+SHA1 just yet.
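
(For scale: 2^80 / 2^68 = 2^12, so the shortcut is roughly four 
thousand times faster than the generic birthday attack, but 2^68 is 
still on the order of 3 x 10^20 hash computations, which is why nobody 
is forging certificates this week.)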

However... the history of such things is that once someone demonstrates 
a theoretical weakness in an algorithm, a whole raft of talented people 
start looking for practical attacks. So I'm starting to look at hash 
algorithms other than those in the MD4/MD5/SHA family. I'm convinced 
we'll see a practical attack against SHA1 within two years. But I 
digress...

</ignore>

When you say "security" to software developers, you get one of three 
reactions:
	"Oh, you're talking about how to use security toolkits to enable 
security features like SSL,"
	"Oh, you're talking about how not to introduce security holes like 
buffer over-runs,"
or	"Oh, you're talking about how to do access control inside an 
application."

This discussion is of the last type. Stéphane's becomes: example 
is a good one because it poses the question, given a dynamic language, 
how do you prevent certain methods from being called on certain objects 
by untrusted parties? I can't give you the answer, but I can help flesh 
out the question:

In the most general sense you have two problems:

1. how to restrict a particular method from being invoked on a 
particular object by a particular principal,
and
2. how to restrict access to an "instance variable" (attribute, member, 
whatever you want to call them) from a subclass; the sketch below makes 
this one concrete.
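
To make that concrete, here's a minimal Java sketch (the class and 
field names are hypothetical):

class Account {
	// private: invisible even to subclasses
	private long balanceInCents;

	// protected: any subclass, including a hostile one loaded later,
	// can read and write this directly
	protected long overdraftLimitInCents;
}

class EvilAccount extends Account {
	void raid( ) {
		overdraftLimitInCents = Long.MAX_VALUE;  // perfectly legal
		// balanceInCents = 0;                   // compile error: private
	}
}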

For the first problem, solutions that people have worked with in the 
past include a security manager that maintains a list of classes, 
methods, and principals that are explicitly allowed. You then add the 
concept of a principal to your execution context to understand who it 
is that's running a piece of code. If you want to be fancy, you can 
have two principals: the person who wrote the code, and the person who 
is running the code. This way you can do things like prevent code from 
Microsoft from accessing your address book.

Using this type of solution, before a method is dispatched, the 
security manager is consulted and asked, does this user have privilege 
to execute this method on this object or class?
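
A minimal sketch of that kind of check in Java (all names here are 
invented for illustration; this isn't any particular toolkit's API):

import java.util.HashSet;
import java.util.Set;

// A principal is just an identity. A real system would authenticate
// it; a fancy one would carry two: the code's author and its runner.
final class Principal {
	final String name;
	Principal( String name ) { this.name = name; }
}

// The security manager keeps an explicit allow-list of
// (principal, class, method) triples.
final class MethodGuard {
	private final Set<String> allowed = new HashSet<String>( );

	void allow( Principal who, String className, String methodName ) {
		allowed.add( who.name + "/" + className + "/" + methodName );
	}

	// Consulted before dispatch: does this principal have privilege
	// to execute this method on this object or class?
	boolean mayInvoke( Principal who, String className, String methodName ) {
		return allowed.contains( who.name + "/" + className + "/" + methodName );
	}
}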

Privileges are usually stored in the system somewhere. If you want to 
get fancy, you can have a distributed privilege database. I worked for 
a while on putting application privileges in LDAP directories, but 
they could just as easily be placed in an HTTP-fronted XML engine. 
Pondering this for a while should get you moderately scared for 
performance and security reasons. What happens if every message send 
requires me to wrap up a query and send it to an HTTP-based privilege 
database? Or, heaven forbid, the authenticated link between the client 
and the privilege database is cracked? Yikes!
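
One common mitigation, sketched here reusing the MethodGuard from 
above (again, invented names), is to cache privilege lookups locally 
so the remote database is only consulted on a cache miss. The price is 
staleness: a revoked privilege lingers until its cached entry is 
dropped.

import java.util.HashMap;
import java.util.Map;

// Wraps a slow remote privilege check with a local cache so most
// message sends don't pay a network round trip.
final class CachingGuard {
	private final MethodGuard remote;
	private final Map<String, Boolean> cache = new HashMap<String, Boolean>( );

	CachingGuard( MethodGuard remote ) { this.remote = remote; }

	boolean mayInvoke( Principal who, String className, String methodName ) {
		String key = who.name + "/" + className + "/" + methodName;
		Boolean decision = cache.get( key );
		if ( decision == null ) {
			decision = Boolean.valueOf( remote.mayInvoke( who, className, methodName ) );
			cache.put( key, decision );
		}
		return decision.booleanValue( );
	}
}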

At some point, you've got to trust some code on your system. We've been 
talking about namespaces (NAIAD.) In Squeak, you can subclass off any 
other class in the system. The Java security guys had some problems 
with this. The canonical example of this is what happens in a system 
where you store people's passwords as Strings (granted, you shouldn't 
ever do this, but for the sake of argument let's say we have such a 
system.) You might have some code that looks like this:

// returns true if the username and password are correct
public boolean validatePassword( String username, String password ) {
	String db_password = null;

	// insert some code to look up the username and grab their password
	// from the password database. The user's password is now in
	// db_password.

	return( password.equals( db_password ) );
}

Doesn't look too bad until you think about something like this:

class EvilString extends String { // imagine String weren't final
	public EvilString( String s ) { super( s ); }

	// Note the parameter type: overriding equals( Object ) is what lets
	// virtual dispatch reach this method. equals( String ) would merely
	// overload it and never be called through a String reference.
	public boolean equals( Object compareTo ) {
		EvilMonitor.storePasswordOnDisk( compareTo );
		return( super.equals( compareTo ) );
	}
}

// Fragment to do evil things:

String password = InteractionManager.getPasswordFromUser( );
EvilString ePassword = new EvilString( password );
// Inside validatePassword, password.equals( db_password ) dispatches
// to EvilString's equals, which leaks the database's copy to disk.
SecurityManager.validatePassword( "whomever", ePassword );

In this example we see one of the reasons the Java guys wanted to have 
a final keyword. Disallowing subclassing off String prevents this from 
happening.
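
Schematically (SafeString here stands in for the real java.lang.String):

public final class SafeString {  // final: no subclasses, ever
	private final char[] value;
	public SafeString( char[] value ) { this.value = value; }
}

// class EvilString extends SafeString { }  // compile error: cannot
//                                          // inherit from a final class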

The solution to many of these problems is to carefully select which 
objects, methods, and permissions are required. Perhaps it would be a 
good idea to speculate about which kinds of methods could be 
corrupted. We've been talking a little bit about namespaces; maybe 
there's something we can do there.

<conclusion/>

Yes. This is a smashing good idea.

I can't say that I really know where we use the becomes: method.

I would say that having a general permission-based or 
capabilities-based access control system to represent which methods can 
be called by which principals on which objects (or classes) would be a 
good idea; especially one that could be used by kernel methods to 
ensure that chicanery is detected.
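
To illustrate the capabilities flavor (nothing Spoon-specific here; 
the names are invented): a capability is an unforgeable token whose 
possession is the permission, so a kernel method can simply demand one:

// Only the kernel can mint capabilities: the constructor is private.
final class WriteCapability {
	private WriteCapability( ) { }

	// In a real system this would be reachable only from trusted code.
	static WriteCapability mint( ) {
		return new WriteCapability( );
	}
}

final class KernelFile {
	// No WriteCapability, no write; there's no table to consult and
	// no token to forge.
	void write( WriteCapability cap, byte[] data ) {
		if ( cap == null ) throw new SecurityException( "no capability" );
		// ... perform the write ...
	}
}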

At some point we have to have a trusted code base. It wouldn't do to 
have the access control system call itself for every method send it 
performs internally.

In keeping with the design goals of Spoon, we should see how small we 
can make the trusted code base, and how to make the access control 
system a low-overhead component.

The Java security guys worried a lot about how to prevent attackers 
from subclassing off "trusted" classes. Thus the final keyword was 
invented. Personally, the inability to subclass off String was one of 
the things I didn't like. It made it uncomfortably difficult to add 
semantics to certain types of strings. We could potentially avoid 
security problems with subclassing off trusted classes by adding a 
capabilities-based system and disallowing certain methods to be 
overridden outside namespace boundaries (think about 'final' meets 
'package'.)
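
A rough Java approximation of 'final' meets 'package' (hypothetical 
names): make the public method final and let it delegate to a 
package-private one, which stays overridable inside the namespace but 
is invisible outside it:

package trusted;

public class GuardedString {
	// final: no override anywhere.
	public final boolean equals( Object other ) {
		return coreEquals( other );
	}

	// Package-private: classes outside 'trusted' can't see this
	// method, so they can't override it; classes inside the namespace
	// still can.
	boolean coreEquals( Object other ) {
		return this == other;
	}
}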


On Mar 23, 2005, at 5:22 AM, Alexandre Bergel wrote:

>> So the question of alex was what can be the minimal kernel and how
>> can we reintroduce reflection and control the way it is done or the
>> actions. Can we have a secure system in which we still have
>> reflection but without impacting security? I think that this is the
>> underlying question of alex.
>
> Yep, exactly. I feel that reflection support is mainly useful with a 
> programming environment. And in most cases, when I want to deploy an 
> application, reflection is not needed anymore.
>
> As Stef said, I would like to experiment with a minimal kernel and see 
> how reflection can be reintroduced...
>
> cheers,
> Alexandre
>
> -- 
>
> _______________________________________________
> spoon mailing list
> spoon at netjam.org
> http://netjam.org/mailman/listinfo/spoon_netjam.org