Puzzle: Adding domain-based security to Squeak.

Howard Stearns hstearns at wisc.edu
Wed Aug 9 13:57:58 UTC 2006



Klaus D. Witzel wrote:
> On Tue, 08 Aug 2006 18:04:07 +0200, Howard Stearns wrote:
>> Klaus D. Witzel wrote:
>>> On Tue, 08 Aug 2006 16:53:46 +0200, Howard Stearns wrote:
>>>
>>>> Imagine that a magic fairy comes and creates a system that works 
>>>> exactly as you prescribe.
>>>>      Now, how will you or your users guess that 64MB RAM / 100MB 
>>>> disk / 100MB traffic/day is appropriate for one application, while 
>>>> others use different figures?
>>>
>>> Would you say that the above is any different from a single computer 
>>> system with exactly the capacity limits you gave? If so, mind to 
>>> explain?
>>
>> I agree that they're the same. And I would never order a computer 
>> saying that it must not have more than 64MB, 100MB disk, nor allow 
>> more than 100MB traffic/day. Nor would I attempt to implement 
>> "safety" that way on a single or partitioned computer.
>
> Then, how would you "implement" it?

Rats, I thought I answered that in my first message on this thread. ;-)

I don't know of a silver bullet for implementing safety: no one thing
with a catchy name that has been articulated as a necessary and
sufficient strategy. Sandboxes and Access Control Lists have already
been shown to be neither necessary nor sufficient. (And to me,
Michael's original formulation of "Domains" is just ACLs by another
name.) Through the thread, I and others have mentioned some principles
that I do feel are helpful enough to be significant (if not necessarily
"solving" the problem). These include defense-in-depth, end-to-end,
capabilities, and the thing I mentioned in my first message, which I
don't have a name for: when the program really needs a resource, you
make the user pick it, so that the user knows what's happening, rather
than just barring access altogether.
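
Here's a crude sketch of what I mean, in Squeak terms. SafeDocumentEditor
and the chooseFileMatching: dialog call are made up for illustration
(substitute whatever modal file dialog the image actually provides);
FileStream readOnlyFileNamed: is real. The point is the shape: the editor
never asks for, and never receives, blanket filesystem access. It only
ever holds a stream on the one file the user explicitly picked, and the
act of picking tells the user exactly what's happening.

  Object subclass: #SafeDocumentEditor
      instanceVariableNames: 'document'
      classVariableNames: ''
      poolDictionaries: ''
      category: 'Security-Sketch'

  SafeDocumentEditor >> open
      "Ask the user to designate a file at the moment we actually
      need one. chooseFileMatching: stands in for whatever file
      dialog the image provides."
      | name |
      name := UIManager default chooseFileMatching: '*.txt'.
      name ifNil: [^ nil].  "user declined: no access at all"
      ^ document := FileStream readOnlyFileNamed: name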

In addition to using the above to figure out what behavior should be 
controlled and how to manage that control, there's still the issue of 
enforcing that control. I suspect that in a class-based system, this is 
precisely equivalent to versioned classes: you want to ensure that an 
instance is using a version of a class that implements the behavior 
safely, and dynamic changes to the class create new versions. I think 
there are only two strategies for versioning classes: either you include 
the class with the mobile code, or you have a form of linking in which 
you specify (in a secure way) what "version" of the class is required.  
Both Andreas and Craig are working in this area. (For my part, my faith 
in the religion of classes has been shaken over the last year and a 
half, and I'm toying with converting to prototype/delegationism. This 
makes it easier for different objects in the same application to 
implement versions of behaviors. But I don't know yet how this all fits 
together with enforcing, e.g., capabilities.)
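
To make the linking strategy concrete, here's one way it might look.
This is my sketch, not what Andreas or Craig is actually building;
behaviorHash, CodeLoader, and require:version: are all invented for
illustration, while SecureHashAlgorithm is the SHA-1 implementation that
ships with Squeak. The idea: a class can answer a fingerprint of its
current behavior, mobile code carries the fingerprint of the version it
was vetted against, and the loader refuses to bind to anything else.

  Behavior >> behaviorHash
      "Fingerprint this class's behavior by digesting the source of
      every method. Editing any method yields a different
      fingerprint, i.e. a new 'version'. bitXor: makes the result
      independent of enumeration order."
      ^ self selectors
          inject: 0
          into: [:acc :sel |
              acc bitXor: (SecureHashAlgorithm new
                  hashMessage: (self sourceCodeAt: sel) asString)]

  CodeLoader >> require: aClass version: expectedHash
      "Refuse to bind mobile code to any version of aClass other
      than the one it was vetted against."
      aClass behaviorHash = expectedHash
          ifFalse: [self error: aClass name , ' is not the vetted version']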

>
>>> BTW: in ancient (computer age) times we had to implement contingency 
>>> systems (sometimes mistakenly called accounting systems) because 
>>> there was only one computer for 1,000's of users, like for example 
>>> here:
>>
>> Indeed, and there is good reason that computers are no longer 
>> implemented that way.
>
> I don't believe that. Do you mean that you can upload your malicious 
> code to one of the grids
>
> - http://en.wikipedia.org/wiki/Grid_computing
>
> and it (the grid) will immediately allocate all its resources and all 
> its processing power that your code asks for?
>
> Indeed, there *are* good reasons that computer (systems) *are* 
> implemented that way (regardless of the # of CPU's etc). And the grid 
> is not an exception: every OS constrains *all* available resources, 
> one way or the other, even its own resources. There is no way out.
>
> The question in the original post was, how to do that with Squeak 
> (cross platform, of course!).

No. It is easy to confuse a goal with an implementation, and I think 
that's what's happening here.

Klaus, what you and Danil are quite rightly describing is the goal of 
ensuring that you have the resources to do your work.
Michael's original post is about a particular technique (essentially, 
implementing ACLs that bind objects/processes/islands/whatever to a 
particular sandbox). While this technique is an interesting puzzle 
(the part I like about the subject line of the original message), it is 
not, IMHO, a good strategy for achieving a meaningful, practical 
measure of security in a mobile-code system (the part that I and others 
have not liked about that subject line).

I think (?) that most of the large-scale grid systems, including SETI 
and my university's own Condor, take a trivial approach to "level of 
service": they DO take over the whole processor, but only when there's 
been no keyboard/mouse activity for a long enough time. They checkpoint 
periodically, and they lose all the work since the last checkpoint when 
you interrupt them by typing again. It's really all quite crude.
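
In code, the whole policy amounts to something like the loop below.
Every selector in it is made up; the point is the shape of the policy,
not an API.

  GridWorker >> run
      "Crude cycle-scavenging, Condor-style: work only while the
      console is idle, checkpoint every ten minutes, and when the
      user comes back, throw away everything done since the last
      checkpoint."
      [self hasWork] whileTrue:
          [self userIsActive
              ifTrue:
                  [self rollBackToLastCheckpoint.
                  self waitUntilIdleLongEnough]
              ifFalse:
                  [self doSomeUnitOfWork.
                  self secondsSinceLastCheckpoint > 600
                      ifTrue: [self checkpoint]]]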

Ensuring "enough resources" is a "hard problem." ("Here's a Turing 
machine," said the professor to the student. "Please make sure there's 
enough tape in it to run this program. But don't run the program 
first.")  I think it's pretty clear that I feel that asking users to 
pick limits (configure the tape) isn't going to get you anywhere.

>
>> While I certainly don't feel that "nobody does it that way" is a 
>> valid argument that something should not be done, I do feel it is 
>> instructive to note that, while there are many projects to build 
>> distributed systems that allow resources to be shared among 
>> computers, the opposite does not appear to be true (e.g., hardware 
>> systems that segregate or cap resources).
>
> But the physical "hard" limits of computer systems are 
> indistinguishable from "soft" (administrated) limits. There is no 
> difference observable by any piece of software of any kind.
>
>> It's rather suspect that the original project spec of how to limit 
>> resource use on a single processor
>
> My example below was about a *single computer system* with multiple 
> CPUs and tons of time-shared terminals for use by the students.
>
>> should come out of a problem in distributed computing, which by 
>> definition is an attempt to gain overall system power and access by 
>> sharing resources between computers. There's a good heuristic: Don't 
>> create a feature requirement that contravenes the overriding project 
>> goal!
>
> The project goal is that no student can crash the university's 
> computer system(s) and also cannot dominate available resources. 
> Every student must be given the compiler, disk space, etc, in order 
> that they can work and can produce their malicious code (by chance or 
> by using their free will). This is the (typical) situation when you 
> give Squeak to a user (regardless of the institution and of the 
> application).
>
>>>
>>> - http://www.unibw.de/
>>>
>>> on a B7800/B7900, the successor of the B5000.
>>>
>> Thanks for making my point!
>
> NP.
>
> /Klaus
>
>

-- 
Howard Stearns
University of Wisconsin - Madison
Division of Information Technology
mailto:hstearns at wisc.edu
jabber:hstearns at wiscchat.wisc.edu
voice:+1-608-262-3724



