[Message by Fred, reposted to e-lang, users@mozart, and squeak-e by MarkM. I wanted to be sure y'all saw this. --MarkM]
Toby,
Three weeks ago Mark Miller, Jonathan Shapiro, Peter Van Roy, and I submitted a paper for POPL05 that we regard as a first step in the direction you're pointing to. It does not yet deal with _automatic_ mapping of policy specifications onto relied-upon abstractions of behavior, but it proposes a formalism for expressing how relied-upon ("trusted") entities reduce their authority-providing behavior when interacting with other entities, and for examining the resulting boundaries on authority propagation. We called it "Authority Reduction Systems" in the paper, and we derived its mechanisms and features from an analytical comparison between H.R.U. Protection Systems and Take-Grant Systems.
I consider your remark to describe (one of) the next step(s) to be taken now: 1) build tools that prove the "safety properties" of an arbitrary configuration containing entities whose behavior restrictions are (partly) known or relied upon; 2) build tools that propose the injection of reliable entities at strategic places in a configuration, to assure a given set of "safety requirements" for a given class of configurations.
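To make step 1 concrete, here is a toy sketch (my own simplification, not the paper's formalism — the entity names and the propagation rule are invented for illustration): entities hold capabilities to other entities, an *unrestricted* entity forwards everything it holds to every entity it can reach, and a *restricted* (relied-upon) entity forwards nothing. A safety property then asks whether some entity can ever come to hold a given capability:

```python
# Toy safety check for a capability configuration (illustrative only).
# caps: dict mapping each entity to the set of entities it holds
# capabilities to. Restricted entities never propagate authority.

def can_obtain(caps, restricted, subject, target):
    """Can `subject` ever come to hold a capability to `target`?"""
    caps = {e: set(s) for e, s in caps.items()}
    changed = True
    while changed:  # iterate to a fixpoint of capability propagation
        changed = False
        for e, held in list(caps.items()):
            if e in restricted:
                continue  # relied-upon entities forward nothing
            for peer in list(held):
                # e forwards everything it holds to each peer it can reach
                new = held - caps.setdefault(peer, set())
                if new:
                    caps[peer] |= new
                    changed = True
    return target in caps.get(subject, set())

# Alice holds caps to Bob and Carol. If Alice is unrestricted, Carol can
# end up holding a capability to Bob; restricting Alice prevents it.
caps = {"alice": {"bob", "carol"}, "bob": set(), "carol": set()}
print(can_obtain(caps, restricted=set(), subject="carol", target="bob"))      # True
print(can_obtain(caps, restricted={"alice"}, subject="carol", target="bob"))  # False
```

The real problem is of course much richer (Take-Grant distinguishes take, grant, create, and remove rights, and behavior restrictions are finer-grained than "forwards nothing"), but the shape of the question — a reachability/fixpoint computation over a configuration — is the same.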
I hope we will then also be able to translate arbitrary security paradigms into such "classes of initial configurations", with a minimal set of initial conditions and reliable entities placed at strategic points to enforce the policy. (I'm not sure I'm making myself clear; I suggest you read the paper for more details.)
I've just started working toward such a tool, using techniques from constraint programming that are relatively new to me (all help is very welcome!). I think it is possible to build a tool that can find the minimum number of reliable entities to inject, the minimum set of restrictions each of them should respect, and the best places among the other agents to install them, so as to guarantee the requirements of an arbitrary security policy.
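The search problem itself can be sketched in miniature (brute force over subsets, not real constraint programming; the graph, the "forbidden flow" notion, and the rule that a restricted intermediary stops propagation are all my own simplifications for illustration):

```python
# Toy sketch: find a smallest set of entities to make reliable
# ("restricted") so that no forbidden authority flow remains.
from itertools import combinations

def leaks(edges, restricted, forbidden):
    # edges: who can pass authority to whom; restricted intermediaries
    # stop propagation. A (source, sink) pair in `forbidden` is a leak
    # if sink is reachable from source through unrestricted entities.
    def reachable(src):
        seen, frontier = {src}, [src]
        while frontier:
            node = frontier.pop()
            if node != src and node in restricted:
                continue  # a relied-upon entity stops the flow here
            for nxt in edges.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen
    return any(snk in reachable(src) for src, snk in forbidden)

def minimal_restriction(edges, forbidden):
    entities = sorted(set(edges) | {v for vs in edges.values() for v in vs})
    # Try subsets in increasing size: the first that blocks every
    # forbidden flow is a minimum-cardinality solution.
    for k in range(len(entities) + 1):
        for subset in combinations(entities, k):
            if not leaks(edges, set(subset), forbidden):
                return set(subset)

edges = {"a": ["b"], "b": ["c", "d"], "c": ["e"], "d": ["e"]}
print(minimal_restriction(edges, forbidden=[("a", "e")]))  # {'b'}
```

Brute force like this explodes combinatorially, which is exactly why constraint programming (propagation and intelligent search over the space of restrictions and placements) looks like the right tool for the real version of the problem.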
You can find our paper "Authority Reduction in Protection Systems" on the MILOS project site: http://renoir.info.ucl.ac.be/twiki/bin/view/INGI/MILOSProject (last entry under the topic "Capability Theory")
All comments on the paper are very welcome on this mailing list. I hope it will point you in the right direction.
cheers,
Fred.
-------------------
Fred Spiessens
UCL Louvain-la-Neuve, Belgium
http://www.info.ucl.ac.be/people/fsp/fred.html
-------------
you're invited to:
-------------------------
the Second International Mozart/Oz Conference (MOZ 2004)
Charleroi, Belgium, Oct. 7-8, 2004
http://www.cetic.be/moz2004
-------------------------------------------------------------
On Aug 11, 2004, at 3:22 AM, Toby Murray wrote:
In my reading of the literature on capability systems and implementations, I'm yet to find any work that deals with the automatic mapping of an abstract policy specification (in whatever appropriate paradigm) to rules regarding capability propagation between entities, and to trusted abstractions built over the base system.

While it has recently been explicitly recognised in the literature (at least from my understanding) that trusted abstractions enforce security as well (and therefore are an embodiment of the policy) -- with the decisions regarding distribution of capabilities between entities being the other embodiment of policy -- it appears that we'll have to be able to build these abstractions and generate propagation rules automatically that can be enforced by trusted code, if capability systems are to become practical. Perhaps capability systems make this problem harder because they allow almost arbitrary security paradigms (in which any policy must be framed) to be mapped onto the base system. (e.g. we can do RBAC or Bell-LaPadula if we want to build the right abstractions and come up with the right rules etc.)

We have seen very small examples here and there (e.g. indirection for temporal revocation, and the example used by Shapiro et al. to show that the *-property can be enforced), but all of these must be crafted by a human who "knows" that the abstraction and associated rules actually embody and enforce the abstract policy. Has any work been done in this area and if so could someone point me in the right direction?
Thanks, Toby
-- Toby Murray Software Engineer Advanced Computer Capabilities Unit Information Networks Division DSTO, Australia
IMPORTANT: This e-mail remains the property of the Australian Defence Organisation and is subject to the jurisdiction of section 70 of the Crimes Act 1914. If you have received this e-mail in error, you are requested to contact the sender and delete the e-mail.
cap-talk mailing list cap-talk@mail.eros-os.org http://www.eros-os.org/mailman/listinfo/cap-talk