Hi Kyle,
From: Kyle Hamilton Sent: Tuesday, October 17, 2006 11:26 PM
Ron,
'large number of tasks' is an understatement. :) The question is this: which validation are you looking for, first? Two (well, three, really, but only two classes of) validations have been mentioned, and my question relates to which of them resources will be allocated to -- which, in turn, defines what should go on the task list and with what priority.
FIPS 140-2 ("Security Requirements for Cryptographic Modules") is, in and of itself, a huge undertaking -- a high-level overview of its requirements includes "make sure that the binary representation of the code cannot be altered", "make sure that calls into the binary representation of the code cannot be diverted", "make sure that, once in FIPS mode, only FIPS-approved algorithms can be used", "ensure that the random number generator has a good enough source of entropy AND is then stirred by a FIPS-approved pseudo-random function", and various and sundry other things. This is the validation that OpenSSL received, that Windows CryptoAPI received, and that those two specific binary versions of Crypto++ received; it is the mandatory validation for cryptographic software that is to be used by the US government. (VM changes will be required for a validatable pure-Squeak FIPS implementation.)
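The "once in FIPS mode, only FIPS-approved algorithms" requirement amounts to a runtime gate in front of the algorithm registry. Here is a minimal sketch in Python; the names (CryptoRegistry, APPROVED) are my own invention, purely for illustration:

```python
# Illustrative sketch of a FIPS-mode algorithm gate. All names here are
# hypothetical; a real module would also enforce this at every entry point.

APPROVED = {"AES", "TripleDES", "SHA-256", "HMAC-SHA-256", "RSA"}

class CryptoRegistry:
    def __init__(self):
        self.fips_mode = False
        self._algorithms = {}

    def register(self, name, impl):
        self._algorithms[name] = impl

    def enable_fips_mode(self):
        # One-way switch: once enabled, FIPS mode stays on until restart.
        self.fips_mode = True

    def get(self, name):
        # In FIPS mode, refuse to hand out non-approved algorithms.
        if self.fips_mode and name not in APPROVED:
            raise PermissionError(name + " is not FIPS-approved")
        return self._algorithms[name]

registry = CryptoRegistry()
registry.register("AES", "aes-impl")
registry.register("RC4", "rc4-impl")

registry.get("RC4")        # allowed before FIPS mode is enabled
registry.enable_fips_mode()
registry.get("AES")        # still allowed in FIPS mode
```

The important design point is that the switch is one-way and the check sits in front of every lookup, so application code cannot reach a non-approved algorithm once the mode is set.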
Yes, this is what I had in mind. My understanding, though, was that following Common Criteria was a good first step, since receiving official FIPS 140-2 validation is a very large task. If CC is not possible without meeting the specifications of FIPS 140-2 (even without having an official validation), then we should start there. The things that you just mentioned are the pieces that I've been considering. If they are the only pieces still outstanding, then we should take them as tasks, assume that we will find a way around them, and move on to the other validation tasks. They will be a long process, and there is no guarantee that what we come up with will be adequate without an official lab helping to set the development agenda for qualifying our software.
As for the specific tasks you mention:
Code isolation and exclusive use of validated code: I agree with you that the right way to accomplish this is to embed the code into the VM and sign that VM, or to use some sort of CRC code checking plus a signed VM. We should try to come up with what we think the options are, and maybe find someone connected to a lab who might be willing to consult for this group; I will take finding that person on as a task. What has me confused is that Java and Eclipse have gained CC validation without FIPS 140-2 validation, so it appears to me that CC is less strict -- and since Java and Eclipse are both VM-type environments, it must be possible for a VM-type environment to gain CC validation. It might be a good idea for me to find someone from one of those teams as well. I will reach out to the Eclipse community to see if I can get some help understanding the differences.
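For reference, the embedded-integrity option works roughly the way the OpenSSL FIPS module's power-up self-test does: a keyed MAC is computed over the code image and compared against a value stored alongside it. A hedged sketch, with a placeholder key and function names of my own invention:

```python
# Sketch of an embedded-integrity self-test, loosely in the spirit of the
# OpenSSL FIPS module's power-up check. The key and names are placeholders;
# a real scheme would be specified with the validating lab.

import hmac
import hashlib

INTEGRITY_KEY = b"placeholder-integrity-key"  # hypothetical fixed key

def compute_mac(image: bytes) -> bytes:
    # MAC over the entire code image (e.g. the VM binary or Squeak image).
    return hmac.new(INTEGRITY_KEY, image, hashlib.sha256).digest()

def self_test(image: bytes, expected_mac: bytes) -> bool:
    # Constant-time comparison, so the check itself leaks nothing useful.
    return hmac.compare_digest(compute_mac(image), expected_mac)
```

The expected MAC is computed once at build time and shipped next to the binary; at startup the module recomputes it and refuses to enter FIPS mode on a mismatch. A signed VM pushes the same idea one level down, with a public-key signature instead of a shared-key MAC.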
PRNG: I plan to work on Bruce's FORTUNA implementation for Squeak.
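For anyone unfamiliar with it, Fortuna (Ferguson and Schneier) runs a block cipher in counter mode under a key that is reseeded from entropy pools and re-keyed after every request, so that a compromise cannot recover earlier output. The sketch below is a deliberately simplified single-pool illustration that substitutes SHA-256 counter mode for the AES generator, so it shows the structure only and is not the real construction:

```python
# Greatly simplified, Fortuna-style generator sketch. Real Fortuna uses
# AES in counter mode and 32 entropy pools with a reseed schedule; this
# single-pool, hash-based version illustrates the structure only.

import hashlib

class MiniFortuna:
    def __init__(self):
        self.pool = hashlib.sha256()   # single entropy pool (Fortuna has 32)
        self.key = b""
        self.counter = 0

    def add_entropy(self, data: bytes):
        self.pool.update(data)

    def _reseed(self):
        # Fold the pool into the generator key, then reset the pool.
        self.key = hashlib.sha256(self.key + self.pool.digest()).digest()
        self.pool = hashlib.sha256()

    def random_bytes(self, n: int) -> bytes:
        self._reseed()  # real Fortuna reseeds on a schedule, not per call
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(
                self.key + self.counter.to_bytes(16, "big")).digest()
        # Re-key after every request, so earlier output stays unrecoverable.
        self.key = hashlib.sha256(self.key + b"rekey").digest()
        return out[:n]
```

Bruce's actual implementation will differ; the point is just the pool/reseed/re-key shape the design calls for.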
Common Criteria assurance validation is mandatory for entire systems that are to be used by the US government, and for information processing systems used by financial institutions. It also provides (to my understanding) assurance that is good enough for HIPAA's requirements. One of its requirements is that it use FIPS-validated cryptographic software, but it doesn't stop there -- this validation applies to entire systems and not merely the cryptographic component. In any case, the project eventually will need a FIPS-validated cryptographic component in its architecture
Ahhh, this may be the point that I am missing: are you saying that validation of the individual components needs to meet FIPS criteria?
, be it through using a platform-native library, a smart-card interface, or a pure-Squeak component. (VM changes will also be needed for this validation.)
You can loosen the requirements now, as long as an eye is kept on the long view. If the system is mis-architected, it will require substantial rebuilding of the VM to bring it into validatable conformance with either the Common Criteria or FIPS.
Agreed
This is the reason why I felt that we should bring in a VM hacker now -- as it stands, even plugging in an external FIPS library, there is no way that the system could meet anything but the most basic assurance.
I've started discussing this with Klaus, who agreed to help if he could, but I had to drop the ball because of some other critical commitments. I hope to re-engage that effort soon, and I am very happy that Craig has also volunteered to help. My biggest concern right now is building something that we think is acceptable, only to find that it misses the mark. I do like the idea of brainstorming solutions so that we have things we can present, but I would not want to build any of them without some gut-check validation from people who know.
This is going to be a long-term, many-component project, and I don't envy Krishna the management of it nor you the executive role. The least that I and anyone else can do is help you manage it. This includes making sure that every team knows at the outset what the eventual parameters will be, so no one codes themselves into a corner.
I thank you very much for your participation; I know that you will be an extremely valuable part of the team. My only concern is that we need to take this one step at a time in order to make sure that everyone is on the same page. Nothing should be assumed, and no steps should be skipped. I think that it is very important to flesh out the big picture, but I want to focus as quickly as possible on manageable detail tasks. As we move forward we need feedback on the big picture, but I don't want the big picture to get in the way of actual progress. That is why I felt that the Validation Officer position was needed and why it is so important: someone needs to manage the big picture.
A problem exists with loosening the documentation requirements: if it is left too long, the criteria simply cannot be met. Krishna suggested a Protection Profile as a target ("Single-Level Operating Systems in Medium Robustness Environments PP"), which requires that the system meet EAL 4 with some augmentations. That carries requirements of its own, including partially-automated configuration management (I'm thinking that Monticello may be good enough for this, though I have not looked at it or the CC in sufficient detail to state so definitively), which demand that only authorized changes can be made to any configuration item.
I quote paragraph 217 from the Common Criteria v2.3 Part 3 document:
217 EAL4 also provides assurance through the use of development environment controls and additional TOE configuration management including automation, and evidence of secure delivery procedures.
All of the relevant information is contained in the CC v2.3 documentation (the administrative requirements are contained in part 3). Suffice it to say, though, that one of the requirements is that no developer should be able to insert unauthorized or malicious code or information undetected -- and since the profile includes that Administrator and User documentation will be created and delivered, those documents must be kept under Configuration Management as well.
I agree with these points, and agree that they become very important once we freeze development and identify a platform that will be used for validation. Since we are nowhere near ready for this freeze, controlling the environment is not needed yet. This is how the OpenSSL validation worked: once they started, they picked a version and set it up for the lab, and from that point on any changes needed to be very closely tracked. I've done FDA documentation and I know the mantra: "A CHANGE IS A CHANGE IS A CHANGE". Even code COMMENTS needed to be documented in the journals for the inspectors. So once we decide that we can pass CC, we will have to set this up and strictly follow all of the guidelines under the supervision of the lab.
If a wiki is used to write and edit those documents, that wiki must be able to ensure that only authorized people can modify them. It must also be able to backtrack the various changes to find when any given piece was added, removed, or altered, and who did the alteration of the documentation at that time.
Again, only once the document is presented in its final form.
I apologize for the length of this, but I do want to ensure that everyone is on the same page and understands the rationale behind some of my statements.
No need to apologize; I very much appreciate your contribution and encourage you to participate as much as possible. Having you on the team will help a great deal towards our goal of validation! Thank you!
You and/or Krishna are going to make decisions as to the exact procedures that the team and project are going to use -- however, I'm an agoraphobic bachelor, and I thus have more time to research the requirements in depth than you two do. This suggests that I should do so, and make recommendations based on what I have found.
Your contributions and research are very much appreciated. Sorry about your phobias. Are you working towards facing those fears?
I liken this to the role of a research librarian -- sift through mountains of data, to provide a concise report. As I said, I defer to you and Krishna -- but if you want to know the rationale behind something I suggest, I will quote you book, chapter and paragraph of everything relevant that I have found so that you can make up your own minds without having to sift through the mountains yourselves.
That will be very helpful for us to understand!
So, to sum it all up: One of the tasks that needs to be completed is to set up an authenticated configuration management system before the user and administrator documentation is written. As I said, the current wiki system will not be sufficient for the long term, but there's a huge mountain of other tasks as well. Since no administrator or user documentation is being created at this point, the wiki does not need to move to a positive identification system yet. Not in the short term, and probably not in the medium term. This doesn't mean that the eventual need shouldn't be planned for.
I agree.
(Arguably, design documents and implementation plans should also be entered into CM. Again, though, this is a long-term project, and rough ideas probably don't need to be.)
-Kyle H
Thanks Kyle,
Ron Teitelbaum