We believe that it is practical to protect information infrastructure by building critical systems out of a relatively small number of components with well understood properties and combining those components into composites that work together well and retain desired properties. Our belief is based on experience in related fields, like computer engineering, where we build up high reliability systems by combining components with known reliability properties to form overall composites with desired system reliability. While reliability is a relatively simple measure, a considerable amount of theoretical and experimental work was required to understand how to combine components into more reliable composites. We plan to take a similar approach to understand how to combine components with identifiable protection properties into composites with high assurance.
To do this, we will identify desirable protection properties, create sets of components with measurable quantities of those properties, develop methods for analyzing and generating composites to attain more desirable system properties, and develop methodologies for demonstrating and assuring these properties. We will prototype components and composites so that we can conduct experiments to provide the evidence and measurements required for this approach to work, demonstrate the technical and practical feasibility of our approach, and provide usable prototypes of real value on an interim basis while our work is underway.
The basic methodologies we will use for this research have been demonstrated to work. However, these methodologies have not been applied to this area or in this manner. We understand well that information protection is a far more complex area than reliability and that others have worked and continue to work on composition-based methods for providing high assurance. We believe that our approach is unique and that it complements the work of others in this area. We are using tried and true methods to give our efforts the highest likelihood of success, but this is still a risky research venture that will produce substantial new knowledge and publishable results. This effort is also sufficiently different from other approaches that it has the potential for creating the sorts of breakthrough advances that this area needs.
For clarity, within this proposal, we will use the following terms and meanings.
Protection: Keeping from harm.
Component: By this we mean a mechanism with an identifiable function. Typically we use component to indicate a software program such as a web server or a device driver, but sometimes it might apply to a piece of hardware.
Composite: By this we mean an entity made up of distinct components: a structure in which two or more distinct, structurally complementary components combine to produce structural or functional properties not present in any individual component. For example, a composite might be a processor connected to memory and input and output functions to form a computer, or a web server combined with an operating system and a computer to form a network node.
Properties: By this we mean something we can measure. For example, the technical term availability as defined in the fault tolerant computing literature is defined as uptime divided by uptime plus downtime. Since we can measure time and associate measurable characteristics with uptime and downtime, we can measure availability and it is therefore a property.
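The availability formula above can be expressed as a short calculation. The sketch below is illustrative only; the uptime and downtime figures are hypothetical, not measured values.

```python
def availability(uptime: float, downtime: float) -> float:
    """Availability as defined in the fault tolerant computing
    literature: uptime divided by uptime plus downtime."""
    total = uptime + downtime
    if total == 0:
        raise ValueError("no observation time recorded")
    return uptime / total

# Hypothetical figures: 8,742 hours up and 18 hours down over a year.
print(round(availability(8742, 18), 4))  # 0.9979
```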
High Assurance: By this we mean a higher than average certainty that some property or properties hold or do not hold with respect to a component or composite. This implies that some measurement of certainty is in use and that it has been applied. For example, we may be more certain that one component has a property than another component because we have tested one component more thoroughly than the other and found that it displays no flaws. The more tested component would then be a higher assurance component than the less tested one.
Coverage: The portion of a space explored or dealt with. In the testing literature, coverage is defined as the portion of the total space that is explored by a set of tests. In the broader sense we might talk about coverage in other terms. For example, we might reason about how well a set of properties of a composite is covered by a set of properties associated with its components under a particular composition method. Similarly, we might consider how well a particular measurement method covers the metric space over which it is being used to demonstrate assurance properties.
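The testing-literature sense of coverage can be sketched directly, under the simplifying assumption that the fault space is finite and enumerable; the fault identifiers here are placeholders.

```python
def coverage(fault_space: set, detected: set) -> float:
    """Portion of a fault space exercised by a test set:
    |detected faults in the space| / |total fault space|."""
    if not fault_space:
        raise ValueError("empty fault space")
    return len(detected & fault_space) / len(fault_space)

faults = {"f1", "f2", "f3", "f4"}     # hypothetical enumerated fault space
found_by_tests = {"f1", "f3"}         # faults the test set exercises
print(coverage(faults, found_by_tests))  # 0.5
```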
Over the years, a great deal of money has been expended on the ongoing repair, replacement, and upgrading of security-critical elements of large information infrastructures. Two typical examples are the sendmail and bind servers operating in their respective environments, both of which are critical to the successful operation of the Internet, and both of which have had recurrent security problems over many years resulting in large-scale vulnerabilities and requirements for widespread emergency upgrades. By this point, the cost associated with repairing and upgrading these software packages has far outweighed the cost that would have been required to build higher assurance versions of them. If it is true that these infrastructures are critical to national security, it is the government's role to provide for the common defense by addressing these problems. We believe that the critical elements required in order to achieve higher assurance in critical infrastructures are (1) a clearer understanding of what properties we want components and composites to have and (2) the ability to measure these properties effectively and engineer components and composites to have these properties to measured levels of assurance.
We propose to address the requirement for building a stronger and more resilient base for critical infrastructures through a combined research, development, and deployment effort focused on (1) understanding what properties we want to attain and how to build components with higher assurance of these properties, (2) understanding how to test for these properties with high coverage, (3) understanding how to build high assurance composites out of components with different assurance levels, and (4) integrating higher assurance components and composites into infrastructures to provide effective infrastructure protection.
The proof of concept for this approach already exists. Examples demonstrated over the past several years include a high assurance limited functionality web server, a high assurance gopher server, and a high assurance SMTP receive-only server. Researchers at Sandia National Laboratories and U. C. Berkeley followed up on these initial results with a high assurance authoritative-only DNS server and a hardware implementation of a high assurance web server. Some of these have been integrated onto bootable read-only media to provide higher assurance of initial conditions and rapid restoration of initial state. The results have been higher assurance systems with useful functionality. These examples have stood up to source code scrutiny, and in one case mathematical proof of select properties, and they have undergone extensive testing against defined fault models with specific coverage goals. They have been deployed to a limited extent in commercial use, have been exposed to widespread over-the-Internet and laboratory attacks, and have proven their assurance properties in real use.
These proofs of concept have led us to believe that addressing large-scale protection through this approach will be fruitful, but these proofs of concept are only that. In order for this approach to be meaningfully applied to increasingly higher assurance requirements and a broader set of applications, a combination of research into other properties, development of capabilities for analysis of those properties, improved testing for those properties, theoretical and practical issues in building composites, and the development and deployment of higher assurance prototypes is required.
The objective of this medium-sized research effort will be to create more comprehensive understanding and a high quality set of tools for building and testing high assurance components and composites. Our effort will include work in four areas:
Properties of Components: We will (1) define a set of specific properties of components derived from the notional properties of integrity, availability, confidentiality, and use control which have specific utility to addressing critical infrastructure protection issues; (2) create the knowledge about those properties necessary to understand their utility and their limits; and (3) define desired components based on this understanding of properties. It would be premature to specify components until properties are well understood. The identification of desired components will be done in the context of the desired properties.
Testing for Properties: We will use the existing testing epistemology to (1) create fault models from properties and develop techniques to generate test sets and sequences with defined coverage relative to these fault models; and (2) develop the capability to generate test sets and sequences with defined coverage against properties relative to identified faults. This will include the development of experimental testing and test generation tools and their application to components and composites.
Composites: We will (1) expand on existing composition theory to develop methodologies and mechanisms for deriving and predicting properties of composites from the properties of components and the manner in which they are composed; and (2) develop tools for applying these methodologies and mechanisms to create composites with useful properties. This will include the specific goal of developing some components and methods for composition such that when composed with components not having select properties, the resulting composites will have those properties.
Prototypes: We will (1) build prototype components and composites with some of these defined properties; (2) test those components and composites to known coverage levels using the results of the work on testing; and (3) deploy those composites in networked environments to demonstrate their utility in protection of critical infrastructures.
It is well known that, in general, universal solutions to these problems are undecidable or of very high complexity, and for this reason we are taking a different approach. Instead of seeking out universal solutions, we will be seeking properties and ways to compose components with those properties that provide us with substantial benefits without high complexity. By defining narrow but useful properties of components, we will be able to build useful special purpose components and composites that have these useful properties. By defining fault models reflective of real issues in critical infrastructures, we will be able to establish meaningful coverage in our tests with finite runtimes and storage requirements. In defining composition methods, we believe that we will be able to find properties and composition methods that produce useful composites, even if those components and composites don't all have Turing capability, thus producing useful and analyzable composites. Finally, the systems we build will have well defined properties that, while not necessarily perfect for all situations, may be highly effective for critical infrastructure protection and for many other useful purposes.
Using existing tools and developing new ones where required, we will create a new set of critical infrastructure components and composites. In the process we will both advance the state of the art in secure systems design and develop and deploy operational systems and methods that reduce the likelihood of certain types of large-scale infrastructure failures.
In order to accomplish this, our team will involve: (1) researchers from NSA-recognized Centers of Excellence including the Naval Postgraduate School and the University of New Haven (curriculum already approved, application for the new National Security Program pending March, 2003 approval), (2) technical experts and researchers from security software development and deployment firms including SecurityPosture, (3) critical infrastructure providers, including water and power providers, and (4) military and national security network operations environments from the U.S. Navy and NNSA National Laboratories.
Together, we will (1) do basic research, development, testing, and assurance of components and composites; and (2) deploy those components into existing and planned infrastructures. Patents and copyrights will be generated to provide for efficient protection of the resulting intellectual property. The results will be made available as replacements for the current generation of information infrastructure technology and critical enablers for the next generation by our commercial partners.
While the resources provided for this effort will be spent exclusively on the research, prototype, and testing aspects of this effort, we will leverage the skills, expertise, and position of our industrial partners to assure that our efforts yield direct benefits to real infrastructures and to assure that the properties and testing regimens we create are suited to the purpose and address the issues that systems of today and the future face.
One of the key properties that underlies high assurance is ease of understanding, which applies to both human and automated generation and analysis of components. The notion of building up complex systems from relatively simple components, and the resulting impact on complexity, is well known: analysis of individual components becomes easier, but a composition theory is needed and analysis of the composite becomes an issue. We will focus on the creation of components that are readily analyzable for defined properties. This implies that they will typically be relatively small functional elements.
Our goal is to produce components with properties that are both easy to understand and useful. We intend to develop structured arguments that particular components have particular properties of interest. Formal proofs are not a focus of the proposed work but would certainly fit into this framework. We will do this by first deriving specific properties of interest from notional properties of integrity, availability, confidentiality, and use control. (In this context the use of the term availability is not the specific one defined above for fault tolerant computing, but the more general notion.)
As an example, a property of interest might be that only specified state information and outputs are affected by inputs. This property would preclude things like input overflows that cause content to be placed in areas outside of the buffers identified for their use and outputs to files not identified as affected by a given program.
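The property described above can be illustrated with a minimal sketch: a recorder that flags any write to state not in the declared set. The class and variable names are hypothetical, and a real analysis would instrument the component itself rather than wrap a dictionary.

```python
class StateGuard:
    """Records every state write and flags writes outside the
    declared set -- a sketch of checking the property that only
    specified state information is affected by inputs."""

    def __init__(self, declared):
        self.declared = set(declared)
        self.state = {}
        self.violations = []

    def write(self, name, value):
        if name not in self.declared:
            self.violations.append(name)
        self.state[name] = value

g = StateGuard(declared={"buffer", "count"})
g.write("buffer", "hello")
g.write("count", 5)
g.write("saved_ip", 0xDEADBEEF)  # models an overflow clobbering undeclared state
print(g.violations)  # ['saved_ip']
```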
As we define these more specific properties, we will be able to start to specify components with these properties that will have functional capabilities appropriate to the design of critical infrastructure systems. We will then produce sets of arguments upon which to base the assertion that a component has or does not have those properties. The basis for asserting properties about components will be delivered with the component to allow further analysis as needed. We also intend to develop a list of such properties along with techniques that can be used to achieve and assure them at low cost.
Expertise Applied: This task requires expertise associated with (1) automated program development and proof of program properties, provided by Dr. Blaine Burnham, who has worked for many years in automatic programming, and Dr. Fred Cohen, who developed some of the previous example components and helped prove properties thereof; and (2) understanding of desirable and attainable properties, provided by Dr. Blaine Burnham, who has performed extensive work for industry and government in identifying properties of security components; Dr. Fred Cohen, who has done extensive consulting in this area for government and industry, specifically in the critical infrastructure area; Chet Uber, who provides components with specific security properties to industrial partners; and Dr. Bret Michael, who has specific knowledge of needs associated with Naval and other military networks. Together they will provide the necessary set of expertise to both identify and implement systems capable of demonstrating and analyzing these properties. These efforts will be supplemented by graduate students at NPS, UNOmaha, and UNH. Dean Johnson will provide overall project supervision and integration with critical infrastructure national security perspectives as well as facilitating graduate assistants and other support functions.
Testing for properties is critical to assuring those properties and is charged with identifying the extent to which we can be certain that components have the properties asserted about them. While it would be nice to be able to formally prove that a set of components interacting in a complex environment have desired properties, the complexity of such proofs and the realities of the world are such that for substantial environments involving large composites, such proofs are infeasible for the sets of properties likely to be of import to critical infrastructure protection. For this reason, we will pursue a testing approach to assuring properties with defined coverage.
We will start with the general epistemology of testing and apply it to specific properties generated for components in order to generate fault models associated with properties. For example, the requirement of limited changes to state and output might be partially tested by introducing a fault model based on the mechanism of excessive input altering state variables not in the normal information flow path of inputs. This fault model might then be used to generate a set of tests involving attempts to drive excessive inputs through to state changes not identified. Since this is too complex in the general case, the component under test might be broken into combinational and sequential components with test points placed throughout the component to make testing feasible. Inputs can then be driven through to test points and outputs, and values at test points driven through to other test points and outputs, so that the combined coverage far exceeds the coverage attainable by input sequences alone.
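A generator for the excessive-input fault model described above can be sketched as follows. The boundary lengths chosen and the single repeated symbol are illustrative assumptions; real test sets would also vary symbol content and drive values through internal test points.

```python
def overlength_tests(limit: int, symbol: str = "A"):
    """Generate boundary-length inputs for an excessive-input fault
    model: just below, at, just above, and far beyond the declared
    input-length limit. A sketch of fault-model-driven test generation."""
    lengths = [limit - 1, limit, limit + 1, 2 * limit, 10 * limit]
    return [symbol * n for n in lengths]

tests = overlength_tests(limit=64)
print([len(t) for t in tests])  # [63, 64, 65, 128, 640]
```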
Coverage of such tests can also be identified. For example, by placing measurement points within hardware or software we can observe internal state changes for representative subsets of specific classes of long input sequences. We might then derive a statistical likelihood that other input sequences could result in undesired states. It is already well known from testing theory that for large classes of sequential machines, high coverage cannot be attained in feasible length test sequences without appealing to composite properties. For example, detecting all stuck-at faults in sequential circuits can be achieved with dramatically reduced test set length by placing sensors and actuators within the circuit under test. This will be part of the work on composites identified below.
Formal test plans will be generated by applying the fault models with identified coverage to generate test sets of appropriate length and makeup for the need. Automated test generation methods will allow tests to be regenerated and verified independently to the desired degree of coverage. Imperfections identified by testing will be fed back both to the component and composite designers and to the properties researchers to identify possible additional properties of import. After corrections are made, new tests can be generated and run for desired coverage.
Expertise Applied: This task requires expertise associated with (1) hardware and software testing, which will be provided by Dr. Fred Cohen, who has extensive experience in automated test generation and fault modeling, and Dr. Bret Michael, who has expertise and experience in software reliability engineering; and (2) industrial security and software quality testing models, which will be provided by Jon David, who has performed extensive testing and supervision of testing for security properties for industry and government. These efforts will be supplemented by graduate students at NPS and UNH. Dean Johnson will provide overall project supervision and integration with critical infrastructure national security perspectives as well as facilitating graduate assistants and other support functions.
We are especially interested in properties of composite systems. For example, one property with obvious security relevance is the set of files that a component reads and writes. It has been shown that in composite systems, transitivity of file reads and writes can result in universal access, and time transitivity can cause analysis of sequential protection settings to become undecidable. These fairly simple examples show the need for both theoretical and practical methods associated with composites.
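The read/write transitivity noted above can be made concrete with a small closure computation: component A can influence component B whenever A writes a file that B reads, and influence propagates transitively. The component and file names below are hypothetical.

```python
def influenced_by(component, reads, writes):
    """Transitive closure of information flow through shared files.
    reads/writes map component name -> set of file names. Returns
    the set of other components the given component can influence."""
    reached = {component}
    changed = True
    while changed:
        changed = False
        for a in list(reached):
            for b in reads:
                if b not in reached and writes.get(a, set()) & reads[b]:
                    reached.add(b)
                    changed = True
    reached.discard(component)
    return reached

reads = {"editor": {"doc"}, "mailer": {"spool"}, "daemon": {"doc"}}
writes = {"editor": {"spool"}, "mailer": set(), "daemon": {"spool"}}
print(sorted(influenced_by("editor", reads, writes)))  # ['mailer']
```

Even this simple model shows how pairwise read/write sets compose into system-wide access that no single component's property list reveals.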
We anticipate that many components will be controlled by configurations. A composite might then have derived properties involving both the properties of components and the states of their respective configurations. For example, the fact that one component might have the ability to write configurations or storage areas of other components significantly complicates the situation for composites, possibly making this problem undecidable. Special types of composites, for example those wherein no component can affect the configuration or memory state of another component, might be of particular interest. The creation of components that enforce composition rules could be achieved so as to allow high assurance composites to be built even when some components are of lower assurance.
The components, their properties, the arguments that support them, the properties of compositions, the arguments that support them, and the testing properties and capabilities will all be useful to the recipient. The obvious value is the ability to use the component, to know to an identified level of certainty that it has the given properties, and to understand why. For example, a recipient wishing some additional property might use additional components to form a composite that has the desired properties. Alternatively, the knowledge of missing properties might lead them to know when to use a detection and response component to offset the limitations of particular components. The arguments and components are also valuable as examples of how to build other components with similar properties and corresponding arguments.
Some components may also support extension mechanisms, such as scripting languages. In order to preserve important properties of those components with the extensions it will typically be necessary to ensure that the extensions have similar or related properties. In this case it is important that the person supplying the extension know what properties are required of it or have components that assure properties of the composites given the increased complications associated with the scripting language.
Finally, we plan to build some components for the specific purpose of creating composites with desirable properties even though some of the components do not or are not known to have those properties. A simple example would be a component that provides a security feature like limiting input line lengths and symbol sets. When composed with a program with unknown properties, this might help to mitigate classes of input buffer overruns and syntactic exploits for the specific application and environment in use. The testing mechanisms might then be applied to provide high coverage for the composite relative to input overrun faults, even though the otherwise unknown component could not otherwise be tested with substantial coverage under this fault model.
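A wrapper component of the kind just described can be sketched in a few lines. The length limit and allowed symbol set here are illustrative choices, not requirements derived from any particular application.

```python
import string

ALLOWED = set(string.ascii_letters + string.digits + " .-@")

def limit_input(line: str, max_len: int = 256, allowed=ALLOWED):
    """Wrapper component sketch: enforce a line-length limit and a
    restricted symbol set before input reaches a component with
    unknown properties, mitigating classes of input overrun and
    syntactic faults."""
    if len(line) > max_len:
        raise ValueError("input exceeds length limit")
    bad = set(line) - allowed
    if bad:
        raise ValueError(f"disallowed symbols: {sorted(bad)}")
    return line

print(limit_input("user@example.com"))  # passes through unchanged
```

Composed in front of an unverified program, a filter like this gives the composite a bounded-input property the inner component lacks, which is exactly what makes the composite testable with high coverage under the input-overrun fault model.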
Expertise Applied: This task requires (1) expertise in the mathematics of program composition, which will be provided by Dr. Blaine Burnham, who will augment existing knowledge of program proof and generation techniques in this area, and Dr. Fred Cohen, who has taught in this area from existing literature; (2) expertise in current commonly used composition components such as wrapper technologies and virtual machine environments, which will be provided by Dr. Fred Cohen and Dr. Bret Michael, who have experience in the design, development, and application of these technologies, and Jon David and Chet Uber, who have extensive experience in applying these technologies in industrial applications; and (3) Dr. Blaine Burnham, who has extensive experience in the development of software for automating the sorts of processes required to perform analysis of these sorts of composite systems. These efforts will be supplemented by graduate students at NPS and UNH. Dean Johnson will provide overall project supervision and integration with critical infrastructure national security perspectives as well as facilitating graduate assistants and other support functions.
We will prototype components, testing capabilities, composites, and the mechanisms to help automate aspects of these activities as part of this effort. The result will be a set of infrastructure support components and composites that will be placed into existing infrastructures on a test deployment basis. They will be operated as part of normal infrastructures and, as they prove their effectiveness, will ultimately replace lower assurance implementations. While no funding will go directly to the implementation of these components and composites in infrastructures, the development process will be critical to realizing real benefits in real infrastructures. Testing, for example, will be done on prototyped components so that we have high assurance not only that our fault models are covered properly, but also that those models realistically cover the space of critical properties for real infrastructure elements.
This activity will be facilitated by our industrial partners who work in critical infrastructure areas.
Expertise Applied: This task requires (1) expertise in the development and deployment of high assurance prototypes of these sorts which will be provided by Dr. Fred Cohen, Dr. Bret Michael, Dr. Blaine Burnham, Chet Uber, all of whom have extensive experience in developing and applying prototypes using similar technologies; and (2) application environment integration which will be provided by Jon David and Chet Uber, both of whom have extensive experience in applying these sorts of technologies to existing infrastructures. Dean Johnson will help to integrate this effort with additional providers and organizations to which he has unique access through the National Security Program and form the interface with National Laboratories for experiments with their researchers and facilities.
Many other efforts have been undertaken over a very long time to work toward building high security systems. The whole area of trusted systems has been directed this way for decades. This particular effort takes a different approach to the problem than these other efforts and is intended to complement them by providing alternative routes to achieving overall objectives.
An excellent example of related work is the Survivable Systems and Networks work done for DARPA by Neumann et al. and their present effort in Architectural Frameworks for Composable Survivability and Security. While they work on general theories for composability, design principles, and design methodologies for provable architectures, our effort is oriented toward specific properties of components and composites and the ability to test for those properties against fault models to a measurable level of assurance. By applying our results to their outcomes, we may be able to automatically generate tests of their components for specific properties with known assurance. Applying their results to our efforts might allow us to build better composites based on their more general results or improve the efficiency of our tests based on portions of the test space we can rule out due to their analysis. Their model-based checks of source code for malicious content might allow them to detect Trojan horses by appearance, while ours will detect them by their externally observable properties. These diverse approaches to high assurance are key to overall protection just as biological diversity is key to surviving diseases. The difficulty of creating effective attacks against systems is reduced synergistically by such complementary efforts.
Another key area where we will leverage results is the hardware and software testing community. In the hardware and software testing literature, the goal is often stated in terms of reducing the frequency of failures to below a certain rate. For example, in hardware testing, the dominant goal is to achieve a high mean-time-to-failure of the overall system. In software testing, the goal is to reduce the failure rates to a level where the average user rarely experiences one. It is generally assumed that once a fault is found, it will not be exercised as often because people will avoid it, and that faults are relatively independent of each other. While both hardware and software testing fields have their merits, both also fail to address protection testing issues. Attackers seek out faults instead of avoiding them, the frequency of faults being exercised increases dramatically as soon as they are found, and faults are exploited to induce failures that include the creation of additional faults. The underlying theories behind hardware and software testing are still valid, and we will leverage them by changing the economic assumptions surrounding their application to create test sets with coverage better suited to the requirements of information protection.
Our efforts will also help to build intellectual capacity in the information protection arena because it will involve professors and graduate students at universities that specialize in these areas. Graduate students in Computer Science, Computer Engineering, Information Systems, Forensic Sciences, and National Security with backgrounds ranging from economics to hardware design to software testing will work together on facets of this research and we expect that several Masters theses and at least one Ph.D. will emerge from this program as well as several journal articles and conference papers.
From a practical standpoint we are also well positioned to have our results applied in existing infrastructures. The affiliation between the UNH National Security Program and Sandia National Laboratories will provide the means for results to be leveraged in their National Security Missions. The affiliation of NPS with Naval Operations will provide the means for results to be leveraged with other efforts to protect Naval networks. Our team members who work in security systems design, together with our critical infrastructure providers, will provide the ability to place prototypes in parallel with existing lower assurance systems, and ultimately the ability to replace existing systems with higher assurance versions as part of the normal replacement cycle.
This effort will pragmatically examine the properties of components associated with critical infrastructure protection and both advance the state of the art in identifying these properties and extend the techniques available for applying these properties. This will be achieved by creating the necessary capabilities to analyze properties and test components and composites for those properties. It will extend testing theory by adding new fault models and methods for fault analysis, test generation, coverage analysis, and economic models that reflect security concerns more realistically. It will also provide ways to apply these techniques to composites involving components without many of these properties. This will help to form the basis for the more general development of trusted systems from combinations of untrusted and trusted components; something of vital import if the current model of industrial development and security enhancement is to proceed.
The plan for this effort is to operate at a steady funding level of $984K/year over a 4-year period. The work will be divided among 4 tasks, scheduled to proceed as follows:
Work on properties, and on how to create components that embody those properties while integrating usefully into functional composites, will start immediately. This effort will result in the definition of sets of useful properties appropriate to critical infrastructure operations, produce limited mathematical analyses of these properties, and generate tools for the analysis of properties.
Work on testing will start as soon as properties have been well defined. The initial work will be on testing epistemology, followed rapidly by initial test generation tool development and by testing of the test generation tools themselves. These tools will be expanded for testing properties of composites, with automated test generation as an eventual result. These test generation tools will also be applicable to other testing requirements of information protection.
Work on composites will start more slowly and build up as we gain more knowledge of components and have actual components available to combine into composites. This work will include composition mathematics, tools for working on composition and for composing systems, and some design automation tools for the analysis and design of composites.
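The reliability analogy on which this composition mathematics is modeled can be made concrete. As a toy illustration only (not one of the proposed tools, and with hypothetical function names), classical series/parallel reliability composition computes a composite's reliability from the reliabilities of its components; we expect composition mathematics for protection properties to play an analogous role for protection metrics.

```python
# Toy illustration of classical reliability composition, the analogy behind
# our composition mathematics for protection properties. All names here are
# hypothetical; this is a sketch, not one of the proposed tools.

def series(reliabilities):
    """A series composite works only if every component works."""
    r = 1.0
    for x in reliabilities:
        r *= x
    return r

def parallel(reliabilities):
    """A parallel (redundant) composite fails only if every component fails."""
    f = 1.0
    for x in reliabilities:
        f *= (1.0 - x)
    return 1.0 - f

# Three 90%-reliable components: a series composite is weaker than any one
# component, while a parallel composite is stronger.
print(round(series([0.9, 0.9, 0.9]), 3))    # 0.729
print(round(parallel([0.9, 0.9, 0.9]), 3))  # 0.999
```

The open question our research addresses is whether protection properties admit comparably simple composition rules, or whether, as we suspect, richer mathematics is required.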
Work on prototypes will begin as soon as we have a clear understanding of how functional requirements will be partitioned in order to assure properties of components. From that time forward, a series of prototypes will be created to meet the testing and experimental needs of the rest of the effort and to supply test components and composites to early adopters of the technologies.
The following is a more detailed breakdown of costs and deliverables: (details removed)
Q01: Quarterly report and PI meeting
  - Initial proposals for critical properties list
Q02: Quarterly report and PI meeting
  - List of critical properties defined
Q03: Quarterly report and PI meeting
  - Initial property analysis and understanding
Q04: Annual report and PI meeting
  - Identification of initial components appropriate to needs
  - Simple prototypes for sample analysis and testing completed
  - Initial results on composite properties and methods
  - Sample prototypes with known faults in place for testing the testers
Q05: Quarterly report and PI meeting
  - Identification of additional components and of components designed for property enforcement
  - Additional property analysis understanding and initial tool demonstrations
  - Testing epistemology report and initial fault models
Q06: Quarterly report and PI meeting
  - Fault models worked through and test generation started
  - Property analysis tool prototype applied to component prototypes
  - Initial versions of first components completed
Q07: Quarterly report and PI meeting
  - Initial composition theory work completed
  - Initial test generation designs for identified properties
Q08: Annual report and PI meeting
  - Property analysis tool applied to prototypes
  - Test generation tool prototypes tested and applied to prototypes
  - First composites prototyped
Q09: Quarterly report and PI meeting
  - Test generation tool applied to prototypes
  - Composite design criteria and prototype tools in place
Q10: Quarterly report and PI meeting
  - First fully tested components released for industrial installation
Q11: Quarterly report and PI meeting
  - Final property analysis tools
  - Composite tools completed and applied
Q12: Quarterly report and PI meeting
  - Papers on property analysis tools published
Q13: Quarterly report and PI meeting
  - First composites fully tested and released for industrial installation
Q14: Quarterly report and PI meeting
Q15: Quarterly report and PI meeting
  - Final test generation tools
  - Several functional composites and components in industrial operation
Q16: Final report and PI meeting
  - Final composite property and tool papers completed
  - Select composites and components in industrial operation
This effort focuses on a high payoff, high risk research area that has shown great promise but has not been adequately explored to date. It involves highly skilled and seasoned researchers with long track records of major advances in their fields, many successful efforts in their combined histories, and the right skill sets to accomplish this work. Along the way, it will produce advances in understanding of protection properties, properties of composites, protection testing, and protection metrics that would be justifiable on their own as a basis for funding this effort. As important as any components or composites that might result could be for critical infrastructure protection, the methodologies resulting from our research may prove to be of even greater long term impact to the advancement of information protection.
Although it is a good idea, we weren't specific enough about how we were going to solve all the problems - in other words, we hadn't done the research yet.
Our budget had several problems: (1) the numbers were rounded, not to the penny and looking random enough - apparently they want more detailed fictions; (2) there were no raises for the people - apparently we have to get paid more each year in order to get funded by NSF; (3) the work was being done by faculty, not graduate students - apparently they don't want faculty to do research but rather only to guide graduate students in doing research.
And finally, we were considered underqualified to do the work - which means to me that they think our faculty, researchers with 20+ years of experience, are less qualified than graduate students at other universities.
I just thought you would like to know that it wasn't anything like the content or idea that they didn't like - they thought it was a good idea - they just didn't want us to do it.
The overall summary:
Strengths: The panel felt that the central idea of the proposal was interesting and had some promise.
Weaknesses: The panel was unsatisfied with the development of the proposal's research ideas. The proposers do not yet know which security properties will be required, and do not have a good plan for discovering those properties. They also do not identify candidate secure systems to build from these components, nor do they indicate how these secure systems will constitute a secure infrastructure, as their proposal title promises. The proposers say that sample components and systems have been built and shown to provide strong security, but do not provide references to them. Nor do they provide references in the text to anything else.
Strengths: If the project is successful, the components and sample applications produced might improve security of the Internet. The proposers plan to train graduate and undergraduate students on the project.
Weaknesses: The broader impacts are largely confined to the value of the resulting software, so the degree of that impact is impossible to gauge. Very little funding goes to support graduate students, so the impact there is small.
Strengths: The proposal includes a clear order in which they will proceed.
Weaknesses: A lack of specifics on how any task will be done.
Management plan:
Strengths: The proposers have a schedule with milestones.
Weaknesses: The budget is not realistic. The numbers are all round, and do not increase from year to year. They devote a large amount of money to the PI, who has few qualifications to perform this research, and devote the bulk of their budget to half or full time funding of senior personnel. This level of funding for senior personnel is not obviously required for this research, and is made at the cost of support for graduate students.
There are concerns about the generic nature of the schedule milestones and their lack of content.
Summary rationale for recommendation: The panel feels this proposal is non-competitive and does not recommend it for funding.
The proposed research is ambitious. If successful, the activity will result in a significant advancement of knowledge. However, the investigators need to develop a more convincing research plan. Presentation of initial results in simple examples can be helpful in this regard. [FC - I thought we had several examples from previous efforts of the researchers - like the secure Web server and similar items.]
This proposal suggests a good area of research, but does not bring any new ideas to the table. Nor is the approach fully developed. For example, they have not suggested what example applications they will build from the components. There are no references in the text. Their list of goals and milestones is exceedingly generic. The division of the budget among the tasks seems arbitrary. There is no section describing existing and pending support for the principals. [FC - of course the approach is not fully developed - it is a research proposal! - We don't have existing support - do you only fund those who are already funded? There were references in the text we provided but apparently they did not see them.]
The budget is huge and is almost exclusively devoted to supporting the principal investigators. They have two investigators on 100% salary at over $100,000 per year each. In addition to that, four other senior personnel are allocated 50% time per year. In a budget of nearly $1 million per year, they can only find room for two graduate students each year. The budget is also unrealistic, since it shows no salary increases over the course of five years for anyone. [FC - so NSF doesn't support senior researchers who get paid the enormous sum of $100K per year - even though most professors in these fields at most universities get paid more than this - the money is mostly devoted to supporting the people doing the work - and we aren't getting salary increases - which is true!]
The PIs mention that proof of concept already exists for such components as high assurance limited functionality web servers, gopher servers, SMTP receive-only servers, and authoritative-only DNS servers..., yet they do not provide any references to those research efforts or to the relationship of those research efforts to their own. The research plan appears to be vague. No preliminary findings are given and the research appears to be too preliminary for any component properties to be defined. The team does appear to have the background to be successful. Management plan is well defined. [FC - actually we did provide references - but they apparently did not read these papers - and of course if we had cited any more of our own papers they would have accused us of excessive self-citation. We developed all of these secure servers over the last several years of effort. It's amazing that supposed expert reviewers apparently are unaware of these results even though they were published in major journals and at major conferences.]
Having proposals rejected is not really the thing that bothers people like us. We are used to rejection. The problem we have with such processes is that anonymous people who are getting funded make comments that we feel are unjustified and based on a perspective that we don't understand. Try as I might, I don't understand why a proposal would be rejected because the people working on it are two people with Ph.D.s and experience rather than graduate students. I don't understand how real research can be done if we have to present preliminary results beyond several working examples first. I don't understand how we are supposed to fit all of the things they want into the approximately 4-5 pages of content permitted in an NSF proposal (15 pages total - mostly forms). In fact, I don't understand how any of these comments are at issue at all. I think that if researchers have a good idea - which this apparently is, according to their reviews - and a track record of good research - which, try as you might, you can hardly assert otherwise with respect to the research team here - then the proposal qualifies as a good one. It may be ranked higher or lower according to some other criteria. But having other researchers tell us we are paid too much when they are all paid more, that we must have salary increases when we have agreements with the university that don't include them, or that our budget is unrealistic because it is rounded to the nearest thousand dollars - this is pathetic.
Of course it would be very helpful if we had copies of their previous successful proposals to use as guidelines, and I think that's what the NSF should do: provide full details of every successful proposal - including reviewer comments - to everyone. In this way the rest of us can understand precisely what is expected of a successful proposal so we can prepare successful ones as well. Or there is another approach: I think that all NSF proposals should be reviewed by people who did NOT get funded last year. That way we will get a diversity of views instead of a good old boys' club.