The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive


Cooperating Security Managers: A Peer-Based Intrusion Detection System

Maj. Gregory B. White, Eric A. Fisch, Udo W. Pooch

CSM is designed to handle intrusions rather than simply detecting and reporting them, resulting in a comprehensive approach to individual system and network intrusions. Tests of the initial prototype have shown the cooperative methodology to perform favorably.

Added 2002-07-26

An Evaluation of Software Test Environment Architectures

Nancy S. Eickelmann, Debra J. Richardson

Software Test Environments (STEs) provide a means of automating the test process and integrating testing tools to support required testing capabilities across the test process. Specifically, STEs may support test planning, test management, test measurement, test failure analysis, test development, and test execution. The software architecture of an STE describes the allocation of the environment’s functions to specific implementation structures. An STE’s architecture can facilitate or impede modifications such as changes to processing algorithms, data representation, or functionality. Performance and reusability are also subject to architecturally imposed constraints. Evaluation of an STE’s architecture can provide insight into the modifiability, extensibility, portability, and reusability of the STE. This paper proposes a reference architecture for STEs. Its analytical value is demonstrated by using SAAM (the Software Architecture Analysis Method) to compare three software test environments: PROTest II (Prolog Test Environment, Version II), TAOS (Testing with Analysis and Oracle Support), and CITE (CONVEX Integrated Test Environment).

Added 2002-07-26

Scene: Using Scenario Diagrams and Active Text for Illustrating Object-Oriented Programs

Kai Koskimies, Hanspeter Mossenbock

Scenario diagrams are a well-known notation for visualizing the message flow in object-oriented systems. Traditionally, they are used in the analysis and design phases of software development to prototype the expected behavior of a system. We show how they can also be used in reverse, for understanding and browsing existing software. We have implemented a tool called Scene that automatically produces scenario diagrams for existing object-oriented systems. The tool makes extensive use of an active text framework providing the basis for various hypertext-like facilities. It allows the user to browse not only scenarios but also various kinds of associated documents, such as source code (method definitions and calls), class interfaces, class diagrams, and call matrices.

Added 2002-07-26

Multilanguage Interoperability in Distributed Systems

Mark J. Maybee, Dennis M. Heinbigner, Leon J. Osterweil

The Q system provides interoperability support for multilingual, heterogeneous, component-based software systems. Initial development of Q began in 1988 and was driven by the very pragmatic need for a communication mechanism between a client program written in Ada and a server written in C. The initial design was shaped by language features present in C but not in Ada, or vice versa. In time our needs and aspirations grew, and Q evolved to support other languages, such as C++, Lisp, Prolog, Java, and Tcl. As a result of pervasive usage by the Arcadia SDE research project, usage levels and modes of the Q system grew, and so more emphasis was placed upon portability, reliability, and performance. In that context we identified specific ways in which programming language support systems can directly impede effective interoperability. This necessitated extensive changes to both our conceptual model and our implementation of the Q system. We also discovered the need to support modes of interoperability far more complex than the usual client-server model. The continued evolution of Q has allowed the architecture of Arcadia software to become highly distributed and component-based, exploiting components written in a variety of languages. In addition to becoming an Arcadia project mainstay, Q has also been made available to over 100 other sites, and it is currently in use in a variety of other projects. This paper summarizes key points that have been learned from this considerable base of experience.

Added 2002-07-26

A Reliability Model Combining Representative and Directed Testing

Brian Mitchell, Steven J. Zeil

Directed testing methods, such as functional or structural testing, have been criticized for a lack of quantifiable results. Representative testing permits reliability modeling, which provides the desired quantification. Over time, however, representative testing becomes inherently less effective as a means of improving the actual quality of the software under test. A model is presented that permits representative and directed testing to be used in conjunction. Representative testing can be used early, when the rate of fault revelation is high. Later results from directed testing can be used to update the reliability estimates conventionally associated with representative methods. The key to this combination is shifting the observed random variable from the interfailure time to a post-mortem analysis of the debugged faults, using order statistics to combine the observed failure rates of faults no matter how those faults were detected.
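
The abstract states the idea without its equations. As a rough, hypothetical illustration only (not the authors' model), the Python sketch below shows the bookkeeping shift it describes: each debugged fault's failure rate is estimated post mortem from the executions it was exposed to, the per-fault estimates are sorted so they can be treated as order statistics, and faults found by representative and directed testing are handled identically.

    def post_mortem_rate(failures, tests_while_present):
        # Estimate one debugged fault's failure rate from post-mortem counts
        # rather than from interfailure times (hypothetical bookkeeping).
        return failures / tests_while_present

    def combine(debugged_faults):
        # Sort the per-fault estimates; the sorted values play the role of the
        # order statistics the abstract mentions. How a fault was detected
        # (representative or directed testing) does not matter here.
        return sorted(post_mortem_rate(f["failures"], f["tests_while_present"])
                      for f in debugged_faults)

    faults = [
        {"failures": 4, "tests_while_present": 200},   # found by representative testing
        {"failures": 1, "tests_while_present": 350},   # found by representative testing
        {"failures": 2, "tests_while_present": 500},   # found by directed testing
    ]
    print(combine(faults))   # per-fault rate estimates, smallest to largest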

Added 2002-07-26

An Exact Array Reference Analysis for Data Flow Testing

Istvan Forgacs

Data flow testing is a well-known technique, and it has proved to be better than the commercially used branch testing. The problem with data flow testing is that, except for scalar variables, only approximate information is available. This paper presents an algorithm that determines the definition-use pairs for arrays precisely within a large domain. There are numerous methods addressing the array data flow problem; establishing a definition-use pair precisely, however, requires at least one real solution of the problem, one for which the necessary program path is executed. In contrast to former precise methods, we avoid negation in formulae, which appears to be the biggest problem in all previous methods.
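
As a concrete illustration of why array references need more than scalar-style analysis, the hypothetical Python fragment below contains a definition of a[i] followed by a use of a[j]. Whether they form a definition-use pair depends on whether the index constraint i == j can be satisfied on a feasible path, which is the kind of question an exact analysis must answer instead of conservatively treating the whole array as one variable.

    def fragment(a, i, j, x):
        a[i] = x      # definition of the array element a[i]
        y = a[j]      # use of a[j]; it pairs with the definition above
                      # only on inputs where i == j
        return y

    # Approximate, scalar-style analysis reports a def-use pair on "a" for every
    # (i, j); an exact analysis reports it only when i == j is satisfiable.
    print(fragment([0, 0, 0], 1, 1, 42))   # indices match: the pair is exercised
    print(fragment([0, 0, 0], 1, 2, 42))   # indices differ: it is not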

Added 2002-07-26

A Demand-Driven Analyzer for Data Flow Testing at the Integration Level

Evelyn Duesterwald, Rajiv Gupta, Mary Lou Soffa

Data flow testing relies on static analysis for computing the def-use pairs that serve as the test case requirements for a program. When testing large programs, the individual procedures are first tested in isolation during unit testing. Integration testing is then performed to specifically test the procedure interfaces. The procedures in a program are integrated and tested in several steps. Since each integration step requires data flow analysis to determine the new test requirements, the accumulated cost of repeatedly analyzing a program can contribute considerably to the overhead of testing. Data flow analysis is typically computed using an exhaustive approach or by using incremental data flow updates. This paper presents a new and more efficient approach to data flow integration testing that is based on demand-driven analysis. We developed and implemented a demand-driven analyzer and experimentally compared its performance with that of (i) a traditional exhaustive analyzer and (ii) an incremental analyzer. Our experiments show that demand-driven analysis is faster than exhaustive analysis by up to a factor of 25. The demand-driven analyzer also outperforms the incremental analyzer by up to a factor of 5.
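
To make the exhaustive-versus-demand-driven distinction concrete, here is a small, hypothetical Python contrast over a straight-line trace of statements (each a pair of defined and used variable sets). It is only a sketch of the idea; the paper's analyzer works on interprocedural flow graphs, not traces.

    trace = [
        ({"x"}, set()),        # 0: x = input()
        ({"y"}, {"x"}),        # 1: y = f(x)   (crosses a procedure interface)
        ({"x"}, set()),        # 2: x = 0
        ({"z"}, {"x", "y"}),   # 3: z = g(x, y)
    ]

    def exhaustive_def_use(trace):
        # Compute every def-use pair by scanning backward from every use.
        pairs = set()
        for use_idx, (_, uses) in enumerate(trace):
            for var in uses:
                for def_idx in range(use_idx - 1, -1, -1):
                    if var in trace[def_idx][0]:
                        pairs.add((def_idx, use_idx, var))
                        break
        return pairs

    def reaching_def(trace, use_idx, var):
        # Demand-driven query: visit only the statements needed to answer
        # which definition reaches this particular use.
        for def_idx in range(use_idx - 1, -1, -1):
            if var in trace[def_idx][0]:
                return (def_idx, use_idx, var)
        return None

    print(exhaustive_def_use(trace))      # all pairs, whether needed or not
    print(reaching_def(trace, 3, "y"))    # just the pair one new test requires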

Added 2002-07-26

Assertion-Oriented Automated Test Data Generation

Bogdan Korel, Ali M. Al-Yami

Assertions are recognized as a powerful tool for automatic run-time detection of software errors. However, existing testing methods do not use assertions to generate test cases. In this paper we present a novel approach to automated test data generation in which assertions are used to generate test cases. In this approach the goal is to identify test cases on which an assertion is violated. If such a test is found, it uncovers an error in the program. The problem of finding a program input on which an assertion is violated may be reduced to the problem of finding a program input on which a selected statement is executed. As a result, existing methods of automated test data generation for white-box testing may be used to generate tests that violate assertions. Experiments have shown that this approach may significantly improve the chances of finding software errors as compared to existing methods of test generation.
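
A minimal sketch of the reduction the abstract describes, with hypothetical names: the assertion is rewritten as a branch whose violating side contains a target statement, so any white-box test data generator that can drive execution to a chosen statement can be aimed at the violation. The random search below merely stands in for such a generator.

    import random

    def program_under_test(x):
        result = x * x - 10 * x            # hypothetical faulty computation
        # original check: assert result >= 0
        if not (result >= 0):
            return ("ASSERTION VIOLATED", x)   # target statement to reach
        return ("ok", result)

    def search_for_violation(trials=10_000):
        # Stand-in for a white-box test data generator targeting the statement.
        for _ in range(trials):
            x = random.randint(-100, 100)
            if program_under_test(x)[0] == "ASSERTION VIOLATED":
                return x                   # this input uncovers an error
        return None

    print(search_for_violation())          # e.g., any x from 1 to 9 violates it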

Added 2002-07-26

Effect of Test Set Minimization on Fault Detection Effectiveness

W. Eric Wong, Joseph R. Horgan, Saul London, Aditya P. Mathur

Size and code coverage are important attributes of a set of tests. When a program P is executed on elements of the test set T, we can observe the fault detecting capability of T for P. We can also observe the degree to which T induces code coverage on P according to some coverage criterion. We would like to know whether it is the size of T or the coverage of T on P which determines the fault detection effectiveness of T for P. To address this issue we ask the following question: While keeping coverage constant, what is the effect on fault detection of reducing the size of a test set? We report results from an empirical study using the block and all-uses criteria as the coverage measures.
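
One standard way to pose the question is to minimize the test set greedily while holding coverage constant, then compare fault detection before and after. The Python sketch below uses hypothetical block-coverage data; the study's own minimization procedure may differ.

    coverage = {                       # hypothetical block coverage per test
        "t1": {1, 2, 3},
        "t2": {2, 3},
        "t3": {3, 4, 5},
        "t4": {5, 6},
        "t5": {1, 6},
    }

    def minimize(coverage):
        # Greedily pick tests until the reduced set covers the same blocks
        # as the full set, so coverage is held constant while size shrinks.
        required = set().union(*coverage.values())
        selected, covered = [], set()
        while covered != required:
            best = max(coverage, key=lambda t: len(coverage[t] - covered))
            selected.append(best)
            covered |= coverage[best]
        return selected

    print(minimize(coverage))          # same coverage, fewer tests; the open
                                       # question is how many faults it still finds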

Added 2002-07-26

Systems Security Engineering Capability Maturity Model and Evaluations: Partners Within the Assurance Framework

Charles G. Menk III

Since the inception of the SSE-CMM program in 1993, there have been some misconceptions within the computer security and evaluations communities regarding its intended purpose. Evaluators in particular have expressed strong resistance to this effort due to the perception that the SSE-CMM is intended to replace evaluated assurance with developmental assurance. That has not been and never will be the case. The SSE-CMM efforts can greatly enhance government, corporate, developer, user, and integrator knowledge of security in general. As such, the efforts of the SSE-CMM development team are intended to provide significantly improved input to system developers (internal assessments) and to higher-level assurance activities such as evaluations, certification, and accreditation (third-party assessments). To best address the needs of our customers, the SSE-CMM and other assurance efforts must grow to complement each other. It will take focused effort from the security community and developmental assurance organizations, as well as industry partners, to achieve this goal.

Evaluated assurance, provided by programs like the Trusted Product Evaluation Program (TPEP), has become widely accepted throughout the computer security industry. However, as the state of technology has advanced, the current process and methodology used by the evaluation community have been unable to keep pace with the accelerated development cycles of the advanced products that computer-security customers desire. The deficit of security expertise, as well as unclear and at times inadequate guidance and requirements within the industry and from government agencies, has led to the persistent practice among development organizations of treating security as an afterthought or add-on to an existing product. Such practices make correcting security flaws that affect the underlying product expensive, difficult, and time consuming. All of these factors have forced evaluators to carry out duties and activities far beyond the scope of pure evaluations and to take on the roles of trainer, developer, writer, and quality assurance inspector for the various products that they have been evaluating. Given these sometimes conflicting demands on the evaluation process, it has become problematic, if not impossible in some cases, to expect the current evaluation approach to continue providing all the product security assurance and keep pace with the increasing demands of computer security customers (i.e., they cannot produce enough evaluated products to meet the demand).

That is where the concept of an Assurance Framework comes in. Each activity within the security arena (e.g., CMMs, ISO 9000, evaluations) brings with it a certain level of assurance. The composite view forms the Assurance Framework, in which a customer can pick and choose products to support their mission based on their risk tolerance and product cost. By allowing certain activities, like the CMM efforts, to address specific assurance needs, the strain on the evaluation community may be alleviated somewhat, thereby allowing evaluators to focus on high-assurance products while lower-assurance products undergo a less rigorous assessment and certification process. In the form of the SSE-CMM, developmental assurance can accomplish many needed improvements in the way that INFOSEC products and systems are produced.

These improvements may well have a direct impact on the quality of the product’s security development and can assist vendors by better preparing their teams for an evaluation. At the higher maturity levels, some of the work now required of evaluators for low assurance products, such as IV&V functions and general security knowledge, can be accomplished during the initial product development. This will allow evaluators to concentrate more of their efforts on evaluation activities and less on security education or product development for the vendors. The SSE-CMM is a metric for an organization’s capability to develop a secure system. Wouldn’t it be nice to know an organization has the capability to build secure systems prior to accepting them into a rigorous evaluation activity?

Added 2002-07-26

Reverse Engineering of Legacy Code Exposed

Bruce W. Weide, Wayne D. Heym

Reverse engineering of large legacy software systems generally cannot meet its objectives because it cannot be cost-effective. There are two main reasons for this. First, it is very costly to “understand” legacy code sufficiently well to permit changes to be made safely, because reverse engineering of legacy code is intractable in the usual computational complexity sense. Second, even if legacy code could be cost-effectively reverse engineered, the ultimate objective of re-engineering the code to create a system that will not need to be reverse engineered again in the future is presently unattainable. Not just crusty old systems, but even ones engineered today, from scratch, cannot escape the clutches of intractability until software engineers learn to design systems that support modular reasoning about their behavior. We hope these observations serve as a wake-up call to those who dream of developing high-quality software systems by transforming them from defective raw materials.

Added 2002-07-26

Developing Secure Objects

Deborah Frincke

Distributed object systems are increasingly popular, and considerable effort is being expended to develop standards for interaction between objects. Some high-level requirements for secure distributed object interactions have been identified. However, there are no guidelines for developing the secure objects themselves. Some aspects of object-oriented design do not translate directly to traditional methods of developing secure systems. In this paper, we identify features of object-oriented design that affect secure system development. In addition, we explore ways to derive security, and provide techniques for developing secure COTS libraries with easily modifiable security policies.

Added 2002-07-26

WWW Technology in the Formal Evaluation of Trusted Systems

E. J. McCauley

The World Wide Web (WWW) introduces exciting possibilities for the use of new technology in the formal evaluation of trusted systems. This is a report on a work in progress. It discusses the conceptual foundations of using the WWW in formal evaluations of the security properties of a system, and offers some of the initial insights gained in its use. Silicon Graphics is using this structure for the submittal of documentation for the formal evaluation of the Trusted IRIX/CMW 6.2 operating system.

Added 2002-07-26

Covert Channels

Jonathan Millen

An explanation of covert channels

Added 2002-07-26

Building Diverse Computer Systems

Stephanie Forrest, Anil Somayaji, David H. Ackley

In biological systems, diversity is an important source of robustness. A stable ecosystem, for example, contains many different species which occur in highly conserved frequency distributions. If this diversity is lost and a few species become dominant, the ecosystem becomes susceptible to perturbations such as catastrophic fires, infestations, and disease. Similarly, health problems often emerge when there is low genetic diversity within a species, as in the case of endangered species or animal breeding programs. The vertebrate immune system offers a third example, providing each individual with a unique set of immunological defenses that help control the spread of disease within a population.

Added 2002-07-26