The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Schema Francais d'Evaluation et de Certification de la Securite des Technologies de l'Information (French Scheme for the Evaluation and Certification of Information Technology Security)

Centre de Certification de la Securite des Technologies de l'Information
Added 2002-07-26

Ma Vie Avec Melisa Et Les Autres (My Life with Melissa and the Others)

La pratique de l
Added 2002-07-26

Integrity In Automated Information Systems

Terry Mayfield, J. Eric Roskos, Stephen R. Welke, John M. Boone, Catherine W. McDonald

As public, private, and defense sectors of our society have become increasingly dependent on widely used interconnected computers for carrying out critical as well as more mundane tasks, integrity of these systems and their data has become a significant concern. The purpose of this paper is not to motivate people to recognize the need for integrity, but rather to motivate the use of what we know about integrity and to stimulate more interest in research to standardize integrity properties of systems.

Added 2002-07-26

Traditional Capability-Based Systems: An Analysis of Their Ability to Meet the Trusted Computer Security Evaluation Criteria

V. D. Gligor, J. C. Huskamp, S. R. Welke, C. J. Linn, W. T. Mayfield

This paper, through the use of a “traditional” capability-based system model, is intended to clarify the role of capabilities in supporting different security policies. In particular, the ability of these “traditional” systems to meet the Trusted Computer Security Evaluation Criteria [TCSEC83] is analyzed. The paper is further intended to be used as a background reference by the National Computer Security Center (NCSC) Product Evaluation Teams when they are involved in the evaluation of new capability-based products. The authors have assumed that the readers of this paper are computer professionals (e.g., NCSC Product Evaluation Team members or designers of computer operating systems) who are well versed in data structures, operating system principles, and operating system architectures, and who are also relatively familiar with security concepts and models. Virgil Gligor from the University of Maryland served as principal researcher. Many other individuals also have contributed to the production of this paper. We wish to acknowledge the assistance of Dan Nesset, Lawrence Livermore Labs; Richard Kain, University of Minnesota; Norman Hardy, Susan Rajunas, et al., of Keylogic, Inc.; and Roger Schell of Gemini Computers, Inc., for their thorough review and critique of the initial drafts of this paper. Their comments helped significantly in providing better focus and presentation of the material. The authors, however, remain responsible for the accuracy and appropriateness of this final version.

Added 2002-07-26

A Methodology for Testing Intrusion Detection Systems

Nicholas J. Puketza, Kui Zhang, Mandy Chung, Biswanath Mukherjee, Ronald A. Olsson

Intrusion Detection Systems (IDSs) attempt to identify unauthorized use, misuse, and abuse of computer systems. In response to the growth in the use and development of IDSs, we have developed a methodology for testing IDSs. The methodology consists of techniques from the field of software testing which we have adapted for the specific purpose of testing IDSs. In this paper, we identify a set of general IDS performance objectives which is the basis for the methodology. We present the details of the methodology, including strategies for test-case selection and specific testing procedures. We include quantitative results from testing experiments on the Network Security Monitor (NSM), an IDS developed at UC Davis. We present an overview of the software platform that we have used to create user-simulation scripts for testing experiments. The platform consists of the UNIX tool ‘expect’ and enhancements that we have developed, including mechanisms for concurrent scripts and a record-and-replay feature. We also provide background information on intrusions and IDSs to motivate our work.
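
As an aside for readers unfamiliar with such scripted sessions, the following sketch suggests the flavor of a user-simulation script. It uses Python's pexpect package (a third-party, Unix-only library) as a stand-in for the UNIX 'expect' tool named above; the target shell, the prompt handling, and the replayed commands are illustrative assumptions, not the authors' platform.

# Illustrative sketch only: replay a scripted "user session" against a local
# shell, in the spirit of the expect-based simulation scripts described above.
# pexpect stands in for the original 'expect' tool; the commands are invented.
import pexpect

def replay_session(commands, timeout=10):
    """Drive an interactive shell, replaying a recorded list of commands."""
    child = pexpect.spawn("/bin/sh", encoding="utf-8", timeout=timeout)
    for cmd in commands:
        child.expect_exact("$")      # wait for the default sh prompt
        child.sendline(cmd)          # type the next recorded command
    child.sendline("exit")
    child.expect(pexpect.EOF)        # session ends when the shell exits
    return child.before              # output seen after the last prompt

if __name__ == "__main__":
    # A mostly benign session with one step an IDS under test might flag.
    session = ["ls /tmp", "cat /etc/passwd", "ps aux"]
    print(replay_session(session))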

Added 2002-07-26

A Narrated Tour of the Schlumberger Web

William I. MacGregor

The World Wide Web is accelerating the evolution of corporate information systems. Based on TCP/IP Internet technology, the web is attractive and approachable to the user, and is an unequaled tool for the systems integrator. Schlumberger has invested in TCP/IP networking for several years, and in 1995 a series of web-based business resources debuted on the Schlumberger Intranet. These resources are integrated with Schlumberger business processes, and have rapidly become a vital business capability. The business drivers that motivate the exploitation of the web include reducing reaction time, extending the influence of experts, and centralizing services to reduce cost. Our tour visits the resources collected on the Schlumberger Quick Reference Page, including the corporate directory, web server directory, supplier directory, Technology Watch Coordination, software distribution, the Refinery information filter, and the Information Technology Standards areas, as well as the author’s personal page. In each case, the history is one of ‘process insertion’, building a technical capability into the fabric of Schlumberger’s business to achieve a new level of performance. We conclude that the web’s capabilities for integration of diverse resources and incremental extension are the foundation of its extraordinary success. There are obstacles, but with breakthroughs imminent in security, interactivity, and portability, the use of web technology in corporate Intranets has a bright future.

Added 2002-07-26

Java Security: From HotJava to Netscape and Beyond

Drew Dean, Edward W. Felten, Dan S. Wallach

The introduction of Java applets has taken the World Wide Web by storm. Information servers can customize the presentation of their content with server-supplied code which executes inside the Web browser. We examine the Java language and both the HotJava and Netscape browsers which support it, and find a significant number of flaws which compromise their security. These flaws arise for several reasons, including implementation errors, unintended interactions between browser features, differences between the Java language and bytecode semantics, and weaknesses in the design of the language and the bytecode format. On a deeper level, these flaws arise because of weaknesses in the design methodology used in creating Java and the browsers. In addition to the flaws, we discuss the underlying tension between the openness desired by Web application writers and the security needs of their users, and we suggest how both might be accommodated.

Added 2002-07-26

Languages and Tools for Rule-Based Distributed Intrusion Detection

Abdelaziz Mounji

The ever-rising complexity of operating systems and communication networks has resulted in an increased difficulty in designing reliable security protection mechanisms. As a last line of defense, automated audit trail analysis can be used to detect various forms of security intrusions. However, automated audit trail analysis is difficult because of the complexity of intrusion patterns, the lack of a complete model of security intrusions, and the huge amount of audit data. This difficulty is compounded in a distributed environment, where attack evidence may span numerous hosts of possibly different architectures, operating systems, and auditing facilities. Because of the lack of an accurate model of security intrusions, and because existing audit trails have operating system-specific formats and semantics, we approach the problem of detecting intrusions by designing languages and tools for powerful yet convenient analysis of data streams. The proposed approach is independent of any model of security intrusions and of audit data format and semantics, making it possible to implement the detection of new intrusion scenarios as they are learned by security experts. This dissertation describes a novel rule-based language (RUSSEL), tailor-made for efficient processing of sequential unstructured data streams in a heterogeneous multi-host environment. The proposed approach enables correlation of events occurring at multiple hosts and achieves gradual event abstraction at different levels. The universality of the analysis is attained by providing a format adaptor generator, which automatically converts a broad range of native audit trail formats into a Normalized Audit Data Format (NADF). The approach is powerful thanks to the rule-based language RUSSEL, which allows us to express and match arbitrary event patterns in the audit trail. The efficiency of the system is attained by a careful implementation design. We have also developed a deductive system for continuously checking target-system security vulnerabilities. The deductive component is coupled with the audit trail analysis component, thereby enabling an adaptive detection rule set. Performance measurements of the implemented tools against real-life penetration scenarios (in simulated environments) suggest that the approach is computationally viable.
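
The toy sketch below is meant only to suggest the flavor of rule-based analysis over normalized audit records; it is not RUSSEL, and the dictionaries are not NADF. The record fields, the sample rule, and the alert threshold are invented for illustration.

# Illustrative sketch only: a tiny rule engine matching patterns over a stream
# of audit records that have already been converted to a common format.
# Field names, the rule, and the threshold are invented for this example.
from collections import defaultdict

def failed_login_rule(record, state, alerts, threshold=3):
    """Alert when one user accumulates repeated login failures, regardless of
    which host reported the events (a simple cross-host correlation)."""
    if record["event"] == "login" and record["outcome"] == "failure":
        state[record["user"]] += 1
        if state[record["user"]] == threshold:
            alerts.append(f"repeated login failures for user {record['user']}")

def analyze(stream, rules):
    state = defaultdict(int)   # shared state lets rules correlate events
    alerts = []
    for record in stream:
        for rule in rules:
            rule(record, state, alerts)
    return alerts

audit_stream = [
    {"host": "alpha", "event": "login", "user": "root", "outcome": "failure"},
    {"host": "beta",  "event": "login", "user": "root", "outcome": "failure"},
    {"host": "alpha", "event": "login", "user": "root", "outcome": "failure"},
]
print(analyze(audit_stream, [failed_login_rule]))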

Added 2002-07-26


Minimal and Almost Minimal Perfect Hash Function Search with Application to Natural Language Lexicon Design

Nick Cercone, Max Krause, John Boates

New methods for computing perfect hash functions and applications of such functions to the problems of lexicon design are reported in this paper. After stating the problem and briefly discussing previous solutions, we present Cichelli’s algorithm, which introduced the form of the solutions we have pursued in this research. An informal analysis of the problem is given, followed by a presentation of three algorithms which refine and generalise Cichelli’s method in different ways. We next report the results of applying programmed versions of these algorithms to problem sets drawn from natural and artificial languages. A discussion of conceptual designs for the application of perfect hash functions to small and large computer lexicons is followed by a summary of our research and suggestions for further work.
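
For readers who have not seen Cichelli's form of the solution, the sketch below brute-forces letter values for the hash h(w) = length(w) + g(first letter) + g(last letter) until every key lands in a contiguous block of table slots. The word list and the bound on letter values are arbitrary; the algorithms discussed in the paper rely on ordering heuristics and backtracking rather than exhaustive search.

# Illustrative sketch only: exhaustive search for Cichelli-style letter values.
# The hash is minimal and perfect here when the keys fill a contiguous range
# of slots with no collisions. Word list and value bound are arbitrary.
from itertools import product

def cichelli_search(words, max_value=5):
    letters = sorted({w[0] for w in words} | {w[-1] for w in words})
    n = len(words)
    for values in product(range(max_value + 1), repeat=len(letters)):
        g = dict(zip(letters, values))
        hashes = [len(w) + g[w[0]] + g[w[-1]] for w in words]
        if sorted(hashes) == list(range(min(hashes), min(hashes) + n)):
            return g, dict(zip(words, hashes))
    return None   # no assignment found within the bound

print(cichelli_search(["do", "if", "else", "while", "return"]))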

Added 2002-07-26

Multikey Access Methods Based on Superimposed Coding Techniques

(Abstract File Only), R. Sacks-Davis, A. Kent, K. Ramamohanarao

Both single-level and two-level indexed descriptor schemes for multikey retrieval are presented and compared. The descriptors are formed using superimposed coding techniques and stored using a bit-inversion technique. A fast batch insertion algorithm for forming the bit-inverted file is also presented. The two-level implementation is generally more efficient for queries with a small number of matching records. For queries that specify two or more values, there is a potential problem with the two-level implementation in that costs may accrue when blocks of records match the query but individual records within these blocks do not. One approach to overcoming this problem is to set bits in the descriptors based on pairs of indexed terms. This approach is presented and analyzed.
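
A minimal sketch of the superimposed coding idea, under assumptions not taken from the paper (descriptor width, bits set per key, and hash function are arbitrary): each record descriptor is the OR of a few hashed bit positions per indexed key, a block descriptor is the OR of its record descriptors, and a conjunctive query matches wherever all of the query descriptor's bits are present.

# Illustrative sketch only: superimposed-coded record descriptors with a
# second-level block descriptor. A match is a candidate, not a guarantee
# (false drops are possible), as discussed in the abstract.
import hashlib

WIDTH = 64          # descriptor width in bits (arbitrary)
BITS_PER_KEY = 3    # bits set per indexed key (arbitrary)

def key_bits(key):
    digest = hashlib.sha1(key.encode()).digest()
    return {digest[i] % WIDTH for i in range(BITS_PER_KEY)}

def descriptor(keys):
    d = 0
    for k in keys:
        for b in key_bits(k):
            d |= 1 << b
    return d

records = [{"smith", "physics", "1987"}, {"jones", "history", "1985"}]
record_desc = [descriptor(r) for r in records]
block_desc = 0
for d in record_desc:
    block_desc |= d                        # block descriptor = OR of records

query = descriptor({"smith", "physics"})   # conjunctive (multikey) query
if query & block_desc == query:            # block might contain a match
    candidates = [i for i, d in enumerate(record_desc) if query & d == query]
    print("candidate records:", candidates)  # verify against the records themselves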

Added 2002-07-26

Fast Implementation of Relational Operations Via Inverse Projections

(Abstract File Only), J. R. Ullmann

A relation can be represented by a bit matrix such that relational intersection, union, natural join, product, and equiselection operations can be implemented by parallel bitwise AND and OR of bit matrices. Depending on the dimensions of the bit matrices, this representation is more or less approximate in so far as spurious tuples may be recovered from a bit matrix along with genuine tuples. The process of outputting a result relation is serial and has the desirable properties that output tuples can be sorted at no extra cost, and that elimination of duplicates from projections actually speeds up the process instead of requiring extra work. Results of small-scale simulation are reported.
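
The sketch below illustrates the general idea of encoding a binary relation as a bit matrix so that intersection and union become bitwise AND and OR. The matrix dimensions and hash functions are arbitrary choices, and membership tests against the result can report spurious tuples, which is exactly the approximation the abstract describes.

# Illustrative sketch only: binary relations as bit matrices, with set
# operations done by bitwise AND/OR over the rows. Dimensions and hashing
# are arbitrary; answers recovered from a matrix may include spurious tuples.
ROWS = COLS = 16

def encode(relation):
    """relation: a set of (a, b) string pairs -> list of row bitmaps."""
    rows = [0] * ROWS
    for a, b in relation:
        rows[hash(a) % ROWS] |= 1 << (hash(b) % COLS)
    return rows

def combine(op, m1, m2):
    return [op(r1, r2) for r1, r2 in zip(m1, m2)]

def maybe_member(matrix, a, b):
    """Approximate membership test; may say True for a spurious tuple."""
    return bool(matrix[hash(a) % ROWS] >> (hash(b) % COLS) & 1)

r = {("ann", "math"), ("bob", "physics")}
s = {("ann", "math"), ("cay", "law")}
intersection = combine(lambda x, y: x & y, encode(r), encode(s))
union        = combine(lambda x, y: x | y, encode(r), encode(s))
print(maybe_member(intersection, "ann", "math"),     # expected True
      maybe_member(intersection, "bob", "physics"))  # almost always False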

Added 2002-07-26

Accessing Textual Documents Using Compressed Indexes of Arrays of Small Bloom Filters

J. K. Mullin

A highly compressed index for a collection of variable-sized documents is described. Arrays of small Bloom filters are used to efficiently locate documents where the search probe contains ‘anded’ and ‘ored’ combinations of words. Theoretical and experimental results are reported. The method is applicable to unplanned searching of large text files. We further describe a method to provide an index to the filters, so that only a small proportion of the compressed filter need be examined. The method is highly amenable to parallel processing.
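
As a rough illustration of the querying style described above (not the paper's compressed index), the sketch below keeps one small Bloom filter per document and evaluates an 'anded'/'ored' combination of words against the array of filters. Filter size, hash count, and the sample documents are arbitrary, and any hit is only a candidate that may be a false drop.

# Illustrative sketch only: one Bloom filter per document, queried with a
# combination of AND and OR over words. Sizes and documents are arbitrary.
import hashlib

M, K = 256, 3   # bits per filter and hash functions per word (arbitrary)

def bit_positions(word):
    return [int.from_bytes(hashlib.sha256(f"{i}:{word}".encode()).digest()[:4], "big") % M
            for i in range(K)]

def build_filter(text):
    f = 0
    for word in text.lower().split():
        for pos in bit_positions(word):
            f |= 1 << pos
    return f

def maybe_contains(f, word):
    return all(f >> pos & 1 for pos in bit_positions(word))

docs = ["bloom filters compress document indexes",
        "parallel text searching with signature files"]
filters = [build_filter(d) for d in docs]

# query: ("bloom" AND "indexes") OR "signature"
hits = [i for i, f in enumerate(filters)
        if (maybe_contains(f, "bloom") and maybe_contains(f, "indexes"))
        or maybe_contains(f, "signature")]
print("candidate documents:", hits)   # candidates still need verification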

Added 2002-07-26

A Fixed-Size Bloom Filter for Searching Textual Documents

M. A. Shepherd, W. J. Phillips, C. K. Chu

The empirical false drop rate associated with a fixed-size Bloom filter used to represent textual documents may be quite different than the theoretical rate. This problem arises when the filter size is based on the expectation of a uniform distribution of the number of different terms per document. The distribution is, in fact, not uniform. This paper describes a method to determine the filter size for a database of textual documents, based on the desired false drop rate and the actual distribution of different words over the documents for that database. Theoretical and experimental results are reported and indicate that a filter size based on this method produces empirical false drop rates equivalent to the theoretical rates. The filter was also compared to variable-length filters with respect to storage requirements and search times.
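
For context, the sketch below computes the usual uniform-hashing approximations for a Bloom filter's false drop rate and for the number of bits needed to reach a target rate; this idealized relationship is the 'theoretical rate' that the abstract contrasts with empirical behaviour, and the example numbers are arbitrary.

# A short aid only: standard approximations for Bloom filter sizing.
# false drop rate ~ (1 - e^(-k*n/m))^k; optimal size m = -n*ln(p)/(ln 2)^2.
import math

def false_drop_rate(m_bits, k_hashes, n_terms):
    """Approximate probability that a non-member word tests positive."""
    return (1.0 - math.exp(-k_hashes * n_terms / m_bits)) ** k_hashes

def bits_for_target(n_terms, target_rate):
    """Filter size in bits, assuming the optimal number of hash functions."""
    return math.ceil(-n_terms * math.log(target_rate) / (math.log(2) ** 2))

# e.g. a document with 200 distinct terms and a desired 1% false drop rate
n = 200
m = bits_for_target(n, 0.01)
k = round(m / n * math.log(2))
print(m, k, false_drop_rate(m, k, n))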

Added 2002-07-26

Practical Performance of Bloom Filters and Parallel Free-Text Searching

M. V. Ramakrishna

The Bloom filter technique of hashing finds several applications, such as the efficient maintenance of differential files, space-efficient storage of dictionaries, and parallel free-text searching. The performance of hash transformations with reference to the filter error rate is the focus of this article.

Added 2002-07-26