The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Concept-Systems Catalogue

F. Lehmann
Added 2002-07-26

A Taxonomy for Key Escrow Encryption Systems

D.E. Denning, D.K. Branstad
Added 2002-07-26

A Taxonomy of Software Development Methods

B.I. Blum
Added 2002-07-26


Medical Devices: The Therac-25

N. Leveson
Added 2002-07-26

Rethinking the Taxonomy of Fault Detection Techniques

M. Young, R.N. Taylor

The conventional classification of software fault detection techniques as static or dynamic analysis is inadequate as a basis for identifying useful relationships between techniques. A more useful distinction is between techniques that sample the space of possible executions and techniques that fold the space. The new distinction provides better insight into the ways different techniques can interact, and is a basis for considering hybrid fault detection techniques, including combinations of testing and formal verification.
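
A minimal sketch of the distinction, under assumptions of my own rather than material from the paper: a test samples one concrete execution, while a toy interval analysis folds every execution over an input range into a single computation.

    /* Hedged sketch: sampling vs. folding the space of executions.
       The interval domain and the program under analysis are
       illustrative assumptions, not examples from the paper. */
    #include <assert.h>
    #include <stdio.h>

    typedef struct { long lo, hi; } Interval;

    /* Abstract "+": folds all concrete additions over two ranges
       into one interval computation. */
    static Interval add(Interval a, Interval b) {
        return (Interval){ a.lo + b.lo, a.hi + b.hi };
    }

    static int index_of(int x) { return x + 5; }  /* program under analysis */

    int main(void) {
        char table[16];

        /* Sampling: one test exercises one execution. */
        assert(index_of(3) == 8);

        /* Folding: one interval computation covers every x in [0, 10]. */
        Interval x = { 0, 10 };
        Interval idx = add(x, (Interval){ 5, 5 });
        if (idx.lo >= 0 && idx.hi < (long)sizeof table)
            printf("index stays in [%ld, %ld]: safe for all executions\n",
                   idx.lo, idx.hi);
        return 0;
    }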

Added 2002-07-26

The Case Against C

P.J. Moylan

The programming language C has been in widespread use since the early 1970s, and it is probably the language most widely used by computer science professionals. The goal of this paper is to argue that it is time to retire C in favour of a more modern language. The choice of a programming language is often an emotional issue that is not subjected to rational discussion. Nevertheless, this paper aims to show that there are good objective reasons why C is not a good choice for large programming projects. These reasons relate primarily to the issues of software readability and programmer productivity.
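
As a hedged illustration, not reproduced from the paper: critiques of C's readability typically point to expression-level traps where a small slip still compiles, as in the assumed snippet below.

    /* Two classic C readability traps of the kind such arguments cite;
       the snippets are illustrative assumptions, not the paper's own
       examples. */
    #include <stdio.h>

    int main(void) {
        int flags = 0x4;

        /* Intended: test whether bit 2 is set.  But == binds tighter
           than &, so this parses as flags & (0x4 == 0x4), i.e.
           flags & 1, and the branch is silently not taken. */
        if (flags & 0x4 == 0x4)
            printf("bit 2 set\n");

        int x = 0;
        /* Intended: a comparison.  The assignment compiles anyway,
           and the branch is always taken. */
        if (x = 1)
            printf("always reached\n");
        return 0;
    }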

Added 2002-07-26

Failure and Fault Analysis for Software Debugging

R.A. DeMillo, H. Pan, E.H. Spafford

Most studies of software failures and faults have done little more than classify failures and faults collected from long-term projects. We believe that the results of failure and fault analysis can benefit the debugging process, and we therefore propose a model for such analysis. In our model, we define failure modes and failure types to identify the existence of program failures and the nature of those failures, respectively. The goal of the research is to achieve a systematic process model for localizing faults during debugging. The process is summarized as follows: (1) identify a failure as belonging to one of the failure modes; (2) find the failure types that possibly caused the failure, based on the relationships between failure modes and failure types; and (3) employ heuristics, according to the situation, for fault localization. In this paper, we first examine properties of the proposed model from a theoretical point of view. We then use the trityp program as a simple example to illustrate the possible use of the model for debugging. In the example, the proposed failure analysis helps us eliminate irrelevant test case sets among the program failures. In addition, it helps us achieve one of our purposes: to identify as few test case sets as possible that still hold enough information for debugging after thorough testing. Based on the selected test case sets, we can apply heuristics (e.g., slicing heuristics) for fault localization. Further study of the failure modes, a pilot experiment applying the proposed model, and ways to employ heuristics according to different situations for fault localization are areas of future work.
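
For context, trityp is the classic triangle-classification program; the sketch below is an assumed minimal reconstruction for illustration, not necessarily the version analyzed in the paper.

    /* Hedged reconstruction of the classic "trityp" example:
       classify a triangle from its three side lengths. */
    #include <stdio.h>

    typedef enum { SCALENE = 1, ISOSCELES, EQUILATERAL, ILLEGAL } TriType;

    static TriType trityp(int a, int b, int c) {
        if (a <= 0 || b <= 0 || c <= 0) return ILLEGAL;
        if (a + b <= c || a + c <= b || b + c <= a) return ILLEGAL;
        if (a == b && b == c) return EQUILATERAL;
        if (a == b || b == c || a == c) return ISOSCELES;
        return SCALENE;
    }

    int main(void) {
        /* Under the proposed model, failing test cases would be grouped
           by failure mode and type before applying localization
           heuristics such as slicing. */
        printf("%d\n", trityp(3, 4, 5));  /* SCALENE (1) */
        printf("%d\n", trityp(2, 2, 2));  /* EQUILATERAL (3) */
        printf("%d\n", trityp(1, 2, 9));  /* ILLEGAL (4) */
        return 0;
    }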

Added 2002-07-26

Smashing the Stack for Fun and Profit

aleph1 (Unknown Author)
Added 2002-07-26

A Grammar Based Fault Classification Scheme and its Application to the Classification of the Errors of TeX

R.A. DeMillo, A.P. Mathur

We present a novel scheme for categorizing coding faults. Our grammar-based scheme uses the notion of syntactic transformers and is automatable. The classification that results from our scheme can be used by researchers investigating the effectiveness of software testing techniques. In these respects our scheme is significantly different from several proposed in the past by other researchers. We have used it to categorize the ten-year log of errors of TeX reported by Knuth. For each fault classified, we also provide, wherever possible, the precise substring that constitutes the fault. The entire error log and the associated program are in the public domain, and hence our categorization can be verified. We also provide a fault classification algorithm that uses a top-down strategy to find differences between two parse trees, annotated with syntactic transformers, to classify various faults. We claim that such an algorithm can be integrated within a software development environment and used as a low-cost mechanism for monitoring and classifying faults.
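
A minimal sketch of the top-down comparison the abstract describes; the node layout and the '<' versus '<=' fault are illustrative assumptions, not the paper's own machinery.

    /* Hedged sketch: walk two parse trees top-down and report the
       shallowest differing subtree, the point where a syntactic
       transformer would classify the fault. */
    #include <stdio.h>
    #include <string.h>

    typedef struct Node {
        const char *label;           /* token or nonterminal name */
        const struct Node *kid[2];   /* binary for brevity */
        int nkids;
    } Node;

    /* Return the shallowest subtree of `a` differing from `b`,
       or NULL if the trees are identical. */
    static const Node *diff(const Node *a, const Node *b) {
        if (strcmp(a->label, b->label) != 0 || a->nkids != b->nkids)
            return a;                /* difference rooted here */
        for (int i = 0; i < a->nkids; i++) {
            const Node *d = diff(a->kid[i], b->kid[i]);
            if (d) return d;
        }
        return NULL;
    }

    int main(void) {
        /* correct "i < n" vs faulty "i <= n": a relational-operator
           transformer ('<' -> '<=') accounts for the difference. */
        Node i = { "i", { 0 }, 0 }, n = { "n", { 0 }, 0 };
        Node correct = { "<",  { &i, &n }, 2 };
        Node faulty  = { "<=", { &i, &n }, 2 };

        const Node *d = diff(&correct, &faulty);
        if (d) printf("fault located at node '%s'\n", d->label);
        return 0;
    }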

Added 2002-07-26

Insertion, Evasion and Denial of Service: Eluding Network Intrusion Detection

T. Ptacek, T.N. Newsham

All currently available network intrusion detection (ID) systems rely upon a mechanism of data collection, passive protocol analysis, which is fundamentally flawed. In passive protocol analysis, the intrusion detection system (IDS) unobtrusively watches traffic on the network and scrutinizes it for patterns of suspicious activity. We outline in this paper two basic problems with the reliability of passive protocol analysis: (1) there isn't enough information on the wire on which to base conclusions about what is actually happening on networked machines, and (2) the fact that the system is passive makes it inherently "fail-open", meaning that a compromise in the availability of the IDS doesn't compromise the availability of the network. We define three classes of attacks which exploit these fundamental problems (insertion, evasion, and denial of service) and describe how to apply these three types of attacks to IP and TCP protocol analysis. We present the results of tests of the efficacy of our attacks against four of the most popular network intrusion detection systems on the market. All of the ID systems tested were found to be vulnerable to each of our attacks. This indicates that network ID systems cannot be fully trusted until they are fundamentally redesigned.
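
The ambiguity underlying insertion and evasion is easy to demonstrate. In the hedged toy below (an assumption for illustration, not code from the paper), an IDS that resolves overlapping TCP segments by keeping old data reconstructs a different byte stream than an end host that favors new data.

    /* Hedged toy: overlapping-segment reassembly under two policies.
       Offsets, payloads, and policies are illustrative assumptions. */
    #include <stdio.h>
    #include <string.h>

    /* Copy `seg` into `out` at `off`; with favor_new == 0, only fill
       positions not already written (favor-old policy). */
    static void reassemble(char *out, size_t len, const char *seg,
                           size_t off, size_t seglen, int favor_new) {
        for (size_t i = 0; i < seglen && off + i < len; i++)
            if (favor_new || out[off + i] == '.')
                out[off + i] = seg[i];
    }

    int main(void) {
        char ids[11], host[11];
        memset(ids, '.', 10);  ids[10]  = '\0';
        memset(host, '.', 10); host[10] = '\0';

        /* segment 1 carries bytes 0-9; segment 2 overlaps bytes 5-9 */
        reassemble(ids,  10, "GET /aaaaa", 0, 10, 1);
        reassemble(ids,  10, "bbbbb",      5,  5, 0);  /* IDS keeps old data  */
        reassemble(host, 10, "GET /aaaaa", 0, 10, 1);
        reassemble(host, 10, "bbbbb",      5,  5, 1);  /* host takes new data */

        printf("IDS reconstructs : %s\n", ids);   /* GET /aaaaa */
        printf("host reconstructs: %s\n", host);  /* GET /bbbbb */
        return 0;
    }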

Added 2002-07-26



Technology and Courage

I.E. Sutherland
Added 2002-07-26

Experimental Evaluation in Computer Science: A Quantitative Study

W.F. Tichy, P. Lukowicz, L. Prechelt, E.A. Heinz

A survey of 400 recent research articles suggests that computer scientists publish relatively few papers with experimentally validated results. The survey includes complete volumes of several refereed computer science journals, a conference, and 50 titles drawn at random from all articles published by the ACM in 1993. The journals Optical Engineering (OE) and Neural Computation (NC) were used for comparison. Of the papers in the random sample that would require experimental validation, 40% have none at all. In journals related to software engineering, this fraction is over 50%. In comparison, the fraction of papers lacking quantitative evaluation in OE and NC is only 15% and 12%, respectively. Conversely, the fraction of papers that devote 1/5 or more of their space to experimental validation is almost 70% for OE and NC, while it is a mere 30% for the CS random sample and 20% for software engineering. The low ratio of validated results appears to be a serious weakness in computer science research. This weakness should be rectified for the long-term health of the field.

Added 2002-07-26