The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Direct Space-To-Time Pulse Shaping for Ultrafast Optical Waveform Generation

CERIAS TR 2000-26
Daniel E. Leaird
Download: PDF
Added 2003-06-10


Improving Information Assurance: A General Assessment and Comprehensive Approach to an Integrated IA Program for the Department of Defense

Assistant Secretary of Defense for Command, Control, Communications and Intelligence (ASD C3I)
Added 2003-06-06



A Mixed-Strategy Page Replacement Algorithm for a Multiprogramming, Virtual Memory Computer

Eugene Howard Spafford

Most modern computers depend on some form of virtual memory, using either a paged, segmented, or hybrid (paged-segmented) memory organization and management scheme.  One aspect of most of these schemes that has a dramatic effect on system performance is the algorithm used in demand paging to select pages for removal.  Many near-optimal replacement schemes have been found, but their complexity and various practical considerations tend to limit the effectiveness of the algorithms implemented in real systems.  This thesis examines some strategies for page replacement and their rationales.  A mathematical argument is given to modify the current version of “The Principle of Optimality” governing page replacement and implementations derived from that principle.  A scheduling consideration is developed as an extension to the concept of demand paging.  The extensions discussed have been incorporated into working systems.  These implementations are discussed, and results from various forms of load testing are examined and interpreted.  Conclusions are made as to the applicability of this type of algorithm to other systems, and as to extensions that might yet be made.
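
For illustration only, here is a minimal sketch of a demand-paging simulator with a least-recently-used (LRU) replacement policy, a common textbook baseline rather than the mixed-strategy algorithm developed in the thesis; the reference string and frame count are invented.

```python
# Minimal LRU demand-paging sketch (illustrative baseline, not the thesis's algorithm).
from collections import OrderedDict

def simulate_lru(reference_string, frame_count):
    """Return the number of page faults an LRU policy incurs."""
    frames = OrderedDict()                  # page -> None, ordered by recency of use
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1                     # miss: demand-page it in
            if len(frames) >= frame_count:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

# Invented example: a short reference string run against 3 frames.
print(simulate_lru([1, 2, 3, 2, 4, 1, 5, 2, 1, 2, 3, 4, 5], 3))
```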

Added 2003-06-04

Kernel Structures for a Distributed Operating System

Eugene Howard Spafford

In recent years there has been considerable interest in developing distributed computing systems.  Distribution of computing resources suggests many possible benefits, including greater flexibility, enhanced computing power through greater parallelism, and increased reliability.  In practice, achieving any of these benefits has been difficult, since a distributed system also presents potential problems in naming, synchronization, and the effective use of resources.  Consistency problems arise when dealing with operations and data structures that may span machine and device boundaries; that is, should a communications or machine failure occur at an inopportune time, the data may be left in an unknown, incorrect, or inaccessible condition.  This type of problem is certainly undesirable in user programs, but special problems arise when operating system data structures become inconsistent.  Due to the large number of components involved in a distributed system, these problems are more likely to occur and more damaging in their effects.  Since 1982, the Clouds project has been researching an approach to the construction of a distributed computing environment intended to address these concerns.  The Clouds operating system is intended to reliably support effective use of distributed resources.  Some of that design is derived from the action/object model of computation developed in Jim Allchin’s dissertation [Allc83].  That work suggested an architecture for a distributed, reliable computing system built from abstract data objects and atomic transactions.  The architecture, properly implemented, can be used to address many of the problems presented by distributed systems.  However, Allchin’s work does not address the structure or implementation of the kernel and operating system services necessary for a functional distributed system.  This dissertation explores the requirements for services and structures needed to support a distributed computing environment as suggested by Allchin’s work.  It contains the design of a distributed operating system kernel which meets these requirements and which could flexibly support various implementations of the Clouds reliable system as well as other forms of object-oriented distributed systems.  This dissertation also describes a prototype implementation, which was done to help refine and validate the design and provide a testbed for further research.
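
As a rough illustration of the action/object style of computation mentioned above (an invented sketch, not the Clouds kernel's actual interfaces), the object below stages its updates inside an action and exposes them only on commit, so a failure before commit leaves the committed state untouched.

```python
# Illustrative action/object sketch: updates are staged and only become visible on commit.
class RecoverableObject:
    def __init__(self, state):
        self._committed = dict(state)
        self._pending = None

    def begin_action(self):
        self._pending = dict(self._committed)   # shadow copy the action works on

    def write(self, key, value):
        assert self._pending is not None, "write outside an action"
        self._pending[key] = value

    def read(self, key):
        source = self._pending if self._pending is not None else self._committed
        return source[key]

    def commit(self):
        assert self._pending is not None, "commit outside an action"
        self._committed, self._pending = self._pending, None

    def abort(self):
        self._pending = None                    # discard uncommitted updates

obj = RecoverableObject({"balance": 100})
obj.begin_action()
obj.write("balance", 50)
obj.abort()                                     # simulated failure: state is unchanged
print(obj.read("balance"))                      # -> 100
```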

Added 2003-06-04

An Action Management System for a Distributed Operating System

Gregory Grant Kenley

The goal of constructing reliable programs has led to the introduction of transaction (action) software into programming environments.  The further goal of constructing reliable programs in a distributed environment has led to the extension of transaction systems to operate in a more decentralized environment.  We present the design of a transaction manager that is integrated within the kernel of a decentralized operating system, the Clouds kernel.  This action management system supports nested actions, action-based locking, and efficient facilities for supporting recovery.  The recovery facilities have been designed to support a systems programming language which recognizes the concept of an action.  Orphans, disjoint parts of actions that will later abort, are managed using a time-driven orphan detection scheme which requires a clock synchronization protocol.  We present the facilities to generate a system-wide global clock.  In addition to the design of facilities necessary to support a decentralized action manager, we present a search protocol to locate objects in the distributed environment.  The design goal of this implementation has been to achieve the performance necessary to support an experimental testbed, which serves as the basis for further work in the area of decentralized systems.  We feel this prototype implementation meets this goal.
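
The toy sketch below illustrates the general idea behind time-driven orphan detection: each action is given a lifetime, and work that outlives its own or any ancestor's lifetime is treated as an orphan and aborted.  The classes and lifetimes are invented, and a local monotonic clock stands in for the synchronized system-wide clock the dissertation describes.

```python
# Invented sketch of time-driven orphan detection with nested actions.
import time

class Action:
    def __init__(self, lifetime_s, parent=None):
        self.deadline = time.monotonic() + lifetime_s
        self.parent = parent
        self.aborted = False

    def is_orphan(self):
        # An action is an orphan once it, or any ancestor, has expired or aborted.
        a = self
        while a is not None:
            if a.aborted or time.monotonic() > a.deadline:
                return True
            a = a.parent
        return False

root = Action(lifetime_s=0.05)
child = Action(lifetime_s=1.0, parent=root)     # nested action
time.sleep(0.1)                                 # the root's lifetime elapses
print(child.is_orphan())                        # -> True: the child is now an orphan
```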

Added 2003-06-04

Controlling State Explosion in Reachability Analysis

Wei Jen Yeh

Concurrent software systems are more difficult to design and analyze than sequential systems.  Considerable research has been done to help the testing, analysis, and verification of concurrent systems.  Reachability analysis is a family of analysis techniques involving exhaustive examination of the whole state space.  Reachability analysis is attractive because it is simple and relatively straightforward to automate, and can be used in conjunction with model-checking procedures to check for application-specific as well as general properties like freedom from deadlocks and race conditions.  However, practical application of reachability analysis to real systems has been stymied by the state explosion problem, an exponential growth in the number of states to be explored as the number of processes increases.  State explosion can be substantially controlled if the analysis approach is “compositional.”  An analysis technique is compositional if the results of analyzing subsystems can be “composed” to obtain analysis results for the complete system.  We propose using process algebra to achieve this goal in automated analysis tools.  The algebraic structure of process algebra and its rich abstraction capabilities provide the means to achieve compositional (divide-and-conquer) analysis.  The primary contribution of this dissertation will be a demonstration and evaluation of the usefulness of process algebra in controlling state explosion in reachability analysis of models close to the model of concurrency found in real programs.  We propose a two-leveled approach to control state explosion: using process algebra at the coarse-grained level and simplification strategies based on a process-sleep mechanism at the fine-grained level.  Although the process algebra-based compositional approach is not guaranteed to solve the state explosion problem, with a suitably modular design that provides good service abstraction, state explosion can be controlled sufficiently for application of reachability analysis to be practical.
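
For a sense of what exhaustive reachability analysis does (without any of the process-algebra machinery the dissertation proposes), the toy sketch below explores the composed state space of two small processes that synchronize on shared actions and flags states with no enabled joint action; the processes and action names are invented.

```python
# Toy reachability analysis over the composition of two synchronizing processes.
from collections import deque

# Each process: {state: {action: next_state}}; both must enable a shared action to take it.
P = {"p0": {"req": "p1"}, "p1": {"ack": "p2"}, "p2": {}}
Q = {"q0": {"req": "q1"}, "q1": {"ack": "q0"}}
SHARED = {"req", "ack"}

def explore(p_start, q_start):
    seen, frontier, stuck = set(), deque([(p_start, q_start)]), []
    while frontier:
        p, q = frontier.popleft()
        if (p, q) in seen:
            continue
        seen.add((p, q))
        successors = [(P[p][act], Q[q][act])
                      for act in SHARED
                      if act in P[p] and act in Q[q]]
        if not successors:
            stuck.append((p, q))            # no joint action enabled (deadlocked or terminal)
        frontier.extend(successors)
    return seen, stuck

states, stuck = explore("p0", "q0")
print(len(states), stuck)                   # 3 reachable composite states; one stuck state
```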

Added 2003-06-03

Two Dimensional Concurrent Program Debugging

Lu Xu

Debugging, which entails locating program faults responsible for a program failure, is more difficult in concurrent programs than in sequential programs because of the inherent difficulty of understanding task interactions.  In addition to sequential debugging facilities for individual threads of control, specialized facilities are needed to debug interactions among tasks.  Integration of the two kinds of debugging facilities is motivated by the observation that a program fault can be manifested alternately as erroneous program states and erroneous coordination among tasks before causing an observable failure.  Localization of a fault may require facilities to trace the causes of a failure through both sequential computation and interactions.  In this dissertation, we propose a two-dimensional model for debugging concurrent programs.  Different, specialized facilities are provided for sequential debugging and concurrent debugging, respectively.  These two “dimensions” are integrated in a way that supports tracing the effect of a program fault both within and across tasks.  We present a design of concurrent debugging facilities based on the model.  As the key component of the design, we introduce and define a customized dynamic dependence analysis called augmented concurrent dynamic slicing.  As many concurrent programs are nondeterministic, program instrumentation may perturb a program to the extent that it fails to reproduce the error-revealing behavior.  Our analysis requires less execution information than prior analyses and provides flexible control of the tradeoff between precise analysis and behavior perturbation.  Thus, our analysis is applicable to time-sensitive programs.
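
The sketch below shows a plain backward dynamic slice computed over a recorded execution trace, the flavor of dependence analysis this work builds on; it is a simplified, single-task illustration with an invented trace format, not the augmented concurrent dynamic slicing defined in the dissertation.

```python
# Backward dynamic slice over a recorded trace of (event_id, defined_var, used_vars).
def backward_slice(trace, criterion):
    """Return the event ids on which the slicing criterion transitively depends."""
    defs = {}                                   # var -> event that last defined it
    deps = {}                                   # event -> set of events it read from
    for event_id, defined, used in trace:
        deps[event_id] = {defs[v] for v in used if v in defs}
        defs[defined] = event_id
    in_slice, work = set(), [criterion]
    while work:                                 # walk data dependences backwards
        e = work.pop()
        if e not in in_slice:
            in_slice.add(e)
            work.extend(deps.get(e, ()))
    return sorted(in_slice)

trace = [(1, "x", []), (2, "y", ["x"]), (3, "z", []), (4, "out", ["y"])]
print(backward_slice(trace, 4))                 # -> [1, 2, 4]; event 3 is irrelevant
```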

Added 2003-06-03

Software Debugging with Dynamic Instrumentation and Test-Based Knowledge

Hsin Pan

Developing effective debugging strategies to guarantee the reliability of software is important.  By analyzing the debugging process used by experienced programmers, we have found that four distinct tasks are consistently performed: (1) determining statements involved in program failures, (2) selecting suspicious statements that might contain faults, (3) making hypotheses about suspicious faults (variables and locations), and (4) restoring program state to a specific statement for verification.  This dissertation focuses on the second task, reducing the search domain for faults, referred to as fault localization.  A new approach to enhancing the process of fault localization is explored based on dynamic program slicing and mutation-based testing.  In this new scenario, a set of heuristics was developed to enable debuggers to highlight suspicious statements and thus to confine the search domain to a small region.  A prototype debugging tool, SPYDER, was previously constructed to support the first task by using dynamic program slicing and the fourth task by backward execution; some of the heuristics were integrated into SPYDER to demonstrate our approach.  An experiment confirms the effectiveness and feasibility of the proposed heuristics.  A new debugging paradigm equipped with our proposed fault localization strategies is expected to reduce human interaction time significantly and aid in the debugging of complex software.
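
As a hedged illustration of fault localization by narrowing the search domain (not the SPYDER heuristics themselves, which the abstract does not spell out), the sketch below ranks statements by how strongly their execution correlates with failing test runs; the coverage data is invented.

```python
# Coverage-based suspiciousness ranking: statements seen mostly in failing runs rank highest.
def rank_statements(passing_cov, failing_cov):
    """Each argument is a list of sets of executed statement ids, one set per test run."""
    stmts = set().union(*passing_cov, *failing_cov)
    scores = {}
    for s in stmts:
        fail = sum(s in cov for cov in failing_cov) / max(len(failing_cov), 1)
        ok = sum(s in cov for cov in passing_cov) / max(len(passing_cov), 1)
        scores[s] = fail / (fail + ok) if fail + ok else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

passing = [{1, 2, 4}, {1, 2, 5}]
failing = [{1, 3, 4}]
print(rank_statements(passing, failing))        # statement 3 ranks as most suspicious
```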

Added 2003-06-03

Software Testing Using High Performance Computers

Byoungju Choi

Reliable software testing is a time-consuming operation.  In addition to the time spent by the tester in identifying, locating, and correcting bugs, significant time is spent in the execution of the program under test and its instrumented or fault-induced variants.  When using mutation-based testing to achieve high reliability, the number of such variants can be large.  Providing a software testing tool that can efficiently exploit the architecture of a parallel machine implies providing more computing power to the software tester and hence an opportunity to improve the reliability of the product being developed.  In this thesis, we consider the problem of utilizing high performance computers to improve the quality of software.  We describe three approaches to the parallelization of mutant execution on three architectures: MIMD, Vector, and MIMD with vector processors.  We describe the architecture of the PMothra system designed to provide the tester a transparent interface to parallel machines.  A prototype, constructed by interfacing the Mothra system to an Ncube through a scheduler, was used to conduct the experiments reported in this dissertation.  Analysis of the algorithms developed and experimental results obtained on these three architectures are presented.  Our results enable us to conclude that the MIMD machine, as typified by the Ncube, is superior to some other architectures for mutation-based software testing.
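
The sketch below conveys why mutant execution parallelizes well on a MIMD machine: each mutant run is independent, so a pool of worker processes can execute them concurrently.  It is a generic illustration with a stubbed-out mutant runner, not the PMothra scheduler or its Ncube interface.

```python
# Generic parallel mutant execution sketch using a pool of worker processes.
from multiprocessing import Pool

def run_mutant(mutant_id):
    """Placeholder: build one mutant, run the test suite against it, report if it was killed."""
    return mutant_id, mutant_id % 3 != 0        # fake "killed" outcome for illustration

if __name__ == "__main__":
    mutants = range(100)                        # invented mutant ids
    with Pool(processes=8) as pool:
        results = pool.map(run_mutant, mutants) # mutants run independently in parallel
    killed = sum(1 for _, was_killed in results if was_killed)
    print(f"mutation score: {killed / len(results):.2f}")
```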

Added 2003-06-03

Planning Variable Information Assurance in Agent-Based Workflow Systems

Thomas Bellocci

The objective of this research was to investigate the planning of information assurance in agent-based workflow systems.  Two objectives were set: (1) defining information assurance in distributed information systems, such as ERP, and (2) planning agents for effective execution of assurance tasks in workflow systems.  To achieve the first objective, a TQM-based definition of information assurance was developed after a literature review, MICSS lab experiments, and an industry survey.  A list of requirements for information assurance was created.  The results of the lab experiments show the differing impact of information assurance failures in an ERP system on the profit and due-date performance of a company.  The fact that certain information failures are more critical than others motivated the implementation of variable assurance in workflow systems.  The responses to the industry survey proved that companies are facing Information Significance problems in their ERP systems, and that decision-makers are willing and able to wait for better assured information.  To achieve the second objective of planning effective execution of assurance tasks in agent-based workflow systems, two research tasks were performed: development of a model for variable assurance implementation, and investigation of different variable assurance protocols and agent modes.  For implementing variable assurance in agent-based workflow systems, the concepts of request analysis and context analysis for risk assessment were introduced.  These features were added to a recently developed multi-agent framework for process monitoring called the Agent-based Integration Model of Information Systems (AIMIS).  For investigating assurance protocols and agent modes, experiments were performed with AutoMod to simulate the execution of assurance and production tasks by agents.  The main findings are: (1) compared to systematic total information assurance, significant request-processing time can be saved by executing assurance tasks on an assurance-needs basis; the needs-based variable assurance protocol reduces the mean processing time of requests without decreasing the proportion of trusted requests; and (2) polyvalent agents are the recommended solution for assuring production requests when assurance tasks are serialized.
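
A hypothetical sketch of the needs-based variable assurance idea: each incoming request is risk-scored from its context, and only requests above a threshold incur the extra assurance tasks.  The scoring rule, fields, and threshold are invented for illustration and are not taken from AIMIS.

```python
# Invented needs-based variable assurance: assure only requests whose risk warrants it.
def risk_score(request):
    # Hypothetical context analysis: weight monetary value and source trust.
    score = request["value"] / 10_000
    if not request["trusted_source"]:
        score += 0.5
    return score

def process(request, threshold=0.6):
    tasks = ["execute"]
    if risk_score(request) >= threshold:
        # High-risk request: prepend assurance tasks before execution.
        tasks = ["verify_source", "cross_check_records"] + tasks
    return tasks

print(process({"value": 2_000, "trusted_source": True}))    # low risk: execute only
print(process({"value": 9_000, "trusted_source": False}))   # high risk: full assurance first
```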

Added 2003-06-03

A Storage Management System for a Reliable Distributed Operating System

David Vernon Pitts

Decentralization of computing systems has several attractions: performance enhancements due to increased parallelism; resource sharing; and the increased reliability and availability of data due to redundant copies of the data.  Providing these characteristics in a decentralized system requires proper organization of the system.  With respect to increasing the reliability of a system, one model which has proven successful is the object/action model, where tasks performed by the system are organized as sequences of atomic operations.  The system can determine which operations have been performed completely and so maintain the system in a consistent state.  This dissertation describes the design and a prototype implementation of a storage management system for an object-oriented, action-based decentralized kernel.  The storage manager is responsible for providing reliable secondary storage structures.  First, the dissertation shows how the object model is supported at the lowest levels in the kernel by the storage manager.  It also describes how storage management facilities are integrated into the virtual memory management provided by the kernel to support the mapping of objects into virtual memory.  All input and output to secondary storage is done via virtual memory management.  This dissertation discusses the role of the storage management system in locating objects, and a technique intended to short-circuit searches whenever possible by avoiding unnecessary secondary storage queries at each site.  It also presents a series of algorithms which support two-phase commit of atomic actions and then argues that these algorithms do indeed provide consistent recovery of object data.  These algorithms make use of virtual memory management information to provide recovery, and relieve the action management system of the maintenance of stable storage.
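
For reference, the sketch below is a textbook two-phase commit: the coordinator commits only if every participant votes yes in the prepare phase, otherwise all participants abort.  It is a generic illustration, not the dissertation's recovery algorithms or their use of virtual memory management information.

```python
# Textbook two-phase commit sketch with in-memory participants.
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: the participant votes on whether it can commit.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        # Phase 2: the participant applies the coordinator's decision.
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]         # phase 1: collect votes
    decision = all(votes)                               # commit only on unanimous yes
    for p in participants:
        p.finish(decision)                              # phase 2: broadcast the decision
    return decision

sites = [Participant("disk-A"), Participant("disk-B", can_commit=False)]
print(two_phase_commit(sites))                          # -> False: one site vetoed, all abort
```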

Added 2003-06-03