With the dramatic growth of information exchanges within and between organizations, major concerns emerge about the assurance of information. Without clear knowledge of the true needs for information assurance, a company may employ local, specialized solutions that are too restrictive or not comprehensive. On the other hand, cost-effective, variable integrity and variable security may be economically justifiable and adequate for certain situations and decisions. Therefore, a new definition of information assurance has been developed following the TQM approach. It describes assurance as a combination of information security, integrity, and significance. The requirements of information assurance are presented and justified on the basis of concrete results obtained from the lab experiments that were conducted. The experiments and results are briefly discussed in this paper.
A lab experiment has been performed using an ERP simulator to study the impact of information failure on the results of a company. Two scenarios have been considered: correct but delayed information, and wrong information. The influence of the length of the delay, of the error size, and of the dataset affected by the failure has also been studied. The analysis shows that:
- The consequences of a given information failure depend on the dataset in which the failure occurs.
- For a given dataset, the impact of an information failure depends on the failure type.
- The influence of the length of the delay depends on the dataset.
- The influence of the error size depends on the dataset.
So far, companies employ local, specialized solutions that are too restrictive or not comprehensive. The experiments presented in this paper economically justify the use of solutions with variable assurance in ERP systems. They also provide directions for the design of autonomous agents to handle these assurance problems.
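As a rough illustration of the experimental design described above (not the actual ERP simulator or data used in the study), the following Python sketch enumerates the factors that were varied: failure type (delayed vs. wrong information), delay length, error size, and the dataset in which the failure is injected. The factor levels, dataset names and the toy "impact" measure are hypothetical.

    import itertools
    import random

    # Hypothetical factor levels, loosely mirroring the experimental design.
    FAILURE_TYPES = ["delayed", "wrong"]
    DELAY_LENGTHS = [1, 2, 4]          # periods of delay (used for "delayed" only)
    ERROR_SIZES = [0.05, 0.20, 0.50]   # relative error (used for "wrong" only)
    DATASETS = ["sales_forecast", "inventory", "bill_of_materials"]

    def run_scenario(dataset, failure_type, delay=0, error=0.0):
        """Toy stand-in for one simulator run; returns a fabricated cost impact."""
        rng = random.Random(hash((dataset, failure_type, delay, error)))
        base = rng.uniform(0.0, 1.0)
        if failure_type == "delayed":
            return base * delay          # impact grows with delay in this toy model
        return base * error * 10.0       # impact grows with error size in this toy model

    results = []
    for dataset, ftype in itertools.product(DATASETS, FAILURE_TYPES):
        levels = DELAY_LENGTHS if ftype == "delayed" else ERROR_SIZES
        for level in levels:
            kwargs = {"delay": level} if ftype == "delayed" else {"error": level}
            results.append((dataset, ftype, level, run_scenario(dataset, ftype, **kwargs)))

    for row in sorted(results):
        print(row)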
Denial of service (DoS) attack on the Internet has become a pressing problem. In this paper, we describe and evaluate route-based distributed packet filtering (DPF), a novel approach to distributed DoS (DDoS) attack prevention. We show that DPF achieves proactiveness and scalability, and we show that there is an intimate relationship between the effectiveness of DPF at mitigating DDoS attack and power-law network topology.
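A minimal sketch of the idea behind route-based filtering follows (an illustrative toy, not the DPF algorithm or evaluation from the paper): each filtering node derives, from the routing topology, which source addresses can legitimately arrive over each of its incoming links, and drops packets whose claimed source is infeasible for the link they arrived on. The topology, routing (shortest paths) and node names below are hypothetical.

    from collections import defaultdict, deque

    # Hypothetical AS-level topology as an undirected adjacency list.
    TOPOLOGY = {
        "A": ["B", "C"],
        "B": ["A", "D"],
        "C": ["A", "D"],
        "D": ["B", "C", "E"],
        "E": ["D"],
    }

    def shortest_path(src, dst):
        """BFS shortest path from src to dst; returns the list of nodes on it."""
        prev, frontier, seen = {}, deque([src]), {src}
        while frontier:
            u = frontier.popleft()
            if u == dst:
                path = [u]
                while u != src:
                    u = prev[u]
                    path.append(u)
                return list(reversed(path))
            for v in TOPOLOGY[u]:
                if v not in seen:
                    seen.add(v)
                    prev[v] = u
                    frontier.append(v)
        return None

    # feasible[(node, incoming_neighbor)] = sources whose traffic can legitimately
    # reach `node` over the link from `incoming_neighbor`.
    feasible = defaultdict(set)
    nodes = list(TOPOLOGY)
    for s in nodes:
        for d in nodes:
            if s == d:
                continue
            path = shortest_path(s, d)
            for prev_hop, hop in zip(path, path[1:]):
                feasible[(hop, prev_hop)].add(s)

    def route_based_filter(node, incoming_neighbor, claimed_source):
        """Accept only packets whose claimed source can arrive over this link."""
        return claimed_source in feasible[(node, incoming_neighbor)]

    print(route_based_filter("D", "B", "A"))  # True: A's traffic to D and E traverses the B-D link
    print(route_based_filter("B", "D", "A"))  # False: no route from A reaches B over the D-B link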
Computer viruses are a worrying real-world problem, and a challenge to theoretical modelling. In this issue of the ‘Computer Journal’, Erkki Makinen proposes universal machines in a critique of an earlier paper, “A Framework for Modelling Trojans and Computer Virus Infection” (H. Thimbleby, S. O. Anderson and P. A. Cairns, Comp. J., 41(7):444-458, 1999). This short paper is a reply by those authors.
Alice has a private input x (of any data type, such as a number, a matrix or a data set). Bob has another private input y. Alice and Bob want to cooperatively conduct a specific computation on x and y without disclosing to the other person any information about her or his private input except for what could be derived from the results. This problem is a Secure Two-party Computation (STC) problem, which has been extensively studied in the past. Several generic solutions have been proposed to solve the general STC problem; however, the generic solutions are often too inefficient to be practical. Therefore, in this dissertation, we study several specific STC problems with the goal of finding more efficient solutions than the generic ones.
We introduce a number of specific STC problems in the domains of scientific computation, statistical analysis, computational geometry and database query. Most of the problems have not been studied before in the literature.
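As a toy illustration of the two-party setting (not one of the protocols developed in the dissertation), the Python sketch below uses additive secret sharing over a prime field so that Alice and Bob can jointly compute x + y: each party only ever sees a uniformly random share of the other's input, and only the agreed-upon output, which the problem statement permits, is revealed at the end. The prime and the toy inputs are assumptions for the example.

    import secrets

    P = 2**61 - 1  # a public prime; all arithmetic is modulo P

    def share(value):
        """Split a private value into two additive shares modulo P."""
        r = secrets.randbelow(P)
        return r, (value - r) % P

    # Alice's private input x, Bob's private input y (toy values).
    x, y = 1234, 5678

    # Each party splits its input and sends exactly one share to the other party.
    x_alice, x_bob = share(x)   # Alice keeps x_alice, sends x_bob to Bob
    y_alice, y_bob = share(y)   # Bob keeps y_bob, sends y_alice to Alice

    # Each party adds the shares it holds; neither partial sum reveals x or y alone.
    alice_partial = (x_alice + y_alice) % P
    bob_partial = (x_bob + y_bob) % P

    # Exchanging the partial sums reveals only the intended output x + y.
    result = (alice_partial + bob_partial) % P
    assert result == (x + y) % P
    print(result)  # 6912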
It is not possible to view a computer operating in the real world, including the possibility of Trojan Horse programs and computer viruses, as simply a finite realisation of a Turing Machine. We consider the actions of Trojan Horses and viruses in real computer systems and suggest a minimal framework for an adequate formal understanding of the phenomena. Some conventional approaches, including biological metaphors, are shown to be inadequate; some suggestions are made towards constructing virally-resistant systems.
The purpose of the study was to determine the effect of computer viruses on disaster recovery model development. Through a review of the literature and careful thought, the Susceptibilities/Assets/Frequencies and Expected Value Model was developed. The design of this model is unique in that it addresses the threat of computer viruses to organizational computing resources. The model consists of two concurrent processes. These processes are the management process and the prevention recovery process. The S.A.F.E. Model is intended to function as a tool that guides an organization through the systematic assessment of areas that are essential to the development of viral recovery strategies within the organization. Computer viruses are a dynamic threat. The S.A.F.E. Model represents an attempt to outline a process that can be utilized to develop prevention and recovery strategies to cope with this threat.
This paper describes a method of monitoring file integrity (changes in file contents) using a collection of embedded sensors within the kernel. An embedded sensor is a small piece of code designed to monitor a specific condition and report to a central logging facility. In our case, we have built several such sensors into the 4.4 BSD kernel (OpenBSD V2.7) to monitor for changes in file contents. The sensors look for files which are marked with a specific system flag in the inode. When the sensors detect a file with this flag, they will report all changes to file contents made through the file system interface. This provides administrators with a valuable audit tool and supplies more reporting granularity than conventional file system integrity checkers (such as Tripwire).
Our technique relies on only two fundamental file system characteristics. First, the file system object must have a provision for storing file characteristics (i.e. flags) within the object. Secondly, the file system must present a block device interface to the operating system.
We show that system performance is not severely hampered by the presence of this monitoring mechanism given the select set of files that would be monitored in a conventional system and the beneficial audit data that results from monitoring.
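The sensors described above hook the file system interface inside the kernel; the Python sketch below only illustrates the underlying idea at user level, in the style of a conventional integrity checker: a set of "flagged" files, a content digest as the baseline, and a report to a central log when contents change. The monitored paths, the periodic check and the logging destination are hypothetical stand-ins, not the OpenBSD kernel mechanism, which reports each change as it is made.

    import hashlib
    import logging

    logging.basicConfig(level=logging.INFO)   # stand-in for the central logging facility

    # In the kernel implementation the "monitor me" mark is an inode flag; here we
    # simply keep an explicit list of flagged paths.
    MONITORED = ["/etc/passwd", "/etc/hosts"]

    def digest(path):
        """Return a SHA-256 digest of the file contents, or None if unreadable."""
        try:
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()
        except OSError:
            return None

    baseline = {p: digest(p) for p in MONITORED}

    def check_once():
        """Compare current contents against the baseline and report any changes."""
        for path in MONITORED:
            current = digest(path)
            if current != baseline[path]:
                logging.info("integrity change detected: %s", path)
                baseline[path] = current

    if __name__ == "__main__":
        check_once()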
This dissertation introduces the concept of using internal sensors to perform intrusion detection in computer systems. It shows its practical feasibility and discusses its characteristics and related design and implementation issues.
We introduce a classification of data collection mechanisms for intrusion detection systems. At a conceptual level, these mechanisms are classified as direct and indirect monitoring. At a practical level, direct monitoring can be implemented using external or internal sensors. Internal sensors provide advantages with respect to reliability, completeness, timeliness and volume of data, in addition to efficiency and resistance against attacks.
We introduce an architecture called ESP as a framework for building intrusion detection systems based on internal sensors. We describe in detail a prototype implementation based on the ESP architecture and introduce the concept of embedded detectors as a mechanism for localized data reduction.
We show that it is possible to build both specific (specialized for a certain intrusion) and generic (able to detect different types of intrusions) detectors. Furthermore, we provide information about the types of data and places of implementation that are most effective in detecting different types of attacks.
Finally, performance testing of the ESP implementation shows the impact that embedded detectors can have on a computer system. Detection testing shows that embedded detectors have the capability of detecting a significant percentage of new attacks.
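To make the notion of an embedded (internal) detector concrete, here is a small Python sketch of a check placed directly inside the application code path it protects, reporting to a central log only when a specific suspicious condition occurs. The condition, function names, policy limit and log destination are illustrative assumptions, not part of the ESP prototype.

    import logging

    logging.basicConfig(level=logging.WARNING)
    audit_log = logging.getLogger("internal-sensor")   # hypothetical central facility

    MAX_USERNAME = 64   # policy limit assumed for this example

    def embedded_detector(username):
        """Internal sensor: runs inside the service code it instruments."""
        if len(username) > MAX_USERNAME:
            # Localized data reduction: only the event, not raw traffic, is reported.
            audit_log.warning("oversized username (%d bytes): possible overflow attempt",
                              len(username))
            return False
        if "\x00" in username:
            audit_log.warning("NUL byte in username: possible injection attempt")
            return False
        return True

    def login(username, password):
        """Application code with the detector embedded directly in it."""
        if not embedded_detector(username):
            return "rejected"
        # ... normal authentication would happen here ...
        return "ok"

    print(login("alice", "secret"))       # ok
    print(login("A" * 200, "secret"))     # rejected, and an audit event is logged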
Survivability and secure communications are essential in a mobile computing environment. In a secure network, all the hosts must be authenticated before communicating, and failure of the agents that authenticate the hosts may completely detach the hosts from the rest of the network. In this paper, we describe two techniques to eliminate such a single point of failure. Both of these approaches make use of backup servers, but they differ in the way they are organized and deployed. We evaluate our proposed architectures with respect to threats and performance issues in group (multicast) communications in mobile computing environments. We propose a scheme for efficient key distribution and management using key graphs to provide secure multicast service.
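As a sketch of the key-graph idea (a logical key tree in the general style of key graphs, not the specific scheme proposed in the paper), the Python fragment below builds a binary key tree over the group members; each member holds the keys on its path from its leaf to the root, so when a member leaves, only the O(log n) keys on that path need to be replaced rather than re-keying every member individually. The group size, key encoding and numbering are illustrative assumptions; the re-key messages (each new key encrypted under keys the remaining members already hold) are omitted.

    import secrets

    def build_key_tree(members):
        """Return node_id -> key for a complete binary key tree.

        Leaves are numbered n-1 .. 2n-2 (one per member); node 0 is the group key.
        """
        n = len(members)
        return {node: secrets.token_hex(16) for node in range(2 * n - 1)}

    def path_to_root(leaf):
        """Node ids of the keys a member at `leaf` holds, from its leaf key up to the group key."""
        node, path = leaf, []
        while node > 0:
            path.append(node)
            node = (node - 1) // 2
        path.append(0)
        return path

    members = ["m%d" % i for i in range(8)]
    n = len(members)
    tree = build_key_tree(members)

    # If member m5 (leaf n-1+5) leaves, every key on its path is compromised and
    # must be replaced: log2(n) + 1 keys instead of re-keying all n members.
    leaving_leaf = (n - 1) + 5
    to_replace = path_to_root(leaving_leaf)
    for node in to_replace:
        tree[node] = secrets.token_hex(16)
    print("replaced %d of %d keys" % (len(to_replace), len(tree)))   # replaced 4 of 15 keys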