The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



The Effect of Scheduling Discipline on Spin Overhead in Shared Memory Parallel Processors

John Zahorjan, Edward D. Lazowska

Spinning, or busy waiting, is commonly employed in parallel processors when threads of execution must wait for some event, such as synchronization with another thread. Because spinning is purely overhead, it is detrimental to both user response time and system throughput. In this paper we study how spinning is affected by two environmental factors, multiprogramming and data-dependent execution times, and how the choice of scheduling discipline can be used to reduce the amount of spinning in each case. Both environmental factors increase the variation in delay until the awaited event occurs. In the case of multiprogramming, the thread that will eventually generate the event must compete with an uncertain number of other threads for processors. In the case of data-dependent behavior, the delay until the event occurs may depend strongly on the data presented at runtime. We are interested both in the extent to which spin times increase over a simple baseline environment when multiprogramming or data-dependency is introduced, and in how this increase can be reduced through scheduling. The scheduling disciplines we examine are independent of the semantics of the parallel applications. They are thus applicable to the parallel solution of a wide variety of problems and to alternative approaches to solving any single problem, independently of the algorithm employed. Our analysis is conducted using simple, abstract models of parallel hardware and software. We investigate two software models, one representing the archetypical construct of fork/join rendezvous and the other mutual exclusion using a lock. Our hardware model is most naturally applied to shared memory multiprocessors, since we assume that each thread is capable of running on many different processors during its execution with negligible penalty. Both simulation and exact analytic techniques are used to obtain performance results.
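
The contrast the abstract draws between spinning and blocking can be seen in a small sketch. This is purely illustrative (the paper uses abstract analytic and simulation models, not code), and the thread structure and delay below are assumptions:

    import threading
    import time

    ready = False                 # flag the spinning thread polls
    event = threading.Event()     # blocking alternative to spinning

    def spin_waiter():
        # Busy waiting: the processor is consumed by the polling loop
        # even though no useful work is done until the event occurs.
        while not ready:
            pass
        print("spin waiter proceeds")

    def block_waiter():
        # Blocking: the thread gives up the processor until signaled.
        event.wait()
        print("blocking waiter proceeds")

    threading.Thread(target=spin_waiter).start()
    threading.Thread(target=block_waiter).start()

    time.sleep(0.1)               # stands in for a data-dependent delay
    ready = True                  # the awaited event occurs
    event.set()

(In CPython this is only a conceptual illustration; the interpreter's global lock means the spin loop does not model real multiprocessor contention.)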

Added 2002-07-26

Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism

Thomas E. Anderson, Brian N. Bershad, Edward D. Lazowska, Henry M. Levy

‘Threads’ are the vehicle for concurrency in many approaches to parallel programming. Threads separate the notion of a sequential execution stream from the other aspects of traditional UNIX-like processes, such as address spaces and I/O descriptors. The goal is to make the expression and control of parallelism sufficiently cheap that the “natural parallel decomposition” of an application can be exploited by the programmer or compiler with acceptable overhead, even in the case where this decomposition is relatively fine-grained. Threads can be supported either by the operating system kernel or by user-level library code in the application address space. Neither approach has been fully satisfactory. The performance of kernel-level threads, although at least an order of magnitude better than that of traditional processes, has been at least an order of magnitude ‘worse’ than that of user-level threads. User-level threads, on the other hand, have suffered from a lack of system integration, leading to poor performance or even incorrect behavior in the presence of “real world” factors such as multiprogramming, I/O, and page faults. Thus the parallel programmer has been faced with a difficult dilemma: employ kernel-level threads, which “work right” but perform poorly, or employ user-level threads, which typically perform well but are sometimes seriously deficient. This paper addresses this dilemma. First, we argue that the performance of kernel-level threads is inherently worse than that of user-level threads, rather than this being an artifact of existing implementations; we thus argue that managing parallelism at the user level is essential to high-performance parallel computing. Next, we argue that the lack of system integration exhibited by user-level threads is a consequence of inadequate kernel support in contemporary multiprocessor operating systems; we thus argue that kernel-level threads or processes, as currently conceived, are the ‘wrong abstraction’ on which to support user-level management of parallelism. Finally, we describe the design, implementation, and performance of a new kernel interface and user-level thread package that together provide the same functionality as kernel-level threads without compromising the performance and flexibility advantages of managing parallelism at the user level within each application's address space.

Added 2002-07-26

A Dynamic Processor Allocation Policy for Multiprogrammed, Shared Memory Multiprocessors

Cathy McCann, Raj Vaswani, John Zahorjan

We propose and evaluate empirically the performance of a dynamic processor scheduling policy for multiprogrammed, shared memory multiprocessors. The policy is dynamic in that it reallocates processors from one parallel job to another based on the currently realized parallelism of those jobs. The policy is suitable for implementation in production systems in that:
- it interacts well with very efficient user-level thread packages, leaving to them many low-level thread operations that do not require kernel intervention;
- it deals with thread blocking due to user I/O and page faults;
- it ensures fairness in delivering resources to jobs;
- its performance, measured in terms of average job response time, is superior to that of previously proposed schedulers, including those implemented in existing systems;
- it provides good performance to very short, sequential (e.g., interactive) requests.
We have evaluated our scheduler and compared it to alternatives using a set of prototype implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel applications with distinct qualitative behaviors, we have both evaluated the policies according to the major criterion of overall performance and examined a number of more general policy issues, including the advantage of “space sharing” over “time sharing” the processors of a multiprocessor, the importance of cooperation between the kernel and the applications in reallocating processors between jobs, and the impact of scheduling policy on an application's cache behavior. We have also compared the policies according to other criteria important in real implementations: fairness, resiliency to countermeasures, and response time to short, sequential requests. We conclude that a combination of performance and implementation considerations makes a compelling case for our dynamic scheduling policy.
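
A minimal sketch of the kind of parallelism-driven reallocation described above, assuming a simple "give each job no more processors than it has runnable threads, as evenly as possible" rule; this is an illustration, not the authors' exact policy:

    def allocate(processors, runnable):
        """Split `processors` among jobs according to their currently
        realized parallelism.  `runnable` maps job name -> number of
        runnable threads.  Illustrative sketch only."""
        allocation = {job: 0 for job in runnable}
        remaining = processors
        while remaining > 0:
            # Jobs that can still make use of another processor.
            candidates = [j for j in runnable if allocation[j] < runnable[j]]
            if not candidates:
                break
            # Give the next processor to the job furthest below its demand.
            neediest = min(candidates, key=lambda j: allocation[j] / runnable[j])
            allocation[neediest] += 1
            remaining -= 1
        return allocation

    # Job A has 6 runnable threads, job B has 2, on a 4-processor machine.
    print(allocate(4, {"A": 6, "B": 2}))   # {'A': 3, 'B': 1} with this tie-breaking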

Added 2002-07-26

Scheduling a Mixed Interactive and Batch Workload on a Parallel, Shared Memory Supercomputer

Ashok Immaneni, John Zahorjan

Scientific computations are the ‘raison d'etre’ of supercomputers. These computations have very large resource requirements, not only in terms of the number of calculations they must make but also in terms of the amount of storage they must use. The primary goal of a supercomputer system is to provide these resources as efficiently as possible. Besides servicing large-scale production computations, a supercomputer must also accommodate the steady stream of interactive jobs that arises because of program development and initial debugging of new applications on test data sets. While the resource requirements of these jobs are relatively small, by their nature they demand reasonable response times. Because the batch and interactive workloads compete for resources, one of the objectives of a supercomputer scheduler is to balance the high resource consumption rate of the batch workload against the response time requirements of the interactive workload. In doing this the scheduler must consider both the processor and memory demands of the jobs. In this paper we analyze three approaches to scheduling mixed batch and interactive workloads on a supercomputer: (i) Fixed Partition, in which memory resources are statically allocated between the workloads; (ii) No Partition, in which the interactive workload preempts resources as needed from the batch workload; and (iii) No Partition With Grouped Admission, in which the interactive workload preempts resources only when the number of waiting interactive jobs reaches a threshold value. We also investigate the potential benefits of using virtual memory to make resources appear instantaneously available to interactive jobs. Using analytic tools, we compare the different policies according to the average speedup achieved by the batch workload given that a mean interactive job response time objective must be met by each. We show that, under a wide variety of conditions, Fixed Partition performs better than the other policies.
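
A sketch of how the three admission rules differ; the threshold value and the way free partition memory is modeled are assumptions made for illustration, not parameters taken from the paper:

    def admit_interactive(policy, waiting_interactive, free_partition_memory,
                          threshold=4):
        """Return True if a waiting interactive job may be admitted now.
        policy is "fixed", "no_partition", or "grouped" (illustrative model)."""
        if policy == "fixed":
            # Interactive jobs run only in their statically reserved partition.
            return waiting_interactive > 0 and free_partition_memory > 0
        if policy == "no_partition":
            # Interactive jobs preempt batch resources as soon as they arrive.
            return waiting_interactive > 0
        if policy == "grouped":
            # Preempt only once a group of interactive jobs has accumulated,
            # amortizing the disruption to the batch workload.
            return waiting_interactive >= threshold
        raise ValueError(policy)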

Added 2002-07-26

The Performance Implications of Thread Management Alternatives for Shared-Memory Multiprocessors

Thomas E. Anderson, Edward D. Lazowska, Henry M. Levy
Download: PDF

Threads (“lightweight” processes) have become a common element of new languages and operating systems. This paper examines the performance implications of several data structure and algorithm alternatives for thread management in shared-memory multiprocessors. Both experimental measurements and analytical model projections are presented. For applications with fine-grained parallelism, small differences in thread management are shown to have significant performance impact, often posing a tradeoff between throughput and latency. Per-processor data structures can be used to improve throughput, and in some circumstances to avoid locking, improving latency as well. The method used by processors to queue for locks is also shown to affect performance significantly. Normal methods of critical resource waiting can substantially degrade performance with moderate numbers of waiting processors. We present an Ethernet-style backoff algorithm that largely eliminates this effect.
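
The Ethernet-style backoff idea can be sketched as a spin lock that waits a randomized, exponentially growing interval between acquisition attempts. The constants below, and the use of a Python lock as a stand-in for an atomic test-and-set, are assumptions for illustration; this is not the paper's implementation:

    import random
    import threading
    import time

    class BackoffSpinLock:
        """Spin lock with randomized exponential backoff between attempts,
        reducing contention when many waiters compete for the lock."""

        def __init__(self, initial_delay=1e-6, max_delay=1e-3):
            self._flag = threading.Lock()   # stands in for atomic test-and-set
            self._initial = initial_delay
            self._max = max_delay

        def acquire(self):
            delay = self._initial
            while not self._flag.acquire(blocking=False):   # the "test-and-set"
                # Back off for a random interval, doubling the bound each time,
                # so waiters do not retry in lockstep.
                time.sleep(random.uniform(0, delay))
                delay = min(delay * 2, self._max)

        def release(self):
            self._flag.release()

    lock = BackoffSpinLock()
    counter = 0

    def worker():
        global counter
        for _ in range(1000):
            lock.acquire()
            counter += 1
            lock.release()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)   # 4000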

Added 2002-07-26

Reliable Scheduling of Database Transactions

Marc H. Graham, Nancy Griffeth, Barbara Smith-Thomas

A database transaction is usually required to have an “all-or-none” property; that is, either all of its changes to the database are installed in the database or none of them are. This property can be violated by a hardware or software failure occurring in the middle of the execution of a transaction, or even by a decision to abort a transaction to avoid an inconsistent (non-serializable) schedule. In such a situation, we must recover from the failure by restoring the database to a state in which the failed transaction never executed. The idea of “recoverable schedules” was defined in [H83]. In a recoverable schedule, a transaction does not commit until there is no possibility that it will be rolled back. If the underlying hardware or software is unreliable, this must not occur until all transactions which have written values read by the transaction have themselves committed.
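
The recoverability condition can be checked mechanically over a schedule: a transaction may commit only after every transaction whose writes it has read has already committed. A minimal sketch, assuming a simple tuple representation of the schedule:

    def is_recoverable(schedule):
        """schedule is a list of (txn, action, item) tuples, with action in
        {"read", "write", "commit"}; item is ignored for commits.
        Returns True if no transaction commits before the transactions
        it read from have committed (illustrative representation)."""
        last_writer = {}    # item -> transaction that last wrote it
        reads_from = {}     # txn -> set of txns whose writes it has read
        committed = set()

        for txn, action, item in schedule:
            if action == "write":
                last_writer[item] = txn
            elif action == "read":
                writer = last_writer.get(item)
                if writer is not None and writer != txn:
                    reads_from.setdefault(txn, set()).add(writer)
            elif action == "commit":
                if not reads_from.get(txn, set()) <= committed:
                    return False
                committed.add(txn)
        return True

    # T2 reads T1's write of x but commits before T1 does: not recoverable.
    bad = [("T1", "write", "x"), ("T2", "read", "x"),
           ("T2", "commit", None), ("T1", "commit", None)]
    print(is_recoverable(bad))   # False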

Added 2002-07-26

Hashing with SL2

Jean-Pierre Tillich, Gilles Zémor

We propose a new family of hash functions based on computations over a finite field of characteristic 2. These functions can be computed quickly, detect small modifications of the input text, and their security is equivalent to a precise mathematical problem. They rely on the arithmetic of the group of matrices SL2, and improve upon previous functions based on the same strategy.
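
A toy sketch of the construction over the small field GF(2^8): each input bit selects one of two generator matrices, and the hash is the running matrix product. The generators below follow the form commonly given for the Tillich-Zémor proposal, but the field size, reduction polynomial, and bit ordering are illustrative assumptions; a real instantiation uses a much larger field:

    IRRED = 0x11B       # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)
    DEGREE = 8

    def gf_mul(a, b):
        """Multiply two elements of GF(2^8) represented as integers."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            b >>= 1
            a <<= 1
            if a >> DEGREE:
                a ^= IRRED
        return result

    def mat_mul(m, n):
        """Multiply 2x2 matrices over GF(2^8); addition is XOR."""
        (a, b), (c, d) = m
        (e, f), (g, h) = n
        return ((gf_mul(a, e) ^ gf_mul(b, g), gf_mul(a, f) ^ gf_mul(b, h)),
                (gf_mul(c, e) ^ gf_mul(d, g), gf_mul(c, f) ^ gf_mul(d, h)))

    # Generators (both have determinant 1, so they lie in SL2 of the field);
    # here 2 represents X, a root of the reduction polynomial.
    A = ((2, 1), (1, 0))    # consumes a 0 bit
    B = ((2, 3), (1, 1))    # consumes a 1 bit

    def sl2_hash(data):
        """Hash bytes by multiplying one generator per input bit."""
        h = ((1, 0), (0, 1))            # identity matrix
        for byte in data:
            for i in range(7, -1, -1):  # most significant bit first
                h = mat_mul(h, B if (byte >> i) & 1 else A)
        return h

    print(sl2_hash(b"abc"))
    print(sl2_hash(b"abd"))   # a small change in the input yields a different matrix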

Added 2002-07-26

Consider the Crank

Carl Pfluger
Added 2002-07-26

The Case for "Clipper"

Dorothy E. Denning

The U.S. Government has launched a program to expand security and privacy protection for electronic communications while preserving the Government's ability to conduct authorized wiretaps. Despite attacks from civil libertarians, the approach is the best way to balance individual privacy with the social good.

Added 2002-07-26

Perils and Pitfalls of Practical CyberCommerce - The Lessons of First Virtual's First Year

Nathaniel S. Borenstein, John Ferguson, Jerry Hall, Carlyn Lowery, Rich Mintz, Darren New, Beverly Parenti, Marshall Rose, Einar Stefferud, Lee Stein, Carey Storm, Ed Vielmetti, Marc Weiser, Pierre-R. Wolff

Unlike many would-be players in the field of Internet commerce, First Virtual (tm) chose to announce its payment system only after it was fully operational and to operate it initially with relatively little publicity hype, while learning from the experience of its use. In its first year of operation, it has experienced exponential growth, and the company has gained substantial experience with and insight into the nature of Internet Commerce. In this paper, the First Virtual team discusses the lessons we have learned from a year’s experience with the actual operation of an Internet commerce system, and the prospects for the future.

Added 2002-07-26

Software Testing Techniques

Boris Beizer
Added 2002-07-26

A Case Study Of Evaluating Security In An Open Systems Environment

Daniel Tobat, Errol S. Weiss

The goal of this paper is to describe a case study of a computer security evaluation effort conducted on a system known as the Office Automation Network (OAN). The OAN is representative of many of today's networked systems in being a heterogeneous mix of system components connected to open systems such as the Internet. The OAN differs from typical systems in that security was a design and implementation objective, and in that it was subjected to an extensive six-month evaluation effort by an experienced vulnerability testing team. The vulnerability testing yielded some surprising results which demonstrated that it is possible in today's environment to have an Automated Information System (AIS) connected to open systems such as the Internet and still have an effective security posture.

Added 2002-07-26

Software Testing and Reliability

Roger S. Pressman

The importance of software testing and its implications with respect to reliability cannot be overemphasized. To quote Deutsch (1): “The development of software systems involves a series of production activities where opportunities for injection of human fallibilities are enormous. Errors may begin to occur at the very inception of the process, where the objectives…may be erroneously or imperfectly specified, as well as (errors that occur in) later design and development stages…Because of human inability to perform and communicate with perfection, software development is accompanied by a quality assurance activity.” Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and coding. The increased visibility of software as a system element and the attendant “costs” associated with a software failure are motivating forces for well-planned, thorough testing. It is not unusual for a software development organization to expend 40 percent of total project effort on testing. In the extreme, testing of human-rated software (e.g., flight control or nuclear reactor monitoring) can cost 3 to 5 times as much as all other software engineering steps combined! In this chapter we discuss three interrelated topics. The first, software testing, is a planned step in the software engineering process. Like other steps, deliverables derived from testing become part of the software configuration. Testing invariably leads to the second topic of discussion, debugging. More an art than a science, debugging diagnoses program errors and corrects them. The results of testing can also lead to a consideration of reliability, the third topic. We strive to guarantee software reliability through testing, while at the same time developing failure prediction models to help anticipate problems. At the time of this writing, we must rely on a series of thorough test steps as the only practical “guarantee” of software reliability.

Added 2002-07-26

Security in Open Systems

Michel Dupuy
Added 2002-07-26

Legal Aspects of Ice-Pick Testing

Dr. Bruce C. Gabrielson

The Ice-Pick package is a window-driven program that provides a multi-layered approach to network testing. The automated tool is used to identify frequently exploited security problems present on well-known UNIX-based operating systems. Information provided by testing is used to determine what protective mechanisms need to be implemented by network administrators. The paper deals with two issues of primary concern: the user's legal basis for performing vulnerability identification testing, and the consequences of unauthorized use or release of the software itself. It is essential for self-protection that the tester understands what he or she can legally do with a tool such as Ice-Pick. The issue of trust can also affect users. Trusting each user to protect Ice-Pick against unauthorized release is essential for absolute control of the technology involved. The structure of this document allows traceability from top-level law through applicable Navy regulation. The most important points are the understanding of what monitoring involves, and knowing what the Ice-Pick test tool can be used for. The use of other penetration-type testing tools, such as SATAN, will not be discussed, nor will the regulatory requirements of non-Navy organizations. However, the discussion can be applied to using similar test tools in other organizations.

Added 2002-07-26