Abstract
We propose and evaluate empirically the performance of a dynamic processor scheduling policy
for multiprogrammed, shared memory multiprocessors. The policy is dynamic in that it
reallocates processors from one parallel job to another based on the currently
realized parallelism of those jobs. The policy is suitable for implementation in production
systems in that:
- it interacts well with very efficient user-level thread packages, leaving to them
many low-level thread operations that do not require kernel intervention.
- it deals with thread blocking due to user I/O and page faults.
- it ensures fairness in delivering resources to jobs.
- its performance, measured in terms of average job response time, is superior to that
of previously proposed schedulers, including those implemented in existing systems.
- it provides good performance for very short, sequential (e.g., interactive) requests.
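The core reallocation idea described above can be illustrated with a minimal sketch: processors are divided as evenly as possible among jobs, but no job receives more than its currently realized parallelism, so processors freed by jobs with low parallelism flow to jobs that can use them. This is an illustration of the general idea only, not the paper's implementation; the function and parameter names (`allocate`, `demands`) are hypothetical.

```python
def allocate(total_procs, demands):
    """Sketch of dynamic space sharing: hand out processors one at a
    time, round-robin, to jobs still below their realized parallelism
    (their 'demand'), until processors or demand run out."""
    alloc = {job: 0 for job in demands}
    remaining = total_procs
    while remaining > 0:
        # Jobs that can still make use of another processor.
        hungry = [job for job in demands if alloc[job] < demands[job]]
        if not hungry:
            break  # total demand is below the processor count
        for job in hungry:
            if remaining == 0:
                break
            alloc[job] += 1
            remaining -= 1
    return alloc
```

For example, with 8 processors and jobs whose realized parallelism is A=6, B=2, C=5, job B is capped at 2 processors and the remainder is split evenly between A and C. When a job's parallelism changes, rerunning the allocation reassigns processors accordingly, which is what makes the policy dynamic.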
We have evaluated our scheduler and compared it to alternatives using a set of prototype
implementations running on a Sequent Symmetry multiprocessor. Using a number of parallel
applications with distinct qualitative behaviors, we have both evaluated the policies
according to the major criterion of overall performance and examined a number of more
general policy issues, including the advantage of "space sharing" over "time sharing"
the processors of a multiprocessor, the importance of cooperation between the kernel
and the applications in reallocating processors between jobs, and the impact of
scheduling policy on an application's cache behavior. We have also compared the
policies according to other criteria important in real implementations: fairness,
resiliency to countermeasures, and response time to short, sequential requests. We
conclude that a combination of performance and implementation considerations makes
a compelling case for our dynamic scheduling policy.