Abstract
Securing today's computer systems requires embedding intrusion detection sensors within them. The complexity of distributed computer systems makes it difficult to choose and place these detectors appropriately: many candidate sensors exist, and each can be placed at several possible points in the distributed system. In this paper, we describe a method to evaluate the effect a detector configuration has on the accuracy and precision with which attainment of the system's security goals is determined. The method is based on a Bayesian network model, obtained from an attack graph representation of multi-stage attacks. We use Bayesian inference to determine the likelihood that an attack goal has been achieved, given a certain set of detector alerts, and we quantify overall performance as a function of the placement, quality, and uncertainty in our knowledge of the detectors. Based on these observations, we implement a dynamic programming algorithm for determining the optimal detector settings in a large-scale distributed system and compare it against a previously proposed greedy algorithm. Finally, we report the results of five experiments measuring the Bayesian network's behavior in the context of two real-world distributed systems undergoing attacks. The results show that the dynamic programming algorithm outperforms the greedy algorithm in terms of the benefit provided by the chosen set of detectors. The dynamic programming solution also has the desirable property that running time can be traded off against how close the solution is to optimal.
Keywords
security configuration, detector placement, Bayesian network, intrusion detection, attack graph
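As a rough illustration of the kind of budgeted detector selection the dynamic programming algorithm performs, the sketch below solves a 0/1 knapsack over hypothetical (cost, benefit) pairs. The detector names, integer costs, and benefit values are invented for the example; the paper's actual benefit function, which is derived from the Bayesian network, is not reproduced here.

```python
def select_detectors(detectors, budget):
    """Pick the subset of detectors maximizing total benefit within a cost budget.

    detectors: list of (name, integer cost, benefit) tuples -- all hypothetical.
    Returns (best total benefit, frozenset of chosen detector names).
    """
    # dp[c] = (best benefit achievable with total cost <= c, detectors chosen)
    dp = [(0.0, frozenset())] * (budget + 1)
    for name, cost, benefit in detectors:
        # Iterate costs downward so each detector is used at most once.
        for c in range(budget, cost - 1, -1):
            candidate = dp[c - cost][0] + benefit
            if candidate > dp[c][0]:
                dp[c] = (candidate, dp[c - cost][1] | {name})
    return dp[budget]

# Example with made-up detectors and a budget of 5 cost units.
best_benefit, chosen = select_detectors(
    [("net", 2, 3.0), ("host", 3, 4.0), ("app", 4, 5.5)], budget=5)
print(best_benefit, sorted(chosen))
```

The running-time versus optimality trade-off mentioned in the abstract typically comes from how finely costs or benefits are discretized: a coarser grid shrinks the DP table and runs faster at the price of a solution further from optimal.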