The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive


U.S. Bank of Cyber: An analysis of Cyber Attacks on the U.S. Financial System

CERIAS TR 2014-3
Crimmins, Falk, Fowler, Gravel, Kouremetis, Poremski, Sitarz, Sturgeon, Zhang
Download: PDF

The following paper examines past cyber attacks on the United States financial industry, analyzing attack patterns by individuals, groups, and nation-states to determine whether the industry really is under attack. The paper first defines the terms used, then explains the theory and paradigm of cyber attacks on the U.S. financial industry. A graphical and detailed timeline of known cyber attacks on the U.S. financial industry, reaching from 1970 through 2014, follows. Four attack cases are chosen to be researched in summary and four are chosen to be researched in depth. These cases include: Kalinin & Nasenkov, Mt. Gox, a stock market manipulation scheme, Project Blitzkrieg, the Union Dime Savings Bank embezzlement, the National Bank of Chicago wire heist, and an attempted Citibank heist. An analysis then explores attack origination from individuals, groups, and/or nation-states, as well as the types of attacks and any patterns seen. After the attacks are gathered and the timeline created, a taxonomy of attacks is derived from the analysis of the attack data. A Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis is then applied to the case study of Heartland Payment Systems.

Added 2014-05-14

Improved Kernel Security Through Code Validation, Diversification, and Minimization

CERIAS TR 2013-19
Dannie M. Stanley
Download: PDF

The vast majority of hosts on the Internet, including mobile clients, are running one of three commodity, general-purpose operating system families. In such operating systems the kernel software executes at the highest processor privilege level. If an adversary is able to hijack the kernel software then by extension he has full control of the system. This control includes the ability to disable protection mechanisms and hide evidence of compromise.

The lack of diversity in commodity, general-purpose operating systems enables attackers to craft a single kernel exploit that has the potential to infect millions of hosts. If enough variants of the vulnerable software exist, then mass exploitation is much more difficult to achieve. We introduce novel kernel diversification techniques to improve kernel security.
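
As a simplified illustration of why diversification blunts mass exploitation, the sketch below (hypothetical Python, not the mechanism from the thesis itself) gives each kernel build its own randomized system-call numbering, so an exploit payload that hardcodes the numbers for one build fails on every other:

    # Illustrative sketch of interface diversification: each build gets a
    # unique, random system-call numbering, so an exploit payload that
    # hardcodes syscall numbers for one build fails on all others.
    # (Hypothetical example; not the technique from the thesis itself.)
    import random

    SYSCALLS = ["read", "write", "open", "close", "execve", "mmap"]

    def diversified_syscall_table(build_seed: int) -> dict:
        """Assign per-build random numbers to each system call."""
        rng = random.Random(build_seed)
        numbers = rng.sample(range(0, 512), len(SYSCALLS))
        return dict(zip(SYSCALLS, numbers))

    build_a = diversified_syscall_table(build_seed=1)
    build_b = diversified_syscall_table(build_seed=2)

    # An exploit compiled against build A invokes execve by number; on
    # build B that number maps to a different call (or none at all).
    print(build_a["execve"], build_b["execve"])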

Added 2014-04-07

Mapping Water Sector Cyber-Security Vulnerabilities

CERIAS TR 2012-15
James H. Graham, Jeffrey L. Hieb and J. Chris Foreman
Download: PDF

This paper identifies, characterizes, maps, and prioritizes cyber-vulnerabilities in the industrial control systems used throughout the Water Sector (which includes both drinking water and wastewater treatment facilities). The report discusses both technical vulnerabilities and business/operational challenges, with a concentration on the technical issues. The priority order is based upon the research team’s review of the “Road Map to Secure Control Systems in the Water Sector,” DHS Control Systems Security Program documents, a CSET-CS2SAT evaluation, comments from the project advisory board, and individual discussions with water sector personnel.

Added 2014-04-04

A Curriculum Model for Industrial Control Systems Cyber-Security with Sample Modules

CERIAS TR 2012-14
J. Chris Foreman, James H. Graham, Jeffrey L. Hieb, Rammohan K. Ragade
Download: PDF

Cyber-security has been a topic of interest for several decades, and much work has been done in this area. Historically, industrial control systems (ICS) have been an island, both figuratively and literally, as they have utilized closed, proprietary systems air-gapped from the outside world. As these systems are now being incorporated into the corporate Wide Area Network (WAN) and subsequently exposed to the Internet at large, they are at risk of cyber attack. The educational arena is still considerably lacking in producing new professionals and training existing ones to combat this new threat. As ICS are often in control of critical infrastructure, they are increasingly becoming targets of terroristic cyber attacks. A discussion of the educational deficits and a proposed solution are presented, along with sample modules and a class evaluation.

Added 2014-04-04

Mapping Dams Sector Cyber-Security Vulnerabilities

CERIAS TR 2014-01
J. Chris Foreman, James H. Graham, and Jeffrey L. Hieb
Download: PDF

Vulnerabilities in the cyber-security of industrial control systems as used in the Dams Sector are identified, analyzed, and prioritized. These vulnerabilities span both organizational and technical aspects of operational control in the Dams Sector. The research team has completed projects in both the Water and Dams Sectors for the Department of Homeland Security, as recent attacks on these and other critical infrastructure sectors have become more prevalent. The analysis is based on the expert knowledge of the research team, interviews with field personnel, tours of field locations, and input from an associated project advisory board.

Added 2014-04-04

Reliability and Cyber-Security Assessment of Telehealth Systems

CERIAS TR 2014-2
Karla Welch, J. Chris Foreman, James H. Graham, Mostafa Farag, Melinda Whitfield Thomas, Phil Womble
Download: PDF

This paper presents the results from a recent evaluation of the reliability and cyber-security vulnerability of telemedicine/telehealth systems used today in the United States. As this technology becomes more widely used in an effort to reduce costs and better serve remote and isolated populations, these issues will undoubtedly become more pronounced. This paper presents some background information about telemedicine/telehealth systems and an overview of recognized reliability and cyber-security risks. It then describes a national survey of equipment and software vendors for telemedicine/telehealth, discusses a risk scoring system developed as part of the project, and presents overall results and recommendations for future work in this area.
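
The abstract does not detail the project's scoring system, so the following is a purely hypothetical sketch of how a weighted risk-scoring scheme of this kind might combine per-factor ratings into one score; the factors, weights, and 1-5 scale are invented for illustration:

    # A minimal, hypothetical weighted risk-scoring sketch. The report's
    # actual scoring system is not reproduced here; all factors, weights,
    # and the 1-5 rating scale below are assumptions.
    RISK_FACTORS = {          # factor -> weight (assumed)
        "unencrypted_transmission": 0.30,
        "weak_authentication":      0.25,
        "unpatched_software":       0.25,
        "no_audit_logging":         0.20,
    }

    def risk_score(ratings: dict) -> float:
        """Combine per-factor ratings (1 = low risk .. 5 = high risk)
        into a single weighted score on the same 1-5 scale."""
        return sum(RISK_FACTORS[f] * r for f, r in ratings.items())

    device = {"unencrypted_transmission": 4, "weak_authentication": 3,
              "unpatched_software": 5, "no_audit_logging": 2}
    print(f"risk score: {risk_score(device):.2f} / 5")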

Added 2014-04-04

Multi-Finger Recognition

CERIAS TR 2009-39
Dan Bowman, Mithun Vaidhyanathan, Alexander Miller
Download: PDF

Fingerprint verification is a commonly used modality in biometric identification. As biometric fingerprint verification continues to be incorporated into many facets of worldwide society, it is prudent that multiple factors in image acquisition be accepted as industry standards to facilitate and ensure that the information security community can seamlessly integrate technologies.

There is also a need to better understand how fingerprint recognition systems can match against multiple fingers without creating the potential for greater security holes.

The impetus for this paper is to better understand how combinations of multiple fingers affect match scores against a common threshold and to determine, if one exists, an optimal number and combination of fingers to match against that yields the lowest possibility of both false accepts and false rejects.
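
As an illustration of the question being asked, the following sketch (invented fusion rule, scores, and threshold; not the paper's experimental protocol) averages per-finger match scores and tests every combination of fingers against a common threshold:

    # Sketch of score-level fusion for multi-finger matching: per-finger
    # match scores are averaged and compared against a common threshold.
    # The fusion rule, scores, and threshold are assumed for illustration.
    from itertools import combinations

    THRESHOLD = 0.70          # common accept threshold (assumed)
    scores = {"R-thumb": 0.82, "R-index": 0.64, "L-thumb": 0.77,
              "L-index": 0.58}

    def fused_decision(fingers: tuple) -> bool:
        """Accept if the mean of the selected fingers' scores clears
        the common threshold."""
        mean = sum(scores[f] for f in fingers) / len(fingers)
        return mean >= THRESHOLD

    # Exhaustively try every combination size to see which ones accept.
    for k in range(1, len(scores) + 1):
        for combo in combinations(scores, k):
            if fused_decision(combo):
                print(k, combo)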

Added 2014-03-05

Neighborhood Overhearing for Detection of Security Attacks in Wireless Sensor Networks

CERIAS TR 2013-18
Matthew Tan Creti

An attractive approach for securing sensor networks has been behavior-based detection of malicious actions performed through overhearing traffic in the neighborhood. This approach has been applied toward detection of different kinds of network security attacks, building trust relationships, and also for non-security functions such as providing an implicit acknowledgment. However, observations on a wireless channel are known to be imperfect, both due to the intrinsic nature of the channel and contention from other concurrent flows. An open question has been whether any higher level protocol that relies on overhearing can be useful in light of such imperfections. This thesis addresses that question through the design and implementation of an overhearing scheme, called local monitoring, that monitors the communication functionality of neighboring nodes. The answer, derived through experiments on a sensor network testbed, is that neighborhood observation is useful for certain network configurations and parameter settings. The significant settings are node density and the threshold for determining a node to be malicious. For specificity, we apply local monitoring to the detection of the highly disruptive wormhole attack. We design customized structures and algorithms for detection of anomalous events that optimize computational, memory, and bandwidth usage. These include a method for discretizing the events observed by a node for the purpose of determining malicious behavior. We also present a novel method for launching the wormhole attack and develop a countermeasure based on local monitoring. Experiments demonstrate the quality of detection measured through latency and rates of correct and false detection.
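
A minimal sketch of threshold-based local monitoring, with invented numbers rather than the thesis's parameters, might count overheard forwarding failures per neighbor and accuse a node once a threshold is crossed:

    # Sketch of threshold-based local monitoring: a node counts how often
    # an overheard neighbor fails to forward packets it should relay, and
    # flags the neighbor as malicious once the failure count crosses a
    # threshold. Values are illustrative, not from the thesis.
    from collections import Counter

    DETECTION_THRESHOLD = 5   # failures before a node is accused (assumed)

    class LocalMonitor:
        def __init__(self):
            self.failures = Counter()
            self.accused = set()

        def observe(self, neighbor: str, forwarded: bool) -> None:
            """Record one overheard forwarding opportunity. Imperfect
            overhearing means some 'failures' are really channel losses,
            which is why the threshold must be tuned to node density."""
            if not forwarded:
                self.failures[neighbor] += 1
                if self.failures[neighbor] >= DETECTION_THRESHOLD:
                    self.accused.add(neighbor)

    m = LocalMonitor()
    for dropped in [False, True] * 6:      # simulated observations
        m.observe("node-17", forwarded=not dropped)
    print(m.accused)                        # {'node-17'}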

Added 2014-02-28

High Accuracy, Lightweight Methods for Network Measurement Services

CERIAS TR 2013-17
Sriharsha Gangam

Network monitoring is indispensable for maintaining and managing networks efficiently. With increasing network traffic in ISP, enterprise, and cloud environments, it is challenging to provide low-overhead monitoring services without sacrificing accuracy. In this dissertation, we present techniques that enable measurement systems and services to have (1) high measurement accuracy and (2) low measurement overhead. In the context of active measurements, shared active measurement services have been proposed to provide a common and safe environment in which to conduct measurements. By adapting to user measurement requests, we present solutions to (1) selectively use inference mechanisms and (2) schedule active measurements in a non-interfering manner. These techniques reduce measurement overhead and improve accuracy for an active measurement service. In the context of passive flow-based measurement systems, this dissertation introduces Pegasus, a monitoring system that leverages co-located compute and storage devices to support aggregation queries. Using Pegasus, we present IFA (Iterative Feedback Aggregator), a technique to accurately detect global icebergs and network anomalies at a low communication cost. Finally, we present ALE (Approximate Latency Estimator), a scalable and low-overhead technique to estimate TCP round-trip times at high data rates for troubleshooting network performance problems.
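
For a flavor of the aggregation queries involved, a naive one-shot global iceberg detector (illustrative only; IFA's contribution is doing this iteratively, with feedback, at much lower communication cost) sums per-monitor counts and keeps the heavy hitters:

    # Sketch of global iceberg detection by aggregation: each monitor
    # reports local per-key counts, and a coordinator flags keys whose
    # global sum crosses a threshold. This one-shot version only
    # illustrates the aggregation query itself, not IFA.
    from collections import Counter

    GLOBAL_THRESHOLD = 100    # iceberg cutoff (assumed)

    def find_icebergs(local_counts: list) -> dict:
        """Sum per-monitor counts and keep keys over the threshold."""
        total = Counter()
        for counts in local_counts:
            total.update(counts)
        return {k: v for k, v in total.items() if v >= GLOBAL_THRESHOLD}

    monitors = [Counter({"10.0.0.5": 60, "10.0.0.9": 10}),
                Counter({"10.0.0.5": 55, "10.0.0.7": 30})]
    print(find_icebergs(monitors))   # {'10.0.0.5': 115}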

Added 2014-02-28

Increasing Scalability in Network Simulation and Testbed Experiments

CERIAS TR 2013-16
Wei-Min

One of the major challenges that network researchers and operators face today is the lack of reliable and scalable network testbeds. Since it is often infeasible to perform experiments directly on a production network or build analytical models for complex systems, researchers often resort to simulation or downscaled testbed experiments. However, designing a downscaled experiment that can faithfully represent a large-scale experiment is often challenging. The results of a non-representative experiment can be misleading, and unexpected bugs may not be discovered until the Internet protocol or application is deployed into an operational network. In this work, we present two solutions to enable large-scale network experiments. Our first solution, flow-based scenario partitioning (FSP), is a platform-independent mechanism to partition a large network experiment into a set of small experiments that are sequentially executed. Each of the small experiments can be conducted on a given number of experimental nodes, e.g., the available machines on a testbed. Results from the small experiments approximate the results that would have been obtained from the original large experiment. Experimental results from several simulation and testbed experiments demonstrate that our techniques approximate performance characteristics, even with closed-loop traffic and congested links. Our second solution, EasyScale, aims to bridge the current gap between emulation testbed users and large-scale security experiments, possibly using multiple scaling techniques. EasyScale is a new framework for easily configuring a large-scale network security experiment on an emulation testbed. Multiple scaling techniques, such as full and OS-level virtualization, can be used for different parts of the input experimental topology in order to balance scalability and fidelity. The EasyScale resource allocation scheme considers user-specified fidelity requirements. Additional resources are allocated to the experiment components that are considered to be highly important, in order to increase the experimental fidelity. Our results from distributed denial of service and worm attack experiments demonstrate that EasyScale can easily allocate testbed resources to the critical components in an experiment, lowering the barrier for testbed users to conduct high fidelity yet scalable network security experiments.
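
The core idea behind FSP can be caricatured in a few lines: group a large experiment's flows into sets small enough for the available nodes and run the sets one after another, then merge the results. The round-robin grouping below is a naive stand-in for FSP's actual partitioning algorithm:

    # Sketch of the idea behind flow-based scenario partitioning: split a
    # large experiment's flows into groups small enough for the available
    # testbed nodes and execute the groups sequentially. The grouping here
    # is a naive stand-in, not FSP's real algorithm.
    def partition_flows(flows: list, nodes_available: int,
                        nodes_per_flow: int = 2) -> list:
        """Group flows so each small experiment fits on the testbed."""
        per_round = max(1, nodes_available // nodes_per_flow)
        return [flows[i:i + per_round]
                for i in range(0, len(flows), per_round)]

    flows = [("src%d" % i, "dst%d" % i) for i in range(10)]
    for round_no, group in enumerate(partition_flows(flows, nodes_available=6)):
        # Each group would run as its own downscaled experiment; the
        # sequential results approximate the original large run.
        print("experiment", round_no, "->", group)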

Added 2014-02-28

Mining Roles and Access Control for Relational Data Under Privacy and Accuracy Constraints

CERIAS TR 2013-15
Zahid Pervaiz

Access control mechanisms protect sensitive information from unauthorized users. However, when sensitive information is shared and a Privacy Protection Mechanism (PPM) is not in place, an authorized insider can still compromise the privacy of a person, leading to identity disclosure. A PPM can use suppression and generalization to anonymize data and satisfy privacy requirements, e.g., k-anonymity and l-diversity. However, the protection of privacy is achieved at the cost of the precision of authorized information. In this thesis, we propose an accuracy-constrained privacy-preserving access control framework for static relational data and data streams. The access control policies define the selection predicates available to roles and the associated imprecision bound. The PPM has to satisfy the privacy requirement along with the imprecision bound for each selection predicate. We prove the hardness of the problem, propose heuristics for anonymization algorithms, and show empirically that the proposed approach satisfies imprecision bounds for more queries than the current state of the art. We also formulate the problem of predicate role mining for the extraction of authorized selection predicates and propose an approximate algorithm. The access control for stream data allows roles access to tuples satisfying an authorized predicate sliding-window query. The generalization introduces imprecision into the authorized view of the stream. This imprecision can be reduced by delaying the publishing of stream data. However, the delay in sharing stream tuples with the access control mechanism can lead to false negatives. The challenge is to optimize the time duration for which the data is held by the PPM so that the imprecision bounds for the maximum number of queries are met. We present hardness results, provide an anonymization algorithm, and conduct an experimental evaluation of the proposed algorithm.
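
A toy example of the precision cost of generalization (invented values and naive fixed-size groups, not the thesis's algorithms): a range predicate over k-anonymized buckets must return every overlapping bucket in full, and the extra tuples are the imprecision that the framework bounds per predicate:

    # Sketch of the accuracy/privacy tension: generalizing an attribute
    # into k-sized groups satisfies k-anonymity but makes a selection
    # predicate imprecise, since whole overlapping groups must be
    # returned. Values and k are assumed for illustration.
    K = 3                       # anonymity requirement (assumed)

    ages = [23, 24, 25, 41, 43, 44, 45, 67, 68]
    buckets = [ages[i:i + K] for i in range(0, len(ages), K)]  # naive k-groups

    def answer(lo: int, hi: int) -> tuple:
        """Answer 'lo <= age <= hi' over generalized buckets: return
        (true matches, tuples returned). The difference is the
        imprecision that the access control policy bounds."""
        true_matches = sum(lo <= a <= hi for a in ages)
        returned = sum(len(b) for b in buckets
                       if b[-1] >= lo and b[0] <= hi)  # overlapping buckets
        return true_matches, returned

    print(answer(40, 50))       # (4, 6): two extra tuples of imprecision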

Added 2014-02-28

A Multi-policy Framework for Mitigating Insider Threat in Healthcare Domain

CERIAS TR 2013-14
Zahid Pervaiz

Access control policies in the healthcare domain define permissions for users to access different medical records. Role-Based Access Control (RBAC) helps restrict medical records to users in a certain role, but sensitive information in medical records can still be compromised by authorized insiders. The disclosure of sensitive medical information can create an embarrassing situation for a patient or even cause discrimination based on a medical ailment. The threat is from users who are not treating the patient but have access to the medical records. We propose a selective combination of policies where sensitive records are available only to the primary doctor under Discretionary Access Control (DAC), who may share them for consultation after permission from the patient. This not only supports better compliance with the principle of least privilege but also helps mitigate the threat of authorized insiders disclosing sensitive patient information. We use the Policy Machine (PM) proposed by the National Institute of Standards and Technology (NIST) to combine policies and develop a flexible healthcare access control policy that has the benefits of context awareness and discretionary access. We have implemented temporal constraints for RBAC in the PM and, after combining Generalized Temporal Role Based Access Control (GTRBAC) and DAC, established an example healthcare scenario.
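
A minimal sketch of the proposed conjunction, with invented names and far simpler semantics than the Policy Machine: a sensitive record is readable only when a role permits it and the owning (primary) doctor has delegated access:

    # Sketch of combining RBAC with discretionary sharing: a user can read
    # a sensitive record only if a role permits it AND the record's owning
    # primary doctor has explicitly delegated access. Policy Machine
    # semantics are far richer; this illustrates only the conjunction.
    ROLE_PERMITS = {"doctor": {"read_record"}, "nurse": set()}

    class SensitiveRecord:
        def __init__(self, owner: str):
            self.owner = owner          # primary doctor (DAC owner)
            self.delegates = set()      # doctors granted access by owner

        def share_with(self, requester: str, doctor: str) -> None:
            """Owner-only delegation, e.g. after patient consent."""
            if requester == self.owner:
                self.delegates.add(doctor)

        def can_read(self, user: str, role: str) -> bool:
            rbac_ok = "read_record" in ROLE_PERMITS.get(role, set())
            dac_ok = user == self.owner or user in self.delegates
            return rbac_ok and dac_ok

    rec = SensitiveRecord(owner="dr_kim")
    print(rec.can_read("dr_lee", "doctor"))   # False: right role, no grant
    rec.share_with("dr_kim", "dr_lee")        # consultation after consent
    print(rec.can_read("dr_lee", "doctor"))   # True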

Added 2014-02-21

Improving Security Using Deception

CERIAS TR 2013-13
Mohammed Almeshekah, Eugene H. Spafford, Mikhail J. Atallah
Download: PDF

As the convergence between our physical and digital worlds continues at a rapid pace, much of our information is becoming available online. In this paper we develop a novel taxonomy of methods and techniques that can be used to protect digital information. We discuss how information has been protected and show how we can structure our methods to achieve better results. We explore complex relationships among protection techniques ranging from denial and isolation, to degradation and obfuscation, through negative information and deception, ending with adversary attribution and counter-operations. We present an analysis of these relationships and discuss how they can be applied at different scales within organizations. We also identify some of the areas that are worth further investigation. We map these protection techniques against the cyber kill-chain model and discuss some findings.

Moreover, we identify the use of deceptive information as a useful protection method that can significantly enhance the security of systems. We posit how the well-known Kerckhoffs’s principle has been misinterpreted to drive the security community away from deception-based mechanisms. We examine the advantages these techniques can have when protecting our information, in addition to traditional methods of hiding and hardening. We show that by intelligently introducing deceptive information into information systems, we not only lead attackers astray, but also give organizations the ability to detect leakage; create doubt and uncertainty in any leaked data; add risk on the adversaries’ side to using the leaked information; and significantly enhance our ability to attribute adversaries. We discuss how to overcome some of the challenges that hinder the adoption of deception-based techniques and present some recent work, our own contributions, and some promising directions for future research.
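
One concrete instance of deceptive information used for leak detection is a honeytoken. The sketch below is a hypothetical illustration of the idea, not a system from the paper: fake records that no legitimate workflow ever touches are seeded into a store, and any later appearance of one is treated as evidence of leakage:

    # Sketch of deception for leak detection: plant fake 'honeytoken'
    # credentials that legitimate workflows never use, then treat any
    # later sighting of a token as evidence that data leaked, giving a
    # thread for attribution. Hypothetical example of the idea.
    import secrets

    honeytokens = set()

    def plant_honeytoken() -> dict:
        """Create a decoy credential and remember its marker."""
        token = secrets.token_hex(8)
        honeytokens.add(token)
        return {"user": f"svc_{token[:4]}", "api_key": token}

    def check_for_leak(observed_credentials: list) -> bool:
        """Any honeytoken seen in the wild means the real data likely
        leaked by the same path."""
        return any(tok in honeytokens for tok in observed_credentials)

    decoy = plant_honeytoken()
    print(check_for_leak(["deadbeef", decoy["api_key"]]))  # True: leak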

Added 2014-01-18

Kinesis: A Security Incident Response and Prevention System for Wireless Sensor Networks

CERIAS TR 2013-12
Salmin Sultana, Daniele Midi, Elisa Bertino
Download: PDF

Due to resource constraints, unattended operating environment, and communication phenomena, Wireless Sensor Networks (WSNs) are susceptible to operational failures and security attacks. However, WSNs must be able to continuously provide their services despite anomalies or attacks and to effectively recover from attacks. In this paper, we propose Kinesis - the first systematic approach to a security incident response and prevention system for WSNs. We take a declarative approach to support the specification of the response policies, based on which Kinesis selects the response actions. The system is distributed in nature, dynamic in actions depending on the context, quick and effective in response, and secure. We implement Kinesis in TinyOS. Testbed experiments and extensive TOSSIM simulations show that the system successfully counteracts anomalies/attacks and behaves consistently under various attack scenarios and rates.
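
As a rough illustration of the declarative approach (invented rules and actions; Kinesis's actual policy language is richer), response selection can be a lookup from event and context to action:

    # Sketch of declarative incident response in the spirit of Kinesis:
    # policies map (event, context) to a response action, and the runtime
    # simply executes the first matching rule. Rules and actions here are
    # invented for illustration.
    POLICIES = [
        # (event type, context predicate, response action)
        ("packet_drop",   lambda ctx: ctx["rate"] > 0.5, "reroute_traffic"),
        ("packet_drop",   lambda ctx: True,              "log_only"),
        ("replay_attack", lambda ctx: True,              "revoke_node_keys"),
    ]

    def respond(event: str, ctx: dict) -> str:
        """Select the first policy whose predicate matches the context."""
        for ev, predicate, action in POLICIES:
            if ev == event and predicate(ctx):
                return action
        return "no_action"

    print(respond("packet_drop", {"rate": 0.8}))   # reroute_traffic
    print(respond("packet_drop", {"rate": 0.1}))   # log_only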

Added 2013-12-31

Distributed Digital Forensics on Pre-Existing Internal Networks

CERIAS TR 2013-11
Jeremiah J Nielsen
Download: PDF

Today’s large datasets are a major hindrance to digital investigations and have led to a substantial backlog of media that must be examined. While this media sits idle, its associated investigation must also sit idle, inducing investigative time lag. This study created a client/server application architecture that operated on an existing pool of internally networked Windows 7 machines. This distributed digital forensic approach helps address the scalability concerns of other approaches while also being financially feasible. Text search runtimes and match counts were evaluated using several scenarios, including a 100 GB image with prefabricated data. When compared to FTK 4.1, a 125-times speedup was observed in the best case and a three-times speedup in the worst case. These rapid search times nearly obviate the need for long indexing processes when analyzing digital evidence, allowing for faster digital investigations.
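
The fan-out at the heart of such an architecture can be sketched as follows. A real deployment needs a network transport between the server and the client machines and forensically sound access to media; this sketch (local processes standing in for networked workers) models only the parallel keyword search and merge:

    # Sketch of distributed forensic search: a coordinator splits the
    # evidence into shards, each worker counts keyword hits in its shard
    # in parallel, and the counts are merged. Local processes stand in
    # for the pre-existing networked Windows machines.
    from concurrent.futures import ProcessPoolExecutor

    def search_shard(args: tuple) -> int:
        """Count keyword hits in one worker's share of the evidence."""
        keyword, lines = args
        return sum(keyword in line for line in lines)

    def distributed_search(keyword: str, evidence: list,
                           workers: int = 4) -> int:
        shards = [evidence[i::workers] for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(search_shard,
                                [(keyword, s) for s in shards]))

    if __name__ == "__main__":
        lines = ["transfer to account 42", "lunch plans", "account 42 closed"]
        print(distributed_search("account 42", lines * 1000))  # 2000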

Added 2013-12-16