The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Software and Hardware Approaches for Record and Replay of Wireless Sensor Networks

CERIAS TR 2015-12
Matthew Tan Creti
Download: PDF

Wireless Sensor Networks (WSNs) are used in a wide variety of applications including environmental monitoring, electrical grids, and manufacturing plants. WSNs are plagued by the possibility of bugs manifesting only at deployment. However, debugging deployed WSNs is challenging for several reasons—the remote location of deployed nodes, the non-determinism of execution, and the limited hardware resources available. A primary debugging mechanism, record and replay, logs a trace of events while a node is deployed, such that the events can be replayed later for debugging. Existing recording methods for WSNs are not resource efficient enough to capture all sources of non-determinism; they therefore cannot capture the complete code execution, which rules out a faithful replay and causes some bugs to go unnoticed. We have designed, developed, and verified two novel approaches to solve the problem of practical record and replay for WSNs. Our first approach, Aveksha, uses additional hardware to trace tasks and other generic events at the function and task level. Aveksha does not need to stop the target processor, making it non-intrusive. Using Aveksha we have discovered a previously unknown bug in a common operating system. Our second approach, Tardis, uses only software to deterministically record and replay WSN nodes. Tardis is able to record all sources of non-determinism, based on the observation that such information is compressible using a combination of techniques specialized for the respective sources. We demonstrate Tardis by diagnosing a newly discovered routing protocol bug.
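
To make the record-and-replay idea concrete, the sketch below shows the general technique in Python: log every non-deterministic input (here, sensor readings), compress the log with a source-specific encoding, and later feed the decoded log back so execution is deterministic. All names are illustrative assumptions; this is not the Tardis implementation.

    # Illustrative sketch (not the Tardis implementation): record the
    # non-deterministic inputs of a node -- here, sensor readings -- and
    # replay them later so execution is deterministic.
    import struct
    import zlib

    class Recorder:
        def __init__(self):
            self.samples = []              # raw non-deterministic values
        def read_sensor(self, sensor):
            value = sensor()               # non-deterministic at deployment time
            self.samples.append(value)
            return value
        def save(self):
            # Delta-encode then compress: slowly changing readings compress well,
            # the kind of source-specific compression the abstract refers to.
            deltas = [self.samples[0]] + [b - a for a, b in zip(self.samples, self.samples[1:])]
            return zlib.compress(struct.pack("<%di" % len(deltas), *deltas))

    class Replayer:
        def __init__(self, blob):
            raw = zlib.decompress(blob)
            deltas = struct.unpack("<%di" % (len(raw) // 4), raw)
            self.samples, total = [], 0
            for d in deltas:
                total += d
                self.samples.append(total)
            self.pos = 0
        def read_sensor(self, _sensor=None):
            value = self.samples[self.pos]  # deterministic: comes from the trace
            self.pos += 1
            return value

    # Usage sketch: the same application code runs against Recorder in the field
    # and against Replayer in the lab, producing identical executions.
    rec = Recorder()
    rec.read_sensor(lambda: 21); rec.read_sensor(lambda: 22)
    rep = Replayer(rec.save())
    assert rep.read_sensor() == 21 and rep.read_sensor() == 22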

Added 2015-08-28

Control-Flow Bending: On the Effectiveness of Control-Flow Integrity

Nicholas Carlini, Antonio Barresi, Mathias Payer, David Wagner, and Thomas R. Gross

Control-Flow Integrity (CFI) is a defense which prevents control-flow hijacking attacks. While recent research has shown that coarse-grained CFI does not stop attacks, fine-grained CFI is believed to be secure.

We argue that assessing the effectiveness of practical CFI implementations is non-trivial and that common evaluation metrics fail to do so. We then evaluate fully-precise static CFI, the most restrictive CFI policy that does not break functionality, and reveal limitations in its security. Using a generalization of non-control-data attacks which we call Control-Flow Bending (CFB), we show how an attacker can leverage a memory corruption vulnerability to achieve Turing-complete computation on memory using just calls to the standard library. We use this attack technique to evaluate fully-precise static CFI on six real binaries and show that in five out of six cases, powerful attacks are still possible. Our results suggest that CFI may not be a reliable defense against memory corruption vulnerabilities.

We further evaluate shadow stacks in combination with CFI and find that their presence is necessary for security: deploying shadow stacks removes arbitrary code execution capabilities of attackers in three of six cases.
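
The difference between a CFI set check and a shadow stack can be seen in a few lines. The Python sketch below is purely illustrative (it is not the paper's evaluation harness, and the addresses are made up): CFI only asks whether a return target belongs to a statically valid set, while the shadow stack asks whether it is the exact site of the matching call.

    # Minimal sketch of a shadow stack alongside a CFI return-target check.
    class ControlFlowViolation(Exception):
        pass

    shadow_stack = []
    valid_return_sites = set()     # CFI's static over-approximation

    def checked_call(return_site):
        valid_return_sites.add(return_site)
        shadow_stack.append(return_site)

    def checked_return(target):
        if target not in valid_return_sites:
            raise ControlFlowViolation("CFI: target is never a legitimate return site")
        expected = shadow_stack.pop()
        if target != expected:
            # CFI alone would have allowed this; the shadow stack does not.
            raise ControlFlowViolation("shadow stack: return to %#x, expected %#x"
                                       % (target, expected))

    # Example: an attacker redirects a return to a *different* valid return site.
    checked_call(0x401000)
    checked_call(0x402000)
    checked_return(0x402000)       # legitimate
    try:
        checked_return(0x402000)   # passes the CFI set check, fails the shadow stack
    except ControlFlowViolation as e:
        print(e)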

Added 2015-08-20

CAIN: Silently Breaking ASLR in the Cloud

Antonio Barresi, Kaveh Razavi, Mathias Payer, and Thomas R. Gross

Modern systems rely on Address-Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) to protect software against memory corruption vulnerabilities. The security of ASLR depends on randomizing regions in memory which can be broken by leaking addresses. While information leaks are common for client applications, server software has been hardened to reduce such information leaks.

Memory deduplication is a common feature of Virtual Machine Monitors (VMMs) that reduces the memory footprint and increases the cost-effectiveness of virtual machines (VMs) running on the same host. Memory pages with the same content are merged into one read-only memory page. Writing to these pages is expensive due to page faults caused by the memory protection, and this cost can be used by an attacker as a side-channel to detect whether a page has been shared. Leveraging this memory side-channel, we craft an attack that leaks the address-space layouts of the neighboring VMs, and hence, defeats ASLR. Our proof-of-concept exploit, CAIN (Cross-VM ASL INtrospection), defeats ASLR of a 64-bit Windows Server 2012 victim VM in less than 5 hours (for 64-bit Linux victims the attack takes several days). Further, we show that CAIN reliably defeats ASLR, regardless of the number of victim VMs or the system load.
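
The timing primitive the attack builds on can be illustrated in a few lines. The Python sketch below is not the CAIN exploit; it only shows the measurement idea: time the first write to each page and flag pages whose write is slow, which in the real cross-VM setting indicates a copy-on-write fault on a deduplicated page. The threshold value and page count are placeholder assumptions.

    # Illustrative timing primitive (not the CAIN exploit): time a write to a
    # page and compare it against a calibrated threshold. In the real attack the
    # slow case is a copy-on-write fault on a page the VMM has deduplicated with
    # a page of a victim VM.
    import mmap
    import time

    PAGE = 4096

    def timed_write(buf, offset):
        start = time.perf_counter()
        buf[offset] = (buf[offset] + 1) % 256   # first write may trigger a CoW fault
        return time.perf_counter() - start

    def probe(buf, n_pages, threshold_s):
        """Return the indices of pages whose first write was 'slow'."""
        return [i for i in range(n_pages)
                if timed_write(buf, i * PAGE) > threshold_s]

    if __name__ == "__main__":
        n_pages = 64
        buf = mmap.mmap(-1, n_pages * PAGE)
        # The threshold must be calibrated on the target system; the value below
        # is a placeholder, and within a single process no dedup fault occurs.
        print(probe(buf, n_pages, threshold_s=1e-4))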

Added 2015-08-20

Fine-Grained Control-Flow Integrity through Binary Hardening

Mathias Payer, Antonio Barresi, and Thomas R. Gross

Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations.

We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring source code. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown incurs low performance overhead, and a security analysis discusses the remaining gadgets.
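
The cross-module restriction described above can be sketched as a simple lookup: an indirect call that crosses a shared-object boundary is only allowed to target a symbol that the callee module exports and the caller imports. The Python fragment below is only illustrative (module and symbol names are invented, and Lockdown's actual enforcement operates on binaries, not dictionaries).

    # Sketch of the import/export restriction (illustrative, not Lockdown itself).
    exports = {                        # would come from the trusted dynamic loader
        "libc.so":   {"malloc", "free", "printf"},
        "libssl.so": {"SSL_read", "SSL_write"},
    }
    imports = {
        "app": {"libc.so": {"malloc", "free"}, "libssl.so": {"SSL_read"}},
    }

    def allow_cross_module_call(caller, callee_module, symbol):
        return (symbol in exports.get(callee_module, set())
                and symbol in imports.get(caller, {}).get(callee_module, set()))

    assert allow_cross_module_call("app", "libc.so", "malloc")        # imported and exported
    assert not allow_cross_module_call("app", "libc.so", "printf")    # exported but not imported
    assert not allow_cross_module_call("app", "libssl.so", "SSL_write")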

Added 2015-08-20

Risk-Aware Sensitive Properties Driven Resource Management in Cloud Datacenters

Abdulrahman Almutairi, Muhammad Felmban, and Arif Ghafoor

For efficient management of resources and economic benefits, organizations are increasingly moving towards the paradigm of “cloud computing,” which allows on-demand delivery of hardware, software, and data as services. However, many security challenges are exacerbated by the multitenancy and virtualization features of cloud computing, which allow resources to be shared among potentially untrusted tenants in access-controlled cloud datacenters and can result in an increased risk of data leakage. To address this vulnerability, we propose an efficient risk-aware virtual resource assignment mechanism for the cloud’s multitenant environment. In particular, we propose a global property/knowledge-driven profile model for an RBAC policy. For this purpose we use two properties based on KL-divergence and mutual information extracted from a check-in dataset. Based on the vulnerabilities of the cloud architecture and the knowledge profile, we formulate a resource scheduling problem as an optimization pertaining to risk management. The problem is shown to be NP-complete. Accordingly, we propose two heuristics and present their simulation-based performance results for HSD and LSD datacenters.
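
For reference, the two information-theoretic quantities the profile model draws on are standard; for discrete distributions they are defined as (standard definitions, not formulas specific to this paper):

    D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
    I(X;Y) = \sum_{x}\sum_{y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}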

Added 2015-08-14

Using Deception to Enhance Security: A Taxonomy, Model, and Novel Uses

CERIAS TR 2015-11
Mohammed H. Almeshekah
Download: PDF
Added 2015-07-27

Final Report of the Computer Incident Factor Analysis and Categorization (CIFAC) Project Volume 1: College and University Sample

Virginia E. Rezmierski; Daniel M. Rothschild; Anamaria S. Kazanis; Rick D. Rivas
Added 2015-07-21


Trustworthy Data from Untrusted Databases

Rohit Jain
Download: PDF

Increasingly, data are subjected to environments that can result in invalid (malicious or inadvertent) modifications to the data, for example when the database is hosted on a third-party server, or when there is a threat of insider or hacker attack. Ensuring the trustworthiness of data retrieved from a database is of utmost importance to users. In this dissertation, we address the question of whether a data owner can be assured that the data retrieved from an untrusted server are trustworthy. In particular, we reduce the level of trust necessary in order to establish the trustworthiness of data. Earlier work in this domain is limited to situations where there are no updates to the database, or all updates are authorized and vetted by a central trusted entity. This is an unreasonable assumption for a truly dynamic database, as would be expected in many business applications, where multiple users can access (read or write) the data without being vetted by a central server. The legitimacy of data stored in a database is defined by the faithful execution of only valid (authorized) operations. Decades of database research have resulted in solutions that ensure the integrity and consistency of data through principles such as transactions, concurrency, ACID properties, and access control rules. These solutions were developed under the assumption that threats arise due to failures (computer crashes, disk failures, etc.), limitations of hardware, and the need to enforce access control rules. However, the semantics of these principles assume complete trust in the database server. Considering the lack of trust that arises from the untrusted environments that databases are subjected to, we need mechanisms to ensure that database operations are executed following these principles. In this dissertation, we revisit some of these principles to understand what we should expect when a transaction execution follows them. We propose mechanisms to verify that the principles were indeed followed by the untrusted server while executing the transactions.
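
The dissertation's own constructions are not reproduced here, but a widely used building block for the general problem of verifying answers from an untrusted server is an authenticated data structure such as a Merkle hash tree. The Python sketch below illustrates only that flavor of "verify, don't trust": the owner keeps a small root digest, and any returned record must come with a proof that recomputes to that root.

    # Illustrative only: a minimal Merkle-tree style check, not the dissertation's
    # mechanism. The server returns a record plus a proof; the client verifies it
    # against a root digest it trusts.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(x) for x in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves, index):
        """Sibling hashes needed to recompute the root for leaves[index]."""
        level, proof = [h(x) for x in leaves], []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            sibling = index ^ 1
            proof.append((level[sibling], sibling < index))
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(leaf, proof, root):
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    rows = [b"alice:100", b"bob:250", b"carol:75", b"dave:10"]
    root = merkle_root(rows)                     # kept by the data owner / clients
    proof = merkle_proof(rows, 1)
    assert verify(b"bob:250", proof, root)       # server's answer checks out
    assert not verify(b"bob:999", proof, root)   # a tampered answer does not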

Added 2015-06-30

Secure platforms for enforcing contextual access control

Aditi Gupta

Advances in technology and the wide-scale deployment of network-enabled portable devices such as smartphones have made it possible to provide pervasive access to sensitive data to authorized individuals from any location. While this has certainly made data more accessible, it has also increased the risk of data theft, as the data may be accessed from potentially unsafe locations in the presence of untrusted parties. Smartphones come with various embedded sensors that can provide rich contextual information, such as sensing the presence of other users in a context. Frequent context profiling can also allow a mobile device to learn its surroundings and infer the familiarity and safety of a context. This can be used to further strengthen the access control policies enforced on a mobile device. Incorporating contextual factors into access control decisions requires that one be able to trust the information provided by these context sensors. This requires that the underlying operating system and hardware be well protected against attacks from malicious adversaries. In this work, we explore how contextual factors can be leveraged to infer the safety of a context. We use a context profiling technique to gradually learn a context’s profile, infer its familiarity and safety, and then use this information in the enforcement of contextual access policies. While intuitive security configurations may be suitable for non-critical applications, other security-critical applications require a more rigorous definition and enforcement of contextual policies. We thus propose a formal model for proximity that allows one to define whether two users are in proximity in a given context, and then extend the traditional RBAC model by incorporating these proximity constraints. Trusted enforcement of contextual access control requires that the underlying platform be secured against various attacks such as code reuse attacks. To mitigate these attacks, we propose a binary diversification approach that randomizes the target executable with every run. We also propose a defense framework based on control flow analysis that detects, diagnoses, and responds to code reuse attacks in real time.
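
A toy version of layering a contextual constraint on an RBAC check is sketched below. It is not the thesis's formal proximity model; the roles, permissions, and the "no unknown devices nearby" rule are invented purely to illustrate how a context sensor reading can gate an otherwise authorized request.

    # Toy sketch: a permission is granted only if the user holds an authorized
    # role AND a proximity/context constraint is met.
    role_permissions = {"doctor": {"read_record", "write_record"},
                        "nurse":  {"read_record"}}

    def proximity_ok(context, max_unknown_devices=0):
        unknown = [d for d in context["nearby_devices"]
                   if d not in context["known_devices"]]
        return len(unknown) <= max_unknown_devices

    def check_access(user_roles, permission, context):
        rbac_ok = any(permission in role_permissions.get(r, set()) for r in user_roles)
        return rbac_ok and proximity_ok(context)

    ward = {"known_devices": {"tablet-17", "monitor-3"},
            "nearby_devices": {"tablet-17", "monitor-3"}}
    cafe = {"known_devices": {"tablet-17"},
            "nearby_devices": {"tablet-17", "unknown-phone"}}

    print(check_access({"doctor"}, "write_record", ward))   # True: familiar, safe context
    print(check_access({"doctor"}, "write_record", cafe))   # False: unknown device nearby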

Added 2015-06-30

Website Forgery: Understanding Phishing Attacks & Nontechnical Countermeasures for Ordinary Users

CERIAS TR 2015-10
Ibrahim Waziri Jr
Download: PDF

Website Forgery is a type of web-based attack where the phisher builds a website that is either completely independent of or a replica of a legitimate website, with the goal of deceiving users into disclosing information that could be used to defraud them or to launch other attacks against them. In this paper we attempt to identify the different types of website forgery phishing attacks and the non-technical countermeasures that could be used by users (mostly non-IT users) who lack an understanding of how phishing attacks work and of how they can protect themselves from these criminals.

Added 2015-06-02

Modeling and Performance of Privacy Preserving Authorization Mechanism for Graph Data

Zahid Pervaiz, Arif Ghafoor, and Walid G. Aref

There has been significant interest in the development of anonymization schemes for publishing graph data. However, due to strong correlation among users’ social identities, privacy is a major concern in dealing with social network data. In this paper, we propose a privacy-preserving mechanism for publishing graph data to prevent identity disclosure. The framework is a combination of access control and privacy protection mechanisms. The access control policies define selection predicates available to roles/queries and their associated imprecision bounds. Only authorized role/query predicates on sensitive data are allowed by the access control mechanism. For this framework, we define the problem of k-anonymous Bi-constraint Graph Partitioning (k-BGP) and provide its hardness results. We present heuristics for graph data partitioning that satisfy the imprecision and information loss bounds of the k-BGP problem. The privacy-protection mechanism anonymizes the graph data with minimal information loss while simultaneously meeting the QoS requirement in terms of the number of roles whose imprecision bounds are satisfied. This approach provides an anonymous view based on the target class of role-based workloads for graph data. We present detailed performance evaluations on real-world data sets to demonstrate the effectiveness of our algorithms with respect to both the QoS requirements and global information loss.
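
The paper defines k-BGP and its heuristics formally; the small Python sketch below only illustrates, under invented data, the two constraints the abstract combines: every anonymized group must contain at least k members, and the imprecision a query predicate suffers (extra tuples returned because whole groups are published) must stay within its bound.

    # Illustrative checks, not the paper's k-BGP heuristics.
    def k_anonymous(groups, k):
        return all(len(g) >= k for g in groups)

    def imprecision(groups, predicate):
        """Extra tuples a query receives because it gets whole groups, not exact matches."""
        exact = sum(1 for g in groups for node in g if predicate(node))
        returned = sum(len(g) for g in groups if any(predicate(n) for n in g))
        return returned - exact

    # Nodes are (user_id, age); groups are the anonymized partitions.
    groups = [[(1, 23), (2, 25), (3, 27)], [(4, 40), (5, 44), (6, 47)]]
    young = lambda node: node[1] < 30

    print(k_anonymous(groups, k=3))                    # True
    print(imprecision(groups, young))                  # 0: predicate aligns with a group
    print(imprecision(groups, lambda n: n[1] < 25))    # 2: two extra tuples returned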

Added 2015-05-20

Digital Forensics and Community Supervision

CERIAS TR 2015-8
Christopher Flory
Download: PDF

In this paper I review the literature concerning digital forensics investigation models and how they apply to field investigators. A brief history of community supervision and how offenders are supervised is established. I also cover the differences between community supervision standards and police standards concerning searches, evidence, and standards of proof, and the difference between parole boards and courts. Currently, the burden of digital forensics for community supervision officers is placed on local or state law enforcement offices, which have personnel trained in forensics but may not place a high priority on outside cases. Forensic field training for community supervision officers could ease the caseloads of outside forensic specialists and increase fiscal responsibility by improving efficiency and public safety in the field of community supervision.

Added 2015-05-19

Basic Dynamic Processes Analysis of Malware in Hypervisors Type I & II

CERIAS TR 2015-9
Ibrahim Waziri Jr, Sam Liles
Download: PDF

In this paper, we compare, analyze, and study the behavior of malware processes within both Type 1 and Type 2 virtualized environments. To achieve this, we set up two different virtualized environments and thoroughly analyze the behavior of each malware process. The goal is to see whether there is a difference between the behaviors of malware within the two different architectures. In the end, we find no significant difference in how malware processes run and behave in either virtualized environment. However, our study is limited to basic analysis using basic tools; an advanced analysis with more sophisticated tools could prove otherwise.

Added 2015-05-18

There is Something Fishy About Your Evidence... Or How to Develop Inconsistency Checks for Digital Evidence Using the B Method

Pavel Gladyshev & Andrea Enbacka

Inconsistencies in various data structures, such as missing log records and modified operating system files, have been used by intrusion investigators and forensic analysts as indicators of suspicious activity. This paper describes a rigorous methodology for developing such inconsistency checks and verifying their correctness. It is based on the use of the B Method, a formal method of software development. The idea of the methodology is to (1) formulate a state-machine model of the (sub)system in which inconsistencies are being detected, (2) formulate inconsistency checks in terms of that model, and (3) rigorously verify the correctness of these checks using the B Method. The methodology is illustrated by developing the ConAlyzer utility, an inconsistency checker for FTP log files.
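
The paper's contribution is the formal specification and verification of such checks with the B Method; the fragment below is only an informal Python illustration of the kind of inconsistency an investigator looks for in an FTP log (a retrieval with no earlier store record, or timestamps running backwards). It is not ConAlyzer, and the log format is an invented simplification.

    # Informal illustration, not ConAlyzer and not the B Method.
    def find_inconsistencies(log):
        """log: list of (timestamp, command, filename) tuples in file order."""
        issues, stored, last_ts = [], set(), None
        for ts, cmd, name in log:
            if last_ts is not None and ts < last_ts:
                issues.append("timestamp goes backwards at %s %s" % (cmd, name))
            last_ts = ts
            if cmd == "STOR":
                stored.add(name)
            elif cmd == "RETR" and name not in stored:
                issues.append("RETR of %s with no earlier STOR record" % name)
        return issues

    log = [(100, "STOR", "a.txt"),
           (105, "RETR", "a.txt"),
           (103, "RETR", "b.txt")]     # suspicious on both counts
    print(find_inconsistencies(log))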

Added 2015-05-11