Around the world, domestic violence, human trafficking, and stalking affect millions of lives every day. According to a report published by the Centers for Disease Control and Prevention in January 2015, 20 people in the United States (US) fall victim to physical violence perpetrated by an intimate partner every minute. As offenders use advancements in technology to perpetuate abuse and isolate victims, the scale of services provided by crisis organizations must rise to meet the demand while keeping a close eye on potential digital security vulnerabilities. General media and research reports indicate that phishing emails, social engineering attacks, denial-of-service attacks, and other data breaches are becoming more common and are affecting business environments of all sizes and in every sector, including organizations dedicated to working with victims of violence.
To address this, an exploratory research study was conducted to identify the current state of information security within US-based non-profit crisis organizations. Guided by a recognized and respected framework, the National Institute of Standards and Technology (NIST) Cybersecurity Framework, this study identified the gaps between a theoretical maximum level of information security and the level observed in organizations working with victims of violence. This research establishes a critical foundation for researchers, security professionals, technology companies, and crisis organizations to develop assessment tools, technology solutions, training curricula, awareness programs, and other strategic initiatives specific to crisis organizations and other non-profit organizations, to aid them in improving information security for themselves and the victims they serve.
Cyber breaches are increasing in both frequency and scope. The targeted systems include both commercial and governmental networks. As the threat of these breaches rises, the public sector and private industry seek solutions that stop those responsible for the attacks. While all would agree that organizations have the right to protect their networks from these cyber-attacks, the options for defending networks are not as clear. Few would question that a passive defense (e.g., filtering traffic or rejecting packets based on their source) is well within the realm of options open to a defender. It is less clear which active defensive measures are ethically available to defenders when passive options fail to stop a persistent threat. This paper outlines the two ethical frameworks (law enforcement and military) commonly applied by cyber security professionals when considering the option of a cyber counter-offensive, or “hacking back.” This examination draws on current literature in the fields of information security, international law, and information assurance ethics.
This work investigates whether the semantic representation of an email’s content is more useful than the surface features of its text in classifying the email as a phishing attack. A series of experiments was conducted using machine learning binary classifiers to measure the performance of the competing approaches. The conclusion is that semantic information performs at least as well as text surface features in every case, and better in some.
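The abstract does not specify the feature sets or classifiers used, so as a rough, hypothetical illustration of the kind of comparison described, the sketch below trains the same binary classifier on plain TF-IDF surface features and on a simple latent-semantic (SVD-reduced) representation of the same text; the tiny corpus and labels are placeholders, not data from the study.

```python
# A hypothetical sketch, not the study's pipeline: the same classifier trained on
# surface features (TF-IDF) versus a latent-semantic representation (TF-IDF + SVD).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder corpus; a real experiment would use a labeled phishing data set.
emails = [
    "Your account has been locked, verify your password at this link",
    "Urgent: confirm your banking details to avoid suspension",
    "We detected unusual sign-in activity, click here to secure your account",
    "Final notice: update your payment information immediately",
    "Meeting moved to 3pm, see the attached agenda",
    "Here are the quarterly numbers we discussed yesterday",
    "Lunch on Friday? The new place downtown just opened",
    "The build is green again, merging the fix this afternoon",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = phishing, 0 = legitimate (toy labels)

surface = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
semantic = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2),
                         LogisticRegression(max_iter=1000))

print("surface features :", cross_val_score(surface, emails, labels, cv=2).mean())
print("semantic (LSA)   :", cross_val_score(semantic, emails, labels, cv=2).mean())
```

Cross-validated accuracy on a real phishing corpus, rather than this toy data, would be needed to reproduce the comparison the abstract reports.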
Cyber crime is a growing problem, with its impact on both businesses and individuals increasing exponentially, but the ability of law enforcement agencies to investigate and successfully prosecute criminals for these crimes is unclear. Several national needs assessments conducted in the late 1990s and early 2000s by the Department of Justice (DOJ) and the National Institute of Justice (NIJ) all indicated that state and local law enforcement did not have the training, tools, or staff to effectively conduct digital investigations (Institute for Security and Technology Studies [ISTS], 2002; NIJ, 2004). Some studies have also been conducted at the state level; however, to date, none have been conducted in Indiana (Gogolin & Jones, 2010). A quick search of the Internet locates multiple training opportunities and publications that are available at no cost to state and local law enforcement, but it is not clear how many agencies use these resources (“State, Local, & Tribal” for FLETC, n.d.; https://www.ncfi.usss.gov). This study provided a current and localized assessment of the ability of Indiana law enforcement agencies to effectively investigate alleged crimes involving digital evidence, the availability of training for both law enforcement officers and prosecuting attorneys, and the ability of prosecuting attorneys to pursue and obtain convictions in cases involving digital evidence. Through an analysis of the survey responses by Indiana law enforcement agencies and prosecutors’ offices, it is evident that Indiana agencies have improved their ability to investigate crimes with digital evidence, with more than half having employees on staff who have attended a digital forensic training course within the past five years. However, a large majority of the agencies still perceive their ability to investigate crimes with digital evidence as mid-range or lower. The results support the recommendation that a comprehensive resource guide be made available that agencies can use to locate experts, obtain assistance with standard operating procedures, learn about free training courses, and find funding opportunities to increase their capabilities in investigating crimes involving digital evidence.
The purpose of this study was to explore the contribution of localization data, network-management data, and content-of-communication data to case processing performance in Macedonia. The mobile network forensic evidence was analyzed with respect to the impact of mobile network data variety, mobile network data volume, and forensic processing on case disposition time. The results from this study indicate that case disposition time is negatively correlated with the volume of network-management data and positively correlated with the volume of content-of-communication data. The relevance of the network-management data was recognized in the highly granular service behavior profile developed using a larger number of records, while the relevance of the content-of-communication data was recognized in the substantial number of excerpts of intercepted communication. The results also reveal a difference in case processing time between cases with only localization or network-management data and cases where these are combined with content-of-communication data.
Issues of privacy in communication are becoming increasingly important. For many people and businesses, the use of strong cryptographic protocols is sufficient to protect their communications. However, the overt use of strong cryptography may be prohibited, or individual entities may be prohibited from communicating directly. In these cases, a secure alternative to the overt use of strong cryptography is required. One promising alternative is to hide the use of cryptography by transforming ciphertext into innocuous-seeming messages to be transmitted in the clear. In this dissertation, we consider the problem of synthetic steganography: generating and detecting covert channels in generated media. We start by demonstrating how to generate synthetic time series data that not only mimic an authentic source of the data, but also hide data at any of several different locations in the reversible generation process. We then design a steganographic context-sensitive tiling system capable of hiding secret data in a variety of procedurally-generated multimedia objects. Next, we show how to securely hide data in the structure of a Huffman tree without affecting the length of the codes. Next, we present a method for hiding data in Sudoku puzzles, both in the solved board and the clue configuration. Finally, we present a general framework for exploiting steganographic capacity in structured interactions like online multiplayer games, network protocols, auctions, and negotiations. Recognizing that structured interactions represent a vast field of novel media for steganography, we also design and implement an open-source extensible software testbed for analyzing
steganographic interactions and use it to measure the steganographic capacity of several classic games. We analyze the steganographic capacity and security of each method that we present and show that existing steganalysis techniques cannot accurately detect the usage of the covert channels. We develop targeted steganalysis techniques which improve detection accuracy and then use the insights gained from those methods to improve the security of the steganographic systems. We find that secure synthetic steganography, and accurate steganalysis thereof, depends on having access to an accurate model of the cover media.
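As one concrete illustration of the Huffman-tree channel mentioned above (a plausible construction under common assumptions, not necessarily the one used in the dissertation), the sketch below embeds one secret bit per internal node by choosing the left/right ordering of that node's children; the code words change, but their lengths do not, which is the property the abstract highlights.

```python
# A plausible construction of the Huffman-tree covert channel, not necessarily the
# dissertation's: one secret bit chooses the left/right order of each internal node.
import heapq
from itertools import count

def build_huffman(freqs):
    """Build a Huffman tree; leaves are symbols, internal nodes are (left, right) pairs."""
    tie = count()  # tie-breaker so heapq never compares subtrees directly
    heap = [(f, next(tie), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (a, b)))
    return heap[0][2]

def embed_bits(tree, bits):
    """Swap (or keep) each internal node's children according to one secret bit.
    Swapping changes the code words but never their lengths. A receiver who can
    rebuild the canonical tree from the symbol frequencies recovers the bits by
    comparing child order at each internal node."""
    it = iter(bits)
    def walk(node):
        if not isinstance(node, tuple):
            return node
        left, right = walk(node[0]), walk(node[1])
        return (left, right) if next(it, 0) == 0 else (right, left)
    return walk(tree)

def code_lengths(tree, depth=0):
    if not isinstance(tree, tuple):
        return {tree: depth}
    lengths = {}
    for child in tree:
        lengths.update(code_lengths(child, depth + 1))
    return lengths

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
plain = build_huffman(freqs)
stego = embed_bits(plain, [1, 0, 1, 1, 0])         # 5 internal nodes -> 5 bits of capacity
assert code_lengths(plain) == code_lengths(stego)  # code lengths are unaffected
```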
Data privacy in social networks is a growing concern that threatens to limit access to important information contained in these data structures. Analysis of the graph structure of social networks can provide valuable information for revenue generation and social science research, but unfortunately, ensuring this analysis does not violate individual privacy is difficult. Simply removing obvious identifiers from graphs or even releasing only aggregate results of analysis may not provide sufficient protection. Differential privacy is an alternative privacy model, popular in data-mining over tabular data, that uses noise to obscure individuals’ contributions to aggregate results and offers a strong mathematical guarantee that individuals’ presence in the data-set is hidden. Analyses that were previously vulnerable to identification of individuals and extraction of private data may be safely released under differential-privacy guarantees. However, existing adaptations of differential privacy to social network analysis are often complex and have considerable impact on the utility of the results, making it less likely that they will see widespread adoption in the social network analysis world. In fact, social scientists still often use the weakest form of privacy protection, simple anonymization, in their social network analysis publications [1–6]. We review the existing work in graph-privatization, including the two existing standards for adapting differential privacy to network data. We then propose contributor-privacy and partition-privacy, novel standards for differential privacy over network data, and introduce simple, powerful private algorithms using these standards for common network analysis techniques that were infeasible to privatize under previous differential privacy standards. We also ensure that privatized social network
analysis does not violate the level of rigor required in social science research, by proposing a method of determining statistical significance for paired samples under differential privacy using the Wilcoxon Signed-Rank Test, which is appropriate for non-normally distributed data. Finally, we return to formally consider the case where differential privacy is not applied to data. Naive, deterministic approaches to privacy protection, including anonymization and aggregation of data, are often used in real-world practice. De-anonymization research demonstrates that some naive approaches to privacy are highly vulnerable to reidentification attacks, and none of these approaches offer the robust guarantee of differential privacy. However, we propose that these methods fall across a range of protection: some are better than others. In cases where adding noise to data is especially problematic, or acceptance and adoption of differential privacy is especially slow, it is critical to have a formal understanding of the alternatives. We define De Facto Privacy, a metric for comparing the relative privacy protection provided by deterministic approaches.
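The contributor-privacy and partition-privacy standards themselves are not reproduced here, but the building block any differential-privacy release shares is calibrated noise; as a minimal sketch, the code below applies the standard Laplace mechanism to an edge count, where adding or removing a single edge changes the answer by at most one (sensitivity 1). The graph and parameters are illustrative.

```python
# A minimal sketch of the Laplace mechanism on graph data; the graph, epsilon,
# and query are illustrative, not the contributor-/partition-privacy algorithms.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Query: how many edges does the graph have? Adding or removing one edge
# changes the answer by at most 1, so the sensitivity is 1.
edges = {(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)}
noisy_count = laplace_mechanism(len(edges), sensitivity=1.0, epsilon=0.5)
print(f"true edge count: {len(edges)}, private release: {noisy_count:.2f}")
```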
Personal identification is needed in many civil activities, and common identification cards, such as the driver’s license, have become the de facto standard document. Radio frequency identification (RFID) has complicated this matter. Unlike their printed predecessors, contemporary RFID cards lack a practical way for users to control access to their individual fields of data. This leaves them more available to unauthorized parties and more prone to abuse. This work therefore tested a novel RFID card technology that allows overlays to be used for reliable, reversible data access settings. Similar to other proposed switching mechanisms, it offers advantages that may greatly improve outcomes. RFID use is increasing in identity documents such as drivers’ licenses and passports, and with it concern over the theft of personal information, which can enable unauthorized tracking or fraud. Effort put into designing a strong foundation technology now may allow for widespread development on it later. In this dissertation, such a technology was designed and constructed to support the central thesis that selective detuning can serve as a feasible, reliable mechanism. The concept had previously been shown effective in limiting access to all fields simultaneously, and was here shown effective in limiting access to specific fields selectively. A novel card was produced in familiar dimensions, with an intuitive interface by which users may cover the visible print of the card to block the corresponding wireless emissions. A discussion was included of similar technologies, involving capacitive switching, that could further improve the outcomes if such a product were put to large-scale commercial fabrication. The card prototype was put through a battery of laboratory tests to measure the degree of independence between data fields and the reliability of the switching mechanism when used under realistically variable coverage, demonstrating statistically consistent performance in both. The success rate of selection using the featured technology exceeded the success rate of RFID card read operations, which is already greater than 99.9%. With controls in place for the most influential factors related to card readability (namely the distance from the reader antennas and the orientation of the card antenna with respect to them), the card was shown to completely resist data acquisition from unauthorized fields while allowing unimpeded access to authorized fields, even after thousands of varied attempts. The effect was proven to be temporary and reversible. User intervention allowed the switching to occur in a matter of seconds by sliding a conductive sleeve or applying tape to regions of the card. Strategies for widespread implementation were discussed, emphasizing factors that include cost, durability, size, simplicity, and familiarity, all of which arise in card management decisions for common state and national identification such as a driver’s license. The relationship between the card and external database systems was detailed, as no such identification document can function in isolation. A practical solution will include details of how multiple fields are written to the card and separated sufficiently in external databases to allow for user-directed selection of data field disclosure. Opportunities for implementation in corporate and academic environments were discussed, along with the ways in which this technology could invite further investigation.
In this work we present a simple, yet effective and practical, scheme to improve the security of stored password hashes, rendering their cracking both detectable and insuperable. We utilize a machine-dependent function, such as a physically unclonable function (PUF) or a hardware security module (HSM), at the authentication server to prevent off-site password discovery, and a deception mechanism to alert us if such an action is attempted. Our scheme can be easily integrated with legacy systems without the need for additional servers, changes to the structure of the hashed password file, or client modifications. When the scheme is in use, the structure of the hashed password file, /etc/shadow or /etc/master.passwd, appears no different than under the traditional scheme. However, when an attacker exfiltrates the hashed password file and tries to crack it, the only passwords he will obtain are ersatzpasswords: “fake passwords.” When an attempt to log in using one of these ersatzpasswords is detected, an alarm is triggered in the system. Even an adversary who knows about the scheme cannot launch a cracking attack without physical access to the authentication server. The scheme also includes a secure backup mechanism in the event of a failure of the hardware-dependent function. We discuss our implementation and compare it to the traditional authentication scheme.
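As a simplified sketch of only one ingredient of the scheme, assuming an HMAC key sealed in the server's HSM or PUF as a stand-in for the machine-dependent function, the code below shows how routing the password through that function before hashing makes an exfiltrated hash file useless for off-site cracking; the ersatz-password generation and alarm machinery described above is not reproduced.

```python
# A simplified sketch under stated assumptions: HMAC with a key sealed in the
# server's HSM/PUF stands in for the machine-dependent function. The ersatz-
# password and alarm machinery of the scheme is intentionally omitted.
import hashlib
import hmac
import os

MACHINE_KEY = os.urandom(32)  # stand-in for a secret bound to the server hardware

def machine_dependent(data: bytes) -> bytes:
    """Cannot be evaluated without access to the authentication server's hardware."""
    return hmac.new(MACHINE_KEY, data, hashlib.sha256).digest()

def make_record(password: str):
    """Produce (salt, digest), stored exactly like a traditional salted hash."""
    salt = os.urandom(16)
    digest = hashlib.sha512(salt + machine_dependent(password.encode())).hexdigest()
    return salt, digest

def verify(password: str, salt: bytes, digest: str) -> bool:
    candidate = hashlib.sha512(salt + machine_dependent(password.encode())).hexdigest()
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
# An attacker who exfiltrates (salt, digest) cannot run an off-site dictionary
# attack, because each guess would require evaluating machine_dependent().
```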
This paper describes a new version of the Network File System (NFS) that supports access to files larger than 4 GB and increases sequential write throughput sevenfold compared to unaccelerated NFS Version 2. NFS Version 3 maintains the stateless server design and simple crash recovery of NFS Version 2, along with the philosophy of building a distributed file service from cooperating protocols. We describe the protocol and its implementation, and provide initial performance measurements. We then describe the implementation effort. Finally, we contrast this work with other distributed file systems and discuss future revisions of NFS.
The notion of an ‘origin’ is introduced in the framework of conditional, not necessarily orthogonal, term rewriting systems. Origins are relations between subterms of intermediate terms which occur during rewriting and subterms of the initial term. Origin tracking is a method for incrementally computing origins during rewriting. Origins are a generalization of the well-known concept of residuals (also called descendants). A formal definition of origins is given, and a method for implementing them is presented. Origin tracking is a highly versatile technique when applied to the prototyping of algebraic specifications of programming languages. For example, origin tracking allows program execution to be visualized in a semi-automatic way, given an algebraic specification of the dynamic semantics of the programming language. Furthermore, various notions of breakpoints for generic debuggers can be defined without difficulty. Given a specification of the static semantics of a programming language, origin tracking enables, once an error (such as a type incompatibility) has been detected, the position of the error in the source program to be inferred automatically.
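As a toy illustration of the idea (not the paper's formal definition), the sketch below annotates every subterm of an initial term with its position and shows how a single rewrite step, using the hypothetical rule plus(0, X) -> X, propagates those origins to the contractum.

```python
# A toy illustration of origin tracking, not the paper's formal definition.
# Terms carry the set of positions in the initial term they originate from;
# the hypothetical rule plus(0, X) -> X propagates origins to the contractum.
from dataclasses import dataclass, field

@dataclass
class Term:
    fun: str
    args: list = field(default_factory=list)
    origins: set = field(default_factory=set)  # positions in the initial term

def annotate(term, pos=()):
    """Give every subterm of the initial term its own position as its origin."""
    term.origins = {pos}
    for i, arg in enumerate(term.args):
        annotate(arg, pos + (i,))
    return term

def rewrite(term):
    """Apply plus(0, X) -> X once at the root; the contractum keeps the origins
    already attached to X and its subterms (the classical notion of residuals)."""
    if term.fun == "plus" and len(term.args) == 2 and term.args[0].fun == "0":
        return term.args[1]
    return term

initial = annotate(Term("plus", [Term("0"), Term("succ", [Term("0")])]))
result = rewrite(initial)
print(result.fun, result.origins)  # succ {(1,)}: it originates from position 1 of the initial term
```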
Security of embedded devices is today a critical requirement for the Internet of Things (IoT), as these devices access sensitive information such as social security numbers and health records. This makes them a lucrative target for attacks that exploit vulnerabilities to inject malicious code or reuse existing code to alter the execution of their software. Existing defense techniques have major drawbacks, such as requiring source code or symbolic debugging information and incurring high overhead, which limit their applicability. In this paper we propose a novel defense technique, DisARM, that protects against both code-injection and code-reuse based buffer overflow attacks by breaking an attacker's ability to manipulate the return address of a function. Our approach operates on arbitrary executable binaries and thus does not require compiler support. In addition, it does not require user interaction and can thus be applied automatically. Our experimental results show that our approach incurs low overhead and significantly increases the level of security against both code-injection and code-reuse based attacks.