- A Field Test of Mobile Phone Shielding Devices
- A Null Space Based Defense for Pollution Attacks in Network Coding
- A Performance Study of Unmanned Aerial Vehicle Systems-of-Systems and Communications Architectures Under Cyber Attack
- Analysis of Internet Addiction Among Child Pornography Users
- Data Locations on the Nokia N900
- Defeating Stateful Firewalls
- Differentially Private Graph Queries
- Digital Forensics Tool Box
- Efficient and Flexible Attribute Policy Based Key Management
- Energy-Efficient Provenance Transmission in Large-Scale Wireless Sensor Networks
- Flash Malware Analysis
- Hardening Network Embedded Devices
- Human Factors Considerations for Privacy Properties in Home Healthcare Systems
- Implicit Buffer Overflow Protection Using Memory Segmentation
- JSLocker: Flexible Access Control Policies with Delimited Histories and Revocation
- Kernel Malware Analysis with Un-tampered and Temporal Views of Dynamic Kernel Memory
- Malware Analysis & Reverse Engineering Quick Evaluation System
- Managing Identity Across Social Networks
- Nudging the Digital Pirate: Behavioral Issues in the Piracy Context
- Partitioning Network Experiments for the Cyber-Range
- SigGraph: Graph-based Signatures for Kernel Data Structures
- Strengthening Distributed Digital Forensics
- Trustworthy Data From Untrusted Servers
- Using context-profiling to aid access control decisions in mobile devices
- v-CAPS: A Confidential and Anonymous Routing Protocol for Content-Based Publish-Subscribe Networks
- Verification of Secure Cloud-based Workflow Services
- Web 2.0 in Organizations: Controlling Openness?
- Web 2.0: A Complex Balancing Act
- Yahoo Messenger Forensics for Windows Vista and Windows 7
A Field Test of Mobile Phone Shielding Devices
Eric Katz, Rick Mislan, Marc Rogers, Tony Smith
Mobile phones are increasingly a source of evidence in criminal investigations. The evidence on a phone is volatile and can easily be overwritten or deleted. Many tools claim to radio-isolate a phone in order to preserve evidence, but these wireless preservation devices do not always prevent network communication as promised. The purpose of this study was to identify situations where the devices used to protect evidence on mobile phones can fail. There has been little published research on how well these devices work in the field, despite the escalating importance of mobile phone forensics. The shielding devices were tested using mobile phones from three of the largest service providers in the U.S. Attempts were made to contact the isolated phones using voice calls, SMS, and MMS at varying distances from the providers' towers. In the majority of the test cases the phones were not isolated from their networks despite being enclosed in a shielding device. SMS messages penetrated the shields most often, followed by voice calls; MMS messages penetrated least often.
A Null Space Based Defense for Pollution Attacks in Network Coding
Andrew Newell and Cristina Nita-Rotaru
A network coding system allows intermediate nodes of a network to code packets together, which ultimately results in better network performance. Due to the nature of network coding, it is difficult to impose hop-by-hop data integrity, as intermediate nodes change packet contents. Without hop-by-hop data integrity, a Byzantine adversary can mount a denial-of-service attack (a pollution attack) that cripples a network coding system. Much work has focused on pollution defenses, but existing schemes have limitations in terms of time synchronization, expensive computations, or large coding headers. A recent solution based on null spaces has the potential to escape these limitations; however, it does not work for arbitrary network topologies. We propose a new protocol with a novel null space splitting technique that ensures a practical defense for arbitrary topologies.
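The core check behind a null-space defense can be illustrated concretely. Below is a minimal sketch (our illustration, not the poster's protocol) over the prime field GF(257): the verification keys are vectors orthogonal to every source packet, so any honest linear combination passes the check, while a polluted packet fails it with high probability.

```python
# Sketch of null-space pollution checking over GF(257); real network coding
# systems typically use GF(2^8), elided here for clarity.
import numpy as np

P = 257  # prime field modulus (illustrative choice)

def null_space_mod_p(M, p=P):
    """Return a basis of the null space of M over GF(p) via Gaussian elimination."""
    M = M.copy() % p
    rows, cols = M.shape
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i, c] % p), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]                      # swap pivot row up
        M[r] = (M[r] * pow(int(M[r, c]), -1, p)) % p   # normalize pivot to 1
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % p     # eliminate column c
        pivots.append(c)
        r += 1
    basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = np.zeros(cols, dtype=np.int64)
        v[f] = 1
        for row, c in enumerate(pivots):
            v[c] = (-M[row, f]) % p
        basis.append(v)
    return basis

source = np.array([[3, 1, 4, 1, 5], [2, 7, 1, 8, 2]], dtype=np.int64)  # source packets
checks = null_space_mod_p(source)            # verification keys z with source @ z = 0

coded = (5 * source[0] + 9 * source[1]) % P                 # honest coded packet
polluted = (coded + np.eye(5, dtype=np.int64)[2]) % P       # adversarial modification

def is_valid(pkt):
    return all(int(pkt @ z) % P == 0 for z in checks)

print(is_valid(coded))     # True: linear combinations of source rows pass
print(is_valid(polluted))  # False (w.h.p.): pollution breaks orthogonality
```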
A Performance Study of Unmanned Aerial Vehicle Systems-of-Systems and Communications Architectures Under Cyber Attack
Ethan Puchaty, Dr. Dan DeLaurentis
In civilian and defense networks involving the use of unmanned aerial vehicles (UAVs) as sensor platforms, an emerging area of interest is the analysis of the performance of these networks from a high-level perspective, especially with respect to cyber security. A System-of-Systems approach is useful in creating tools to quantify network-level functionality, and such tools can assist in designing and evaluating architectures that are resistant and resilient to failures that might occur during cyber attack. This study seeks to evaluate the performance trade-offs between various communications architecture options in the context of an integrated air defense network. An agent-based discrete event simulation is used to model a defense communication network consisting of UAVs, military communications satellites, ground relay stations, mobile interceptor agents, and a mission control center for the purpose of tracking ballistic missiles. Network susceptibility to cyber attack is modeled with probabilistic failures and induced data variability, with performance metrics focusing on information availability and trustworthiness.
Analysis of Internet Addiction Among Child Pornography Users
Rachel A. Sitarz, Marcus Rogers, Eugene Jackson, Lonnie Bentley
The purpose of the study was to determine whether consumers of child pornography are addicted to the materials, causing them to spend excessive amounts of time viewing, collecting, or trading with others. The study focused on the general population of Internet users over the age of 18. Of the 144 survey respondents, 26 were classified as child pornography users. Statistical analysis revealed a relationship between child pornography usage and addiction to the Internet.
Data Locations on the Nokia N900
Mark Lohrum
The Nokia N900 is a very powerful smartphone and offers great utility to users. Because smartphones contain a wealth of information about the user, including contacts, communications, and activities, investigators must have at their disposal the best possible methods for extracting important data from them. Unlike for other smartphones, knowledge of forensic acquisition from the N900 is extremely limited. Extractions of data from the N900 fall into two categories: limited triage extractions and full physical extractions. The imaging process for the phone, necessary for a full investigation, is explained. The types of data called for in a limited data extraction are identified, and the locations of these files on the N900 are detailed. A script was also created that can be used for a limited data extraction from a Nokia N900; a sketch of what such a script might look like follows.
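The artifact paths below are assumptions based on typical Maemo 5 layouts (for example, the rtcom-eventlogger database) rather than the poster's findings, and should be verified against the actual device and firmware version.

```python
# Sketch of a triage-style extraction for the N900; all paths are assumed.
import os
import shutil

CANDIDATE_PATHS = [
    "/home/user/.rtcom-eventlogger/el-v1.db",  # call/SMS event log (assumed)
    "/home/user/.osso-abook/db",               # contacts store (assumed)
    "/home/user/MyDocs/",                      # user documents/media (assumed)
]

def triage_copy(dest="/media/evidence/n900"):
    """Copy known artifact locations to removable evidence storage."""
    os.makedirs(dest, exist_ok=True)
    for src in CANDIDATE_PATHS:
        if not os.path.exists(src):
            continue  # locations vary by firmware version; skip quietly
        target = os.path.join(dest, src.strip("/").replace("/", "_"))
        if os.path.isdir(src):
            shutil.copytree(src, target, dirs_exist_ok=True)
        else:
            shutil.copy2(src, target)

triage_copy()
```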
Defeating Stateful Firewalls
Dannie M. Stanley
Firewalls attempt to provide network access control. However, we describe a vulnerability that allows an outside attacker, in collaboration with a mole, to access UDP and TCP services running on an internal "protected" network. The End-to-End Argument in system design states that functions which depend on applications running on the end points should be placed at the end points and not in the communication system. The access control function found in firewalls depends on "connection tracking": firewalls attempt to track a connection by observing network data flow using stateful packet inspection. However, IP, UDP, and TCP were not designed to provide enough information for intermediate network devices to correctly and reliably track connection states; a connection state can only reliably be determined at the end hosts. Because firewalls disregard the End-to-End Argument, they are vulnerable to attack. Many deployed networks have firewalls that allow network traffic originating from the internal network to flow to the outside. Determining the origin of a connection requires connection tracking. When a firewall is not able to accurately track a connection, the origin of a connection can be forged and the firewall can be manipulated into adding an "established" connection between an attacker and a protected network service. We describe the principles behind connection tracking that allow this to happen and demonstrate several attacks that allow access to both UDP and TCP based services, including SNMP, NFS, and HTTP.
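The failure mode can be seen in a toy simulation (our illustration of the principle, not attack code): a tracker that infers "established" state from any packet it believes originated inside can be seeded with forged state.

```python
# Toy model of naive stateful connection tracking being fooled by a mole.
class NaiveStatefulFirewall:
    def __init__(self):
        self.established = set()  # (inside_ip, inside_port, outside_ip, outside_port)

    def observe_outbound(self, src, sport, dst, dport):
        # Trusts that any packet seen on the internal interface originated inside.
        self.established.add((src, sport, dst, dport))

    def allow_inbound(self, src, sport, dst, dport):
        # Inbound traffic is allowed only if it matches tracked state.
        return (dst, dport, src, sport) in self.established

fw = NaiveStatefulFirewall()

# A mole on the internal network emits one packet whose forged source address
# is the internal NFS server's, "replying" to the outside attacker.
fw.observe_outbound(src="10.0.0.5", sport=2049, dst="203.0.113.9", dport=40000)

# The firewall now forwards the attacker's traffic to the protected service.
print(fw.allow_inbound(src="203.0.113.9", sport=40000,
                       dst="10.0.0.5", dport=2049))  # True
```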
Differentially Private Graph Queries
Christine Task, Chris Clifton
Epsilon differential privacy is a context-independent guarantee of individual privacy in data query results, defined by Cynthia Dwork of Microsoft Research. Given two data sets which differ in only one (arbitrarily chosen) individual, a differentially private query will return an answer S with nearly the same probability on both sets. Thus, given some query result S, we are unable to determine which data set the query ran on, obfuscating the contributions of any individual. Creating differentially private graph queries is especially challenging: if a graph's nodes represent individuals and its edges represent relationships, the removal of an individual from a data set can have a catastrophic effect on the result of the query. We explore which queries are impossible to privatize, which are feasible, and which are feasible under certain constraints.
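For tabular data, the guarantee is classically achieved with the Laplace mechanism; the sketch below (the standard technique, not this poster's graph-specific construction) shows it for a counting query with sensitivity 1. The difficulty the poster addresses is that for graph queries, removing one node can change the true answer by far more than 1, so the noise needed to hide an individual can swamp the result.

```python
# Laplace mechanism for an epsilon-differentially-private count.
import random

def dp_count(records, predicate, epsilon):
    """Noisy count: removing one individual changes the true count by at
    most 1 (sensitivity = 1), so Laplace noise of scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

people = [{"name": "a", "smoker": True}, {"name": "b", "smoker": False}]
print(dp_count(people, lambda r: r["smoker"], epsilon=0.5))
```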
Digital Forensics Tool Box
Kelly Cole
The Digital Forensic Toolbox website reviews and rates the various digital forensic tools in the market. The ratings come from within the digital forensic community (Industry, Law Enforcement, Academia and Military). Thus, the community is able to submit ratings for the tools they have used and also find the best rated tool for their needs.
Efficient and Flexible Attribute Policy Based Key Management
Mohamed Nabeel, Elisa Bertino
Attribute based systems enable fine-grained access control among a group of users, each identified by a set of attributes. Broadcast services and secure collaborative applications, such as location based services, cloud storage services, multimedia streaming services and document dissemination, need such flexible attribute based systems for managing and distributing group keys. However, current group key management schemes are not well designed to manage group keys based on the attributes of the group members. Attribute based group membership policies allow the selection of any sub-group of users from a large group. In this poster, we propose novel key management schemes that allow users whose attributes satisfy a certain group membership policy to derive the group key. Our schemes efficiently support rekeying operations when the group changes due to joins or leaves of group members. During a rekey operation, the private information issued to existing members remains unaffected and only the public information is updated to change the group key. Our schemes are expressive and flexible; specifically, they are able to support any monotonic group membership policy over a set of attributes. Further, our schemes are resistant to collusion attacks: group members are unable to pool their attributes in a meaningful way to derive a group key that none of them could derive individually.
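The policy class is easy to illustrate. Below is a minimal sketch of evaluating a monotonic AND/OR policy (policy evaluation only; the key-derivation cryptography is the poster's contribution and is not shown). Monotonicity means acquiring additional attributes can never un-satisfy a policy.

```python
# Toy evaluation of a monotonic attribute policy (AND/OR atoms, no negation).
def satisfies(policy, attrs):
    """policy: an attribute-name string, or ("and"|"or", [subpolicies])."""
    if isinstance(policy, str):
        return policy in attrs
    op, subs = policy
    combine = all if op == "and" else any
    return combine(satisfies(s, attrs) for s in subs)

policy = ("and", ["doctor", ("or", ["cardiology", "oncology"])])
print(satisfies(policy, {"doctor", "cardiology"}))  # True: may derive group key
print(satisfies(policy, {"nurse", "cardiology"}))   # False
```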
Energy-Efficient Provenance Transmission in Large-Scale Wireless Sensor Networks
S. M. Iftekharul Alam, Dr. Sonia Fahmy
With the deployment of large-scale sensor-based decision support systems, quality assurance of decision-making becomes a must. This underscores the requirement of assessing the trustworthiness of sensor data and of the owners of this data. Provenance-based trust evaluation frameworks use data provenance along with data values to compute the trustworthiness of each data item. However, in a sizeable multi-hop network, provenance information requires a large and variable number of bits in each packet, which in turn results in high energy dissipation due to extended periods of radio communication, making trust systems unusable. We propose an energy-efficient provenance transmission and construction scheme, which we refer to as Probabilistic Provenance Flow (PPF). To the best of our knowledge, ours is the first approach to adapt the Probabilistic Packet Marking (PPM) approach of IP traceback to sensor networks. We propose two bit-efficient, complementary provenance encoding and construction methods, and combine them to deal with topological changes in the network. Our TOSSIM simulations demonstrate that PPF requires at least 33% fewer packets and consumes 30% less energy than PPM-based approaches of IP traceback to construct provenance, yet still provides high accuracy in trust score calculation.
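The underlying marking idea can be sketched briefly (a simplified illustration of probabilistic marking in general, not PPF's two encodings): each forwarder overwrites a single fixed-size mark field with some probability, and the sink reconstructs the path from marks accumulated over many packets instead of carrying full provenance in each one.

```python
# Toy probabilistic packet marking along a fixed forwarding path.
import random

MARK_PROB = 0.3  # marking probability (illustrative)

def forward(packet, node_id, hop):
    if random.random() < MARK_PROB:
        packet["mark"] = (node_id, hop)  # one mark field, constant size
    return packet

def send_over_path(path):
    pkt = {"data": "reading", "mark": None}
    for hop, node in enumerate(path):
        pkt = forward(pkt, node, hop)
    return pkt["mark"]

# The sink aggregates the surviving marks from many packets to rebuild the path.
path = ["n4", "n9", "n2", "sink_neighbor"]
marks = {send_over_path(path) for _ in range(500)} - {None}
recovered = [node for node, hop in sorted(marks, key=lambda m: m[1])]
print(recovered)  # with high probability: the full path in hop order
```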
Flash Malware Analysis
Francis Ripberger and Jim Goldman
This poster explains the Flash malware problem, surveys the current state of reverse engineering for it, and outlines where the Flash Malware Analysis (FMA) research plans to go.
Hardening Network Embedded Devices
Blake Self, Dr. Eugene Spafford
As botnets and other attacks against personal networks become more prevalent, router security is more important than ever. Many of the embedded network devices used in personal networks run the Linux operating system. This project aims to use existing vulnerability mitigation technology on these devices to obtain significant security benefits with a minimal performance hit. For this project, three different router operating systems were examined and modified to achieve practical security. The hardening technologies used were grsecurity and PaX. Aside from hardening these operating systems on various routers, we also examined hardware limitations and the requirements to take better advantage of vulnerability mitigation technology. This information will serve as hardware requirements for future hardened embedded network devices.
Human Factors Considerations for Privacy Properties in Home Healthcare Systems
Kyeong-Ah Jeong and Robert W. Proctor
Privacy properties for remote/home-based healthcare systems have been proposed, but human factors issues involved in implementing those properties have received little consideration. We reviewed proposed privacy properties and identified human factors issues associated with successful implementation of these properties. Implementations that do not take the users into account will most likely fail to accomplish their privacy and security goals.
Implicit Buffer Overflow Protection Using Memory Segmentation
Brent Roth and Dr. Eugene Spafford
Computing systems continue to be plagued by malicious corruption of instructions and data. Buffer overflows, in particular, are often employed to disrupt the control flow of vulnerable processes. Existing methods of protection against these attacks operate by detecting corruption after it has taken place or by ensuring that if corruption has occurred, it cannot be used to hijack a process's control flow. These methods thus still allow the corruption of control data to occur; rather than being subverted, the process may terminate or take some other defined error action. Few methods have attempted to prevent the corruption of control data, and those that have focus only on preventing the corruption of the return address. We propose the use of multiple memory segments to support multiple stacks, heaps, .bss, and .data sections per process, with the goal of segregating control and non-control data. By segregating these different forms of data, we can prevent the corruption of control data by overflow and address manipulation of memory allocated for non-control data. We show that the creation of these additional data segments per process can be implemented through modifications to the compiler.
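The intuition can be modeled in a few lines (a deliberately simplified illustration; the actual proposal works at the level of hardware segments and compiler-generated layout, not Python objects): a linear overflow can only reach what shares its segment.

```python
# Toy model: overflow reach in a unified segment vs. segregated segments.
def overflow_write(segment, offset, payload):
    """Linear overflow: writes past the buffer's end within one segment."""
    for i, b in enumerate(payload):
        if offset + i < len(segment):
            segment[offset + i] = b

# Layout 1: one segment, an 8-byte buffer followed by a return-address slot.
unified = bytearray(b"\x00" * 8 + b"\xad\xde" + b"\x00" * 6)
overflow_write(unified, 0, b"A" * 12)   # 12 bytes into an 8-byte buffer
print(unified[8:10])                    # b'AA': control data corrupted

# Layout 2: the return address lives in a separate control segment, so the
# same overflow exhausts the data segment without ever reaching it.
data_seg = bytearray(8)
control_seg = bytearray(b"\xad\xde" + b"\x00" * 6)
overflow_write(data_seg, 0, b"A" * 12)
print(control_seg[0:2])                 # b'\xad\xde': intact
```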
JSLocker: Flexible Access Control Policies with Delimited Histories and Revocation
Christian Hammer, Gregor Richards, Suresh Jagannathan, Jan Vitek
Providing security guarantees for software systems built out of untrusted components requires the ability to enforce fine-grained access control policies. This is evident in Web 2.0 applications where JavaScript code from different origins is often combined on a single page, leading to well-known vulnerabilities. This paper presents a security infrastructure which allows users and content providers to specify access control policies over delimited histories, subsets of JavaScript execution traces, allowing revocation of the history and reversion to a safe state if a violation is detected. We report on an empirical evaluation of this proposal in the context of a production browser. We show examples of security policies which can prevent real attacks without imposing drastic restrictions on legacy applications. We have evaluated our proposal with two non-trivial policies on 50 of the Alexa top websites with no changes to the legacy JavaScript code. Between 72% and 84% of the sites were fully functional, and only 1 site was rendered non-functional. In terms of performance overhead, we observed a worst-case slowdown of 106%, with the typical case closer to 10%.
Kernel Malware Analysis with Un-tampered and Temporal Views of Dynamic Kernel Memory
Junghwan Rhee, Ryan Riley, Dongyan Xu, and Xuxian Jiang
Dynamic kernel memory has been a popular target of recent kernel malware due to the difficulty of determining the status of volatile dynamic kernel objects. Some existing approaches use kernel memory mapping to identify dynamic kernel objects and check kernel integrity. The snapshot-based memory maps generated by these approaches are based on the kernel memory which may have been manipulated by kernel malware. In addition, because the snapshot only reflects the memory status at a single time instance, its usage is limited in temporal kernel execution analysis. We introduce a new runtime kernel memory mapping scheme called allocation-driven mapping, which systematically identifies dynamic kernel objects, including their types and lifetimes. The scheme works by capturing kernel object allocation and deallocation events. Our system provides a number of unique benefits to kernel malware analysis: (1) an un-tampered view wherein the mapping of kernel data is unaffected by the manipulation of kernel memory and (2) a temporal view of kernel objects to be used in temporal analysis of kernel execution. We demonstrate the effectiveness of allocation-driven mapping in two usage scenarios. First, we build a hidden kernel object detector that uses an un-tampered view to detect the data hiding attacks of 10 kernel rootkits that directly manipulate kernel objects (DKOM). Second, we develop a temporal malware behavior monitor that tracks and visualizes malware behavior triggered by the manipulation of dynamic kernel objects. Allocation-driven mapping enables a reliable analysis of such behavior by guiding the inspection only to the events relevant to the attack.
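The essence of allocation-driven mapping can be sketched as follows (a toy model of the bookkeeping only, not the instrumented virtual-machine implementation): the map is maintained from allocation and deallocation events, so it records object types and lifetimes, and it cannot be altered by malware that merely manipulates kernel data structures in memory.

```python
# Toy allocation-driven map of dynamic kernel objects.
current_map = {}   # address -> (object type, allocation time)
lifetimes = []     # (object type, address, alloc time, free time)

def on_alloc(addr, obj_type, t):
    current_map[addr] = (obj_type, t)

def on_free(addr, t):
    obj_type, t0 = current_map.pop(addr)
    lifetimes.append((obj_type, addr, t0, t))

on_alloc(0xffff8800_1000, "task_struct", t=1)
on_alloc(0xffff8800_2000, "sock", t=2)
on_free(0xffff8800_2000, t=5)

# Hiding the task_struct by unlinking it from kernel lists (as DKOM rootkits
# do) does not remove it from this map, because the map is driven by
# allocation events rather than by traversing kernel data structures.
print({hex(a): t for a, t in current_map.items()})
```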
Malware Analysis & Reverse Engineering Quick Evaluation System
James E. Goldman, Cory Q. Nguyen, Anthony E. Smith
The Malware Analysis & Reverse Engineering Quick Evaluation System (MARQUES) is a system designed to create a preliminary analysis report that gives security administrators and investigators immediate information and insight into a suspected malware's capabilities, functions, and purpose. MARQUES can automate analysis of malware not only on a behavioral level but also on a code level; the ability to automate code-level analysis separates it from conventional existing malware services. This information is vital in responding to and combating malware attacks and infections on network systems. The MARQUES system aims to shorten the response time to malware incidents and to provide valuable insight into pattern recognition and trend analysis of existing and zero-day malware specimens. MARQUES incorporates the established Malware Analysis & Reverse Engineering (MARE) methodology developed by the Purdue Malware Lab research team; MARE is the engine of the MARQUES system that automates the behavioral and code analysis of suspected malware.
Managing Identity Across Social Networks
Mihaela Vorvoreanu, Quintana Clark
This project seeks to gain an in-depth understanding of online identity management across social networks. Specifically, it addresses the research question: Given the changes in social context brought about by social networking sites, how do people manage their identities online? Data was collected through an online survey from a criterion sample of people who use three or more social networks on a weekly basis. We identify strategies these advanced users employ to manage social context, audiences, and their online self-presentation efforts.
Nudging the Digital Pirate: Behavioral Issues in the Piracy Context
Matthew Hashim, Karthik Kannan, Sandra Maximiano, Duane Wegener, Jackie Rees
Piracy is a significant source of concern facing software developers, music labels, and movie production companies. Firms continue to invest in digital rights management technologies to thwart piracy, but their efforts are quickly defeated by hackers and pirates. The goal of this research is to provide actionable insights that management may use to mitigate piracy. We conduct two studies to explore behavioral issues in the piracy context. In the first study, we theorize and provide support that moral obligation may mediate other constructs from the theory of planned behavior. We believe this is a consequence of the desire for an individual to rationalize unethical behavior, especially when the crime is victimless. In the second study, we relate piracy to an abstract public goods problem in economics. Specifically, we design and implement an experiment with several treatments to investigate the role of information targeting on coordination in a multi-threshold public goods game. Our analysis of the experimental data shows that by targeting information about the contribution rate to a public good, one may be able to achieve improvements in coordination and thereby enable an increased allocation to the good. In contrast, providing information randomly does not improve cooperation and coordination any more than with no information at all. Note that our random information treatment approximates strategies currently used in practice for educating consumers about digital piracy. Overall, our findings in these two studies provide valuable insights into understanding piracy behavior, and developing appropriate information strategies to mitigate digital piracy.
Partitioning Network Experiments for the Cyber-Range
Wei-Min Yao, Sonia Fahmy
Understanding the behavior of large-scale systems is challenging, but essential when designing new Internet protocols and applications. It is often infeasible or undesirable to conduct experiments directly on the Internet. Thus, simulation, emulation, and testbed experiments are important techniques for researchers to investigate large-scale systems. In this paper, we propose a platform-independent mechanism to partition a large network experiment into a set of small experiments that are sequentially executed. Each of the small experiments can be conducted on a given number of experimental nodes, e.g., the available machines on a testbed. Results from the small experiments approximate the results that would have been obtained from the original large experiment. We model the original experiment using a flow dependency graph. We partition this graph, after pruning uncongested links, to obtain a set of small experiments. We execute the small experiments in two iterations. In the second iteration, we model dependent partitions using information gathered about both the traffic and the network conditions during the first iteration. Experimental results from several simulation and testbed experiments demonstrate that our techniques approximate performance characteristics, even with closed-loop traffic and congested links. We expose the fundamental tradeoff between the simplicity of the partitioning and experimentation process, and the loss of experimental fidelity.
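The pruning-then-partitioning step can be sketched simply (our illustration of the idea, assuming the networkx library is available; the real system partitions a flow dependency graph and iterates using traffic measured in the first round): flows crossing a link that stays uncongested do not meaningfully interact there, so cutting such links splits the experiment into independently runnable pieces.

```python
# Sketch: prune uncongested links, then split into connected components.
import networkx as nx  # assumption: networkx is available

def partition_experiment(links, utilization, threshold=0.9):
    """links: iterable of (u, v); utilization: {(u, v): load fraction}."""
    g = nx.Graph()
    g.add_edges_from(links)
    # Uncongested links do not couple the flows crossing them; cut them.
    g.remove_edges_from([e for e in links if utilization[e] < threshold])
    return [set(c) for c in nx.connected_components(g)]

links = [("a", "b"), ("b", "c"), ("c", "d")]
util = {("a", "b"): 0.95, ("b", "c"): 0.2, ("c", "d"): 0.97}
print(partition_experiment(links, util))  # two components: {a, b} and {c, d}
```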
SigGraph: Graph-based Signatures for Kernel Data Structures
Zhiqiang Lin, Junghwan Rhee, Xiangyu Zhang, Dongyan Xu, and Xuxian Jiang
Brute force scanning of kernel memory images for finding kernel data structure instances is an important function in many computer security and forensics applications. Brute force scanning requires effective, robust signatures of kernel data structures. Existing approaches often use the value invariants of certain fields as data structure signatures. However, they do not fully exploit the rich points-to relations between kernel data structures. In this work, we show that such points-to relations can be leveraged to generate graph-based structural invariant signatures. More specifically, we develop SigGraph, a framework that systematically generates non-isomorphic signatures for data structures in an OS kernel. Each signature is a graph rooted at a subject data structure with its edges reflecting the points-to relations with other data structures. Our experiments with a range of Linux kernels show that SigGraph-based signatures achieve high accuracy in recognizing kernel data structure instances via brute force scanning. We further show that SigGraph achieves better robustness against pointer value anomalies and corruptions, without requiring global memory mapping and object reachability. We demonstrate that SigGraph can be applied to kernel memory forensics, kernel rootkit detection, and kernel version inference.
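A toy version of graph-based signature scanning (our simplification; SigGraph derives non-isomorphic signatures automatically and is robust to pointer anomalies) conveys the idea: a signature constrains which offsets of a candidate object must point to objects that recursively match child signatures, so the scanner checks points-to shape rather than field values.

```python
# Toy points-to-graph signature matching over a fake memory image.
WORD = 8

def matches(memory, addr, sig, depth=3):
    """sig: {word offset: child signature or None}; memory: {address: word}."""
    if depth == 0:
        return True
    for off, child in sig.items():
        ptr = memory.get(addr + off * WORD)
        if ptr is None or ptr not in memory:
            return False          # field must point into mapped memory
        if child is not None and not matches(memory, ptr, child, depth - 1):
            return False
    return True

def scan(memory, sig):
    """Brute force: test every mapped address as a candidate root."""
    return [hex(a) for a in memory if matches(memory, a, sig)]

# Fake image: object at 0x1000 points to 0x2000 (offset 0) and 0x3000 (offset 1).
memory = {0x1000: 0x2000, 0x1008: 0x3000, 0x2000: 0x3000, 0x3000: 0x0}
sig = {0: {0: None}, 1: None}   # root -> child that itself has one pointer field
print(scan(memory, sig))        # ['0x1000']
```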
Strengthening Distributed Digital Forensics
Ever-increasing datasets in digital forensics are causing current investigative tools to run slower and slower. A logical solution to this problem is to shrink datasets by enhancing the image capturing and analysis processes. While research is being conducted in these areas, a solution has yet to be commercialized for use by the digital forensics community. Until such a solution is available, these large datasets must be dealt with, which can be accomplished through the use of distributed digital forensics. Though distributed digital forensics is not a new concept, there are issues regarding feasibility, security, reliability, and scalability that need to be addressed to make it a viable coping mechanism for the digital forensics community.
Trustworthy Data From Untrusted Servers
Rohit Jain, Sunil Prabhakar
Outsourcing a database system (e.g., into a cloud architecture) is an attractive option for reducing the complexity and cost of data management. While this model holds great promise, it raises a number of security and privacy concerns, including concerns about the fidelity of the outsourced database. Since the data owner and clients do not have direct control over the database, there is great reluctance to trust the outsourcing server. In particular, there is a need to establish the authenticity and integrity of an outsourced database. Earlier work on this problem is limited to the situation where there are no updates to the database at the server, i.e., either the database is static, or the updates are determined at the data owner's site. This is an unreasonable assumption for a truly outsourced database, as would be expected in a cloud database. We formulate the problem of ensuring authenticity and integrity of an outsourced database in the presence of transactional updates that run directly on the outsourced database, develop the first solutions to this problem, and show how it is possible to assure the data owner of the fidelity of transaction processing at an outsourced server. We implement our solution in a prototype system built using Postgres with no modifications to the database internals. We also provide an empirical evaluation of the proposed solutions and establish their feasibility.
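For context, a standard building block for the static case (not necessarily the construction used in this work) is a Merkle hash tree: the owner keeps only the root hash, and each query answer comes with a logarithmic-size proof of authenticity. A minimal sketch:

```python
# Merkle-tree verification of a record returned by an untrusted server
# (assumes a power-of-two number of leaves for brevity).
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    level = [h(l) for l in leaves]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the root hash the data owner keeps

def prove(tree, idx):
    proof = []
    for level in tree[:-1]:
        proof.append((level[idx ^ 1], idx % 2 == 0))  # (sibling, node-is-left?)
        idx //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

rows = [b"row0", b"row1", b"row2", b"row3"]
tree = build_tree(rows)
root = tree[-1][0]
print(verify(root, b"row1", prove(tree, 1)))      # True
print(verify(root, b"tampered", prove(tree, 1)))  # False
```

The hard part this poster addresses is precisely what a static tree does not cover: preserving such guarantees when transactions update the database directly at the server.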
Using context-profiling to aid access control decisions in mobile devices
Aditi Gupta, Markus Miettinen, N. Asokan
In this work, we demonstrate the use of context profiling for making access control decisions in mobile devices. In particular, we discuss the device locking use case, where the device locking timeout and unlocking method are dynamically decided based on the perceived safety of the current context.
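A minimal sketch of such a policy follows (our illustration; the safety score, thresholds, and unlock methods are assumptions, not the authors' parameters): safer perceived contexts earn longer timeouts and lighter unlock methods.

```python
# Toy context-dependent device-locking policy.
def locking_policy(context_safety: float):
    """context_safety in [0, 1], e.g. derived from familiar Wi-Fi, GPS, and
    Bluetooth context profiles (illustrative inputs)."""
    if context_safety > 0.8:      # e.g., home: familiar devices and location
        return {"timeout_s": 600, "unlock": "none"}
    if context_safety > 0.5:      # e.g., office
        return {"timeout_s": 120, "unlock": "pin"}
    return {"timeout_s": 15, "unlock": "password"}  # unfamiliar/public place

print(locking_policy(0.9))  # long timeout, no unlock challenge
print(locking_policy(0.3))  # short timeout, strong unlock challenge
```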
v-CAPS: A Confidential and Anonymous Routing Protocol for Content-Based Publish-Subscribe Networks
Amiya Kumar Maji, Saurabh Bagchi
Content-based Publish-Subscribe (CBPS) is a widely used communication paradigm where publishers "publish" messages and a set of subscribers receive these messages through filtering and routing by an intermediate set of brokers, based on subscribers' interests. We are interested in using CBPS in healthcare settings to disseminate health-related information (drug interactions, insurance quotes) to large numbers of subscribers in a confidentiality-preserving manner. Confidentiality in CBPS requires that the message be hidden from brokers, whereas the brokers need the message to compute routing decisions. Previous approaches to achieving these conflicting goals suffer from significant shortcomings: misrouting, reduced expressivity of subscriber interests, high execution time, and high message overhead. Our solution, entitled v-CAPS, achieves the competing goals while avoiding the previous problems. Our experiments show that v-CAPS has much lower end-to-end message latency than existing solutions and, with unencrypted routing vectors, has similar latency to a baseline CBPS system.
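The routing-vector idea can be sketched as follows (our simplified illustration; v-CAPS additionally protects the vectors themselves and provides anonymity, both elided here): interests are matched once by a trusted component, and brokers forward using only the resulting vector while the payload stays encrypted.

```python
# Toy routing-vector forwarding: brokers never see message plaintext.
SUBSCRIBERS = ["s0", "s1", "s2"]
INTERESTS = {"s0": lambda m: m["topic"] == "drug-interaction",
             "s1": lambda m: m["price"] < 100,
             "s2": lambda m: m["topic"] == "insurance-quote"}

def make_routing_vector(message):
    # Runs at a trusted node: only here is the plaintext inspected.
    return [1 if INTERESTS[s](message) else 0 for s in SUBSCRIBERS]

def broker_forward(routing_vector, ciphertext):
    # The broker routes on the vector alone; the payload remains opaque.
    return [SUBSCRIBERS[i] for i, bit in enumerate(routing_vector) if bit]

msg = {"topic": "insurance-quote", "price": 80}
vec = make_routing_vector(msg)
print(broker_forward(vec, b"<encrypted payload>"))  # ['s1', 's2']
```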
Verification of Secure Cloud-based Workflow Services
Organizations are increasingly using independently developed Web services distributed over the network to provide access to their information and computation resources. For example, Software as a Service (SaaS) has increasingly been adopted by organizations in the private and public sectors. This creates a growing need to support secure interaction among autonomous cloud customers for developing distributed applications. We represent each cloud customer as a domain that operates according to its individual security and access control policies. Supporting secure interactions among domains for distributed workflows is a complex task prone to subtle errors that can have serious security implications. In this project we propose a Generalized Temporal Role Based Access Control (GTRBAC) model to specify the time-dependent access control policies deployed by autonomous cloud customers. In addition, we propose a framework for verifying secure composability of distributed workflows in an autonomous multi-domain environment. The objective of workflow composability verification is to ensure that all the users or processes executing the designated workflow tasks conform to the security policy specifications of all collaborating domains. We propose a two-step approach for verifying secure workflow composability. In the first step, a distributed workflow is decomposed into domain-specific projected workflows, each of which is verified for conformance with the respective domain's security and access control policy. In the second step, the cross-domain dependencies among the workflow tasks performed by different collaborating domains are verified.
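The flavor of the per-domain check in the first step can be sketched with a toy temporal role model (our illustration; GTRBAC's full language of periodic expressions, triggers, and constraints is far richer): a task is authorized only if its executing user holds a role that is both assigned and enabled at the task's execution time.

```python
# Toy time-dependent role check in the spirit of GTRBAC.
ROLE_INTERVALS = {"day_nurse": [(8, 20)],            # enabled 08:00-20:00
                  "night_nurse": [(20, 24), (0, 8)]} # enabled overnight
USER_ROLES = {"alice": {"day_nurse"}, "bob": {"night_nurse"}}

def role_enabled(role, hour):
    return any(start <= hour < end for start, end in ROLE_INTERVALS[role])

def authorized(user, required_role, hour):
    return required_role in USER_ROLES[user] and role_enabled(required_role, hour)

# Verifying a projected workflow task: "dispense medication" needs day_nurse at 14:00.
print(authorized("alice", "day_nurse", 14))    # True
print(authorized("bob", "night_nurse", 14))    # False: role not enabled now
```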
Web 2.0 in Organizations: Controlling Openness?
Preeti Rao
Technological advancements are influencing the ways in which people interact in organizational and societal contexts. Today, Web 2.0 technologies fundamentally affect human creativity in interacting, communicating and sharing information online. While these technologies are very popular in social contexts, organizations are quickly adopting them as well. Studies have shown that while organizations realize the benefits and value of Web 2.0 technologies, the security and productivity risks of Web 2.0 adoption remain barriers today. This research study addresses the question: "Are organizations able to harness the value of Web 2.0 while controlling its inherent openness?" The study analyzed a set of in-depth interviews (N=25) of experts in the field of social media from industry, the market, and academia. The analysis was conducted using content analysis and semantic network analysis with the Leximancer software. The robustness of the analysis process is attributed to both thematic and relational analysis of the content, followed by statistical significance testing of the results obtained. The results indicate four major themes in the content: organizations, people, security and policy. Openness, an inherent characteristic of Web 2.0, consisted of concepts like "create", "share", and "post", which refer to Web 2.0's idea of users contributing and sharing information online. Control, which can be seen as an organizational construct for managing information security risks, consisted of concepts like "control", "monitor", "policy", and "trust". Results revealed that openness is more associated with conceptual themes of people than with themes of organizations. At the same time, themes of organizations are linked with semantic constructs of control more than openness. This suggests that while organizations recognize the value of Web 2.0, they seek to exercise control over the inherent openness of such tools. This organizational tension of balancing openness with control of Web 2.0 technologies can be attributed to the fact that Web 2.0 tools are fundamentally tools to create, share and transmit (potentially sensitive) information beyond corporate networks and their control.
Web 2.0: A Complex Balancing Act
Lorraine Kisselburgh, Mihaela Vorvoreanu, Eugene Spafford & Preeti Rao
Defined broadly as consumer social media applications such as Facebook, Twitter and YouTube, plus specialized Enterprise 2.0 solutions, Web 2.0 has become a term surrounded by many debates. In the first global study on Web 2.0 usage, risks, and practices, we surveyed more than 1,000 organizational decision-makers in 17 countries and interviewed experts to develop an in-depth study of emerging policies and practices in how organizations balance the risks and benefits of using Web 2.0 technologies. Our findings show high Web 2.0 adoption: 75% of organizations use Web 2.0 for a variety of business functions, with the main driver being new revenue potential. Yet organizational decision makers continue to debate employee use of Web 2.0, either in the office or on the road. Security is the leading concern, and one of the main threats is employee use of social media: 33% restrict employee use, 25% monitor use, and 13% block all social media access. Social network sites are regarded as the main security threat, and are blocked by nearly half of the organizations surveyed. Organizations today employ a variety of measures to ensure safe use of Web 2.0: 66% have social media policies, and 71% use technology to enforce those policies. However, 33% of organizations have no social media policy, and 50% lack a policy for mobile Web 2.0. To address these challenges, many organizations have increased security protection since introducing Web 2.0 applications, through increased firewall protection (79%), web filtering (58%), and web gateway protection (53%). This study recommends a multi-layer security approach customized for Web 2.0-specific challenges to mitigate adoption risks. Successful organizational use of Web 2.0 is a complex balancing act. It requires analyzing challenges and opportunities while mitigating risks, and combining policy, employee training, and technology solutions to ensure security.
Yahoo Messenger Forensics for Windows Vista and Windows 7
Matthew Levendoski, Tejashere Datar
The purpose of this study is to identify several areas of interest within the Yahoo! Messenger application that are of forensic significance, focusing on new areas of interest within the file structures of Windows Vista and Windows 7. A key issue in this area is that little research has previously been conducted on the newer Windows platforms: prior work documents evidence found on older file structures, such as Windows XP, and on outdated versions of Yahoo! Messenger.