The National Institute of Standards and Technology defines social engineering as an attack vector that deceives an individual into divulging confidential information or performing unwanted actions. Methods of social engineering include phishing, pretexting, tailgating, baiting, vishing, smishing, and quid pro quo. These attacks can have devastating effects, especially in the healthcare sector, where budgetary and time constraints limit defenses. To address these issues, this study asked cybersecurity experts to identify the social engineering attacks most important to defend against in the healthcare sector and to rank the underlying factors in terms of cost, success rate, and data breach. By creating a ranking that can be continually updated, organizations can provide more effective training to users and reduce the overall risk of a successful attack. This study identified phishing attacks via email, voice, and SMS as the most important to defend against, primarily due to the number of attacks. Baiting and quid pro quo consistently ranked lowest in priority.
Social engineering attacks have been a rising issue in recent years, affecting a multitude of industries. One industry of great interest to attackers is healthcare, due to the high value of patient information. Social engineering attacks are common mainly because they are easy to execute and have a high probability of victimization. A popular way of combatting these attacks is to increase users' ability to detect indicators of attack, which requires a level of cybersecurity education. While the number of cybersecurity training programs is increasing, social engineering attacks remain very successful. Therefore, education programs need to be improved to effectively increase users' ability to notice indicators of attack. This research aimed to answer the question: which teaching method results in the greatest learning gains for understanding social engineering concepts? Text-based, gamification, and adversarial thinking teaching methods were investigated, each used to deliver lessons on an online platform to a sample of Purdue students. Analysis showed that both the text-based and adversarial thinking methods significantly improved the students' understanding of social engineering concepts. A follow-up test found no single method to be best among the three. However, this study did identify two teaching methods that can be used to develop training programs and help decrease the total number of successful social engineering attacks across industries.
This article presents the most commonly identified Security Control Deficiencies (SCDs) faced by 127 DoD contractors, the attacks mitigated by addressing these SCDs, and the remediations suggested to bring the contractors into compliance with the newly formed CMMC guidelines, as well as the requirements and significance of cybersecurity compliance for small- to mid-sized businesses.
Information security practitioners and researchers who possess sufficient depth of conceptual understanding to reconstitute systems after attacks, or to adapt information security concepts to novel situations, are in short supply. Educating new information security professionals to sufficient conceptual depth is one way to reduce this shortage. This study instructed two groups of ten undergraduate, pre-cryptography Computer Science majors in cryptography concepts using representational-understanding-first and representational-fluency-first instructional treatments. Learning results were compared between the treatment groups using traditional paper-based measures of cognition and fMRI scans of brain activity during cryptography problem solving. Analysis found no statistically significant difference in measures of cognition or in cognitive processing, but it did produce a statistical model describing the relationships between explanatory variables and cryptography learning, and it identified common areas of cognitive processing of cryptography among the study's twenty subjects.
This research contributes to effective risk communication for mobile devices. Mobile devices are becoming near-universal, and their use carries risks that the average user does not understand. Users who do not comprehend these dangers are more likely to suffer negative consequences than those who do. One means of alerting users to the possible risks associated with an app is the permissions screen displayed with it. In this study, I examined how this risk information is presented by comparing two Android interfaces. A survey was conducted with 756 participants recruited through Amazon Mechanical Turk. Each survey contained a simulation of the Google Play Store and instructed participants to role-play the task of downloading an app. Afterwards, each participant was asked which permissions they had seen and what function each of those permissions serves. The survey compared the performance of users with the Android 5.0 and Android 6.0 interfaces and found that, while each version has its own strengths, neither was superior across all domains. Android 5.0 performed better at informing users which permissions an app accesses on their device, whereas Android 6.0 did better at presenting the functions of those permissions. The specific permissions associated with an app were a significant factor in whether a user could recall a permission's name or definition, as some permissions are more easily recalled than others. In addition, Android 6.0 proved more intuitive to use than Android 5.0. Although a pilot study showed users favored Android 6.0 over Android 5.0, the present study shows no clear evidence that Android 6.0 has a more effective permissions interface than Android 5.0.
Considerable attention has been given to the vulnerability of machine learning to adversarial samples. This is particularly critical in anomaly detection; uses such as detecting fraud, intrusion, and malware must assume a malicious adversary. We specifically address poisoning attacks, where the adversary injects carefully crafted benign samples into the data, leading to concept drift that causes the anomaly detection to misclassify the actual attack as benign. Our goal is to estimate the vulnerability of an anomaly detection method to an unknown attack, in particular the expected minimum number of poison samples the adversary would need to succeed. Such an estimate is a necessary step in risk analysis: do we expect the anomaly detection to be sufficiently robust to be useful in the face of attacks? We analyze DBSCAN, LOF, and one-class SVM as anomaly detection methods and derive estimates of their robustness to poisoning attacks. The analytical estimates are validated against the number of poison samples needed for the actual anomalies in standard anomaly detection test datasets. We then develop a defense mechanism, based on the concept drift caused by the poison samples, to identify that an attack is underway. We show that while it is possible to detect the attacks, doing so degrades the performance of the anomaly detection method. Finally, we investigate whether adversarial samples generated for one anomaly detection method transfer to another.
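To make the poisoning setting concrete, the following is a minimal sketch, not the paper's analytical method: scikit-learn's one-class SVM stands in for the detectors studied, the two-dimensional data and injection schedule are hypothetical, and the loop simply counts how many crafted benign-looking samples are injected before the retrained detector classifies a target attack point as benign.

```python
# A minimal sketch of a poisoning attack against an anomaly detector:
# benign-looking points are injected progressively closer to a target attack
# point until the retrained detector's boundary drifts enough to classify
# the target as benign. Data, step sizes, and batch size are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 2))   # nominal traffic
target = np.array([[6.0, 6.0]])                          # the actual attack point

def is_benign(train_data, point):
    detector = OneClassSVM(nu=0.05, gamma="scale").fit(train_data)
    return detector.predict(point)[0] == 1               # +1 means inlier/benign

poison = np.empty((0, 2))
for frac in np.linspace(0.2, 1.0, 25):                   # fractions of the way to the target
    if is_benign(np.vstack([clean, poison]), target):
        break
    # Craft a batch part-way toward the target; each point looks plausible
    # on its own, but collectively they cause concept drift in the boundary.
    batch = frac * target + rng.normal(scale=0.3, size=(10, 2))
    poison = np.vstack([poison, batch])

print(f"poison samples injected before target classified benign: {len(poison)}")
```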
More than ever, information system designers must provide security protection against a wide variety of threats. While numerous sources of guidance are available to inform the design process, system architects often improvise their own design methods. This paper aims to distill the experience gained by NSA trusted system analysts over decades so that it can be practically applied by others. The general approach is to identify and reduce the number of assumptions on which the security of the system depends. Simply making these assumptions explicit and showing their interdependence has significant, albeit difficult to quantify, benefits for system security. Our hope is that this design methodology will serve as the starting point for the development of a more formal and robust engineering methodology for trusted system design.
For organizations moving to the cloud, this paper presents the security and privacy concerns that should be considered.
This dissertation introduces a scorecard to enable the State of Indiana to measure the cybersecurity of its public and private critical infrastructure and key resource sector organizations. The scorecard was designed to be non-threatening and understandable, so that even small organizations without cybersecurity expertise can voluntarily self-assess their cybersecurity strengths and weaknesses. The scorecard was also intended to enable organizations to learn, so that they may identify and self-correct their cybersecurity vulnerabilities. The scorecard provided quantifiable feedback to enable organizations to benchmark their initial status and measure their future progress.
Using the scorecard, the Indiana Executive Council for Cybersecurity launched a Pilot to measure the cybersecurity of large, medium, and small organizations across eleven critical infrastructure and key resource sectors. This dissertation presents the analysis and results from scorecard data provided by the Pilot group of 56 organizations. The cybersecurity scorecard developed as part of this dissertation has been included in the Indiana Cybersecurity Strategy Plan published September 21, 2018.
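As an illustration of how such a scorecard can yield quantifiable, benchmarkable feedback, here is a minimal sketch; the domains, weights, and 0-4 maturity scale are hypothetical and are not the instrument developed in the dissertation.

```python
# A minimal sketch of scorecard aggregation: weighted self-assessed maturity
# per domain rolls up into a 0-100 score an organization can benchmark and
# re-measure over time. Domain names, weights, and scale are hypothetical.
from dataclasses import dataclass

@dataclass
class DomainScore:
    name: str
    weight: float      # relative importance; weights sum to 1.0
    maturity: int      # self-assessed maturity on a 0-4 scale

def overall_score(domains: list[DomainScore]) -> float:
    """Weighted maturity normalized to a 0-100 score for benchmarking."""
    return 100 * sum(d.weight * d.maturity / 4 for d in domains)

baseline = [
    DomainScore("Identify", 0.20, 2),
    DomainScore("Protect",  0.30, 1),
    DomainScore("Detect",   0.20, 2),
    DomainScore("Respond",  0.15, 3),
    DomainScore("Recover",  0.15, 1),
]
print(f"baseline: {overall_score(baseline):.1f} / 100")  # later runs measure progress
```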
A user’s digital identity information has both privacy and security requirements. Privacy requirements include confidentiality of the identity information itself, anonymity of those who verify and consume a user’s identity information, and unlinkability of online transactions that involve a user’s identity. Security requirements include correctness, ownership assurance, and prevention of counterfeits of a user’s identity information. These privacy and security requirements, although conflicting in nature, are critical for identity management systems that enable the exchange of users’ identity information between different parties during the execution of online transactions. Addressing all of these requirements without a centralized party managing the identity exchange transactions raises several challenges. This paper presents a decentralized protocol for privacy-preserving exchange of users’ identity information that addresses these challenges. The proposed protocol leverages advances in blockchain and zero-knowledge proof technologies as its main building blocks. We provide prototype implementations of the protocol's main building blocks and assess its performance and security.
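To convey the flavor of such building blocks, the sketch below uses a simple hash commitment with a fresh nonce per transaction; this is a hypothetical simplification, since the protocol itself relies on blockchain and zero-knowledge proofs, which permit verification without the reveal step shown here.

```python
# A minimal commit-and-reveal sketch for privacy-preserving identity exchange.
# The user publishes only the commitment (e.g., on a ledger); using a fresh
# nonce per transaction keeps separate transactions unlinkable. A real ZKP
# would avoid ever revealing the attribute, unlike the verify step below.
import hashlib
import secrets

def commit(attribute: bytes) -> tuple[bytes, bytes]:
    """User publishes the commitment and keeps the nonce secret."""
    nonce = secrets.token_bytes(32)          # fresh randomness -> unlinkability
    return hashlib.sha256(nonce + attribute).digest(), nonce

def verify(commitment: bytes, nonce: bytes, attribute: bytes) -> bool:
    """Verifier checks an opened commitment against the published value."""
    return hashlib.sha256(nonce + attribute).digest() == commitment

c, n = commit(b"date_of_birth:1990-01-01")
assert verify(c, n, b"date_of_birth:1990-01-01")       # correctness
assert not verify(c, n, b"date_of_birth:1999-12-31")   # counterfeit rejected
```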
Renewable energy resources challenge traditional energy system operations by substituting the stability and predictability of fossil-fuel-based generation with the unreliability and uncertainty of wind and solar power. Rising demand for green energy drives grid operators to integrate sensors, smart meters, and distributed control to compensate for this uncertainty and improve the operational efficiency of the grid. Real-time negotiations enable producers and consumers to adjust power loads during shortage periods, such as an unexpected outage or weather event, and to adapt to time-varying energy needs. While such systems improve grid performance, practical implementation challenges can derail the operation of these distributed cyber-physical systems. Network disruptions introduce instability into control feedback systems, and strategic adversaries can manipulate power markets for financial gain. This dissertation analyzes the impact of these outages and adversaries on cyber-physical systems and provides methods for improving resilience, with an emphasis on distributed energy systems. First, a financial model of an interdependent energy market lays the groundwork for profit-oriented attacks and defenses, and a game-theoretic strategy optimizes attack plans and defensive investments in energy systems with multiple independent actors. Then attacks and defenses are translated from a theoretical context to a real-time energy market via denial-of-service (DoS) outages and moving target defenses. Analysis of two market mechanisms shows how adversaries can disrupt market operation, destabilize negotiations, and extract profits by attacking network links and disrupting communication. Finally, a low-cost DoS defense technique demonstrates a method that energy systems may use to defend against attacks.
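As a toy illustration of the game-theoretic layer, the following sketch (with hypothetical payoff numbers, not the dissertation's market model) computes pure-strategy choices for an attacker who DoSes a network link and a defender who hardens one.

```python
# A minimal attacker-defender game over network links: the attacker picks a
# link to DoS, the defender picks a link to harden, and the payoff is the
# attacker's profit. All numbers are hypothetical illustration values.
import numpy as np

# profit[i, j]: attacker hits link i while defender hardens link j;
# hardening the attacked link (diagonal) nearly eliminates the profit.
profit = np.array([
    [1.0, 8.0, 8.0],
    [6.0, 0.5, 6.0],
    [9.0, 9.0, 2.0],
])

# With pure strategies, the defender minimizes the attacker's best response
# (minimax), and the attacker then best-responds to that defense.
defender_choice = int(np.argmin(profit.max(axis=0)))
attacker_choice = int(np.argmax(profit[:, defender_choice]))
print(f"defender hardens link {defender_choice}, attacker hits link "
      f"{attacker_choice}, profit {profit[attacker_choice, defender_choice]}")
```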
Unauthorized data destruction results in a loss of digital information and services, a devastating issue for a society and commerce that rely on the availability and integrity of such systems. Remote adversaries who seek to destroy or alter digital information persistently study the protection mechanisms and craft attacks that circumvent defenses such as data backup or recovery. This dissertation evaluates the use of deception to enhance the preservation of data under threat of unauthorized destruction attacks. The motivation for the proposed solution is two-fold. (i) An honest and consistent view of the preservation mechanisms is observable, and often controllable, from within the system under protection, allowing the adversary to identify an appropriate attack for the given system. (ii) The adversary relies on some underlying I/O system to facilitate destruction and assumes, through a confirmation bias built on prior interactions with similar systems, that the components operate as expected. A deceptive memory system, DecMS, masks the presence of data preservation and mimics a system conforming to the adversary’s confirmation bias. Two proofs of concept and several destructive threat instances evaluate the feasibility of a DecMS. The first proof of concept, DecMS-Kernel, uses rootkits’ stealth mechanisms to mask the presence of DecMS and impede potentially destructive writes, enabling preservation of data before destruction. The experimental results show that DecMS is effective against two common secure-delete tools and an application that mimics crypto-ransomware methods.
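The following is a minimal user-space sketch of the DecMS idea; the actual prototypes operate at the kernel level, and the preservation-store path here is hypothetical. A "destructive" overwrite appears to succeed exactly as the adversary expects, while the original contents are silently preserved first.

```python
# A user-space sketch of deceptive data preservation: intercept a destructive
# overwrite, silently copy the victim file into a hidden store, then let the
# write proceed so the adversary observes the expected I/O behavior.
import os
import shutil
import time

PRESERVE_DIR = "/var/preserve"          # hidden store; hypothetical location

def deceptive_overwrite(path: str, data: bytes) -> None:
    """Preserve current contents, then perform the 'destructive' write."""
    os.makedirs(PRESERVE_DIR, exist_ok=True)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    # Silent preservation step, invisible to the writer.
    shutil.copy2(path, os.path.join(PRESERVE_DIR,
                                    f"{os.path.basename(path)}.{stamp}"))
    with open(path, "wb") as f:         # the write the adversary observes
        f.write(data)
```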
Mobile apps pose both traditional and new potential threats to system security and user privacy. There are malicious apps that may harm the system, and there are app misbehaviors that are reasonable and legal when not abused yet may lead to real threats otherwise. Moreover, due to the nature of mobile apps, the app running on a mobile device may be only part of the software, and the server-side behavior is usually not covered by analysis. Therefore, direct analysis of the app itself may be incomplete, and additional sources of information are needed. In this dissertation, we discuss how machine learning techniques can be applied to multiple security tasks for mobile apps on the Android platform, including malicious app detection and security risk estimation. Both direct sources of information, from the apps' developers, and indirect sources, from user comments, are utilized in these tasks. We also compare these different sources on the security risk estimation task to demonstrate the necessity of indirect sources in mobile app security tasks.
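As a sketch of how direct and indirect sources can be combined, the following hypothetical example (toy data and features, not the dissertation's models) feeds developer-supplied metadata and user-comment text into a single scikit-learn risk classifier.

```python
# A minimal sketch combining a direct source (developer-supplied metadata,
# here the number of requested permissions) with an indirect source (user
# comments) for app risk estimation. Data and labels are toy illustrations.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

apps = pd.DataFrame({
    "num_permissions": [3, 27, 5, 31],
    "comments": [
        "works great, love it",
        "keeps asking for contacts and sends spam texts",
        "simple flashlight app, no problems",
        "battery drain and popup ads everywhere, possible malware",
    ],
    "risky": [0, 1, 0, 1],              # toy labels for illustration only
})

model = Pipeline([
    ("features", ColumnTransformer([
        ("meta", "passthrough", ["num_permissions"]),     # direct source
        ("text", TfidfVectorizer(), "comments"),          # indirect source
    ])),
    ("clf", LogisticRegression()),
])
model.fit(apps[["num_permissions", "comments"]], apps["risky"])
print(model.predict(apps[["num_permissions", "comments"]]))
```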