In 2002, 80% of all adults in the United States sought health information and/or services online. This article reports the results of computer-assisted telephone interviews of a national random sample of 186 adults. The purpose of the survey was to clarify the circumstances under which consumers utilize Internet health resources and identify barriers to Internet use. The results indicated that although 78% of the respondents had used the Internet to obtain health information, only about 10% communicated by e-mail with their providers, purchased supplies over the web, or used the Internet to manage a chronic disease. At the same time, more than 50% of the respondents indicated an interest in using the Internet for clinical purposes. Major barriers to the use of the Internet for health-related purposes were potential threats to privacy, inaccuracy of information, problems in evaluating the quality of information and services obtained from the web, and physician disapproval.
Improving software assurance is of paramount importance given the impact of software on our lives. Static and dynamic approaches have been proposed over the years to detect security vulnerabilities. These approaches assume that the signature of a defect, for instance the use of a vulnerable library function, is known a priori. A greater challenge is detecting defects whose signatures are not known a priori: unknown software defects. In this paper, we propose a general approach for the detection of unknown defects. Software defects are discovered by applying data-mining techniques to pinpoint deviations from common program behavior in the source code and by using statistical techniques to assign significance to each such deviation. We discuss the implementation of our tool, FaultMiner, and illustrate the power of the approach by inferring two types of security properties on four widely used programs. We found two new potential vulnerabilities, four previously known bugs, and several other violations. This suggests that fault mining is a useful and promising approach to finding unknown software defects.
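To make the mining idea concrete, here is a minimal sketch in the spirit of the approach, not FaultMiner itself. The abstract does not say which properties the tool infers; "this function's return value is usually checked" is one plausible example. The sketch treats the dominant behavior of each function as an implicit rule and flags the rare deviations; all names, thresholds, and input records are illustrative assumptions.

```python
from collections import Counter

# Hypothetical call-site records mined from source code: for each call,
# whether the return value is checked before use. In a real tool these
# would come from parsing the program, not from a hard-coded list.
call_sites = [
    ("malloc", True), ("malloc", True), ("malloc", True),
    ("malloc", True), ("malloc", False),   # the deviation
    ("printf", False), ("printf", False),  # too few samples to infer a rule
]

def find_deviations(sites, min_support=4, min_confidence=0.75):
    """Flag call sites that deviate from a function's dominant behavior."""
    totals, checked = Counter(), Counter()
    for fn, is_checked in sites:
        totals[fn] += 1
        checked[fn] += is_checked
    reports = []
    for fn, n in totals.items():
        if n < min_support:
            continue  # not enough evidence to call anything "common behavior"
        confidence = checked[fn] / n
        if confidence >= min_confidence:
            violations = n - checked[fn]  # candidates for unknown defects
            reports.append((fn, violations, confidence))
    return sorted(reports, key=lambda r: -r[2])

for fn, violations, conf in find_deviations(call_sites):
    print(f"{fn}: rule holds with confidence {conf:.0%}, {violations} violation(s)")
```

A statistical ranking like the confidence value above stands in for the paper's significance assignment: the more consistently a rule holds, the more suspicious its few violations become.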
We introduce an approach to robust computation in distributed systems. This approach is the foundation of reliability in the Clouds decentralized operating system. It is based on atomic actions operating on instances of abstract data types (objects). We present an event-based model of computation in which the scheduling of responses to operation invocations is controlled by objects. We discuss an integrated strategy for synchronization and recovery that uses relationships between the abstract states of objects to track dependencies between actions. Serializability is defined in terms of the semantics of operations. This permits high concurrency to be obtained in non-serializable implementations without deviation from serializable abstract behavior. We define a class of schedulers that allows objects to make autonomous scheduling decisions. We illustrate the use of non-serializable operation semantics. Finally, we discuss implementation of the model, including action synchronization, object operation ordering using action-based counting semaphores, and action recovery.
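The abstract mentions operation ordering via action-based counting semaphores. The toy sketch below is one guess at the flavor of such a primitive, not the Clouds implementation: a counting semaphore whose permits are owned by actions rather than threads, so an action's nested operations re-enter freely while other actions wait until the holder commits or aborts. The class name and API are assumptions.

```python
import threading

class ActionSemaphore:
    """A counting semaphore keyed by action: an action never blocks on a
    permit it already holds, and its permit is released only when the
    whole action commits or aborts (illustrative sketch only)."""

    def __init__(self, permits):
        self._cond = threading.Condition()
        self._free = permits
        self._held = {}  # action id -> number of nested acquisitions

    def acquire(self, action):
        with self._cond:
            if self._held.get(action, 0) > 0:
                self._held[action] += 1      # re-entrant for the same action
                return
            while self._free == 0:
                self._cond.wait()            # wait for another action to finish
            self._free -= 1
            self._held[action] = 1

    def release_all(self, action):
        """Called at commit or abort: return the action's permit."""
        with self._cond:
            if self._held.pop(action, 0) > 0:
                self._free += 1
                self._cond.notify_all()

sem = ActionSemaphore(permits=1)
sem.acquire("action-A")
sem.acquire("action-A")      # nested operation of the same action: no deadlock
sem.release_all("action-A")  # permit returned when the action completes
```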
Using abstract data types and nested actions as system structuring tools can help create more robust systems. In using these tools, several interesting principles have been encountered. First, in this environment synchronization and recovery should be associated with each object. By associating synchronization with each object and by using the semantics of the object's operations, it is possible to achieve higher concurrency. Binding recovery to objects permits efficient recovery techniques that might not be possible without the specific implementation knowledge available to the programmer of the object. Second, it is important to distinguish between the abstract behavior of an object and its implementation when analyzing concurrency. Third, using serializability for the abstract behavior of an object is sometimes undesirable or unnecessary. Whether an object provides serializability as its abstract behavior depends on the semantics of how the object is used. Examples of object types that motivate the principles are presented.
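As a one-screen illustration of how operation semantics buy concurrency, consider an object whose abstract behavior makes some operation pairs commute: a scheduler that consults semantics can admit interleavings that a purely read/write locking view would forbid. The object, operations, and conflict table here are assumed for illustration, not taken from the paper.

```python
# A toy bank-account object: two deposits commute at the abstract level
# (the final balance is the same in either order), so concurrent actions
# may both deposit, while withdraw conflicts with everything because its
# outcome depends on the current balance.
COMMUTES = {
    ("deposit", "deposit"): True,
    ("deposit", "withdraw"): False,
    ("withdraw", "deposit"): False,
    ("withdraw", "withdraw"): False,
}

def may_run_concurrently(op_a, op_b):
    """Semantic conflict test: concurrency is allowed iff the pair commutes."""
    return COMMUTES[(op_a, op_b)]

assert may_run_concurrently("deposit", "deposit")       # allowed by semantics
assert not may_run_concurrently("deposit", "withdraw")  # must be ordered
```

Note that a lock on the whole object, or read/write locks on the balance, would serialize the two deposits even though their abstract effect is order-independent; this is the gap between implementation and abstract behavior that the second principle points at.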
This work formally defines a digital forensic investigation and categories of analysis techniques. The definitions are based on an extended finite state machine (FSM) model that was designed to include support for removable devices and complex states and events. The model is used to define the concept of a computer’s history, which contains the primitive and complex states and events that existed and occurred. The goal of a digital investigation is to make valid inferences about a computer’s history.
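A toy rendering of this idea, with all states, events, and names invented for illustration rather than taken from the formal definitions: the computer is a state machine, a hypothesized history is a sequence of events, and the investigator checks whether replaying that history through the transition function is consistent with what was observed.

```python
# Minimal finite state machine: (state, event) -> next state.
TRANSITIONS = {
    ("clean", "download_file"): "file_present",
    ("file_present", "open_file"): "file_present",
    ("file_present", "delete_file"): "deleted",
}

def replay(initial_state, events):
    """Run the FSM; return the final state, or None if a step is impossible."""
    state = initial_state
    for event in events:
        state = TRANSITIONS.get((state, event))
        if state is None:
            return None  # this history could not have occurred
    return state

# Hypothesis about the computer's history: a file was downloaded, then deleted.
hypothesis = ["download_file", "delete_file"]
observed_final_state = "deleted"
print("consistent:", replay("clean", hypothesis) == observed_final_state)  # True
```

The point of the model is exactly this kind of inference: a history is never observed directly, only tested for consistency with the machine and with whatever evidence of its states survives.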
Unlike the physical world, where an investigator can directly observe objects, the digital world involves many indirect observations. The investigator cannot directly observe the state of a hard disk sector or bytes in memory. He can only directly observe the state of output devices. Therefore, all statements about digital states and events are hypotheses that must be tested to some degree.
Using the dynamic FSM model, seven categories and 31 unique classes of digital investigation analysis techniques are defined. The techniques in each category can be used to formulate and test different types of hypotheses, and the completeness of the categories is shown. The classes are defined based on the model design and current practice.
Using the categories of analysis techniques and the history model, the process models that investigators use are formally compared. Until now, it was not clear how the phases in the models were different. The model is also used to identify where assumptions are made during an investigation and to show differences between the concepts of digital forensics and the more traditional forensic disciplines.
Securing access to data in location-based services and mobile applications requires the definition of spatially aware access control systems. Although some approaches have already been proposed, either in the context of geographic database systems or of context-aware applications, a comprehensive framework, general and flexible enough to cope with spatial aspects in real mobile applications, is still missing. In this paper, we take a step in this direction and present GEO-RBAC, an extension of the RBAC model that deals with spatial and location-based information. In GEO-RBAC, spatial entities are used to model objects, user positions, and geographically bounded roles. Roles are activated based on the position of the user. Besides a physical position, obtained from a mobile terminal or cellular phone, users are also assigned a logical, device-independent position representing the feature (the road, the town, the region) in which they are located. To make the model more flexible and reusable, we also introduce the concept of role schema, specifying the name of the role as well as the type of its spatial boundary and the granularity of the logical position. We then extend GEO-RBAC to cope with hierarchies, modeling permission, user, and activation inheritance, as well as separation-of-duty constraints. The proposed classes of constraints extend traditional ones to deal with different granularities (schema/instance level) and with spatial information, and represent an attempt to define a suitable class of constraints for spatially aware applications. The paper concludes with an investigation of several properties of the resulting model.
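A minimal sketch of position-dependent role activation follows. It reduces spatial extents to axis-aligned bounding boxes and omits role schemas, logical positions, hierarchies, and constraints; the role names and coordinates are invented for illustration and the real model uses a proper geographic type system.

```python
from dataclasses import dataclass

@dataclass
class SpatialRole:
    name: str
    extent: tuple  # (xmin, ymin, xmax, ymax): the role's spatial boundary

    def contains(self, x, y):
        xmin, ymin, xmax, ymax = self.extent
        return xmin <= x <= xmax and ymin <= y <= ymax

def enabled_roles(assigned_roles, x, y):
    """A role can be activated only while the user is inside its boundary."""
    return [r.name for r in assigned_roles if r.contains(x, y)]

roles = [SpatialRole("taxi_driver@milan", (0, 0, 10, 10)),
         SpatialRole("taxi_driver@rome", (20, 0, 30, 10))]
print(enabled_roles(roles, 3, 4))    # ['taxi_driver@milan']
print(enabled_roles(roles, 50, 50))  # [] -> no permissions usable here
```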
The design of context-aware access control models with spatial constraints is still far from satisfactory in a very important respect, vis-
Authorization and access control in Web services is complicated by the unique requirements of the dynamic Web services paradigm. Amongst them is the requirement for a context-aware access control specification and a processing model to apply fine-grained access control on various components of a Web service. In this paper, we address these two requirements and present a policy-based authorization system that leverages an emerging Web service policy processing model, WS-Policy, and integrates it with X-GTRBAC, an XML-based access control model to allow specification and processing of fine-grained, context-aware authorization policies in dynamic Web services environments. The architecture is designed to support the WS-Policy Attachment specification, which allows attaching, retrieving and combining policies associated with various components of a Web service in the WSDL document. Consequently, we present an algorithm to compute the effective access control policy of a Web service based on its description. The effective policy, represented as a normalized WS-Policy document, is then used by the X-GTRBAC system to evaluate an incoming access request. We have prototyped our architecture, and implemented it as a loosely coupled Web service, with logically distinct, heterogeneous modules acting as Policy Enforcement Point (PEP) and Policy Decision Point (PDP). Our prototype demonstrates the true promise of the decentralized Web services architecture, and incorporates SAML-based single sign-on communication between multiple system modules.
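The effective-policy computation can be previewed with a small sketch based on the WS-Policy normal form, in which a policy is a set of alternatives and each alternative is a set of assertions; merging the policies attached to two components pairs up their alternatives and unions each pair. The assertion strings below are invented, and the paper's actual algorithm additionally walks the WSDL attachment points and hands the normalized result to X-GTRBAC for evaluation.

```python
from itertools import product

def merge(policy_a, policy_b):
    """WS-Policy-style merge: one alternative per pairing of alternatives,
    containing the union of the paired assertion sets."""
    return [alt_a | alt_b for alt_a, alt_b in product(policy_a, policy_b)]

# Policies attached at different scopes of the (hypothetical) service.
service_policy   = [frozenset({"transport:https"})]
operation_policy = [frozenset({"auth:saml-token"}),
                    frozenset({"auth:x509-token"})]

effective = merge(service_policy, operation_policy)
for alt in effective:
    print(sorted(alt))
# ['auth:saml-token', 'transport:https']
# ['auth:x509-token', 'transport:https']
```

An incoming request is then granted if it can satisfy every assertion in at least one effective alternative, which is what makes the normal form convenient for a decision point.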
The problem of key management in an access hierarchy has elicited much interest in the literature. The hierarchy is modeled as a set of partially ordered classes (represented as a directed graph), and a user who obtains access (i.e., a key) to a certain class can also obtain access to all descendant classes of her class through key derivation. Our solution to this problem has the following properties: (i) only hash functions are used for a node to derive a descendant’s key from its own key; (ii) the space complexity of the public information is the same as that of storing the hierarchy; (iii) the private information at a class consists of a single key associated with that class; (iv) updates (revocations, additions, etc.) are handled locally in the hierarchy; (v) the scheme is provably secure against collusion; and (vi) the number of bit operations needed for a node to derive a descendant’s key is linear in the length of the path between the nodes. Whereas many previous schemes had some of these properties, ours is the first that satisfies all of them. Moreover, for trees (and other “recursively decomposable” hierarchies), we are the first to achieve a worst- and average-case number of bit operations for key derivation that is exponentially better than the depth of a balanced hierarchy (double-exponentially better if the hierarchy is unbalanced, i.e., “tall and skinny”); this is achieved with only a constant increase in the space for the hierarchy. We also show how, with simple modifications, our scheme can handle the extensions proposed by Crampton of the standard hierarchies to “limited depth” and reverse inheritance. The security of our scheme relies only on the use of pseudo-random functions.
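Property (i) can be illustrated with a minimal sketch of hash-based edge derivation, here instantiating the pseudo-random function with HMAC-SHA256 (an assumption; the paper only requires some PRF): for each edge, the administrator publishes a value that lets the holder of the parent key, and no one else, recover the child key. This shows only the basic one-edge step; the improved derivation cost comes from extra shortcut edges, omitted here.

```python
import hmac, hashlib, os

def prf(key, label):
    """Pseudo-random function, instantiated with HMAC-SHA256 for the sketch."""
    return hmac.new(key, label, hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Setup for one edge parent -> child, done once by the administrator.
k_parent, k_child = os.urandom(32), os.urandom(32)   # the only secrets
child_label = b"class:child"                          # public name of the child
edge_public = xor(k_child, prf(k_parent, child_label))  # published openly

# Derivation: the holder of k_parent recovers k_child from public data alone.
derived = xor(edge_public, prf(k_parent, child_label))
assert derived == k_child
```

Chaining this step along a path yields a descendant's key from an ancestor's, one PRF evaluation per edge, which matches the path-length-linear derivation cost claimed in property (vi).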