In this paper we address the problem of ensuring the correctness of query results returned by an untrusted private database. The database owns the data and may modify it at any time. The querier is allowed to execute queries over this database; however, it may not learn anything more than the results of these legal queries. The querier does not necessarily trust the database and would like the owner to furnish proof that the data has not been modified in response to recent events, such as the submission of the query itself. We develop two metrics that capture the correctness of query answers and propose a range of solutions that trade off the degree of exposure of private data against the overhead of generating and verifying the proof. Our proposed solutions are tested on real data through an implementation using PostgreSQL.
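The paper's own proof metrics and protocols are not reproduced here, but a standard building block for authenticating query results against an untrusted data owner is a Merkle hash tree: the owner commits to a root hash, and each returned tuple is accompanied by an authentication path the querier can check. A minimal sketch (the SHA-256 choice and all function names are illustrative assumptions, not the paper's construction):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # levels[0] = leaf hashes, levels[-1] = [root hash]
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    # collect the sibling hash at every level, bottom-up
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]  # same padding rule as build_tree
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))
        index //= 2
    return path

def verify(leaf, path, root):
    # recompute the root from the leaf and its authentication path
    node = h(leaf)
    for sib, sib_is_left in path:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root
```

The querier only needs the (small, public) root hash to check any returned tuple, so verification cost grows logarithmically in the database size.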
Similar document detection plays an important role in many applications, such as file management, copyright protection, and plagiarism prevention. Existing protocols assume that the contents of files stored on a server (or multiple servers) are directly accessible. This assumption rules out more sensitive applications, e.g., detecting plagiarized documents across two conferences whose submissions are confidential. We propose novel protocols to detect similar documents held by two entities that cannot openly share their documents with each other. We also conduct experiments to show the practical value of the proposed protocols.
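The private protocols themselves rely on cryptographic machinery not shown here, but the similarity notion such protocols typically compute is a set-resemblance measure, e.g., Jaccard similarity over word shingles. A plain, non-private sketch for intuition (the shingle width `w` is an illustrative choice):

```python
def shingles(text: str, w: int = 3) -> set:
    # the set of w-word shingles of a whitespace-tokenized document
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a: set, b: set) -> float:
    # |intersection| / |union|, in [0, 1]
    return len(a & b) / len(a | b) if a | b else 0.0
```

A privacy-preserving protocol would compute this measure without either party revealing its shingle set in the clear.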
Many information sources contain content that can only be accessed through their own source-specific search engines. Federated search provides a search solution for this type of hidden information, which cannot be reached by conventional search engines. In many federated search scenarios, such as search among health care providers or among intelligence agencies, an individual information source does not want to disclose to users or to other sources which results it contributed. Therefore, this paper proposes a two-step federated search protocol that protects the privacy of information sources. To the best of our knowledge, this is the first attempt to address the research problem of protecting source privacy in federated text search.
Effective mitigation of denial of service (DoS) attacks is a pressing problem on the Internet. In many instances, DoS attacks can be prevented if the spoofed source IP address is traced back to its origin, which allows assigning penalties to the offending party or isolating the compromised hosts and domains from the rest of the network. IP traceback mechanisms based on probabilistic packet marking (PPM) have been proposed for achieving traceback of DoS attacks. We show that probabilistic packet marking, of interest due to its efficiency and implementability relative to deterministic packet marking and logging- or messaging-based schemes, suffers under spoofing of the marking field in the IP header by the attacker, which can impede traceback by the victim. We show that there is a trade-off between the ability of the victim to localize the attacker and the severity of the DoS attack, which is represented as a function of the marking probability, path length, and traffic volume. The optimal decision problem, in which the victim chooses the marking probability while the attacker chooses the spoofed marking value, source address, and attack volume, can be expressed as a constrained minimax optimization problem where the victim chooses the marking probability such that the number of forgeable attack paths is minimized. We show that the attacker's ability to hide his location is curtailed by increasing the marking probability; however, the latter is upper-bounded due to sampling constraints. In typical IP internetworks, the attacker's address can be localized to within 2-5 equally likely sites, which renders PPM effective against single-source attacks. Under distributed DoS attacks, the uncertainty achievable by the attacker can be amplified, which diminishes the effectiveness of PPM.
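The paper's minimax formulation is not reproduced here, but the underlying sampling tension is easy to illustrate: a marking spoofed by the attacker survives to the victim only if none of the d routers on the path re-marks the packet, which under independent marking with probability p happens for a fraction (1 - p)^d of packets. A minimal sketch (the numeric values of p and d are illustrative):

```python
def unmarked_fraction(p: float, d: int) -> float:
    # fraction of attack packets that traverse all d routers without being
    # re-marked; these arrive with the attacker's spoofed marking field intact
    return (1 - p) ** d

# raising the marking probability shrinks the attacker's forgeable fraction,
# but p cannot be raised arbitrarily: legitimate distant markings are then
# overwritten too often (the sampling constraint the abstract refers to)
for p in (0.01, 0.04, 0.2):
    print(f"p={p}: spoofed fraction over 15 hops = {unmarked_fraction(p, 15):.3f}")
```

This is the trade-off the victim optimizes over: p large enough to suppress forged paths, small enough that genuine far-away markings still arrive.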
Sensors are often employed to monitor continuously changing entities like locations of moving objects and temperature. The sensor readings are reported to a centralized database system, and are subsequently used to answer queries. Due to continuous changes in these values and limited resources (e.g., network bandwidth and battery power), the database may not be able to keep track of the actual values of the entities, and may use old values instead. Queries that use these old values may produce incorrect answers. However, if the degree of uncertainty between the actual data value and the database value is limited, one can place more confidence in the answers to the queries. In this paper, we present a framework that represents the uncertainty of sensor data. Depending on the amount of uncertainty information given to the application, different levels of imprecision are presented in a query answer. We examine the situations when answer imprecision can be represented qualitatively and quantitatively. We propose a new kind of probabilistic query, called the Probabilistic Threshold Query, which requires answers to have probabilities larger than a certain threshold value. We also study techniques for evaluating queries under different details of uncertainty, and investigate the trade-off between data uncertainty, answer accuracy, and computation costs.
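A probabilistic threshold query can be sketched concretely if, for simplicity, each stale reading is modeled by a uniform uncertainty interval [lo, hi] around the last reported value (the paper treats more general uncertainty models; the names below are illustrative):

```python
def prob_above(lo: float, hi: float, v: float) -> float:
    # P(X > v) for a sensor value X modeled as uniform on [lo, hi]
    if v <= lo:
        return 1.0
    if v >= hi:
        return 0.0
    return (hi - v) / (hi - lo)

def probabilistic_threshold_query(readings, v, tau):
    # keep only sensors whose probability of exceeding v is above threshold tau
    return [s for s, (lo, hi) in readings.items()
            if prob_above(lo, hi, v) > tau]
```

For example, with `readings = {"s1": (10, 20), "s2": (18, 30), "s3": (0, 5)}`, the query "which sensors exceed 15 with probability above 0.5?" returns only `s2`: `s1` exceeds 15 with probability exactly 0.5, which does not clear the threshold.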
Group communication systems are high-availability distributed systems providing reliable and ordered message delivery, as well as a membership service, to group-oriented applications. Many such systems are built using a distributed client-server architecture where a relatively small set of servers provide service to numerous clients. In this work, we show how group communication systems can be enhanced with security services without sacrificing robustness and performance. More specifically, we propose several integrated security architectures for distributed client-server group communication systems. In an integrated architecture, security services are implemented in servers, in contrast to a layered architecture, where the same services are implemented in clients. We discuss performance and accompanying trust issues of each proposed architecture and present experimental results that demonstrate the superior scalability of an integrated architecture.
This paper presents the first hierarchical Byzantine fault-tolerant replication architecture suitable to systems that span multiple wide area sites. The architecture confines the effects of any malicious replica to its local site, reduces message complexity of wide area communication, and allows read-only queries to be performed locally within a site for the price of additional hardware. A prototype implementation is evaluated over several network topologies and is compared with a flat Byzantine fault-tolerant approach.
A main challenge in mobile commerce is to make it possible for users to manage their transaction histories from both online e-commerce transactions and in-person transactions. Such histories are typically useful to build credit or to establish trust based on past transactions. In this paper we propose an approach to manage electronic receipts on cellular devices while assuring their secure and privacy-preserving usage. We provide a comprehensive notion of transaction history including both online transactions and in-person transactions. We apply cryptographic protocols, such as secret sharing and zero-knowledge proofs, in a potentially vulnerable and constrained setting. Specifically, our approach supports flexible strategies based on Shamir's secret sharing to cater to different user requirements and architectural constraints. In addition, aggregate zero-knowledge proofs are used to efficiently support proofs of various receipt attributes. We have implemented the system on Nokia NFC cellular phones and report performance evaluation results.
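Shamir's (k, n) secret sharing, the building block named above, splits a secret into n shares so that any k reconstruct it and fewer reveal nothing. A minimal sketch over a prime field (the field size and share layout are illustrative choices, not the paper's parameters):

```python
import random

P = 2**127 - 1  # a Mersenne prime; field must exceed the secret

def split(secret: int, n: int, k: int):
    # random polynomial of degree k-1 with constant term = secret;
    # share i is the point (i, f(i))
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

On a constrained device, shares of a receipt key can be spread across the phone, a server, and a secure element, so that no single compromised component exposes the receipt.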
An ad hoc wireless network is an autonomous self-organizing system of mobile nodes connected by wireless links where nodes not in direct range can communicate via intermediate nodes. A common technique used in routing protocols for ad hoc wireless networks is to establish the routing paths on-demand, as opposed to continually maintaining a complete routing table. A significant concern in routing is the ability to function in the presence of Byzantine failures, which include nodes that drop, modify, or mis-route packets in an attempt to disrupt the routing service. We propose an on-demand routing protocol for ad hoc wireless networks that provides resilience to Byzantine failures caused by individual or colluding nodes. Our adaptive probing technique detects a malicious link after log n faults have occurred, where n is the length of the path. These links are then avoided by multiplicatively increasing their weights and by using an on-demand route discovery protocol that finds a least-weight path to the destination.
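The adaptive probing idea, locating a faulty link on an n-hop path with O(log n) probes and then penalizing it multiplicatively so route discovery avoids it, can be sketched as follows (the `prefix_ok` probe oracle and the penalty factor are illustrative assumptions, not the paper's exact mechanism):

```python
def locate_faulty_link(path, prefix_ok):
    # binary search over the path: prefix_ok(i) is True if probes routed
    # through path[i] are acknowledged. Assuming delivery succeeds at the
    # source and fails at the destination, this finds the first failing
    # link (path[i], path[i+1]) using O(log n) probes.
    lo, hi = 0, len(path) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if prefix_ok(mid):
            lo = mid
        else:
            hi = mid
    return (path[lo], path[hi])

def penalize(weights, link, factor=2.0):
    # multiplicatively increase the link weight; a least-weight route
    # discovery will then steer around repeatedly faulty links
    weights[link] = weights.get(link, 1.0) * factor
    return weights
```

Repeated faults on the same link drive its weight up geometrically, so even intermittently misbehaving links are eventually bypassed.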