The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive


Measuring Impact of DoS Attacks

Jelena Mirkovic, Sonia Fahmy, Peter Reiher, Roshan Thomas, Alefiya Hussain, Steven Schwab, Calvin Ko
Added 2008-05-12

Database middleware for distributed ontologies in state and federal family & social services

Athman Bouguettaya, Mourad Ouzzani, Ahmed Elmagarmid, Brahim Medjahed

Collecting benefits using current FSSA systems is time-consuming, frustrating, and complex for needy citizens and social workers. The process requires citizens to visit several offices, in and outside their hometowns, to receive the benefits they are entitled to. In many cases, dealing with this process prevents underprivileged citizens from devoting adequate time to enhancing their prospects of becoming self-supporting, with a consequent harmful impact on their health and safety.

Added 2008-05-09

Business-to-business interactions: issues and enabling technologies

B. Medjahed, B. Benatallah, A. Bouguettaya, A.H.H. Ngu, A.K. Elmagarmid

Business-to-Business (B2B) technologies pre-date the Web; they have existed for at least as long as the Internet. B2B applications were among the first to take advantage of advances in computer networking. The Electronic Data Interchange (EDI) business standard is an illustration of such early adoption. The ubiquity and affordability of the Web have made it possible for businesses at large to automate their B2B interactions. However, several issues related to scale, content exchange, autonomy, and heterogeneity still need to be addressed. In this paper, we survey the main techniques, systems, products, and standards for B2B interactions. We propose a set of criteria for assessing the different B2B interaction techniques, standards, and products.

Added 2008-05-09

Characterizing overlay multicast networks and their costs

Sonia Fahmy, Minseok Kwon

Overlay networks among cooperating hosts have recently emerged as a viable solution to several challenging problems, including multicasting, routing, content distribution, and peer-to-peer services. Application-level overlays, however, incur a performance penalty over router-level solutions. This paper quantifies and explains this performance penalty for overlay multicast trees via: 1) Internet experimental data; 2) simulations; and 3) theoretical models. We compare a number of overlay multicast protocols with respect to overlay tree structure, and underlying network characteristics. Experimental data and simulations illustrate that the mean number of hops and mean per-hop delay between parent and child hosts in overlay trees generally decrease as the level of the host in the overlay tree increases. Overlay multicast routing strategies, overlay host distribution, and Internet topology characteristics are identified as three primary causes of the observed phenomenon. We show that this phenomenon yields overlay tree cost savings: Our results reveal that the normalized cost L(n)/U(n) is proportional to n^0.9 for small n, where L(n) is the total number of hops in all overlay links, U(n) is the average number of hops on the source-to-receiver unicast paths, and n is the number of members in the overlay multicast session. This can be compared to an IP multicast cost proportional to n^0.6 to n^0.8.
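The normalized cost L(n)/U(n) defined above can be illustrated with a small numeric sketch. The overlay tree, member names, and per-link hop counts below are invented for illustration, not taken from the paper's measurements.

```python
# Hypothetical example: compute the normalized overlay cost L(n)/U(n)
# for a tiny multicast session. All hop counts are illustrative only.

# Overlay tree edges (parent -> child) with their underlying IP hop counts.
overlay_links = {("src", "A"): 5, ("A", "B"): 3, ("A", "C"): 4, ("B", "D"): 2}

# Unicast hop counts from the source to each receiver.
unicast_hops = {"A": 5, "B": 8, "C": 9, "D": 10}

L = sum(overlay_links.values())                      # total hops over all overlay links
U = sum(unicast_hops.values()) / len(unicast_hops)   # mean source-to-receiver hops
normalized_cost = L / U
print(L, U, normalized_cost)  # 14 8.0 1.75
```

Repeating this computation while growing n is how one would observe the n^0.9 scaling trend empirically.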

Added 2008-05-09

On TCP throughput and window size in a multihop wireless network testbed

Dimitrios Koutsonikolas, Jagadeesh Dyaberi, Prashant Garimella, Sonia Fahmy, Y. Charlie Hu

Although it is well-known that TCP throughput is suboptimal in multihop wireless networks, little performance data is available for TCP in realistic wireless environments. In this paper, we present the results of an extensive experimental study of TCP performance on a 32-node wireless mesh network testbed deployed on the Purdue University campus. Contrary to prior work which considered a single topology with equal-length links and only 1-hop neighbors within transmission range of each other, our study considers more realistic heterogeneous topologies. We vary the maximum TCP window size, in correlation with two important MAC layer parameters: the use of RTS/CTS and the MAC data rate. Based on our TCP throughput results, we give recommendations on configuring TCP and MAC parameters, which in many cases contradict previous proposals (which had themselves contradicted each other).

Added 2008-05-09

FlowMate: scalable on-line flow clustering

Ossama Younis, Sonia Fahmy

We design and implement an efficient on-line approach, FlowMate, for clustering flows (connections) emanating from a busy server, according to shared bottlenecks. Clusters can be periodically input to load balancing, congestion coordination, aggregation, admission control, or pricing modules. FlowMate uses in-band (passive) end-to-end delay measurements to infer shared bottlenecks. Delay information is piggybacked on feedback from the receivers, or, if impossible, TCP or application round-trip time estimates are used. We simulate FlowMate and examine the effects of network load, traffic burstiness, network buffer sizes, and packet drop policies on clustering correctness, evaluated via a novel accuracy metric. We find that coordinated congestion management techniques are more fair when integrated with FlowMate. We also implement FlowMate in the Linux kernel v2.4.17 and evaluate its performance on the Emulab testbed, using both synthetic and tcplib-generated traffic. Our results demonstrate that clustering of medium to long-lived flows is accurate, even with bursty background traffic. Finally, we validate our results on the Internet Planetlab testbed.
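The shared-bottleneck intuition behind FlowMate can be sketched as follows: flows whose end-to-end delay samples rise and fall together are assumed to share a bottleneck. The greedy correlation test below is an illustrative stand-in for FlowMate's actual inference technique, and the delay samples are invented.

```python
# Minimal sketch of delay-based flow clustering: group flows whose delay
# samples are strongly correlated (hypothetical threshold of 0.9).

def pearson(x, y):
    """Pearson correlation of two equal-length sample sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def cluster_flows(delays, threshold=0.9):
    """Greedily group flows whose delay samples correlate above threshold."""
    clusters = []
    for flow in delays:
        for cluster in clusters:
            rep = cluster[0]  # compare against the cluster's first member
            if pearson(delays[flow], delays[rep]) >= threshold:
                cluster.append(flow)
                break
        else:
            clusters.append([flow])
    return clusters

delays = {
    "f1": [10, 12, 30, 11, 28],   # f1 and f2 spike together: shared bottleneck
    "f2": [11, 13, 31, 12, 29],
    "f3": [20, 20, 21, 20, 20],   # f3 stays flat: a different path
}
print(cluster_flows(delays))  # [['f1', 'f2'], ['f3']]
```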

Added 2008-05-09

Towards user-centric metrics for denial-of-service measurement

Jelena Mirkovic, Alefiya Hussain, Brett Wilson, Sonia Fahmy, Peter Reiher, Roshan Thomas, Wei-Min Yao, Stephen Schwab

To date, the measurement of user-perceived degradation of quality of service during denial-of-service (DoS) attacks has remained an elusive goal. Current approaches mostly rely on lower level traffic measurements such as throughput, utilization, loss rate, and latency. They fail to monitor all traffic parameters that signal service degradation for diverse applications, and to map application quality-of-service (QoS) requirements into specific parameter thresholds. To objectively evaluate an attack’s impact on network services, its severity and the effectiveness of a potential defense, we need precise, quantitative and comprehensive DoS impact metrics that are applicable to any test scenario.

We propose a series of DoS impact metrics that measure the QoS experienced by end users during an attack. The proposed metrics consider QoS requirements for a range of applications and map them into measurable traffic parameters with acceptable thresholds. Service quality is derived by comparing measured parameter values with corresponding thresholds, and aggregated into a series of appropriate DoS impact metrics. We illustrate the proposed metrics using extensive live experiments, with a wide range of background traffic and attack variants. We successfully demonstrate that our metrics capture the DoS impact more precisely than the measures used in the past.
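The threshold-mapping idea above can be sketched as a table of per-category bounds on measurable traffic parameters, where a measured sample passes only if every bound holds. The categories, parameter names, and numbers here are hypothetical, not the paper's actual thresholds.

```python
# Illustrative per-category QoS thresholds (all values invented).
QOS_THRESHOLDS = {
    # category: bounds on delay (ms), loss rate, and throughput (kbps)
    "voip": {"max_delay_ms": 150,  "max_loss": 0.03, "min_tput_kbps": 64},
    "web":  {"max_delay_ms": 4000, "max_loss": 0.10, "min_tput_kbps": 30},
}

def meets_qos(category, delay_ms, loss, tput_kbps):
    """True iff every measured parameter satisfies its category threshold."""
    t = QOS_THRESHOLDS[category]
    return (delay_ms <= t["max_delay_ms"]
            and loss <= t["max_loss"]
            and tput_kbps >= t["min_tput_kbps"])

print(meets_qos("voip", delay_ms=120, loss=0.01, tput_kbps=80))  # within bounds
print(meets_qos("voip", delay_ms=400, loss=0.01, tput_kbps=80))  # delay too high
```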

Added 2008-05-09

The ERICA switch algorithm for ABR traffic management in ATM networks

Shivkumar Kalyanaraman, Raj Jain, Sonia Fahmy, Rohit Goyal, Bobby Vandalore
Added 2008-05-09

Measuring denial of service

Jelena Mirkovic, Peter Reiher, Sonia Fahmy, Roshan Thomas, Alefiya Hussain, Stephen Schwab, Calvin Ko

Denial-of-service (DoS) attacks significantly degrade the service quality experienced by legitimate users, by introducing large delays, excessive losses, and service interruptions. The main goal of DoS defenses is to neutralize this effect, and to quickly and fully restore the quality of various services to levels acceptable to users. To objectively evaluate a variety of proposed defenses, we must be able to precisely measure the damage created by an attack, i.e., the denial of service itself, in controlled testbed experiments. Current evaluation methodologies measure DoS damage superficially and partially by measuring a single traffic parameter, such as duration, loss or throughput, and showing divergence during the attack from the baseline case. These measures do not consider the quality-of-service requirements of different applications and how they map into specific thresholds for various traffic parameters. They thus fail to measure the service quality experienced by the end users.

We propose a series of DoS impact metrics that are derived from traffic traces gathered at the source and destination networks. We segment a trace into higher-level user tasks, called transactions, that require a certain service quality to satisfy users’ expectations. Each transaction is classified into one of several proposed application categories, and we define quality-of-service (QoS) requirements for each category via thresholds imposed on several traffic parameters. We measure DoS impact as the percentage of transactions that have not met their QoS requirements and aggregate this measure into several metrics that expose the level of service denial. We evaluate the proposed metrics on a series of experiments with a wide range of background traffic, and our results show that our metrics capture the DoS impact more precisely than the partial measures used in the past.
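The transaction-based measure described above reduces, at its core, to counting transactions that miss their category's QoS requirement. A minimal sketch, using a single invented delay threshold per category rather than the paper's full parameter set:

```python
# Sketch: percentage of transactions failing their (hypothetical) QoS bound.

def percent_failed(transactions, max_delay):
    """max_delay maps category -> maximum acceptable delay in ms."""
    failed = sum(1 for t in transactions
                 if t["delay_ms"] > max_delay[t["category"]])
    return 100.0 * failed / len(transactions)

# Per-category maximum acceptable delay (illustrative values).
max_delay = {"voip": 150, "web": 4000}

trace = [
    {"category": "voip", "delay_ms": 100},   # meets requirement
    {"category": "voip", "delay_ms": 900},   # service denied
    {"category": "web",  "delay_ms": 2000},  # meets requirement
    {"category": "web",  "delay_ms": 9000},  # service denied
]
print(percent_failed(trace, max_delay))  # 50.0
```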

Added 2008-05-09

Topology-aware overlay networks for group communication

Minseok Kwon, Sonia Fahmy

We propose an application level multicast approach, Topology Aware Grouping (TAG), which exploits underlying network topology information to build efficient overlay networks among multicast group members. TAG uses information about path overlap among members to construct a tree that reduces the overlay relative delay penalty, and reduces the number of duplicate copies of a packet on the same link. We study the properties of TAG, and model and experiment with its economies of scale factor to quantify its benefits compared to unicast and IP multicast. We also compare the TAG approach with the ESM approach in a variety of simulation configurations including a number of real Internet topologies and generated topologies. Our results indicate the effectiveness of the algorithm in reducing delays and duplicate packets, with reasonable algorithm time and space complexities.
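TAG's path-overlap idea can be sketched as follows: a joining member attaches beneath the tree node whose source-to-node path shares the longest prefix with the new member's own path, so overlapping links are traversed once. The topology, router names, and tie-breaking rule below are invented for illustration.

```python
# Sketch of prefix-matching parent selection in the spirit of TAG.

def longest_prefix(a, b):
    """Length of the common prefix of two router-level paths."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def choose_parent(tree_paths, new_path):
    """tree_paths maps each existing member to its path from the source."""
    return max(tree_paths, key=lambda m: longest_prefix(tree_paths[m], new_path))

tree_paths = {
    "src": [],
    "A": ["r1", "r2"],
    "B": ["r1", "r3", "r4"],
}
# The new member's path branches off after r1, r3: best parent is B.
print(choose_parent(tree_paths, ["r1", "r3", "r5"]))  # B
```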

Added 2008-05-09

HEED: A Hybrid, Energy-Efficient, Distributed Clustering Approach for Ad Hoc Sensor Networks

Ossama Younis, Sonia Fahmy

Topology control in a sensor network balances load on sensor nodes and increases network scalability and lifetime. Clustering sensor nodes is an effective topology control approach. In this paper, we propose a novel distributed clustering approach for long-lived ad hoc sensor networks. Our proposed approach does not make any assumptions about the presence of infrastructure or about node capabilities, other than the availability of multiple power levels in sensor nodes. We present a protocol, HEED (Hybrid Energy-Efficient Distributed clustering), that periodically selects cluster heads according to a hybrid of the node residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED terminates in O(1) iterations, incurs low message overhead, and achieves fairly uniform cluster head distribution across the network. We prove that, with appropriate bounds on node density and intracluster and intercluster transmission ranges, HEED can asymptotically almost surely guarantee connectivity of clustered networks. Simulation results demonstrate that our proposed approach is effective in prolonging the network lifetime and supporting scalable data aggregation.
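The O(1) termination claim above follows from HEED's head-selection rule: each node's probability of announcing itself as cluster head starts proportional to its residual energy and doubles each iteration until it reaches 1. The constants C_PROB and P_MIN below are illustrative, not the protocol's prescribed values.

```python
# Sketch of HEED-style cluster head probability escalation.

C_PROB = 0.05   # initial fraction of nodes expected to become heads (assumed)
P_MIN = 1e-4    # lower bound that keeps termination time O(1) (assumed)

def head_probabilities(e_residual, e_max):
    """Per-iteration head probability until it saturates at 1."""
    p = max(C_PROB * e_residual / e_max, P_MIN)
    probs = [p]
    while p < 1.0:
        p = min(2 * p, 1.0)
        probs.append(p)
    return probs

# A node at half energy: 0.025 doubles each round and saturates quickly,
# so the number of iterations is bounded regardless of network size.
print(head_probabilities(e_residual=0.5, e_max=1.0))
```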

Added 2008-05-09

A Survey of Application Layer Techniques for Adaptive Streaming of Multimedia

Bobby Vandalore, Wu-chi Feng, Raj Jain, Sonia Fahmy

Though the integrated services model and resource reservation protocol (RSVP) provide support for quality of service, in the current Internet only best-effort traffic is widely supported. New high-speed technologies such as ATM (asynchronous transfer mode), gigabit Ethernet, fast Ethernet, and frame relay, have spurred higher user expectations. These technologies are expected to support real-time applications such as video-on-demand, Internet telephony, distance education and video-broadcasting. Towards this end, networking methods such as service classes and quality of service models are being developed. Today’s Internet is a heterogeneous networking environment. In such an environment, resources available to multimedia applications vary. To adapt to the changes in network conditions, both networking techniques and application layer techniques have been proposed. In this paper, we focus on the application level techniques, including methods based on compression algorithm features, layered encoding, rate shaping, adaptive error control, and bandwidth smoothing. We also discuss operating system methods to support adaptive multimedia. Throughout the paper, we discuss how feedback from lower networking layers can be used by these application-level adaptation schemes to deliver the highest quality content.
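The layered-encoding adaptation surveyed above can be sketched simply: given cumulative bitrates for a base layer plus enhancement layers, a receiver subscribes to as many layers as its current bandwidth estimate allows. The layer rates here are hypothetical.

```python
# Sketch of receiver-driven layer selection (illustrative rates only).

LAYER_RATES_KBPS = [64, 192, 512, 1024]  # cumulative rate with 1..4 layers

def layers_for_bandwidth(estimate_kbps):
    """Return how many layers fit within the estimated available bandwidth."""
    n = 0
    for rate in LAYER_RATES_KBPS:
        if rate <= estimate_kbps:
            n += 1
        else:
            break
    return n

print(layers_for_bandwidth(600))  # base + two enhancement layers fit
print(layers_for_bandwidth(50))   # below the base layer rate: none fit
```

Feedback from lower layers (e.g., a congestion-controlled rate estimate) would drive `estimate_kbps` in practice.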

Added 2008-05-09


Decentralized authorization and data security in web content delivery

Danfeng Yao, Yunhua Koglin, Elisa Bertino, Roberto Tamassia

The fast development of web services, or more broadly, service-oriented architectures (SOAs), has prompted more organizations to move contents and applications out to the Web. Software on the Web allows one to enjoy a variety of services, for example translating texts into other languages and converting a document from one format to another. In this paper, we address the problem of maintaining data integrity and confidentiality in web content delivery when dynamic content modifications are needed. We propose a flexible and scalable model for secure content delivery based on the use of roles and role certificates to manage web intermediaries. The proxies coordinate themselves in order to process and deliver contents, and the integrity of the delivered content is enforced using a decentralized strategy. To achieve this, we utilize a distributed role lookup table and a role-number based routing mechanism. We give an efficient secure protocol, iDeliver, for content processing and delivery, and also describe a method for securely updating role lookup tables. Our solution also applies to the security problem in web-based workflows, for example maintaining the data integrity in automated trading, contract authorization, and supply chain management in large organizations.
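The role lookup table and role-number routing can be sketched structurally: each role number maps to the proxies certified for it, and a request is routed through one proxy per required role in order. The table contents and selection rule below are invented; this illustrates the routing structure only, not the iDeliver protocol itself.

```python
# Hypothetical distributed role lookup table: role number -> certified proxies.
ROLE_TABLE = {
    1: ["proxyA", "proxyB"],   # e.g., format conversion
    2: ["proxyC"],             # e.g., translation
    3: ["proxyD", "proxyE"],   # e.g., integrity sealing
}

def delivery_path(required_roles):
    """Pick the first certified proxy for each required role, in order."""
    return [ROLE_TABLE[r][0] for r in required_roles]

# A request needing roles 1 then 3 traverses one proxy per role.
print(delivery_path([1, 3]))  # ['proxyA', 'proxyD']
```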

Added 2008-05-08

A database approach to quality of service specification in video databases

Elisa Bertino, Ahmed K. Elmagarmid, Mohand-Saïd Hacid

Quality of Service (QoS) is defined as a set of perceivable attributes expressed in a user-friendly language with parameters that may be subjective or objective. Objective parameters are those related to a particular service and are measurable and verifiable. Subjective parameters are those based on the opinions of the end-users. We believe that quality of service should become an integral part of multimedia database systems and users should be able to query by requiring a quality of service from the system. The specification and enforcement of QoS presents an interesting challenge in multimedia systems development. A great deal of effort has been devoted to QoS specification and control at the system and network levels, but less work has been done at the application/user level. In this paper, we propose a language, in the style of constraint database languages, for formal specification of QoS constraints. The satisfaction by the system of the user quality requirements can be viewed as a constraint satisfaction problem. We believe this paper represents a first step towards the development of a database framework for quality of service management in video databases. The contribution of this paper lies in providing a logical framework for specifying and enforcing quality of service in video databases. To our knowledge, this work is the first from a database perspective on quality of service management.
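The constraint-satisfaction view above can be sketched concretely: each QoS constraint is a bound on a measurable attribute, and a candidate presentation satisfies the query iff every constraint holds. The attribute names, operators, and values below are illustrative, not the paper's language.

```python
import operator

# Map constraint operators to comparison functions.
OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq}

def satisfies(presentation, constraints):
    """constraints: list of (attribute, op, bound) triples."""
    return all(OPS[op](presentation[attr], bound)
               for attr, op, bound in constraints)

# A hypothetical user query expressed as QoS constraints.
query = [("frame_rate", ">=", 24),
         ("resolution_lines", ">=", 480),
         ("startup_delay_s", "<=", 2)]

video = {"frame_rate": 30, "resolution_lines": 576, "startup_delay_s": 1.5}
print(satisfies(video, query))  # True
```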

Added 2008-05-08