With an ever-increasing amount of available data on protein–protein interaction (PPI) networks and research revealing that these networks evolve at a modular level, discovery of conserved patterns in these networks becomes an important problem. Although available data on protein–protein interactions is currently limited, recently developed algorithms have been shown to convey novel biological insights through employment of elegant mathematical models. The main challenge in aligning PPI networks is to define a graph-theoretical measure of similarity between graph structures that captures underlying biological phenomena accurately. In this respect, modeling of conservation and divergence of interactions, as well as the interpretation of resulting alignments, are important design parameters. In this paper, we develop a framework for comprehensive alignment of PPI networks, which is inspired by duplication/divergence models that focus on understanding the evolution of protein interactions. We propose a mathematical model that extends the concepts of match, mismatch, and gap in sequence alignment to those of match, mismatch, and duplication in network alignment, and evaluates similarity between graph structures through a scoring function that accounts for evolutionary events. By relying on evolutionary models, the proposed framework facilitates interpretation of resulting alignments in terms of not only conservation but also divergence of modularity in PPI networks. Furthermore, as in the case of sequence alignment, our model allows flexibility in adjusting parameters to quantify underlying evolutionary relationships. Based on the proposed model, we formulate PPI network alignment as an optimization problem and present fast algorithms to solve this problem.
Detailed experimental results from an implementation of the proposed framework show that our algorithm is able to discover conserved interaction patterns very effectively, in terms of both accuracy and computational cost.
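As a rough illustration of the scoring idea above (our own sketch, not the paper's exact formulation), the following function scores a many-to-one alignment between two small interaction networks: conserved interactions are rewarded as matches, interactions present in only one network are penalized as mismatches, and proteins of one network aligned to multiple proteins of the other are penalized as duplications. The weight values are arbitrary placeholders.

```python
# Illustrative sketch of match/mismatch/duplication scoring for a pairwise
# PPI network alignment; the weights are hypothetical, not the paper's.

def alignment_score(edges1, edges2, mapping,
                    match=1.0, mismatch=0.5, duplication=0.3):
    """Score an alignment given as a dict mapping proteins of network 1
    to proteins of network 2 (many-to-one allowed to model duplications)."""
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    score = 0.0
    nodes = list(mapping)
    # Match/mismatch: each aligned protein pair either conserves an
    # interaction (match) or has it in only one network (mismatch).
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            u, v = nodes[i], nodes[j]
            in1 = frozenset((u, v)) in e1
            in2 = frozenset((mapping[u], mapping[v])) in e2
            if in1 and in2:
                score += match      # conserved interaction
            elif in1 or in2:
                score -= mismatch   # interaction lost in one network
    # Duplication: penalize proteins of network 2 that are aligned to
    # multiple proteins of network 1 (paralogs from gene duplication).
    targets = list(mapping.values())
    score -= duplication * (len(targets) - len(set(targets)))
    return score
```

A perfectly conserved one-to-one alignment scores positively, while divergence (lost interactions, duplicated proteins) lowers the score, mirroring how gap penalties shape sequence alignments.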
In this paper, we present COSMOS, a novel architecture for macroprogramming heterogeneous sensor network systems. Macroprogramming specifies aggregate system behavior, as opposed to device-specific programs that code distributed behavior using explicit messaging. COSMOS comprises a macroprogramming language, mPL, and an operating system, mOS. mPL macroprograms are statically verifiable compositions of reusable user-specified or system-supported functional components. The mOS node/network operating system provides component management and a lean execution environment for mPL programs in heterogeneous resource-constrained sensor networks. It provides runtime application instantiation, with over-the-air reprogramming of the network. COSMOS facilitates composition of complex real-world applications that are robust, scalable, and adaptive in dynamic data-driven sensor network environments. An important and novel aspect of COSMOS is the ability to easily extend its component basis library to add rich macroprogramming abstractions to mPL, tailored to domain and resource constraints, without modifications to the OS. Applications built on COSMOS are currently in use at the Bowen Labs for Structural Engineering at Purdue University for high-fidelity structural monitoring. We present a detailed description of the COSMOS architecture, its various components, and a comprehensive experimental evaluation using macro- and micro-benchmarks to demonstrate performance characteristics of COSMOS.
In a multi-service network such as ATM, adaptive data services (such as ABR) share the bandwidth left unused by higher priority services. The network indicates to the ABR sources the fair and efficient rates at which they should transmit to minimize their cell loss. Switches must constantly measure the demand and available capacity, and divide the capacity fairly among the contending connections. In this paper, we propose a new method for determining the “effective” number of active connections, and the fair bandwidth share for each connection. We prove the efficiency and fairness of the proposed method analytically, and use several simulations to illustrate its fairness dynamics and transient response properties.
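The notion of an "effective" number of active connections can be illustrated with a simple fixed-point sketch (our own illustration, not the paper's algorithm): a connection whose demand is below the current fair share counts only fractionally toward the connection count, and the share is recomputed from the available capacity until it stabilizes.

```python
# Illustrative sketch: a connection bottlenecked below the fair share
# counts as demand/share rather than as a full connection, so leftover
# capacity flows to connections that can actually use it.

def effective_fair_share(capacity, demands, iters=50):
    """Return (fair_share, effective_n) for one link of given capacity."""
    share = capacity / max(len(demands), 1)
    n_eff = float(len(demands))
    for _ in range(iters):
        n_eff = sum(min(1.0, d / share) for d in demands)
        if n_eff == 0:
            break
        share = capacity / n_eff
    return share, n_eff
```

For capacity 10 with demands [1, 1, 20], the iteration converges to a fair share of 8: the two small connections keep their demand of 1 each, and together they count as only 0.25 of a connection, so the greedy connection receives the remaining 8, which is the max-min fair outcome.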
Firewalls protect a trusted network from an untrusted network by filtering traffic according to a specified security policy. A diverse set of firewalls is being used today. As it is infeasible to examine and test each firewall for all possible potential problems, a taxonomy is needed to understand firewall vulnerabilities in the context of firewall operations. This paper describes a novel methodology for analyzing vulnerabilities in Internet firewalls. A firewall vulnerability is defined as an error made during firewall design, implementation, or configuration that can be exploited to attack the trusted network that the firewall is supposed to protect. We examine firewall internals and cross-reference each firewall operation with causes and effects of weaknesses in that operation, analyzing twenty reported problems with available firewalls. The result of our analysis is a set of matrices that illustrate the distribution of firewall vulnerability causes and effects over firewall operations. These matrices are useful in avoiding and detecting unforeseen problems during both firewall implementation and firewall testing. Two case studies of Firewall-1 and Raptor illustrate our methodology.
In a wireless environment, topology changes and the characteristics of the medium (interference in radio signals) make the bandwidth of a link unpredictable and possibly very low, and the error rates variable and extremely high. Provisioning and guaranteeing quality of service (QoS) in such an environment is a very challenging problem. In this paper we propose an adaptive QoS framework to support multimedia applications in a wireless networking environment. The proposed framework is hierarchical in nature, with clusters of mobile end hosts connected to a base station; base stations are connected to a supervisory node, which in turn is connected to the wired infrastructure. The changing conditions in wireless networks, due to interference and possibly mobility, dictate that real-time applications needing stringent QoS should be adaptable. The framework uses a generic feedback mechanism to support adaptability at all layers of the wireless network. An overview of existing wireless architectures that support QoS is given. The architectures discussed are WAMIS (wireless adaptive multimedia information system), SWAN (seamless wireless ATM network), MMWN (multimedia support for mobile wireless networks), and QGMC (QoS guarantees in mobile computing). The proposed approach is compared with these existing methods.
Vulnerabilities in vendor as well as freeware implementations of firewalls continue to emerge at a rapid pace. Superficially, each vulnerability appears to be the result of a coding flaw in one case or a configuration weakness in another. Given the large number of firewall vulnerabilities that have surfaced in recent years, it is important to develop a comprehensive framework for understanding both what firewalls actually do when they receive incoming traffic and what can go wrong when they process this traffic. An intuitive starting point is to create a firewall dataflow model composed of discrete processing stages that reflect the processing characteristics of a given firewall. These stages do not necessarily all occur in all firewalls, nor do they always conform to the sequential order indicated in this paper. This paper also provides a more complete view of what happens inside a firewall, beyond the handling of the filtering rules and possibly other rules that the administrator may have established. Complex interactions that influence the security that a firewall delivers frequently occur. Firewall administrators too often blindly believe that filtering rules solely decide the fate of any given packet. Distinguishing between the surface functionality (i.e., functionality related to packet filtering) and the deeper, dataflow-related functionality of firewalls provides a framework for understanding vulnerabilities that have surfaced in firewalls.
Clustering sensor nodes increases the scalability and energy efficiency of communications among them. In hostile environments, unexpected failures or attacks on cluster heads (through which communication takes place) may partition the network or degrade application performance. In this work, we propose a new approach, REED (Robust Energy Efficient Distributed clustering), for clustering sensors deployed in hostile environments. Our primary objective is to construct a k-fault-tolerant (i.e., k-connected) network, where k is a constant determined by the application. Fault tolerance can be achieved by selecting k independent sets of cluster heads (i.e., cluster head overlays) on top of the physical network, so that each node can quickly switch to other cluster heads in case of failures or attacks on its current cluster head. The independent cluster head overlays also provide multiple vertex-disjoint routing paths for load balancing and security. Network lifetime is prolonged by selecting cluster heads with high residual energy and low communication cost, and periodically reclustering the network in order to distribute energy consumption among sensor nodes. We prove that REED can asymptotically achieve k-fault tolerance if certain constraints on node density are satisfied. We also investigate via simulations the clustering properties of REED, and show that building multiple cluster head overlays does not consume significant energy.
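A minimal sketch of the overlay idea, under the simplifying assumption that candidate cluster heads are already known and each node sees the heads within its radio range (the actual REED protocol elects heads probabilistically and iteratively; the names here are illustrative):

```python
# Illustrative sketch: each node picks k distinct cluster heads from
# those within range, preferring heads with higher residual energy.
# The k choices form the node's membership in k cluster head overlays,
# giving a backup head if the current one fails or is attacked.

def build_overlays(nodes, heads_in_range, residual_energy, k):
    """Return dict node -> list of its k chosen heads (one per overlay)."""
    overlays = {}
    for node in nodes:
        candidates = sorted(heads_in_range[node],
                            key=lambda h: residual_energy[h], reverse=True)
        if len(candidates) < k:
            # Mirrors the density constraint: k overlays need at least
            # k reachable heads per node.
            raise ValueError(f"node {node} sees only {len(candidates)} "
                             f"heads; density too low for k={k}")
        overlays[node] = candidates[:k]
    return overlays
```

The density check reflects the paper's observation that k-fault tolerance is only achievable when node density guarantees enough reachable cluster heads per node.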
Several sensor network applications, such as environmental monitoring, require data aggregation to an observer. For this purpose, a data aggregation tree, rooted at the observer, is constructed in the network. Node clustering can be employed to further balance load among sensor nodes and prolong the network lifetime. In this paper, we design and implement a system, iHEED, in which node clustering is integrated with multi-hop routing for TinyOS. We consider simple data aggregation operators, such as AVG or MAX. We use a simple energy consumption model to keep track of the battery consumption of cluster heads and regular nodes. We perform experiments on a sensor network testbed to quantify the advantages of integrating hierarchical routing with data aggregation. Our results indicate that the network lifetime is prolonged by a factor of 2 to 4, and successful transmissions are almost doubled. Clustering plays a dominant role in delaying the first node death, while aggregation plays a dominant role in delaying the last node death.
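The benefit of in-network aggregation can be sketched for the AVG operator: a cluster head forwards a single (sum, count) pair up the tree instead of every raw reading, and the observer combines the partials at the root (an illustrative sketch, not the iHEED/TinyOS implementation; MAX works analogously by forwarding a single maximum).

```python
# Illustrative sketch of in-network AVG aggregation: each cluster head
# compresses its members' readings into one partial aggregate, cutting
# the number of transmissions toward the observer.

def aggregate_avg(readings):
    """A cluster head forwards one (sum, count) pair instead of all
    raw readings from its cluster members."""
    return (sum(readings), len(readings))

def combine(partials):
    """Merge partial aggregates along the tree; the observer divides
    once at the root to recover the overall average."""
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count
```

With two clusters of 3 and 2 nodes, only 2 packets reach the observer instead of 5, which is the transmission saving that drives the reported lifetime gains.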
This paper gives a new definition of general weighted (GW) fairness and shows how this can achieve various fairness definitions, such as those mentioned in the ATM Forum TM 4.0 specifications. The GW fairness can be achieved by calculating the ExcessFairshare (weighted fairshare of the leftover bandwidth) for each VC. We show how a switch algorithm can be modified to support the GW fairness by using the ExcessFairshare term. We use ERICA+ as an example switch algorithm and show how it can be modified to achieve the GW fairness. For simulations, the weight parameters of the GW fairness are chosen to map a typical pricing policy. Simulation results are presented to demonstrate that the modified switch algorithm achieves GW fairness. An analytical proof for convergence of the modified ERICA+ algorithm is given in the appendix.
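A hedged sketch of the ExcessFairshare idea, assuming each VC has a guaranteed minimum rate (e.g., its MCR) and a weight: the bandwidth left over after all guarantees are met is divided among the VCs in proportion to their weights (our own illustration, not the modified ERICA+ algorithm).

```python
# Illustrative sketch of general weighted (GW) fairness: each VC's
# allocation is its guaranteed rate plus its ExcessFairshare, i.e. a
# weighted share of the leftover bandwidth.

def gw_allocations(capacity, min_rates, weights):
    """Return per-VC allocations: min_rate + (w / sum(w)) * leftover."""
    leftover = capacity - sum(min_rates)   # bandwidth beyond guarantees
    total_w = sum(weights)
    return [mcr + (w / total_w) * leftover
            for mcr, w in zip(min_rates, weights)]
```

Setting all minimum rates to zero and all weights equal recovers plain equal fair share, which is how the GW definition subsumes the simpler fairness definitions.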
We design an efficient on-line approach, FlowMate, for partitioning flows at a busy server into flow groups that share bottlenecks. These groups are periodically input to congestion coordination, aggregation, load balancing, admission control, or pricing modules. FlowMate uses in-band packet delay measurements to the receivers to determine shared bottlenecks among flows. Packet delay information is piggybacked on returning feedback, or, if this is not possible, flow (e.g., TCP) round-trip time estimates are used. We simulate FlowMate to examine the effect of network load, traffic burstiness, network buffer sizes, and packet drop policies on partitioning correctness. Our results demonstrate accurate partitioning of medium to long-lived flows even under heavy load and self-similar background traffic. Experiments with HTTP/1.1 flows demonstrate difficulties in partitioning bursty foreground traffic. We also study fairness of coordinated congestion management when integrated with FlowMate.
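The partitioning step can be sketched as a greedy grouping by delay correlation: flows sharing a bottleneck queue exhibit correlated delay variations, so a new flow joins the first group whose representative flow it correlates with strongly (the 0.8 threshold is an assumed illustrative value, and this greedy test is our sketch, not FlowMate's exact procedure).

```python
# Illustrative sketch: group flows whose packet delay samples are
# strongly correlated, since a shared bottleneck queue imposes
# correlated queueing delay on the flows traversing it.

from statistics import mean

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def partition_flows(delay_samples, threshold=0.8):
    """Greedily place each flow in the first group whose representative
    (first) flow shows delay correlation above the threshold."""
    groups = []
    for fid, samples in delay_samples.items():
        for g in groups:
            if correlation(delay_samples[g[0]], samples) >= threshold:
                g.append(fid)
                break
        else:
            groups.append([fid])
    return groups
```

Bursty, short-lived flows yield few delay samples, which is one intuition for why HTTP/1.1 foreground traffic is hard to partition reliably.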
We investigate a heuristic application-level (overlay) multicast approach, which we refer to as Topology Aware Grouping (TAG). TAG exploits underlying network topology data to construct overlay multicast networks. Specifically, TAG uses the overlap among routes from the source to group members to construct an efficient overlay network in a distributed, low-overhead manner. We can easily integrate TAG with delay and bandwidth bounds to construct overlays that satisfy application requirements. We study the properties of TAG, and quantify its economies of scale factor, compared to unicast and IP multicast. In addition, we compare TAG with delay-first and bandwidth-first Narada/End System Multicast (ESM) in a variety of simulation configurations. We also implement and experiment with TAG on the PlanetLab wide-area platform. Our results demonstrate the effectiveness of our heuristic in reducing delays and duplicate packets, especially when underlying routes are of high quality.
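The route-overlap heuristic can be sketched as follows, assuming each member's router-level path from the source is known: a joining member attaches under the existing member whose path shares the longest prefix with its own, so overlay edges follow the underlying topology (our own simplified illustration; the fallback parent name "source" is a placeholder).

```python
# Illustrative sketch of topology-aware grouping: members whose paths
# from the source overlap heavily are grouped parent/child in the
# overlay, reducing duplicate packets on shared underlying links.

def common_prefix_len(p, q):
    """Number of leading routers shared by two source-rooted paths."""
    n = 0
    for a, b in zip(p, q):
        if a != b:
            break
        n += 1
    return n

def choose_parent(member_paths, new_path):
    """Attach a joining member under the existing member whose path
    shares the longest prefix with the new member's path; fall back
    to the source itself if no path overlaps."""
    best, best_len = "source", 0
    for member, path in member_paths.items():
        l = common_prefix_len(path, new_path)
        if l > best_len:
            best, best_len = member, l
    return best
```

Delay or bandwidth bounds would simply add a feasibility check before accepting the longest-prefix candidate, rejecting parents that violate the application's requirements.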
We investigate the effect of TCP explicit congestion notification (ECN) with a new response strategy that reacts less drastically to congestion signals in the short term, but preserves TCP long-term behavior, without modifying the router marking rate. A less aggressive ECN decrease gives end systems more incentive to become ECN-compliant, as ECN serves as an early warning sign in this case. Our analysis and simulation results demonstrate the effectiveness of the new algorithm in improving throughput and reducing fluctuations. We model a multiple bottleneck scenario with various types of traffic, and evaluate the effect of a number of parameters, including TCP flavor, increase/decrease parameters, buffer size, and random early detection (RED) parameters.
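The response strategy can be sketched as a congestion window decrease that is milder for an ECN mark (an early warning) than for an actual loss, while keeping the standard multiplicative decrease for losses (a minimal sketch; the 0.875 decrease factor is an illustrative assumption, not the paper's tuned value).

```python
# Illustrative sketch: respond to ECN marks with a gentler multiplicative
# decrease than to losses, preserving TCP's usual AIMD behavior otherwise.

def cwnd_after(event, cwnd, ecn_beta=0.875, loss_beta=0.5):
    """Return the new congestion window after one event:
    'ecn' mark, 'loss', or a regular 'ack' in congestion avoidance."""
    if event == "ecn":
        return max(1.0, cwnd * ecn_beta)   # gentle early-warning backoff
    if event == "loss":
        return max(1.0, cwnd * loss_beta)  # standard halving on loss
    return cwnd + 1.0 / cwnd               # additive increase per ACK
```

Because marks arrive before queues overflow, the gentler backoff keeps the window closer to the available capacity, which is the intuition behind the reported throughput and smoothness gains.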