Twenty years after the publication of Patricia Sullivan’s “Beyond a narrow conception of usability testing” in the IEEE Transactions on Professional Communication, three scholars, all Sullivan’s students, reflect on the history and development of usability testing and research. Following Sullivan, this article argues that usability bridges the divide between science and rhetoric and asserts that usability is most effective when it respects the knowledge-making practices of a variety of disciplines. By interrogating trends in usability methods, the authors argue for a definition of usability that relies on multiple epistemologies to triangulate knowledge-making. The article opens with a brief history of the development of usability methods and argues that usability requires a balance between empirical observation and rhetoric. Usability interprets human action and is enriched by articulating context and accepting contingency. Usability relies on effective collaboration and cooperation among stakeholders in the design of technology. Ultimately, professional and technical communication scholars, taking a long and wide view of usability, are best prepared to make new knowledge.
A P2P computing environment can be an ideal platform for resource-sharing services in an organization if it provides trust mechanisms. Current P2P technologies offer content-sharing services for non-sensitive public domains in the absence of trust mechanisms, and this lack of sophisticated trust mechanisms has become a serious constraint on broader applications of an otherwise promising technology. This work therefore introduces an approach for securing transactions in the P2P environment and investigates ways to incorporate an effective and scalable access control mechanism, role-based access control (RBAC), into current P2P computing environments, proposing two architectures: requesting-peer-pull (RPP) and ultrapeer-pull (UPP). To provide mobile, session-based authentication and RBAC, especially in the RPP architecture, lightweight peer certificates (LWPCs) are developed. Finally, to demonstrate the feasibility of the proposed ideas, the RPP and UPP RBAC architectures are implemented and their scalability and performance are evaluated.
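As a rough illustration of the requesting-peer-pull flow, the sketch below shows a resource-holding peer verifying a presented lightweight peer certificate and checking the certificate's role against a local permission table. The LWPC fields, signature scheme, and permission table here are illustrative assumptions, not the paper's actual LWPC format.

```java
import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.Signature;
import java.time.Instant;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of the RPP flow: the requesting peer presents an LWPC
 * carrying its role, and the resource-holding peer verifies the certificate
 * and checks the role's permissions locally.
 */
public class RppRoleCheck {

    // Illustrative role-to-permission table held by the resource-holding peer.
    static final Map<String, Set<String>> ROLE_PERMS = Map.of(
            "engineer", Set.of("read:specs", "read:reports"),
            "manager", Set.of("read:specs", "read:reports", "write:reports"));

    /** An assumed minimal LWPC: identity, role, expiry, and issuer signature. */
    record Lwpc(String peerId, String role, Instant expires, byte[] signature) {
        byte[] signedBytes() { // the bytes the issuing role server signed
            return (peerId + "|" + role + "|" + expires).getBytes(StandardCharsets.UTF_8);
        }
    }

    /** Verify the LWPC against the role server's public key, then check the permission. */
    static boolean authorize(Lwpc cert, PublicKey roleServerKey, String permission)
            throws Exception {
        if (Instant.now().isAfter(cert.expires())) return false;  // session expired
        Signature sig = Signature.getInstance("SHA256withRSA");   // assumed scheme
        sig.initVerify(roleServerKey);
        sig.update(cert.signedBytes());
        if (!sig.verify(cert.signature())) return false;          // forged or altered
        return ROLE_PERMS.getOrDefault(cert.role(), Set.of()).contains(permission);
    }
}
```

Because the role binding travels with the requesting peer, the resource holder needs no online connection to a central authority at decision time, which is what makes the scheme attractive for mobile, session-based P2P use.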
Current approaches to access control on Web servers do not scale to enterprise-wide systems because they are mostly based on individual user identities. Hence, we are motivated by the need to manage and enforce strong, efficient role-based access control (RBAC) in large-scale Web environments. To satisfy this requirement, we identify two different architectures for RBAC on the Web, called user-pull and server-pull. To demonstrate feasibility, we implement each architecture by integrating and extending well-known technologies such as cookies, X.509, SSL, and LDAP, providing compatibility with current Web technologies. We describe the technologies we use to implement RBAC on the Web in each architecture and, based on our experience, compare the trade-offs of the different approaches.
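A minimal sketch of the server-pull case follows: the Web server retrieves the authenticated user's roles from an LDAP directory at request time. The directory URL, base DN, and the "role" attribute name are assumptions about a hypothetical schema, not the deployment described above.

```java
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

/**
 * Illustrative server-pull sketch: after authenticating the user (e.g. via
 * SSL/X.509), the Web server pulls the user's roles from an LDAP directory.
 */
public class ServerPullRoles {

    static List<String> pullRoles(String uid) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // hypothetical server
        DirContext ctx = new InitialDirContext(env); // anonymous bind, for illustration
        try {
            // Read the multi-valued "role" attribute from the user's entry.
            Attribute roleAttr = ctx
                    .getAttributes("uid=" + uid + ",ou=people,dc=example,dc=com",
                            new String[] {"role"})
                    .get("role");
            List<String> roles = new ArrayList<>();
            if (roleAttr != null) {
                NamingEnumeration<?> vals = roleAttr.getAll();
                while (vals.hasMore()) roles.add(vals.next().toString());
            }
            return roles;
        } finally {
            ctx.close();
        }
    }
}
```

In the user-pull counterpart, the user obtains role information first (for example, carried in a signed cookie or certificate) and presents it to the server, trading an extra client-side step for fewer directory lookups at request time.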
As information systems develop into larger and more complex implementations, the need for survivability increases. With new threats identified each day, protecting information systems becomes increasingly vital, yet building systems that can identify and recover from such threats grows more and more difficult. This is particularly pressing for distributed mission-critical systems, which cannot afford any loss of functionality even when internal components fail or are compromised by malicious code, especially components downloaded from external organizations. Therefore, when using such a component, we should verify that its source is trusted and that the code has not been modified in an unauthorized manner since it was created. Furthermore, once we find failures or malicious code in the component, we should fix those problems and recover the component’s original functionality at runtime so that the mission-critical system remains survivable. In this paper, we present our definition of survivability, discuss the survivability challenges of component sharing in a large distributed system, identify static and dynamic survivability models, and discuss their trade-offs. We then propose novel approaches for component survivability at runtime. Finally, we demonstrate the feasibility of our ideas by implementing component recovery against component failures and malicious code.
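The static check described above, verifying trusted origin and integrity before a downloaded component is used, might look like the following sketch; the detached-signature layout and the signature algorithm are assumptions for illustration.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.Signature;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

/**
 * Minimal sketch: before loading a component downloaded from an external
 * organization, verify that it was signed by a trusted publisher and that
 * its bytes have not been modified since signing.
 */
public class ComponentVerifier {

    static boolean isAuthentic(Path component, Path detachedSig, Path publisherCert)
            throws Exception {
        // Load the publisher's X.509 certificate (trust in it is established out of band).
        X509Certificate cert;
        try (InputStream in = Files.newInputStream(publisherCert)) {
            cert = (X509Certificate) CertificateFactory.getInstance("X.509")
                    .generateCertificate(in);
        }
        // Verify the detached signature over the component's bytes.
        Signature sig = Signature.getInstance("SHA256withRSA"); // assumed algorithm
        sig.initVerify(cert.getPublicKey());
        sig.update(Files.readAllBytes(component));
        return sig.verify(Files.readAllBytes(detachedSig));
    }
}
```

A runtime recovery step would then quarantine a component that fails this check, refetch it from a trusted source, and re-verify it before handing it back to the class loader.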
Providing automated responses to security incidents in a distributed computing environment has been an important area of research, because the inherent complexity of such systems makes it difficult to eliminate all vulnerabilities before deployment and costly to rely on humans to respond to incidents in real time. Here we formalize the process of providing automated responses in a distributed system and the criterion for asserting global optimality of the responses. We show that reaching the globally optimal solution is an NP-complete problem. Therefore, we design a genetic algorithm framework for searching for good solutions. In the search for optimality, we exploit the similarities among attacks and use the knowledge learned from previous attacks to guide future search. The mechanism is demonstrated on a distributed e-commerce system called Pet Store, with real attacks injected, and is shown to improve the survivability of the system over the previously reported ADEPTS system.
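A toy sketch of the genetic-algorithm search: candidate response plans are bit vectors over possible response actions, and fitness trades damage averted against response cost. The encoding, operators, weights, and random seeding here are invented for illustration and are not the actual framework, in which good plans from similar past attacks would seed the initial population rather than random ones.

```java
import java.util.Arrays;
import java.util.Random;

/** Toy GA over response plans: maximize (damage averted - response cost). */
public class ResponseGa {
    static final int N = 16, POP = 40, GENS = 200;
    static final Random RNG = new Random(42);
    // Hypothetical per-action benefit (damage averted) and deployment cost.
    static final double[] BENEFIT = RNG.doubles(N, 0, 10).toArray();
    static final double[] COST = RNG.doubles(N, 0, 5).toArray();

    static double fitness(boolean[] plan) {
        double f = 0;
        for (int i = 0; i < N; i++) if (plan[i]) f += BENEFIT[i] - COST[i];
        return f;
    }

    static boolean[] offspring(boolean[] a, boolean[] b) {
        int cut = RNG.nextInt(N);                 // one-point crossover
        boolean[] child = Arrays.copyOf(a, N);
        System.arraycopy(b, cut, child, cut, N - cut);
        if (RNG.nextDouble() < 0.1)               // occasional mutation
            child[RNG.nextInt(N)] ^= true;
        return child;
    }

    public static void main(String[] args) {
        // Random seeding; plans learned from similar past attacks could go here.
        boolean[][] pop = new boolean[POP][N];
        for (boolean[] p : pop) for (int i = 0; i < N; i++) p[i] = RNG.nextBoolean();

        for (int g = 0; g < GENS; g++) {
            // Sort by fitness (best first); replace the worse half with
            // offspring of parents drawn from the better half.
            Arrays.sort(pop, (x, y) -> Double.compare(fitness(y), fitness(x)));
            for (int i = POP / 2; i < POP; i++)
                pop[i] = offspring(pop[RNG.nextInt(POP / 2)], pop[RNG.nextInt(POP / 2)]);
        }
        System.out.println("best fitness: " + fitness(pop[0]));
    }
}
```

Seeding the population from responses that worked against similar past attacks, rather than at random, is what lets the search converge quickly enough for online response selection.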