The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Checking For Secure Passwords Using Hash Functions

Stefan Dresler

* Users allowed to choose reusable passwords often choose weak ones.
* A password is weak if it is (1) easy to guess, (2) simple to derive, or (3) likely to be found in a dictionary attack.
* Attempted solution: keep a dictionary and look up newly chosen passwords in it.
* Problem: the size of the dictionary makes storing it, and possibly searching it, unattractive.
* New solution: use a Bloom filter for reduced storage consumption and constant look-up time (a sketch follows below).
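
A Bloom filter trades a small false-positive rate for compact storage and constant-time membership tests. A minimal sketch of the approach (the filter parameters and the use of SHA-256 to derive bit positions are our choices, not the paper's):

    # Minimal Bloom-filter password check; SHA-256 and the parameters
    # below are our choices, since the summary does not fix them.
    import hashlib

    class BloomFilter:
        def __init__(self, m_bits=2**20, k_hashes=7):
            self.m, self.k = m_bits, k_hashes
            self.bits = bytearray(m_bits // 8)

        def _positions(self, word):
            # Derive k bit positions from one SHA-256 digest.
            digest = hashlib.sha256(word.encode()).digest()
            for i in range(self.k):
                yield int.from_bytes(digest[4*i:4*i+4], "big") % self.m

        def add(self, word):
            for p in self._positions(word):
                self.bits[p // 8] |= 1 << (p % 8)

        def probably_contains(self, word):
            # False positives are possible; false negatives are not.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(word))

    # Build the filter once from the dictionary; each newly chosen
    # password is then checked in constant time.
    weak = BloomFilter()
    for w in ("password", "secret", "letmein"):
        weak.add(w)
    assert weak.probably_contains("password")    # reject this choice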

Added 2002-07-26

Keeping Intruders Away (solutions to common security problems)

James Ellis, Barbara Fraser, Linda Pesante

The Internet’s substantial growth has resulted in an increase in sophisticated security problems. The latest fad is network monitoring, or packet sniffing, whereby hackers collect account authorization data and use it to break into systems across the Internet. To avoid this threat, users should get rid of any reusable, standard passwords and start utilizing one-time passwords only. Shadow passwords can also be used to avoid disclosing encrypted passwords. Security problems can also be avoided by verifying the proper system and service configurations. Systems managers should stay current with the latest software releases and bug fixes, utilize secure programming techniques, and implement auditing programs to collect access data. Individual users should make an effort to understand and respect their site’s security policies, utilize available resources to protect their data, and follow Internet etiquette carefully. A list of resources for securing networks and systems is provided.
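
The one-time passwords recommended here were, in this era, typically hash-chain schemes in the style of S/Key (our illustrative choice; the article does not name a scheme). A minimal sketch, with the hash function and chain length also our choices:

    # S/Key-style one-time passwords: the host stores h^n(seed); the
    # user reveals preimages in reverse order, so each password works
    # exactly once and a sniffed password is useless for the next login.
    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def chain(seed: bytes, n: int) -> bytes:
        x = seed
        for _ in range(n):
            x = h(x)
        return x

    n = 100
    seed = b"user-secret"                # illustrative secret
    stored = chain(seed, n)              # host keeps h^n(seed)

    # Login i reveals h^(n-i)(seed); the host hashes once and compares.
    otp = chain(seed, n - 1)
    assert h(otp) == stored              # accepted
    stored = otp                         # host rolls forward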

Added 2002-07-26

Totem and Taboo in Cyberspace

M. E. Kabay

Cyberspace, the realm of computer networks, voice mail, and long-distance telephone calls, is increasingly important in our lives. Unfortunately, morally immature phreaks, cyberpunks, and criminal hackers are spoiling it for everyone. Security professionals must speak out in the wider community and change the moral universe to include cyberspace.

Added 2002-07-26

The HotJava Browser: A White Paper

The Internet is a vast sea of data represented in many formats and stored on many hosts. A large portion of the Internet is organized as the World Wide Web (WWW), which uses hypertext to make navigation easier than traditional methods such as anonymous FTP and Telnet. WWW browsers are used to navigate through the data found on the net.

Added 2002-07-26

Practical Alternative? - The Internet vs. Private Lines

John Rendleman

Will the Internet replace private corporate data networks? Thanks to performance upgrades and new security schemes, it just might. The idea, once unthinkable because of the Net’s unpredictable performance and lack of security, has become viable thanks to the commercialization of the Internet backbone and the growing availability of sophisticated encryption and authentication tools. "Two or three years from now, if a company is going to set up a wide-area data network, the Internet is going to be its first choice," says Pete Sinclair, president and CEO of Smart Valley Inc., a Santa Clara, Calif., consortium formed to promote Internet-based electronic commerce in Silicon Valley.

Added 2002-07-26

A Standard Audit Trail Format, Proc. of the 18th National Information Systems Security Conference (Oct. 1995)

Matt Bishop

The central role of audit trails, or (more properly) logs, in security monitoring needs little description, for it is too well known for any to doubt it. Auditing, or the analysis of logs, is a central part of security not only in computer system security but also in analyzing financial and other non-technical systems. As part of this process, it is often necessary to reconcile logs from different sources. Consider, for example, intrusion detection over a network. In this scenario, an intrusion detection system (IDS) monitors several hosts on a network, and from their logs it determines which actions are attempts to violate security (misuse detection) or which actions are not expected (anomaly detection). As some attacks involve the exploitation of concurrent commands, the log records may involve more than one user, process, and system. Further, should the system security officer decide to trace the connections back through other systems, he must be able to correlate the logs of the many different heterogeneous systems through which the attacker may have come.
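
To illustrate the reconciliation problem, an IDS might first map each system's native log entries onto a common record before correlating them. The record layout below is a hypothetical one of ours, not Bishop's proposed standard format:

    # Hypothetical normalized record for correlating heterogeneous logs;
    # the field names are illustrative, not Bishop's proposed format.
    from dataclasses import dataclass

    @dataclass
    class AuditRecord:
        timestamp: float    # seconds since epoch, clocks reconciled
        host: str           # which monitored system emitted the record
        user: str
        process_id: int
        action: str         # e.g. "login", "exec", "connect"
        outcome: str        # "success" or "failure"

    def correlate(records):
        # Order records in time and group them by user, so a connection
        # can be traced from system to system across the monitored hosts.
        by_user = {}
        for r in sorted(records, key=lambda r: r.timestamp):
            by_user.setdefault(r.user, []).append(r)
        return by_user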

Added 2002-07-26

Network Liability Update July 1994

William J. Cook

This has been another controversial year in the “growth” field of network and computer law. Recent cases will continue to fuel discussions inside and outside the Clinton Administration over protection of copyrighted information distributed on computer networks. (NYT 7/7/94)

Added 2002-07-26

Parallel Collision Search with Application to Hash Functions and Discrete Logarithms

Paul C. van Oorschot, Michael J. Wiener

Current techniques for collision search with feasible memory requirements involve pseudo-random walks through some space, where one must wait for the result of the current step before the next step can begin. These techniques are serial in nature, and direct parallelization is inefficient. We present a simple new method of parallelizing collision search that greatly extends the reach of practical attacks. The new method is illustrated with applications to hash functions and discrete logarithms in cyclic groups. In the case of hash functions, we begin with two messages; the first is a message that we want our target to digitally sign, and the second is a message that the target is willing to sign. Using collision search adapted for finding hash collisions, one can find slightly altered versions of these messages such that the two new messages give the same hash result. As a particular example, a $10 million custom machine for applying parallel collision search to the MD5 hash function could complete an attack with an expected run time of 24 days. This machine would be specific to MD5, but could be used for any pair of messages. For discrete logarithms in cyclic groups, ideas from Pollard’s rho and lambda methods for index computation are combined to allow efficient parallel implementation using the new method. As a concrete example, we consider an elliptic curve cryptosystem over GF(2^155) with the order of the curve having a largest prime factor of approximate size 10^36. A $10 million machine custom built for this finite field could compute a discrete logarithm with an expected run time of 36 days.
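
The core of the method is that many processors perform independent pseudo-random walks and report only "distinguished points", so that two trails reaching the same distinguished point reveal a collision. A toy serial sketch of this idea on a deliberately truncated hash (the truncation width and the distinguished-point rule are our choices):

    # Toy serial version of parallel collision search with distinguished
    # points, on SHA-256 truncated to 32 bits so a collision is cheap to
    # find; truncation width and the distinguished-point rule are ours.
    import hashlib

    def f(x: int) -> int:
        d = hashlib.sha256(x.to_bytes(4, "big")).digest()
        return int.from_bytes(d[:4], "big")

    def is_distinguished(x: int) -> bool:
        return x & 0xFFF == 0            # low 12 bits zero

    def walk(start: int, limit: int = 1 << 16):
        # One processor's trail: iterate f until a distinguished point.
        x = start
        for steps in range(limit):
            if is_distinguished(x):
                return x, steps
            x = f(x)
        return None, None                # abandon trails stuck in a cycle

    def locate(s1, n1, s2, n2):
        # Align the two trails at equal distance from the merge point,
        # then step them together until their next outputs collide.
        if n1 < n2:
            s1, n1, s2, n2 = s2, n2, s1, n1
        for _ in range(n1 - n2):
            s1 = f(s1)
        while s1 != s2 and f(s1) != f(s2):
            s1, s2 = f(s1), f(s2)
        return s1, s2

    seen = {}                            # distinguished point -> (start, steps)
    start = 0
    while True:
        dp, n = walk(start)
        if dp is not None:
            if dp in seen:
                a, b = locate(*seen[dp], start, n)
                if a != b:               # ignore one trail being a suffix
                    print("collision: f(%#x) == f(%#x)" % (a, b))
                    break
            seen[dp] = (start, n)
        start += 1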

Added 2002-07-26

Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services

Barton P. Miller, David Koski, Cjin Pheow Lee, Vivekananda Maganty, Ravi Murthy, Ajitkumar Natarajan, Jeff Steidl

We have tested the reliability of a large collection of basic UNIX utility programs, X-Window applications and servers, and network services. We used a simple testing method of subjecting these programs to a random input stream. Our testing methods and tools are largely automatic and simple to use. We tested programs on nine versions of the UNIX operating system, including seven commercial systems and the freely available GNU utilities and Linux. We report which programs failed on which systems, and identify and categorize the causes of these failures. The results of our testing are that we can crash (with core dump) or hang (infinite loop) over 40% (in the worst case) of the basic programs and over 25% of the X-Window applications. We were not able to crash any of the network services that we tested, nor any of the X-Window servers. This study parallels our 1990 study (which tested only the basic UNIX utilities); all systems that we compared between 1990 and 1995 noticeably improved in reliability, but still had significant rates of failure. The reliability of the basic utilities from GNU and Linux was noticeably better than that of the commercial systems. We also tested how utility programs checked their return codes from the memory allocation library routines by simulating the unavailability of virtual memory. We could crash almost half of the programs that we tested in this way.
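
The testing method is simple enough to sketch. The driver below reproduces its spirit under stated assumptions; the target commands, input size, and timeout are illustrative, not the paper's harness:

    # Minimal fuzz driver in the spirit of the study: feed random bytes
    # to a utility on stdin, record crashes (signal deaths) and hangs.
    # The target list, input size, and timeout are illustrative.
    import random, signal, subprocess

    TARGETS = ["cat", "sort", "grep x"]

    def fuzz_once(cmd: str, n_bytes: int = 100_000, timeout: int = 10):
        data = bytes(random.randrange(256) for _ in range(n_bytes))
        try:
            proc = subprocess.run(cmd.split(), input=data,
                                  capture_output=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            return "hang"                # possible infinite loop
        if proc.returncode < 0:          # killed by a signal (e.g. SIGSEGV)
            return "crash (%s)" % signal.Signals(-proc.returncode).name
        return "ok"

    for cmd in TARGETS:
        for trial in range(5):
            result = fuzz_once(cmd)
            if result != "ok":
                print(cmd, "->", result)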

Added 2002-07-26

A Proposal for a Postgraduate Curriculum in Information Security, Dependability and Safety

Sokratis K. Katsikas,Dimitris A. Gritzalis

A proposal for a postgraduate (MSc-type) programme in Information Security, Dependability and Safety is described in detail in this report. However, implementation issues have not been included. The programme description (syllabus) proposed is full in the academic sense, i.e. it includes the programme’s overall structure, as well as course listings, credits per course, academic prerequisites, degree prerequisites, indicative textbooks per course, a list of existing similar courses, course timings, etc.

Added 2002-07-26

Network Law Update: February 1995

William J. Cook

The headline in the New York Times was clear: “Pirated Copies of Latest Software From IBM, Others Posted on the Internet” (NYT 10/31/94). The ramifications were simple: the market for a new computer program can be quickly destroyed if it is posted on the Internet. Most new programs will enjoy at least a six-month marketing “shelf life”. But a million-dollar computer program created on Monday, then stolen and uploaded to the Internet on Tuesday, can be worthless by Friday. The victim-author may not even know that his new “heater” program now resides on a publicly accessible, anonymous file server in France. Nevertheless, his (and your) projected six-month marketing window has now shrunk to four days.

Added 2002-07-26

Computer Viruses: A Global Perspective

Steve R. White, Jeffrey O. Kephart, David M. Chess

Technical accounts of computer viruses usually focus on the microscopic details of individual viruses: their structure, their function, the type of host programs they infect, etc. The media tend to focus on the social implications of isolated scares. Such views of the virus problem are useful, but limited in scope. One of the missions of IBM’s High Integrity Computing Laboratory is to understand the virus problem from a global perspective, and to apply that knowledge to the development of anti-virus technology and measures. We have employed two complementary approaches: observational and theoretical virus epidemiology. Observation of a large sample population for six years has given us a good understanding of many aspects of virus prevalence and virus trends, while our theoretical work has bolstered this understanding by suggesting some of the mechanisms that govern the behavior we have observed.
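
The theoretical work referred to modeled viral spread with epidemiological birth-and-death processes; a minimal SIS-style sketch of that kind of model (the parameter values below are illustrative, not drawn from the paper):

    # SIS-style birth-and-death model of viral spread: machines are
    # infected at rate beta per contact and cured at rate delta.
    # Parameter values are illustrative.
    def simulate(beta=0.5, delta=0.2, i0=0.01, dt=0.01, t_end=100.0):
        i, t = i0, 0.0
        while t < t_end:
            di = beta * i * (1.0 - i) - delta * i    # dI/dt
            i += di * dt
            t += dt
        return i

    # Above the epidemic threshold (beta > delta) the infected fraction
    # settles near 1 - delta/beta; below it, the virus dies out.
    print("steady-state infected fraction ~ %.2f" % simulate())   # ~0.60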

Added 2002-07-26

Asynchronous Optimistic Rollback Recovery Using Secure Distributed Time

Sean W. Smith, David B. Johnson, J.D. Tygar

In an asynchronous distributed computation, processes may fail and restart from saved state. A protocol for “optimistic rollback recovery” must recover the system when other processes may depend on lost states at failed processes. Previous work has used forms of partial order clocks to track potential causality. Our research addresses two crucial shortcomings: the rollback problem also involves tracking a second level of partial order time (potential knowledge of failures and rollbacks), and protocols based on partial order clocks are open to inherent security and privacy risks. We have developed a “distributed time” framework that provides the tools for multiple levels of time abstraction, and for identifying and solving the corresponding security and privacy risks. This paper applies our framework to the rollback problem. We derive a new optimistic rollback recovery protocol that provides “completely asynchronous” recovery (thus directly supporting concurrent recovery and tolerating network partitions) and that enables processes to take full advantage of their maximum potential knowledge of orphans (thus reducing the worst-case bound on asynchronous recovery after a single failure from exponential to at most one rollback per process). By explicitly tracking and utilizing both levels of partial order time, our protocol substantially improves on previous work in optimistic recovery. Our work also provides a foundation for incorporating security and privacy in optimistic rollback recovery.
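
The partial order clocks mentioned here generalize the familiar vector clock; a minimal vector-clock sketch of the first level of time tracking (the paper's distributed time framework layers a second partial order, knowledge of failures and rollbacks, on top of this):

    # Minimal vector clock: one counter per process, tracking the
    # happened-before partial order that rollback recovery must respect.
    # The paper's framework adds a second partial order (knowledge of
    # failures and rollbacks) on top of this first level.
    class Process:
        def __init__(self, pid: int, n: int):
            self.pid, self.clock = pid, [0] * n

        def local_event(self):
            self.clock[self.pid] += 1

        def send(self):
            self.local_event()
            return list(self.clock)      # timestamp piggybacked on message

        def receive(self, ts):
            self.clock = [max(a, b) for a, b in zip(self.clock, ts)]
            self.local_event()

    def happened_before(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b

    p0, p1 = Process(0, 2), Process(1, 2)
    m = p0.send()
    p1.receive(m)
    # p1's state now causally depends on p0's send; if p0 rolls back past
    # the send, every such dependent state becomes an orphan.
    assert happened_before(m, p1.clock)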

Added 2002-07-26

An Evening with Berferd In Which a Cracker is Lured, Endured, and Studied

Bill Cheswick

On 7 January 1991 a cracker, believing he had discovered the famous sendmail DEBUG hole in our Internet gateway machine, attempted to obtain a copy of our password file. I sent him one. For several months we led this cracker on a merry chase in order to trace his location and learn his techniques. This paper is a chronicle of the cracker’s “successes” and disappointments, the bait and traps used to lure and detect him, and the chroot “Jail” we built to watch his activities. We concluded that our cracker had a lot of time and persistence, and a good list of security holes to use once he obtained access to a host.

Added 2002-07-26

There Be Dragons

Steven M. Bellovin

Our security gateway to the Internet, research.att.com, provides only a limited set of services. Most of the standard servers have been replaced by a variety of trap programs that look for attacks. Using these, we have detected a wide variety of pokes, ranging from simple attempts to log in as “guest” to forged NFS packets. We believe that many other sites are being probed but are unaware of it: the standard network daemons do not provide administrators with controls and filters or with the logging necessary to detect attacks.
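
A trap program of the sort described can be as simple as a listener that occupies a well-known port, never provides the real service, and logs every probe; a minimal sketch (the port choice and log format are ours, not the paper's):

    # Minimal trap daemon: occupy a port normally served by a real
    # daemon, never provide the service, and log every probe.
    # The port choice and log format are illustrative.
    import datetime, socket

    TRAP_PORT = 2323                     # stand-in for a real service port

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", TRAP_PORT))
    srv.listen(5)

    while True:
        conn, (addr, port) = srv.accept()
        conn.settimeout(5.0)
        try:
            first = conn.recv(256)       # capture the opening of the probe
        except socket.timeout:
            first = b""
        print("%s probe from %s:%d, first bytes %r"
              % (datetime.datetime.now().isoformat(), addr, port, first))
        conn.close()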

Added 2002-07-26