CMAD III --- 3rd Annual Workshop on Computer Misuse and Anomaly Detection
Sonoma, California. January 10-12, 1995

S. Kumar, S. Lodin, Ch. Schuba
COAST Laboratory, Purdue University

(This article is reprinted from v1(2) of "COAST Watch", the COAST Project
electronic newsletter. An enhanced version is available via WWW from
http://www.cs.purdue.edu/homes/swlodin/cmad/report.html)

-------------------------------------------------------------------------------

This workshop was sponsored by the National Security Agency, the Air Force
Information Warfare Center, and the University of California, Davis.
Attendance was by invitation only. The workshop was attended by members of the
legal community, CERT, and security experts specializing in intrusion
detection. The vendor community seemed under-represented. The highlight of the
workshop was a presentation by Tsutomu Shimomura, who described how he was
able to detect and recover from the intrusion into his computer systems at the
San Diego Supercomputer Center.

Jan 10 (Auditing Applications Software)

Talks on the opening day were started by Marv Shaefer of ARCA Systems Inc.,
who outlined and emphasized the differing requirements of auditing from the
perspective of the OS and the application. He said that reconciling these
differences would be important for auditing next-generation applications
effectively. The difficulty with auditing applications is that the nature of
an application's controls often changes over time and that separate access
control policies may compose in surprising ways. He stated that the objective
of audit logging is to produce an accurate, immutable, and persistent record
of relevant activity that can provide valid evidence to an auditor or other
officials once a malfeasance has been detected.

The next speaker was Olin Sibert of Oxford Systems, who said that low-level
auditing at the OS/TCB/kernel level was becoming increasingly irrelevant for
lack of general mechanisms to deduce higher-level application abstractions
from such low-level events. He mentioned the need for generic audit trail
formats and an API for logging application events. Using examples, Olin
explained how "Computer Oriented" breaches were simpler to detect using
traditionally understood notions of auditing than "Organization Oriented"
breaches, which were policy based and ill defined. He also said that the
intrusion detection community has thus far focused more on outside intrusions
than on inside abuse.

After an intermission, Steve Smaha of Haystack Labs described the typical
customer attitude toward security. He said that customers were spending less
on host-based security controls and more on boundary control measures like
firewalls. He then described the details of Haystack Labs' commercial
intrusion detector called "Stalker". He mentioned that the working goals of an
intrusion detection system are to provide accountability, to do *misuse* (not
anomaly) detection, and possibly to provide a unitary audit trail derived from
several sources, together with its analysis.

Professor Karl Levitt of UC Davis followed Steve Smaha with a talk titled
"Toward the Auditing of Application Programs". He said that application audit
trails (AATs) should supplement system audit trails (SATs). He asked what
system support might be required to produce trusted AATs and which
applications were good candidates for generating them. He felt that DBMSs,
editors, and financial and medical applications were promising candidates for
application auditing.
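
To make the idea of application audit trails a little more concrete, the
following is a minimal sketch of what the kind of generic application audit
API Sibert and Levitt called for might look like. The class name, record
fields, and log format are our own illustrative assumptions; nothing of this
form was proposed at the workshop.

    # Hypothetical sketch of an application audit trail (AAT) logger.
    # The record fields and the append-only JSON-lines format are our own
    # assumptions, not a format proposed at CMAD III.
    import json
    import os
    import time

    class AppAuditLog:
        """Append-only log of application-level events."""

        def __init__(self, path):
            # Append-only: existing records are never rewritten in place.
            self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND,
                              0o600)

        def log(self, subject, action, obj, outcome, **details):
            record = {
                "time": time.time(),   # when the event occurred
                "subject": subject,    # the application-level user
                "action": action,      # e.g. "transfer-funds"
                "object": obj,         # e.g. "account/1234"
                "outcome": outcome,    # "success" or "failure"
                "details": details,    # free-form application context
            }
            os.write(self.fd, (json.dumps(record) + "\n").encode())

    # A financial application can record a higher-level event than an
    # OS-level audit trail could reconstruct from individual system calls.
    aat = AppAuditLog("app-audit.log")
    aat.log("alice", "transfer-funds", "account/1234", "success", amount=250)

The point of such an interface is that the application, not the kernel,
decides what constitutes a relevant event, while the append-only log aims at
the accuracy, immutability, and persistence that Shaefer asked of audit
records.
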
Jan 10 (Network Management)

After lunch, Bill Cheswick of AT&T Bell Labs spoke on the use of firewalls to
protect a network. He suggested setting up a "honeypot" machine on the
internal net that is near the network gateway and that holds what appear to be
goodies. The existence of this machine is made known to very few, and it is
watched carefully. It serves as a snare for intruders who manage to break in
past the bastion host. The idea of the honeypot is similar to putting a
burglar alarm inside your safe, as a last (and cheap) measure to see if
someone got through the security. These can be implemented even if other
security measures, like firewalls, are infeasible.

Marcus Ranum of Trusted Information Systems followed Bill and rambled at
length (ed. his words, not ours). He proposed that the security community
reduce its commitment to tracking the sources of attacks and building cases
for prosecuting them. Marcus claimed that the cost and difficulty of tracking
hackers, combined with the difficulty of prosecution and the "slap on the
wrist" that hackers get when brought to trial, show that there is no cost
justification. He pointed out further that from a cost/benefit approach,
deterring hackers by prosecution appears to be much less effective than
deterring them via technological countermeasures like firewalls and secure
systems. The only effective means of directly countering hackers would be to
take questionable measures such as declaring all-out information warfare
against the hacker community, effectively sinking to their level. He noted
that in some cases, this process appears to have begun. Improving the
situation, Marcus claimed, is a matter of taking incremental steps by
identifying countermeasures that would block off whole avenues of attack. He
described a "wouldn't it be nice?" firewall, which does nothing but stamp
incoming packets as "infected" and pass them on to internal machines running
environments that support different types of access control for different
types of data. Thus, a TELNET session from the outside might be able to log
in, but would be incapable of executing (or even seeing) certain programs or
files. Files imported from the outside might not be executable until manually
"blessed". Marcus concluded by begging people to focus on building simple
tools from which complex security architectures could be assembled, rather
than the other way around.

Paul Traina of Cisco Systems outlined why cryptography cannot easily solve the
problem of maintaining the integrity of routes in the internet; the chief
problem is performance. He showed why it is insufficient to assign a
public/private key pair to every router and sign/encrypt the routing
information before sending it to the next-hop gateway: this method does not
provide end-to-end authentication of routes. To achieve that, one would need
path authentication that allows the verifier to check a route update all the
way back to the source (similar to the X.509 certification scheme).

Jan 11 (System Vulnerabilities)

Bob Abbott of Abbott Computers Partners said that the primary problem facing
the security community is the loss of confidence in security. He said that
software glitches are the key to penetrations. Penetrations might exploit
single glitches or a combination of glitches. The primary cause is the
incomplete or inconsistent validation of parameters. The problem, in his view,
would be cheaper to fix at the operating system level.
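
As a contrived illustration of Abbott's point about inconsistent parameter
validation (this example is ours, not one from his talk), consider two entry
points to the same routine, only one of which checks its argument:

    # Contrived example of inconsistent parameter validation (not from
    # Abbott's talk): one entry point checks its argument, the other trusts
    # its caller, and the unchecked path is the one an attacker finds.
    import os

    BASE = "/var/spool/reports"

    def read_report(name):
        # "Validated" entry point: rejects path separators and dot names.
        if "/" in name or name.startswith("."):
            raise ValueError("bad report name")
        return _open_report(name)

    def read_report_compat(name):
        # Older entry point kept for compatibility; it skips the check, so
        # read_report_compat("../../../etc/passwd") escapes BASE entirely.
        return _open_report(name)

    def _open_report(name):
        with open(os.path.join(BASE, name)) as f:
            return f.read()

A single glitch of this kind, or a combination of them, is exactly what
Abbott described as the raw material of penetrations.
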
Abbott gave three major reasons for continued penetrations:

1. The software change cycle is more frequent (because the market is more
   money driven).
2. There is more, and larger, software to be subverted.
3. There is a lack of understanding of how software maintenance increases the
   potential for penetrations.

His conclusion is that all points of penetration prevention and detection
should be considered. These include before-penetration checks (software
analysis, integrity reviews, testing, programming standards),
during-penetration checks (checksums, table integrity), and after-penetration
checks (table status, audit trails).

Following Bob Abbott, Christoph Schuba of the COAST Laboratory, Purdue
University, described a vulnerability in the Domain Name Service (DNS). He
abstracted the problem to say that if the binding process (for example,
mapping internet addresses to domain names) cannot be trusted, then names
cannot be trusted. The vulnerable points are a corrupted sender, receiver, or
intermediary, and the service provider itself. The best point of detection is
an open question. To prevent vulnerabilities in DNS, several methods can be
employed:

1. Harden DNS (watch Paul Vixie's version of BIND).
2. Harden application usage.
3. Employ careful protocol design with security as an important
   consideration.
4. Use cryptographically strong methods.
5. Watch the IETF working group on DNS.

Following Christoph, Kevin Ziese of the Air Force Information Warfare Center
spoke on the need to share vulnerability data among the security community.
He also focused on the lack of a common, consistent way of dissecting
vulnerabilities into common classes from which a researchable database of
vulnerabilities could be developed. He said that vulnerabilities tend to
cluster in classes and that we often focus on fixing a particular
vulnerability rather than attempting to fix the class. He said the security
problem has taken on a new dimension with the explosive growth of the WWW and
that every connection is a potential threat. His recommendations include
developing a taxonomy to understand the process, developing a methodology for
dissecting vulnerabilities, and implementing a measurement process.
Vulnerabilities are a symptom, not the disease. The use of metrics should
drive the countermeasures employed. The development of plug-and-play modules
for security is needed.

Tsutomu Shimomura followed Kevin Ziese and described an attack on his computer
system at the San Diego Supercomputer Center. The attack was a realization of
the classic IP spoofing attack described in the paper by Robert Morris and
later by Steve Bellovin ("Security Problems in the TCP/IP Protocol Suite",
1989). Because of good instrumentation, the attack was monitored well. It
involved wedging the TCP state machine, then predicting TCP sequence numbers.
After the fake TCP connection was established, the intruders gained access by
making the target machine believe that their machine was a trusted host. The
most disturbing aspect of the attack was that it seemed scripted or automated,
judging by the timing of events. The attack also involved compiling and
installing a loadable kernel module. There is a tool floating around called
TAP, a kernel module for SunOS that allows one to watch streams and capture
what a person is typing. It is easy to modify so that one could actually write
to the stream, thus emulating that person and hijacking their terminal
connection.
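
To give a feel for the sequence-number prediction step, here is a rough
sketch of the arithmetic involved. The increment and the probed value below
are illustrative assumptions, not measurements from the incident; TCP stacks
of that era typically bumped the initial sequence number (ISN) by a fixed
amount per connection, which makes the victim's next ISN guessable from a
single probe.

    # Rough sketch of TCP sequence-number prediction. The constants are
    # illustrative assumptions, not values observed in the actual attack.
    ISN_INCREMENT = 128000          # assumed per-connection ISN increment
    MOD = 2 ** 32                   # TCP sequence numbers are 32 bits

    def predict_next_isn(observed_isn, connections_in_between=1):
        """Predict the victim's next ISN from one observed ISN."""
        return (observed_isn + connections_in_between * ISN_INCREMENT) % MOD

    # The attacker first probes the victim to learn a current ISN ...
    probed_isn = 2210656001         # made-up value from a legitimate probe
    # ... silences the trusted host (e.g. by wedging its TCP state machine
    # with half-open connections so it cannot send a reset), then spoofs a
    # SYN from that host and ACKs the victim's unseen SYN-ACK using the
    # predicted number.
    guess = predict_next_isn(probed_isn)
    print("ACK the unseen SYN-ACK with acknowledgment number",
          (guess + 1) % MOD)

Once the one-sided connection is up, data the attacker sends is treated as
coming from the trusted host, which is how the intruders gained access.
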
A method for stopping the IP spoofing attack is to make sure that firewalls
and screening routers are set up to block traffic that originates from the
outside but carries inside source addresses. A method for stopping the second
attack is to disable the kernel's capability to load modules dynamically once
all valid modules are loaded. Der Mouse developed a script for SunOS 4.1.2 to
do this. It is retrievable from
ftp://coast.cs.purdue.edu/pub/tools/unix/disable_mod_cmds.

The attack seemed to target Tsutomu specifically. He even played audio files
of voice mail left by the attackers. For the story that beat the CERT
Advisory, see the Monday, January 23, 1995 issue of The New York Times. The
front-page story by John Markoff is titled "Data Network Is Found Open To New
Threat". In the weeks that followed, nearly every newspaper, magazine, and TV
news program carried information about the incident. Further references are
the CERT Advisory on this intrusion and Steven Bellovin's response to the
attack and the publicity.

Jan 11 (Protection Mechanisms for CMAD Systems)

Next, Dr. Matt Bishop of UC Davis spoke about protecting CMAD systems. He
discussed a model with the following principals: Agent, Director, and
Notifier. He then examined the threats imposed on each of these principals by
the following types of attacks: modification, masquerading, denial of
service, flooding, interception, assurance, and replay.

Jan 12 (Legal Issues: Present and Future)

Not surprisingly, one of the more interesting sessions involved the legal
experts. Prosecuting attorney Bill Cook described some of the issues involved
in developing and taking a computer-related case to trial. Some of the
potential problem areas described by Bill include copyrighted material,
patented programs, trade secrets, defamation, pornography, viruses, and
technology transfer.

Martha Stansell-Gamm from the US Department of Justice Computer Crime Unit
discussed some of the goals the DOJ has been pursuing in the US and abroad.
She explained recent legislative amendments to the wiretap statute in the
Digital Telephony Act, and also discussed training programs for federal
prosecutors and agents.

Also in the legal session, Kevin Ziese described the legal issues encountered,
and the close interaction he had with the Department of Justice, when the Air
Force Information Warfare Center discovered an intrusion at an Air Force site.
Their actions required many interpretations of the current legal situation.
Stansell-Gamm concluded the session by saying "Kids, don't do this at home".

Jan 12 (Customer Requirements: Present and Future)

Tom Longstaff from CERT moderated the last session of the workshop. He briefly
talked about the requirements of CERT's customers and concluded that
unobtrusive and free solutions are wanted. He then introduced the panelists,
who discussed the topic from their points of view: Dave Bailey, Galaxy
Computer Services; Steve Lodin, Purdue University COAST Project & Delco
Electronics Corp.; Carolyn Turbyfill, Sun Microsystems; Toney Jennings,
Trident Data Systems; Pete Hammes, ASSIST; Susan Odneal, Kaiser Permanente;
and Dan Essin, USC.

Steve spoke from his experience as a system administrator at Delco Electronics
Corp. He looked at customer requirements in terms of present requirements,
future requirements, and a grand vision. Present requirements stress quick
solutions that can avert the main threats and patch the currently poor state
of security to some reasonable, but not necessarily perfect, state.
This means mainly perimeter defense to protect against outside threats. He did
not spend much time on the grand vision, basically a perfect world without any
threats, in which whatever prevention cannot ward off, a highly configurable,
reliable, and functionally correct IDS detects and corrects almost
instantaneously. The most interesting part of the talk was therefore the
future requirements. Steve expanded the metaphor of perimeter defense to a
more active border patrol providing firewall functionality and auditing
capabilities, and accommodating future technologies, such as mobile
networking, that will disrupt and blur the definition of a perimeter. All
existing and future operating system platforms and networking technologies
have to be supported in a uniform way; a special supporting role will fall to
the vendors. He also raised the question of why the vendor community was so
poorly represented at the workshop - a point that was picked up and
extensively discussed in later talks. Final points included next-generation
network protocols such as IPv6 and the necessity of multinational support for
virtual network perimeters.

Toney Jennings and Tim Grance talked about the implementation of DIDS at an
Air Force site with more than 250 workstations. The requirements for the
product were generated after the product was implemented. Test sites seemed to
get more interested in the product because of its network management
capabilities than because of its original purpose.

Susan Odneal talked about the restructuring that Kaiser Permanente is going
through and the effects it will have on their security requirements.

In conclusion, the workshop was enlightening. The need for more vendor
representation was apparent, and the participants concluded that there is a
need for another workshop next year. For more information about any particular
session, contact the individual speakers. Workshop proceedings will be
available later; contact Matt Bishop for more information.