The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Panel #1: Visualization of Security (Symposium Summary)

Tuesday, March 30, 2010

Panel Members:

  • Steve Dill, Lockheed Martin
  • Donald Robinson, Northrop Grumman
  • Ross Maciejewski, Purdue
  • Alok Chaturvedi, Purdue

Summary by Ryan Poyar

The first panel of the 2010 annual security symposium got things off to a great start with an interesting discussion. The topic was the Visualization of Security, and the focus of the panel was how to use the vast amount of data available in a way that can help predict threats and protect systems against them. Alok Chaturvedi, a professor at Purdue, initiated the discussion by describing how visualization could make it possible to display large amounts of data in a meaningful way. Donald Robinson of Northrop Grumman justified the use of visualization with the argument that humans are naturally very good at recognizing patterns and making sense of visual representations, as opposed to dealing with raw data.

This approach is currently being researched through the VACCINE project (Visual Analytics for Command, Control, and Interoperability Environments), which is primarily focused on supporting the mission of the Department of Homeland Security. As one of the researchers working on VACCINE, Ross Maciejewski explained that the goal of the project is to identify potential threats in an abundance of streaming, real-time data sources and then to provide real-time, targeted countermeasures against each threat.

While all of this sounds very good in theory, getting it to work in practice requires overcoming many hurdles. The remainder of the panel was a debate over who should be responsible for determining threats from the data and who should determine the correct response. Even in a non-real-time environment with only humans involved, this is a tricky endeavor. It seems necessary for an expert in each field to analyze the data from their own perspective and look for threats based solely on their expertise. If a threat is found, it is then very difficult to determine who has the right background and is the best choice to mitigate it. Further, who has the ability to foresee threats that cross multiple disciplines? If we have a difficult time answering these questions in a detailed, comprehensive, non-real-time setting, how will we be able to design a system a priori that can answer future questions in real time?

Opening Keynote: Mike McConnell (Symposium Summary)

Tuesday, March 30, 2010

Summary by Jason Ortiz

Mike McConnell, a retired U.S. Navy vice admiral, former Director of the NSA, and former Director of National Intelligence, delivered the opening keynote speech for the eleventh annual CERIAS Security Symposium. The majority of the keynote was devoted to recounting his experiences and efforts to advance national cyber capabilities. The following is a summary of those efforts.

Admiral McConnell opened the address with a simple statement: “The nation is at significant risk.” He pointed out that the United States’ economy and livelihood reside in information streams; if those streams are interrupted or tampered with, the United States could lose trillions of dollars almost instantly.

McConnell continued the keynote by making three predictions. The first was that the United States will continue to talk about cyber defenses but will not really do anything until after a catastrophic cyber event. The Admiral supported this idea by pointing out that if extremist groups were to focus their efforts on cyber attacks, they could disrupt transportation and the economy. As evidenced by attacks in California last spring, in which criminals cut fiber optic cables, they could also disrupt services such as 9-1-1, internet connectivity, and cellular phone service.

McConnell’s second prediction was that after a catastrophic event, the government of the United States would suddenly lurch into action, passing laws, appropriating money, and working to prevent the same sort of catastrophe from recurring. After all, Washington, D.C. responds to four things: crisis, the ballot box, money, and law. A catastrophic cyber attack would generate changes or problems in all four of these areas.

McConnell then explained the most important aspects of cyber security as he learned them as Director of the NSA. In order of importance, they are: authentication, data integrity, non-repudiation, availability, and, least important, encryption of the data itself.

Finally, the third prediction made by Admiral McConnell was that the United States would reengineer the internet. He explained how the military uses the internet and predicted that the entire national network will eventually be implemented in a similar manner. It is McConnell’s belief that the government can help implement this redesigned, more secure network.

Making the CWE Top 25, 2010 Edition

As I did last year, I was glad to be able to participate in the making of the CWE Top 25. The 2010 edition was produced more systematically and methodically than last year's. We adjusted the level of abstraction of the entries to be more consistent, precise, and actionable. For that purpose, new CWE entries were created, so that we didn't have to include a high-level entry simply because there was no other way to discuss a particular variation of a weakness. There was a formal vote with metrics, with a debate about which metrics to use, how to vote, and how to calculate a final score. We moved the high-level CWE entries that could be described as "didn't perform good practice X" or "didn't follow principle Y" into a mitigations section, which specifically addresses what X and Y are and why you should care about them. Those mitigations were then mapped against the Top-25 CWE entries that they affect.

For the metrics, CWE entries were ranked by prevalence and importance. We used P × I (prevalence times importance) to calculate scores. That makes sense to me because risk is defined as potential loss times probability of occurrence, so by this formula the CWE rankings are related to the risk those weaknesses pose to your software and business. Last year, the CWEs were not ranked; instead they had "champions" who argued for their inclusion in the Top 25.
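As a rough illustration of this kind of scoring, here is a minimal sketch in Python; the weaknesses and the prevalence/importance values below are made up for the example and are not the actual Top-25 data or formula details.

```python
# Hypothetical illustration of prevalence-times-importance scoring.
# The weaknesses and the P and I values are invented for this sketch;
# they are not the data used to build the actual CWE Top 25.

entries = {
    "CWE-89 (SQL injection)":        {"prevalence": 0.8, "importance": 0.9},
    "CWE-79 (cross-site scripting)": {"prevalence": 0.9, "importance": 0.7},
    "CWE-190 (integer overflow)":    {"prevalence": 0.5, "importance": 0.6},
}

def score(metrics):
    # Risk-style score: how often the weakness occurs (prevalence)
    # times how bad its consequences are (importance).
    return metrics["prevalence"] * metrics["importance"]

for name, metrics in sorted(entries.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(metrics):.2f}  {name}")
```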

I worked on creating an educational profile, with its own metrics (not alone, of course; it wouldn't have happened without Steve Christey, his team at MITRE, and other CWE participants). The Top 25 now has profiles, so depending on your application and concerns, you may select a profile that ranks entries differently and more appropriately. The educational profile used prevalence and importance, but also emphasis. Emphasis relates to how difficult a concept is to explain and understand. Easy concepts can be learned in homework assignments or labs, or are perhaps so trivial that they can be picked up in the students' own reading time; harder concepts deserve more class time, provided that they are important enough. Another factor for emphasis was how much a particular CWE helps in learning others, and its general applicability. So, the educational profile tended to include higher-level weaknesses. It also considered all historical time periods for prevalence, whereas the Top 25 focuses on data from the last two years. This is similar to the concept of regression testing -- we don't want problems that have been solved to reappear.
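To make the profile idea concrete, here is a small hypothetical sketch of how two profiles could re-rank the same entries by weighting an extra emphasis factor. The weights, the factor values, and the weighted-sum formula are all invented for illustration; they are not the formulas or data MITRE actually used.

```python
# Hypothetical sketch of profile-based re-ranking. The weights, factor values,
# and the use of a weighted sum are invented for illustration only.

entries = {
    "CWE-89 (SQL injection)":        {"prevalence": 0.8, "importance": 0.9, "emphasis": 0.4},
    "CWE-79 (cross-site scripting)": {"prevalence": 0.9, "importance": 0.7, "emphasis": 0.5},
    "CWE-362 (race condition)":      {"prevalence": 0.4, "importance": 0.6, "emphasis": 0.9},
}

profiles = {
    # A general profile that ignores emphasis entirely.
    "general":     {"prevalence": 1.0, "importance": 1.0, "emphasis": 0.0},
    # An educational profile that also rewards concepts worth more class time.
    "educational": {"prevalence": 0.5, "importance": 0.5, "emphasis": 1.0},
}

def rank(profile_name):
    weights = profiles[profile_name]
    def score(metrics):
        return sum(weights[factor] * metrics[factor] for factor in weights)
    return sorted(entries, key=lambda name: score(entries[name]), reverse=True)

for profile_name in profiles:
    print(profile_name, "->", rank(profile_name))
```

The point of the sketch is simply that the same underlying entries can come out in a different order depending on which profile's weights you apply.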

Overall, I have a good feeling about this year's work, and I hope that it will prove useful and practical. I will be looking for examples of its use and experiences with it, and of course I'd love to hear what you think of it. Tell us both the good and the bad -- I'm aware that it's not perfect, and it has some subjective elements, but perhaps comments will be useful for next year's iteration.

Cowed Through DNS

May 2010 will mark the 4th anniversary of our collective cowing by spammers, malware authors, and botnet operators. In 2006, spammers squashed Blue Frog. They turned the vendor of this service, Blue Security, into a leper, as everyone became afraid of being contaminated by association and becoming a casualty of the spamming war. Blue Frog hit spammers where it counted -- in the revenue stream -- simply by posting complaints to spamvertized web sites. It was effective enough to warrant retaliation. DNS was battered into making Blue Security unreachable. Blue Security's then-paying commercial clients were targeted, destroying the business model, so Blue Security folded [1]. I was stunned that the "bad guys" won by brute force and terror, and that the security community either was powerless or let it go. Blue Security was even blamed for some of its actions and its approach. Blaming the victims for daring to organize and attempt to defend people -- err, I mean, for provoking the aggressor further -- isn't new. An open-source project attempting to revive the Blue Frog technology evaporated within the year. The absence of interest and progress since has been a scary (or scared) silence.

According to most sources, 90-95% of our email traffic has been spam for years now. Not content with this, spammers subject us to blog spam, "friend me" spam, IM spam, and XSS (cross-site scripting) spam. That spam, or browser abuse through XSS, convinces more people to visit links and install malware, thus enrolling computers into botnets. Botnets then enforce our submission by defeating Blue Security-type efforts and extorting money from web-based businesses. We can then smugly blame "those idiots" who unknowingly handed over control of their computers, with a slight air of exasperation. It may also be argued that there's more money to be made selling somewhat effective spam-fighting solutions than by emulating a doomed business model. But in reality, we've been cowed.

I had been hoping that the open-source project could survive the lack of a business model; after all, the open-source movement seems like a liberating miracle. However, the DNS problem remained. So, even though I didn't use Blue Frog at the time, I have been hoping for almost four years now that DNS would be improved to resist the denial-of-service attacks that took Blue Security offline. I have been hoping that someone else would take up the challenge. However, all we have is modest success at (temporarily?) disabling particular botnets, semi-effective filtering, and mostly ineffective reporting. Since then, spammers have ruled the field practically uncontested.

Did you hear about Comcast's deployment of DNSSEC [2]? It sounds like a worthy improvement; it's DNS with security extensions, or "secure DNS". However, denial-of-service (DoS) prevention is out of scope for DNSSEC! It has no DoS protections, and moreover there are reports of DoS "amplification attacks" exploiting the larger size of DNSSEC-aware responses [3]. Hmm. Integrity is not the only problem with DNS! A search of IEEE Xplore and the ACM Digital Library for "DNS DoS" reveals several relevant papers [4-7], including a DoS-resistant, backwards-compatible replacement for the current DNS from 2004. Another alternative, DNSCurve, provides protection for confidentiality, integrity, and availability (DoS) [8]; it has just been deployed by OpenDNS [9] and is being proposed to the IETF DNSEXT working group [10]. This example of leadership suggests possibilities for meaningful challenges to organized internet crime. I will be eagerly watching for signs of progress in this area. We've kept our heads low long enough.
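If you want to see the size difference behind those amplification reports, a quick probe is to compare the wire size of a small DNSSEC-aware query with the size of the response it elicits. Here is a minimal sketch using the dnspython library; the query name and resolver address are arbitrary choices for the example, and a large answer may come back truncated over UDP, so treat the ratio as a rough indication only.

```python
# Minimal sketch: compare query size to DNSSEC-aware response size to get a
# rough amplification factor. Requires the dnspython package; the resolver
# address (8.8.8.8) and the query name are arbitrary choices for this example.
import dns.message
import dns.query
import dns.rdatatype

# A small query asking for DNSKEY records, with the DNSSEC OK (DO) bit set.
query = dns.message.make_query("example.com", dns.rdatatype.DNSKEY, want_dnssec=True)
query_size = len(query.to_wire())

# Send it over UDP and measure the response size.
response = dns.query.udp(query, "8.8.8.8", timeout=5)
response_size = len(response.to_wire())

print(f"query: {query_size} bytes, response: {response_size} bytes, "
      f"amplification ~ {response_size / query_size:.1f}x")
```

The asymmetry between a tiny spoofed query and a much larger signed response is exactly what makes amplification attractive to an attacker flooding a victim.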

References
1. Robert Lemos (2006) Blue Security folds under spammer's wrath. SecurityFocus. Accessed at http://www.securityfocus.com/news/11392
2. Comcast DNSSEC Information Center. Accessed at http://www.dnssec.comcast.net/
3. Bernstein DJ (2009) High-speed cryptography, DNSSEC, and DNSCurve. Accessed at: http://cr.yp.to/talks/2009.08.11/slides.pdf
4. Fanglu Guo, Jiawu Chen, Tzi-cker Chiueh (2006) Spoof Detection for Preventing DoS Attacks against DNS Servers. 26th IEEE International Conference on Distributed Computing Systems.
5. Kambourakis G, Moschos T, Geneiatakis D, Gritzalis S (2007) A Fair Solution to DNS Amplification Attacks. Second International Workshop on Digital Forensics and Incident Analysis.
6. Hitesh Ballani, Paul Francis (2008) Mitigating DNS DoS attacks. Proceedings of the 15th ACM conference on Computer and communications security
7. Venugopalan Ramasubramanian, Emin Gün Sirer (2004) The design and implementation of a next generation name service for the internet. Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications
8. DNSCurve: Usable security for DNS (2009). Accessed at http://dnscurve.org/
9. Matthew Dempsky (2010) OpenDNS adopts DNSCurve. Accessed at http://blog.opendns.com/2010/02/23/opendns-dnscurve/
10. Matthew Dempsky (2010) [dnsext] DNSCurve Internet-Draft. Accessed at http://www.ops.ietf.org/lists/namedroppers/namedroppers.2010/msg00535.html

Blast from the Past

Yes, I have been quiet (here) over the last few months, and I have a number of things to comment on. This hiatus was partly because of my schedule, partly because I had my laptop stolen, and partly for health reasons. However, I'm going to try to start adding some items here again that might be of interest.

To start, here is one item that I found while cleaning out some old disks: a briefing I gave at the NSA Research division in 1994. I then gave it, with minor updates, to the DOD CIO Council (or whatever their name was/is -- the CNSS group?), the Federal Infosec Research Council, and the Critical Infrastructure Commission in 1998. In it, I spoke to what I saw as the biggest challenges in protecting government systems and what the major research challenges of the time were.

I no longer have software that can read the 1994 version of the talk, but the 1998 version was successfully imported into PowerPoint. I cleaned up the fonts and gave it a different background (the old version was fugly), and that prettier version is available for download. (Interesting that back then it was considered "state of the art.")

I won't editorialize on the content slide by slide, other than to note that I could give this same talk today and it would still be current. You will note that many of the research agenda items have been echoed in other reports over the succeeding years. I won't claim credit for that, but there may have been some influences from my work.

Nearly 16 years have passed, largely wasted, because the attitude within government is still largely one of "with enough funding we can successfully patch the problems." But as I've quoted elsewhere, insanity is doing the same thing over and over again and expecting different results. So long as we believe that simple incremental changes to the existing infrastructure, and simply adding more funding for individual projects, are going to solve the problems, the problems will not get addressed -- they will get worse. It is insane to think that pouring ever more funding into attempts to "fix" current systems is going to succeed. Some of it may help, and much of it may produce good research, but overall it will not make our infrastructure as safe as it should be.

Yesterday, Admiral (ret) Mike McConnell, the former Director of National Intelligence in the US, said in a Senate committee hearing that if there were a cyberwar today, the US would lose. That may not be quite the correct way of putting it, but we certainly would not come out of it unharmed and able to claim victory. What's more, any significant attack on the cyberinfrastructure of the US would have global repercussions because of the effects on the world's economy, communications, trade, and technology that are connected by the cyber infrastructure in the US.

As I have noted elsewhere, we need to do things differently. I have prepared and circulated a white paper among a few people in DC about one approach to changing the way we fund some of the research and education in the US in cybersecurity. I have had some of them tell me it is too radical, or too different, or doesn't fit in current funding programs. Exactly! And that is why I think we should try those things -- because doing more of the same in the current funding programs simply is not working.

But 15 years from now, I expect to run across these slides and my white paper, and sadly reflect on over three decades where we did not step up to really deal with the challenges. Of course, by then, there may be no working computers on which to read these!