[Note: the following is primarily about U.S. Government policies, but I believe several points can be generalized to other countries.]
I was editing a section of my website when I ran across a link to a paper I had forgotten I wrote. I'm unsure how many people actually saw it then or since; I know it faded from my memory! Other than CERIAS WWW sites and the AAAS itself, a Google search reveals almost no references to it.
As background, in early April of 2002, I was asked, somewhat at the last moment, to prepare a paper and some remarks on the state of information security for Technology in a Vulnerable World, a forum on science in the wake of 9/11. The forum was sponsored by the AAAS and held later that month. There were interesting papers on public health, risk communication, the role of universities, and more, and all of them are available for download.
My paper for the forum wasn't one of my better ones; its preparation was somewhat rushed, and I couldn't find good background literature for some of what I was writing. As I reread what I wrote, many of the points I raised still don't have carefully documented sources in the open literature. However, I probably could have found some published backup for items such as the counts of computer viruses had I spent a little more time and effort on it. Mea culpa; this is something I teach my students about. Despite that, I think I did capture most of the issues that were involved at the time of the forum, and I don't believe there is anything in the paper that was incorrect at that time.
Why am I posting something here about that paper, One View of Protecting the National Information Infrastructure, written seven years ago? Well, as I reread it, I couldn't help but notice that it expressed some of the same themes later presented in the PITAC report, Cyber Security: A Crisis of Prioritization (2005), the NRC report Towards a Safer and More Secure Cyberspace (2007), and my recent Senate testimony (2009). Of course, many of the issues were known before I wrote my paper -- including coverage in the NRC studies Computers at Risk: Safe Computing in the Information Age (1991), Trust in Cyberspace (1999) and Cybersecurity Today and Tomorrow (2002) (among others I should have referenced). I can find bits and pieces of the same topics going further back in time. These issues seem to be deeply ingrained.
I wasn't involved in all of those cited efforts, so I'm not responsible for the repetition of the issues. Anyone with enough background who looks at the situation without a particular self-interest is going to come up with approximately the same conclusions -- including that market forces aren't solving the problem, there aren't enough resources devoted to long-term research, we don't have enough invested in education and training, we aren't doing enough in law enforcement and active defense, and we continue to spend massive amounts trying to defend legacy systems that were never designed to be secure.
Given these repeated warnings, it is troubling that we have not seen any meaningful action by government to date. However, that is probably preferable to government action that makes things worse: consider DHS as one notable example (or several).
Compounding the problem, too many leaders in industry are unwilling to make necessary, radical changes either, because doing so might disrupt their businesses, even when those changes are in the public good. It is one of those "tragedy of the commons" situations. Market forces have been shown to be ineffective in fixing the problems, and will actually lead to attempts to influence government against addressing urgent needs. Holding companies liable for their bad designs and mistakes, or restricting spending on items with known vulnerabilities and weaknesses, would be in the public interest, but too many affected vendors would rather lobby against change than actually address the underlying problems.
Those of us who have been observing this problem for so long are therefore hoping that the administration's 60 day review provides strong impetus for meaningful changes that are actually adopted by the government. Somewhat selfishly, it would be nice to know that my efforts in this direction have not been totally in vain. But even if nothing happens, there is a certain sense of purpose in continuing to play the role of Don Quixote.
Sancho! Where did I leave my horse?
Why is it that Demotivators® seem so appropriate when talking about cyber security or government? If you are unfamiliar with Despair.com, let me encourage you to explore the site and view the wonderfully twisted items they have for sale. In the interest of full disclosure, I have no financial interest or ties to the company, other than as a satisfied and cynical customer.
On a more academic note, you can read or purchase the NRC reports cited above online via the National Academies Press website.
Over the last few months, CERIAS faculty members Jackie Rees and Karthik Kannan have been busy analyzing data collected from IT executives around the world, and have been interviewing a variety of experts in cybercrime and corporate strategy. The results of their labors were published yesterday by the McAfee Corporation (a CERIAS Tier II partner) as the report Unsecured Economies: Protecting Vital Information.
The conclusions of the report are somewhat pessimistic about prospects for cyber security in the coming few years. Economic pressures, weak efforts at law enforcement, international differences in perceptions of privacy and security, and the continuing challenges of providing secured computing are combining to place vast amounts of valuable intellectual property (IP) at risk. The report presents estimates that IP worth billions of dollars (US) was stolen or damaged last year, and we can only expect the losses to increase.
Additionally, the report details five general conclusions derived from the data:
None of these should be a big surprise to anyone who has been watching the field or listening to those of us who are working in it. What is interesting about the report is the magnitude and distribution of the problems it presents. This is the first truly global study of these issues, and it thus provides an important step forward in understanding their scope.
I will repeat here some of what I wrote for the conclusion of the report; I have been saying these same things for many years, and the report simply underscores the importance of this advice:
“Information security has transformed from simply ‘preventing bad things from happening’ into a fundamental business component. C-level executives must recognize this change. This includes viewing cybersecurity as a critical business enabler rather than as a simple cost center that can be trimmed without obvious impact on the corporate bottom line; not all of the impact will be immediately and directly noticeable. In some cases, the only impact of degraded cybersecurity will be going from ‘Doing okay’ to ‘Completely ruined’ with no warning before the change.
Cybersecurity fills multiple roles in a company, and all are important for organizational health.
- First, cybersecurity provides positive control over resources that give the company a competitive advantage: intellectual property, customer information, trends and projections, financial and personnel records, and so on. Poor security puts these resources at risk.
- Second, good security provides executives with confidence that the data they are seeing is accurate and true, thus leading to sound decisions and appropriate compliance with regulation and policy.
- Third, strong cybersecurity supports businesses taking new risks and entering new markets with confidence in their ability to respond appropriately to change.
- And fourth, good cybersecurity is necessary to build and maintain a reputation for reliability and sound behavior, which in turn are necessary to attract and retain customers and partners.
This study clearly shows that some customers are unwilling to do business with entities they consider poorly secured. Given massive market failures, significant fraud and increasing threats of government oversight and regulation, companies with strong controls, transparent recordkeeping, agile infrastructures and sterling reputations are clearly at an advantage -- and strong cybersecurity is a fundamental component of all four. Executives who understand this will be able to employ cybersecurity as an organic element of company (and government) survival -- and growth.”
We are grateful to McAfee, Inc. for their support and assistance in putting this report together.
Update: You can now download the report sans-registration from CERIAS.
The report is available at no charge, and the PDF can be downloaded (click on the image of the report cover to the left, or here). Note that downloading the report requires registration.
Some of you may be opposed to providing your contact information to obtain the report, especially as that information may be used in marketing. Personally, I believe that the registration should be optional. However, the McAfee corporation paid for the report, and they control the distribution.
As such, those of us at CERIAS will honor their decision.
However, I will observe that many other people object to these kinds of registration requirements (the NY Times is another notable example of a registration-required site). As a result, some have developed WWW applications, such as BugMeNot, which are freely available for others to use to bypass these requirements. Others respond to these requests by identifying company personnel from information on corporate sites and then using that information to register -- both to avoid giving out their own information and to add some noise to the data being collected.
None of us here at CERIAS are suggesting that you use one of the above-described methods. We do, however, encourage you to get the report, and to do so in an appropriate manner. We hope you will find it informative.
Over the last few years, I have been involved in issues related to the use of computerization in voting. This has come about because of my concerns about computer security, privacy and reliability, and from my role as chair of the ACM U.S. Public Policy Committee (USACM). USACM has taken a strong position regarding the use of computers as voting stations and for voting over the internet.
Two recent items address the issue of voting over the Internet.
The first is a study released by NIST about the threats posed by internet voting. This is a well-written document describing problems that would be encountered with any online voting system. Their conclusion is that, for public elections, distribution of blank ballots (on paper) is the only reasonable improvement that we can make with current technology.
The second is a note from my colleague, Yvo Desmedt, one of the senior leaders in information security. He has asked that I circulate this to a wider audience:
IACR (the International Association for Cryptologic Research) has changed its bylaws to allow e-voting over the internet to elect its board members and for other purposes. IACR will likely move towards internet e-voting. The IACR Board of Directors subcommittee on internet e-voting has published a list of requirements for such a system at http://www.iacr.org/elections/eVoting/requirements.html. This is evidently a first step, and the question remains whether the system IACR will choose will be easy to hack or not. So, security experts should follow this development.
The problems that need to be addressed by any voting technology are mostly obvious: impersonation of the voter, impersonation of the voting system, disclosure of the ballot, multiple voting, loss of votes, denial of access, and a number of other issues. The problems are complicated by the requirements of a fair voting system, one of which is vote deniability—that the voter is able to deny (or claim) that her/his vote was cast a particular way. This is important to prevent vote buying or, more importantly, retribution against voters who do not cast ballots in a particular way. It isn’t difficult to find stories where voters have been beaten or killed because of how they voted (or were presumed to have intended to vote). Thus, the tried-and-true concept of providing a receipt (à la ATMs) is not a workable solution.
My intent in making this post isn’t to discuss all the issues behind e-voting—that is well beyond the scope of a single posting, and is covered well many other places. My main goal is to give some wider circulation to Yvo’s statement. However, in light of the recent problem with certificate issuance, it is also worth noting that schemes requiring encryption to secure voting may have hidden vulnerabilities that could lead to compromise and/or failures in the future.
In the end, it comes down to a tradeoff of risk/reward (as do all security choices): can we accurately quantify the risks with a particular approach, and are we willing to assume them? Do we have appropriate mechanisms to eliminate, mitigate or shift the risks? Are we willing to accept the risks associated with adopting a particular form of e-voting in return for the potential benefit of better access for remote voters? Or are accurate (fair) results all the time more important than complete results?
Note that one objection often raised to USACM as we argue these points is “There is no evidence there has ever been a failure or tampering with a vote.” In addition to being incorrect (there are numerous cases of computer-based voting failures), this misses two key issues:
In the case of IACR, it is obvious why this group of cryptography professionals would wish to adopt techniques that show confidence in cryptography. However, the example they set could be very damaging for other groups—and populations—if their confidence is misplaced. Given the long history of spectacular failures in cryptography—often going unannounced while being exploited—it is somewhat surprising that the IACR is not more explicit in their statement about the risks of technological failures.
Yesterday, I posted a long entry on the recent news about how some researchers obtained a “rogue” certificate from one of the Internet Certificate Authorities. There are some points I missed in the original post that should be noted.
I want to reiterate that there are more fundamental issues of trust involved than what hash function is used. The whole nature of certificates is based around how much we trust the certificate authorities that issue the certificates, and the correctness of the software that verifies those certificates and then shows us the results. If an authority is careless or rogue, then the certificates may be technically valid but not match our expectations for validity. If our local software (such as a WWW browser) incorrectly validates a certificate, or presents the results incorrectly, we may trust a certificate we shouldn’t. Even such mundane issues as having one’s system at the correct time/date can be important: the authors of this particular hack demonstrated that by backdating their rogue certificate.
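To make that last point concrete, here is a minimal sketch, assuming the third-party Python cryptography package (my choice for illustration; the PEM input and the dates in the comments are hypothetical), of checking a certificate's validity window. A verifier with a wrong clock can be convinced that a backdated certificate is current:

```python
# Sketch: why the verifier's clock matters when validating a
# certificate.  Assumes the third-party "cryptography" package;
# the inputs shown in the comments are hypothetical.
from datetime import datetime

from cryptography import x509

def within_validity_window(pem_bytes: bytes, now: datetime) -> bool:
    """Return True if the certificate is valid at the claimed time 'now'."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return cert.not_valid_before <= now <= cert.not_valid_after

# A backdated rogue certificate passes the check when the clock is wrong:
#   within_validity_window(rogue_pem, datetime(2004, 8, 1))  # True
#   within_validity_window(rogue_pem, datetime.utcnow())     # False (expired)
```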
My continuing message to the community is to not lose sight of those things we assume. Sometimes, changes in the world around us render those assumptions invalid, and everything built on them becomes open to question. If we forget those assumptions—and our chains of trust built on them—we will continue to be surprised by the outcomes.
That is perhaps fitting to state (again) on the last day of the year. Let me observe that as human beings we sometimes take things for granted in our lives. Spend a few moments today (and frequently, thereafter) to pause and think about the things in your life that you may be taking for granted: family, friends, love, health, and the wonder of the world around you. Then as is your wont, celebrate what you have.
Best wishes for a happy, prosperous, safe—and secure—2009.
[12/31/08 Addition]: Steve Bellovin has noted that transition to the SHA-2 hash algorithm in certificates (and other uses) would not be simple or quick. He has written a paper describing the difficulties and that paper is online.
There are several news stories now appearing (e.g., Security News) about a serious flaw in how certificates used in online authentication are validated. Ed Felten gives a nice summary of how this affects online WWW site authentication in his Freedom to Tinker blog posting. Brian Krebs also has his usual readable coverage of the problem in his Washington Post article. Steve Bellovin has some interesting commentary, too, about the legal climate.
Is there cause to be concerned? Yes, but not necessarily about what is being covered in the media. There are other lessons to be learned from this.
Short tutorial
First, for the non-geek reader, I’ll briefly explain certificates.
Think about how, online, I can assure myself that the party at the other end of a link is really who they claim to be. What proof can they offer, considering that I don’t have a direct link? Remember that an attacker can send any bits down the wire to me, and may have access to faster computers than I do.
I can’t base my decision on how the WWW pages appear, or embedded images. Phishing, for instance, succeeds because the phishers set up sites with names and graphics that look like the real banks and merchants, and users trust the visual appearance. This is a standard difficulty for people—understanding the difference between identity (claiming who I am) and authentication (proving who I am).
In the physical world, we do this by using identity tokens that are issued by trusted third parties. Driver’s licenses and passports are two of the most common examples. To get one, we need to produce sufficient proof of identity to a third party to meet its standards of proof. Then, the third party issues a document that is very difficult to forge (almost nothing constructed is impossible to forge or duplicate—but some things require so much time and expenditure that it isn’t worthwhile). Because the criteria for proof of identity and strength of construction of the document are known, various other parties will accept the document as “proof” of identity. Of course, other problems occur that I’m not going to address—this USACM whitepaper (of which I was principal author) touches on many of them.
Now, in the online world we cannot issue or see physical documents. Instead, we use certificates. We do this by putting together an electronic document that gives the information we want some entity to certify as true about us. The format of this certificate is generally fixed by standards, the most common one being the X.509 suite. This document is sent to an organization known as a Certificate Authority (CA), usually along with a fee. The certificate authority is presumably well-known, and performs a check (to its own standards) that the information in the document is correct and has the right form. The CA then calculates a digital hash value of the data, and creates a digital signature of that hash value. This is added to the certificate and sent back to the user. This is the equivalent of putting a signature on a license and then sealing it in plastic. Any alteration of the data will change the digital hash, and a third party will find that the new hash and the hash value signed with the key of the CA don’t match. The reason this works is that the hash function and encryption algorithm used are presumed to be so computationally difficult to attack that forgery is basically not possible.
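For the geek reader, here is a minimal sketch of that sign-then-verify flow, assuming Python's third-party cryptography package; the certificate contents, key size, and digest are illustrative, not any real CA's practice:

```python
# Sketch of the CA sign/verify flow described above (illustrative only).
# Assumes the third-party "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The data the CA certifies (real certificates use X.509 encoding).
tbs_certificate = b"subject=www.example.com; pubkey=...; validity=..."

# The CA hashes the data and signs the digest with its private key.
signature = ca_key.sign(tbs_certificate, padding.PKCS1v15(), hashes.SHA256())

# A relying party recomputes the hash and checks it against the signed
# value using the CA's public key; any alteration of the data changes
# the digest, and verify() raises InvalidSignature.
try:
    ca_key.public_key().verify(signature, tbs_certificate,
                               padding.PKCS1v15(), hashes.SHA256())
    print("certificate verifies")
except InvalidSignature:
    print("certificate has been altered")
```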
As an example of a certificate, if you visit “https://www.cerias.purdue.edu” you can click on the little padlock icon that appears somewhere in the browser window frame (this is browser dependent) to view details of the CERIAS SSL certificate.
You can get more details on all this by reading the referenced Wikipedia pages, and by reading chapters 5 & 7 in Web Security, Privacy and Commerce.
Back to the hack
In summary, some CAs have been negligent about updating their certificate signing mechanisms in the wake of news, published back in 2004, that MD5 is weak. The result is that malicious parties can generate and obtain a certificate “authenticating” them as someone else. What makes it worse is that the root certificates of most of these CAs are “built in” to browser and application trust lists to simplify look-up of new certificates. Thus, most people using standard WWW browsers can be fooled into thinking they have connected to real, valid sites—even though they are connecting to rogue sites.
The approach is simple enough: a party constructs two certificates. One is for the false identity she wishes to claim, and the other is real. She crafts the contents of the certificate so that the MD5 hash of the two, in canonical format, is the same. She submits the real identity certificate to the authority, which verifies her bona fides, and returns the certificate with the MD5 hash signed with the CA private key. Our protagonist then copies that signature to the false certificate, which has the same MD5 hash value and thus the same digital signature, and proceeds with her impersonation!
What makes this worse is that the false certificate she crafts is for a secondary certificate authority. She can publish it in appropriate places, and is now able to mint as many false certificates as she wishes—and they will all have signatures that verify in the chain of trust back to the issuer! She can even issue these new certificates using a stronger hash algorithm than MD5!
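Here is a hedged sketch of the underlying logic. The actual colliding byte strings (produced by a chosen-prefix collision computation) are omitted; the point is simply that the CA signs only the digest, so any two inputs with equal digests share one valid signature:

```python
# Conceptual sketch of why an MD5 collision transfers a CA signature.
# 'real_cert' and 'rogue_cert' stand in for two certificate bodies
# crafted to collide under MD5; producing such a pair requires a
# chosen-prefix collision attack and is not shown here.
import hashlib

real_cert = b"...innocuous end-entity certificate body..."  # placeholder
rogue_cert = b"...rogue sub-CA certificate body..."         # placeholder

if hashlib.md5(real_cert).digest() == hashlib.md5(rogue_cert).digest():
    # The CA signed md5(real_cert).  Because the digests are equal,
    # the very same signature bytes also verify against rogue_cert,
    # so the rogue certificate inherits the CA's endorsement without
    # the CA ever having seen it.
    print("the CA's signature on real_cert is equally valid for rogue_cert")
```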
What makes this even worse is that it has been known for years that MD5 is weak, yet some CAs have continued to use it! Particularly unfortunate is the realization that Lenstra, Wang and de Weger described how this could be done back in 2005. Methinks that may be grounds for some negligence lawsuits if anyone gets really burned by this….
And adding to the complexity of all this is the issue of certificates in use for other purposes. For example, certificates are used with encrypted S/MIME email to digitally sign messages. Certificates are used to sign ActiveX controls for Microsoft software. Certificates are used to verify the information on many identity cards, including (I believe) government-issued Common Access Cards (CAC). Certificates also provide identification for secured instant messaging sessions (e.g., iChat). There may be many other sensitive uses because certificates are a “known” mechanism. Cloud computing services, software updates, and more may be based on these same assumptions. Some of these services may accept and/or use certificates issued by these deficient CAs.
Fixes
Fixing this is not trivial. Certainly, all CAs need to start issuing certificates based on other message digests, such as SHA-1. However, this will take time and effort, and may not take effect before this problem can be exploited by attackers. Responsible vendors will cease to issue certificates until they get this fixed, but that has an economic impact some may not wish to incur.
We can try to educate end-users about this, but the problem is so complicated by technical details that the average person won’t know how to actually make a determination about valid certificates. It might even cause more harm by leading people to distrust valid certificates by mistake!
It is not possible to simply say that all existing applications will no longer accept certificates rooted at those CAs, or will not accept certificates based on MD5: there are too many extant, valid certificates in place to do that. Eventually, those certificates will expire, and be replaced. That will eventually take care of the problem—perhaps within the space of the next 18 months or so (most certificates are issued for only a year at a time, in part for reasons such as this).
Vendors of applications, and especially WWW browsers, need to give careful thought about updates to their software to flag MD5-based certificates as deserving of special attention. This may or may not be a worthwhile approach, for the reason given above: even with a warning, too few people will be able to know what to do.
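As a sketch of what such flagging might look like, here is one way a tool could spot MD5-signed certificates, again assuming the Python cryptography package (the function name is mine, not any browser's API):

```python
# Sketch: flag certificates whose signatures are MD5-based
# (illustrative; assumes the third-party "cryptography" package).
from cryptography import x509
from cryptography.hazmat.primitives import hashes

def deserves_special_attention(pem_bytes: bytes) -> bool:
    """Return True if the certificate's signature uses MD5."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    return isinstance(cert.signature_hash_algorithm, hashes.MD5)
```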
Bigger issue
We base a huge amount of trust on certificates and encryption. History has shown how easy it is to get implementations and details wrong. History has also shown how quickly things can be destabilized with advances in technology.
In particular, too many people and organizations take for granted the assumptions on which this vast certificate system is based. For instance, we assume that the hash/digest functions in use are computationally difficult to reverse, and that finding collisions for them is infeasible. We also assume that certain mathematical functions underlying public/private key encryption are too difficult to reverse or “brute force.” However, all it takes is some new insight or analysis, or maybe new, affordable technology (e.g., practical quantum computing, or massively parallel computing) to violate those assumptions.
If you look at the way our systems are constructed, too little thought is given to what happens to existing infrastructure when something breaks. Designs can include compensating and recovery code, but doing so requires some cost in space or time. However, all too often people are willing to avoid the investment by putting off the danger to “if and when that happens.” Thus, we get instances such as the Y2K problems and the issues here with potentially rogue CAs.
(I’ll note, as an aside, that when I designed the original version of Tripwire and what became the Signacert product, I specifically included simultaneous use of several different message digest functions from different families for this very reason. I knew it was only a matter of time before one or two were broken. I still believe that it is beyond reason to craft files that will match under multiple, different algorithms simultaneously.)
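A minimal sketch of that multiple-digest idea follows (illustrative only; not Tripwire's or SignaCert's actual code). The file is read once and digests from different families are computed together, so a forger would have to collide all of them at the same time:

```python
# Sketch of the multiple-digest idea: one pass over the file, several
# digest families computed together.  A forgery would have to collide
# every algorithm simultaneously.  Illustrative only.
import hashlib

def multi_digest(path):
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```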
Another issue is the whole question of who we trust, and for what. As noted in the USACM whitepaper, authentication is always relative to a third party. How much do we trust those third parties? How much trust have we invested in the companies and agencies issuing certificates? Are they properly verifying identities? How good is their internal security? How do we know, and how much is at risk from our trust in those entities?
Let me leave you with a final thought. How do we know that this problem has not already been quietly exploited? The basic concept has been in the open literature for years. The general nature of this attack on certificates has been known for well over a decade, if not two. Given the technical and infrastructure resources available to national agencies and organized criminals, and given the motivation to use this hack selectively and quietly, how can we know that it is not already being used?
[Added 12/31/2008]: A follow-up post to this one is available in the blog.
[tags]passwords, security practices[/tags]
The EDUCAUSE security mailing list has (yet another) discussion of password policies. I’ve blogged about this general issue several times in the past, but maybe it is worth revisiting.
Someone on the list wrote:
Here is my question - does anyone have the data on how many times a hack (attack) has occurred associated to breaking the “launch codes” from outside of the organization? The last information I gleaned from the FBI reports (several years ago) indicated that 70 percent of hackings (attacks) were internal.
My most recent experience with intrusions has had nothing to do with a compromised password, rather an exploit of some vulnerability in the OS, database, or application.
I replied:
I track these things, and I cannot recall the last time I saw any report of an incident caused by a guessed password. Most common incidents are phishing, trojans, snooping, physical theft of sensitive media, and remote exploitation of bugs.
People devote huge amounts of effort to passwords because it is one of the few things they think they can control.
Picking stronger passwords won’t stop phishing. It won’t stop users downloading trojans. It won’t stop capture of sensitive transmissions. It won’t bring back a stolen laptop (although if the laptop has proper encryption it *might* protect the data). And passwords won’t ensure that patches are in place but flaws aren’t.
Creating and forcing strong password policies is akin to being the bosun ensuring that everyone on the Titanic has locked their staterooms before they abandon ship. It doesn’t stop the ship from sinking or save any lives, but it sure does make him look like he’s doing something important…..
That isn’t to say that we should be cavalier about setting passwords. It is important to try to set strong passwords, but once reasonably good ones are set, in most environments the attacks are going to come from other places—password sniffing, exploitation of bugs in the software, and implantation of trojan software.
As a field, we spend waaaaay too much time and resources on palliative measures rather than fundamental cures. In most cases, fiddling with password rules is a prime example. A few weeks ago, I blogged about a related issue.
Security should be based on sound risk assessment, and in most environments weak passwords don’t present the most significant risk.
The “VMM Detection Myths and Realities” paper has been heavily reported and discussed before. It considers whether a theoretical piece of software could detect if it is running inside a Virtual Machine Monitor (VMM). An undetectable VMM would be “transparent”. Many arguments are made against the practicality or the commercial viability of a VMM that could provide performance, stealth and reproducible, consistent timings. The arguments are interesting and reasonably convincing that it is currently infeasible to absolutely guarantee undetectability.
However, I note that the authors are arguing from essentially the same position as atheists arguing that there is no God. They argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both physical and in software development efforts. This is reasonable because the VMM has to fail only once in preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex. However, this is not a hermetic, formal proof that such a VMM is impossible and cannot exist; a new breakthrough technology or an “alien science-fiction” god-like technology might make it possible.
Then the authors argue that with the spread of virtualization, it will become a moot point for malware to try to detect if it is running inside a virtual machine. One might be tempted to remark: doesn’t this argument also work the other way, making it a moot point for an operating system or a security tool to try to detect if it is running inside a malicious VMM?
McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then. The idea is to integrate security functions within the virtual machine monitor. Malware nowadays prevents the installation of security tools and interferes with them as much as possible. If malware is successfully confined inside a virtual machine, and the security tools are operating from outside that scope, this could make it impossible for an attacker to disable security tools. I really like that idea.
The security tools could reasonably expect to run directly on the hardware or with an unvirtualized host OS. Because of this, VMM detection isn’t a moot point for the defender. However, the presentation did not discuss whether the McAfee security suite would attempt to detect if the VMM itself had been virtualized by an attacker. Also, would it be possible to detect a “bad” VMM if the McAfee security tools themselves run inside a virtualized environment on top of the “good” VMM? Perhaps it would need more hooks into the VMM to do this—many, in fact, to attempt to catch all the possible ways in which a malicious VMM can fail to hide itself properly. What is the cost of all these detection attempts, which must be executed regularly? Aren’t they prohibitive, therefore making strong malicious VMM detection impractical? In the end, I believe this may be yet another race depending on how much effort each side is willing to put into cloaking and detection. Practical detection is almost as hard as practical hiding, and the detection cost has to be paid everywhere, on every machine, all the time.
Which Singularity?
Microsoft’s Singularity project attempts to create an OS and execution environment that is secure by design and simpler. What strikes me is how it resembles the “white list” approach I’ve been talking about. “Singularity” is about constructing secure systems with statements (“manifests”) in a provable manner. It states what processes do and what may happen, instead of focusing on what must not happen.
Last year I thought that virtualization and security could provide a revolution; now I think it’s more of the same “keep building defective systems and defend them vigorously”, just somewhat stronger. Even if I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally stable than yet another arms race. In the meantime, though, “secure virtualization” should help, and expect lots of marketing about it.
[tags]interview,certification[/tags]
I was recently interviewed by Gary McGraw for his Silver Bullet interview series. He elicited my comments on a number of topics, including security testing, ethical hacking, and why security is difficult. If you like any of my blog postings, you might find the interview of some interest. But if not, you might find some of the other interviews of interest—mine was #18 in the series.
So, you watch for advisories, deploy countermeasures (e.g., change firewall and IDS rules) or shut down vulnerable services, patch applications, restore services. You detect compromises, limit damages, assess the damage, repair, recover, and attempt to prevent them again. Tomorrow you start again, and again, and again. Is it worth it? What difference does it make? Who cares anymore?
If you’re sick of it, you may just be getting fatigued.
If you don’t bother defending anymore because you think there’s no point to this endless treadmill, you may be suffering from learned helplessness. Some people even consider that if you only passively wait for patches to be delivered and applied by software update mechanisms, you’re already in the learned helplessness category. On the other hand, tracking every vulnerability in the software you use by reading BugTraq, Full Disclosure, etc., the moment it is announced, and running proof-of-concept code on your systems to test it, isn’t for everyone; there are diminishing returns, and one has to balance risk vs. energy expenditure, especially when that energy could produce better returns. Of course I believe that using Cassandra is an OK middle ground for many, but I’m biased.
The picture may certainly look bleak, with talk of “perpetual zero-days”. However, there are things you can do (of course, as in all lists not every item applies to everyone):
Use the CIS benchmarks, and if evaluation tools are available for your platform, run them. These tools give you a score, and as silly as some people may think this score is (a ship with 10 holes instead of 100 may still sink!), it gives you positive feedback as you improve the security stance of your computers. It’s encouraging, and may lift the feeling that you are sinking into helplessness. If you are a Purdue employee, you have access to CIS Scoring Tools with specialized features (see this news release). Ask if your organization also has access, and if not, consider asking for it (note that this is not necessary to use the benchmarks).
Use the NIST security checklists (hardening guides and templates). The NIST’s information technology laboratory site has many other interesting security papers to read as well.
Consider using Thunderbird and the Enigmail plugin for GPG, which make handling signed or encrypted email almost painless. Do turn on SSL or TLS-only options to connect to your server (both SMTP and either IMAP or POP) if it supports it. If not, request these features from your provider. Remember, learned helplessness is not making any requests or any attempts because you believe it’s not ever going to change anything. If you can login to the server, you also have the option of SSH tunneling, but it’s more hassle.
Watch CERIAS security seminars on subjects that interest you.
If you’re a software developer or someone who needs to test software, consider using the ReAssure system as a test facility with configurable network environments and collections of VMware images (disclosure: ReAssure is my baby, with lots of help from other CERIAS people like Ed Cates).
Good luck! Feel free to add more ideas as comments.
*A small rant about privacy, which tends to be another area of learned helplessness: Why do they need to know? I tend to consider all information that people gather about me, that they don’t need to know for tasks I want them to do for me, a (perhaps very minor) violation of my privacy, even if it has no measurable effect on my life that I know about (that’s part of the problem—how do I know what effect it has on me?). I like the “on a need to know basis” principle, because you don’t know which selected (and possibly out of context) or outdated information is going to be used against you later. It’s one of the lessons of life that knowledge about you isn’t always used in legal ways, and even if it’s legal, not everything that’s legal is “Good” or ethical, and not all agents of good or legal causes are ethical and impartial or have integrity. I find the “you’ve got nothing to hide, do you?” argument extremely stupid and irritating—and it’s not something that can be explained in a sentence or two to someone saying that to you. I’m not against volunteering information for a good cause, though, and I have done so in the past, but it’s rude to just take it from me without asking and without any explanation, or to subvert my software and computer to do so.
[tags]security marketplace, firewalls, IDS, security practices, RSA conference[/tags]
As I’ve written here before, I believe that most of what is being marketed for system security is misguided and less than sufficient. This has been the theme of several of my invited lectures over the last couple of years, too. Unless we come to realize that current “defenses” are really attempts to patch fundamentally faulty designs, we will continue to fail and suffer losses. Unfortunately, the business community is too fixated on the idea that there are quick fixes to really investigate (or support) the kinds of long-term, systemic R&D that is needed to really address the problems.
Thus, I found the RSA conference and exhibition earlier this month to be (again) discouraging this year. The speakers basically kept to a theme that (their) current solutions would work if they were consistently applied. The exhibition had hundreds of companies displaying wares that were often indistinguishable except for the color of their T-shirts—anti-virus, firewalls (wireless or wired), authentication and access control, IDS/IPS, and vulnerability scanning. There were a few companies with software testing tools, but only three of those, and none marketing suites of software engineering tools. A few companies had more novel solutions—I was particularly impressed by a few that I saw, such as the policy and measurement-based offerings by CoreTrace, ProofSpace, and SignaCert. (In the interest of full disclosure, SignaCert is based around one of my research ideas and I am an advisor to the company.) There were also a few companies with some slick packaging of older ideas (Yoggie being one such example) that still don’t fix underlying problems, but that make it simpler to apply some of the older, known technologies.
I wasn’t the only one who felt that RSA didn’t have much new to offer this year, either.
When there is a vendor-oriented conference that has several companies marketing secure software development suites that other companies are using (not merely programs to find flaws in C and Java code), when there are booths dedicated to secured mini-OS systems for dedicated tasks, and when there are talks scheduled about how to think about limiting functionality of future offerings so as to minimize new threats, then I will have a sense that the market is beginning to move in the direction of maturity. Until then, there are too many companies selling snake oil and talismans—and too many consumers who will continue to buy those solutions because they don’t want to give up their comfortable but dangerous behaviors. And any “security” conference that has Bill Gates as keynote speaker—renowned security expert that he is—should be a clue about what is more important for the conference attendees: real security, or marketing.
Think I am too cynical? Watch the rush into VoIP technologies continue, and a few years from now look at the amount of phishing, fraud, extortion and voice-spam we will have over VoIP, and how the market will support VoIP-enabled versions of some of the same solutions that were in Moscone Center this year. Or count the number of people who will continue to mail around Word documents, despite the growing number of zero-day and unpatched exploits in Word. Or any of several dozen current and predictable dangers that aren’t “glitches”—they are the norm. If you really pay attention to what happens, then maybe you’ll become cynical, too.
If not, there’s always next year’s RSA Conference.