Friday, July 20, 2012

Is Power Grid security being given up for convenience?

I live in an interesting area just outside of Washington DC. We have the old, established suburbs with the big beautiful oaks, and then we have the growing suburbs sprouting out of old farm fields. The last few weeks have seen a rash of storms that have delivered devastating blows to the power supply at people's homes. Those in the newer suburbs, served by buried cable, have been less affected than those in the beautiful old neighborhoods where wire is still strung amongst the oaks that tend to fall and take out the overhead infrastructure. And of course these storms arrive at the worst possible times: summer thunderstorms, as in this case, hit during the hottest part of the year, so no power means no air conditioning in the DC heat. At the other end of the spectrum, a nor'easter in February takes out power when it is well below freezing. Welcome to living in the DC area.

Of course all of this has people talking about the power companies in terms of reliability and response. Comments like "How can a company not be prepared for this type of situation - people without power for a week?" have been heard frequently over the last two weeks. Well, I am not one to beat up a power company for nature unleashing its fury. Nature is unpredictable, and when a storm does hit, as in this case, it can be a very large undertaking to coordinate the work of removing trees, restringing wires, and so on.

Where I do have an issue is with the things that can be planned for over a long period of time. We have all seen articles on the hacking of the power grid over the last few years - in c|net, in Scientific American - and you can even go to YouTube to see a video on how to do it. Congress, the National Security Agency and others have highlighted the fact that we have this vulnerability. The National Institute of Standards and Technology (NIST) has been working with industry to develop a stronger set of security standards for the SmartGrid to try and build a better grid.

BUT .....

We still have people in the industry who appear to think that the problem is not that bad. The North American Energy Standards Board (NAESB) authorizes two organizations to issue certificates for the Grid today - Open Access Technology International (OATI) and GlobalSign (yes, the same folks whose website was hacked last year). Both OATI and GlobalSign feel it is OK to have long-lived certificates within the infrastructure protecting the power grid. In fact, both have stated that 30-year certificate lifetimes are acceptable from a security perspective.

I find that amazing, as the criticality of this infrastructure and its impact on Defense, Homeland Security and the economy is well recognized. This is an infrastructure you want to protect. Part of the argument is the difficulty of updating certificates in the field, yet the OATI webCares CPS indicates an 8-year lifetime for Root certificates. GlobalSign allows 30- and 40-year Root certificates in its Certificate Policy and goes as far as 5 years for end devices. It also allows SHA-1-hashed certificates with 2048-bit RSA keys. There does seem to be some contradiction in the GlobalSign CP: it indicates that NIST guidance is followed but is not all that specific about which guidance. Certainly today NIST does not recommend SHA-1 for any certificate use, and long-lived certificates for Root CAs or any issuing CA are also not recommended, given the rapidly evolving attack vectors.
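If you want to check where your own certificates stand, the kind of audit I am describing is easy to automate. Here is a minimal sketch using the Python cryptography package; the file name and the 10-year lifetime cap are my own illustrative choices, not values taken from any NAESB, OATI or GlobalSign document:

    # Sketch: flag long-lived or weakly hashed certificates.
    # The path and thresholds below are illustrative assumptions.
    from cryptography import x509

    MAX_LIFETIME_DAYS = 10 * 365      # illustrative cap, far below 30 years
    WEAK_HASHES = {"md5", "sha1"}     # hashes NIST no longer recommends

    def audit_certificate(pem_path):
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())

        lifetime = cert.not_valid_after - cert.not_valid_before
        if lifetime.days > MAX_LIFETIME_DAYS:
            print("%s: lifetime of ~%d years exceeds cap"
                  % (pem_path, lifetime.days // 365))

        alg = cert.signature_hash_algorithm
        if alg is not None and alg.name in WEAK_HASHES:
            print("%s: signed with %s" % (pem_path, alg.name))

    audit_certificate("root_ca.pem")  # hypothetical file name

Run that against a 30-year, SHA-1-signed root and both checks fire.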

So what we are left with is two companies that seem to think they can mitigate the risk of technological obsolescence. If we look at history we learn some very hard lessons. MD5 went from a suspected problem (1997) to a demonstrated attack (2005) in eight years. Within seven years of that first demonstrated attack there was a usable attack vector that allowed an attacker to introduce malware without the victim knowing - and apparently not knowing for a couple of years. So yes, one can replace certificates once someone sees an attack against the CA or the underlying technology, but will that be too late? Will the logic bombs already be in place? If they are, can we find them in time? If we do not, what will happen? And the systems at stake here, industrial control systems, have been targeted very recently through existing vulnerabilities.

The risks are high here, so rather than trading security for convenience, should NAESB not make it simple for all involved and strengthen these standards to reduce the risk? I would hope that if I asked the folks who went without air conditioning for a week in 100-degree heat whether they would risk losing power again, and maybe for much longer, they would react strongly. I wonder how people in hospitals and on Wall Street would see things.

Friday, July 13, 2012

Flame is STILL burning

In this case Flame continues to burn Microsoft. Microsoft has announced the termination of trust in 28 additional certificates within its infrastructure, in addition to the 3 that were immediately untrusted when Flame was first brought to light. This new announcement is significant because it highlights the importance of certificates within an enterprise as large as Microsoft, but it also highlights the interconnectedness of systems. Microsoft's announcement was based on its belief that the newly untrusted CAs are "… outside our recommended secure storage practices". What exactly that means is certainly up for discussion, but Microsoft itself states that this effort is being undertaken after its continued review of what happened with Flame. This likely means that these certificates were protected only as well as those known to be compromised, and based on the form of attack there is some level of certainty that they could be exploited as well.

This interconnectivity of systems is becoming a key element of security management. When I speak of interconnectivity I do not just mean network connectivity; I also mean trust connectivity. With today's growing base of interconnected devices - whether that is the traditional server/desktop/laptop, the simple extension to mobile devices such as smartphones and tablets, or the next level of control systems such as those interconnected through networks like the SmartGrid or ones run by companies such as Tridium - we need to consider what happens when a security gap exists in one element of that infrastructure.

This, of course, is also not the end of the conversation. Even if we can find all those certificates and systems that Microsoft "untrusted" on all the platforms we own and manage, this remains one example of how the use of a vulnerable algorithm can create a significant and broadly impacting issue. We knew MD5, the hashing algorithm that was exploited in the original attack, was vulnerable in 2005. In fact, some would rightfully argue that we knew about it almost a decade before that, when flaws in the algorithm were first identified. At the time those flaws were not considered catastrophic, but between 2005 and 2008 attacks against MD5 were demonstrated multiple times. The weakness was so well understood that Microsoft published a recommendation not to use MD5. In 2009, though, Microsoft issued a new CA certificate using MD5 - a mistake on their part, but one that got through and created a significant problem.

MD5 is not the only vulnerable algorithm. Other elements of the system are also vulnerable - thankfully, just not exploited yet. These include one of the most commonly used hashing algorithms, SHA-1, which has been shown theoretically to have a vulnerability (recognized by NIST in early 2005), though no practical attack has been demonstrated. The National Institute of Standards and Technology (NIST), which provides guidance for the US Government on computer security requirements, published guidance in 2004 that SHA-1 be phased out of US Government use beginning in 2010, with the phase-out intended to be complete by the end of 2012.

Hashing algorithms are not the only weakness. Based on the strength and availability of computing systems, 80-bit cryptography is considered vulnerable. Of course everyone will say that they do not use 80-bit keys, but what NIST is saying is that algorithms with certain key sizes provide an effective security strength of only 80 bits. This includes RSA-1024, DSA-1024 (with specific parameter sizes) and Elliptic Curve keys of fewer than 224 bits. These key sizes are likewise being phased out of use within the US Government, as they are considered vulnerable for their common purposes, and they will be completely phased out by the end of 2013.
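The mapping from key size to effective strength comes from NIST's SP 800-57 comparable-strength tables. Here is a rough sketch of the idea in Python; the step function is my simplification of the SP 800-57 table, and the halving rule for elliptic curves is an approximation, not an exact figure:

    # Rough sketch of the NIST SP 800-57 comparable-strength idea:
    # a 1024-bit RSA or DSA key, or an elliptic curve key under 224 bits,
    # delivers far less security than its bit length suggests.
    def effective_strength_bits(algorithm, key_bits):
        if algorithm in ("rsa", "dsa"):
            if key_bits < 2048:
                return 80     # RSA/DSA-1024 class
            if key_bits < 3072:
                return 112    # RSA/DSA-2048 class
            return 128        # RSA/DSA-3072 class
        if algorithm == "ec":
            return key_bits // 2   # rule of thumb for elliptic curves
        raise ValueError("unknown algorithm: %s" % algorithm)

    for alg, bits in [("rsa", 1024), ("dsa", 1024), ("ec", 192), ("rsa", 2048)]:
        strength = effective_strength_bits(alg, bits)
        verdict = "being phased out" if strength < 112 else "acceptable for now"
        print("%s-%d: ~%d-bit strength, %s" % (alg.upper(), bits, strength, verdict))

Anything landing below the 112-bit line is what the phase-out is aimed at.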

So yes, Flame did turn up the heat on Microsoft, but it also raises the overall issue of technical obsolescence of cryptographic algorithms and key sizes. This is not the first time this has happened, but the major difference is that today the problem is bigger due to the interconnectedness of the systems. We now need to consider how to mitigate the threat posed here, and this is where we can learn something from Flame. Flame was created as a data gatherer - the old adage of "know your enemy". We need to do the same thing, and in this case the enemy is the use of vulnerable cryptographic algorithms and key sizes. Assess your environment to determine what is used and why, and plan to replace vulnerable algorithms and the certificates built around them as quickly as possible; the sketch below shows one way to start that inventory.
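One way to begin that assessment is to walk whatever certificate stores you have and report anything built on a weak hash or a short key. This sketch, again using the Python cryptography package, checks PEM files in a directory; the directory path is a hypothetical stand-in for wherever your environment actually keeps its certificates:

    # Sketch: inventory a directory of PEM certificates and report
    # weak signature hashes and short RSA keys. Path is hypothetical.
    import os
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    def inventory(cert_dir):
        for name in sorted(os.listdir(cert_dir)):
            if not name.endswith(".pem"):
                continue
            with open(os.path.join(cert_dir, name), "rb") as f:
                cert = x509.load_pem_x509_certificate(f.read())
            findings = []
            alg = cert.signature_hash_algorithm
            if alg is not None and alg.name in ("md5", "sha1"):
                findings.append("signed with " + alg.name)
            key = cert.public_key()
            if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
                findings.append("RSA key of only %d bits" % key.key_size)
            if findings:
                print("%s: %s" % (name, "; ".join(findings)))

    inventory("/etc/pki/tls/certs")   # hypothetical certificate store

It will not catch everything - certificates live in application stores, devices and HSMs too - but a list like this is a concrete starting point for the replacement plan.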

And for those of you that think Flame is not an issue - take a look at some of my recent posts and as you read them think about Sykipot and how that has evolved over the last year. Workable exploits do not die - they evolve as our defenses do.


Tuesday, July 3, 2012

Do you want to be scared?

In 25-plus years of working in the data communications industry, the majority of that in the cybersecurity/data security realm, I have diligently stayed away from fear-mongering. My basic approach has been that there are plenty of business reasons to take cybersecurity seriously: maintaining control of your investment, staying ahead of your competition, performing tasks more efficiently, and reducing the costs of shipping, paper and manpower, among many other benefits. There has never been a reason, from my perspective, to dangle the scythe of death over anyone's head.

Now, there have been times when I have looked at cybersecurity from the scythe-of-death perspective. A number of years ago I worked with the US Government on Critical Infrastructure Protection, and there you need to look at cybersecurity from that perspective, because if you get it wrong very bad things can happen. So over the last few weeks, with all of the discussion of Flame and its relation to Stuxnet and other attacks, I started to look at what this means from the bigger cybersecurity perspective.


Lots of people hear about these malware variants, and when you talk to them their first response is "That won't affect me - it was targeted at Iran" or other Middle Eastern countries. The latter part of that statement is certainly true ... but ... let me pull out the scythe here. What Flame and Stuxnet ended up doing was writing a new chapter in the cyber-attacker's handbook. Certainly the creators did not intend this, but through some carelessness the Pandora's box of cyber-warfare has been opened, and in it is a very powerful toolkit. Note I did not say weapon. What Flame and Stuxnet have provided is an approach to attacks that is unique and inherently difficult to recognize. Certainly there are tools out there today that can recognize known malware, but what Flame and Stuxnet introduce is a way to use the inherent trust of the Internet architecture to introduce malware into your environment in a way that may not be discovered through those normal processes and checks.

Think of it this way - today people get upset when they discover that you can go on the web and find "how to make a pipe bomb" instructions. What if you could find the same instructions for a nuclear weapon that was undetectable? A weapon that could be delivered to any city without anyone knowing, because you inherently trusted the way it was being delivered and there were no tests to check against it? A scary proposition - but potentially not as catastrophic as what can be done with a Stuxnet-like attack using the Flame approach to delivery. Think of what could happen if operators could not control the power grid, the chemical composition of the water supply, the natural gas pipelines, or the mechanisms used to transfer funds between banks and brokerage houses. Now imagine that all of those things went wrong on the same day. That is cyber warfare, and that is the handbook that can be written with the toolkits that are out there today.

But that is the worst-case scenario, and things can be done to mitigate the risk. The National Institute of Standards and Technology (NIST) publishes documents that describe the appropriate ways to protect data, including what algorithms to use and what policies to have in place - things like using appropriate cryptography and avoiding algorithms with known weaknesses. Flame took advantage of the fact that not everyone follows these guidelines: in that attack, the attackers were able to spoof a Microsoft certificate that used the MD5 hashing algorithm.
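The fix for that particular weakness is about as cheap as security advice gets: modern hash functions are a drop-in replacement for MD5 in most APIs. A minimal illustration using Python's standard hashlib - the payload string here is just a hypothetical stand-in:

    import hashlib

    message = b"firmware-image-v1.0"   # hypothetical stand-in payload

    # MD5: collisions have been demonstrated in practice - do not sign with it.
    print("md5:   ", hashlib.md5(message).hexdigest())

    # SHA-256: a current NIST-approved choice for new signatures.
    print("sha256:", hashlib.sha256(message).hexdigest())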

So if you want to mitigate some risk, look at what NIST has published (start at csrc.nist.gov) to see if there are things you can do better. And remember what one of my colleagues said after Flame was better studied ...

"Friends don't let friends sign with MD5" 
                                     ... Tim Sawyer

If you want an interesting look at a piece CBS's 60 Minutes did on this topic, check out this video.