Being the father of three kids, I frequently heard the refrain ... "It wasn't me". Quite often that was an accurate statement, but every now and then ..... Whether it was Thing 1 or Thing 2 (or quite often Thing 3), it really did not matter - it happened and someone or something was injured in the process. The injury may have even been a side effect - the ball tossed, the catch missed, the glass knocked over or the eye hit. Not intentional, but it happened.
In today's world of malware and cyber-warfare, attacks and spying, denial-of-service and data ransoming, it is also true that you may not be the one being attacked but you may very well end up being a victim. This was the case with Chevron, which recently found Stuxnet in its network. Their investigation has not indicated any damage done, but the fact that the malware was found in their network highlights the importance of being aware not just of what is in your network but also of what is going on around you.
We have already talked about the concerns with malware that has been repurposed. What we are talking about in the Chevron case is malware gone wild. In either case, corporate IT personnel need to know, and to some extent understand, what is being successfully used as attack vectors so that they can implement processes to properly mitigate the risk. You may not be a target, but you may end up being a victim.
Of course, being a victim is more than just ending up with malware on your system. If you are using a service provider for any services, you may end up being a victim if their infrastructure falls victim to an attack, whether direct or indirect.
In any of these cases it all goes back to planning. Have appropriate business continuity plans - not just plans to ensure services are properly configured and tested, but plans that allow for restoration and, if needed, relocation of services and data. Test these plans at least annually. Have service level agreements in place that encourage safe continuity-of-operations practices with your service providers. Ensure you have tools that monitor your infrastructure, both to surface any gaps that need to be addressed and to ensure that changes to one element of the infrastructure do not impact other elements. A common example is when an IT organization updates an SSL certificate but the business application owners are not aware of it: the application stops working and the owners spend countless hours trying to determine the root cause. These types of situations highlight the need for plans to be broad enough to cover not just the infrastructure but the actual important elements of continuity of operations. Your tools should reflect this as well.
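As one concrete example of that kind of monitoring, here is a minimal sketch (the hostnames are hypothetical and a real monitor would do far more) that checks how close the TLS certificates on a set of endpoints are to expiry, so a certificate renewal never blindsides the application owners:

```python
# Minimal sketch: warn when TLS certificates on a list of endpoints are close
# to expiry. Hostnames below are hypothetical placeholders.
import socket
import ssl
from datetime import datetime, timedelta

ENDPOINTS = [("app.example.internal", 443), ("api.example.internal", 443)]  # assumed hosts
WARN_WINDOW = timedelta(days=30)

def cert_expiry(host, port):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string such as 'Jun  1 12:00:00 2025 GMT'
    return datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))

for host, port in ENDPOINTS:
    expires = cert_expiry(host, port)
    if expires - datetime.utcnow() < WARN_WINDOW:
        print(f"WARNING: {host}:{port} certificate expires {expires:%Y-%m-%d}")
    else:
        print(f"OK: {host}:{port} certificate valid until {expires:%Y-%m-%d}")
```

Run on a schedule and wired into whatever alerting you already have, even something this simple removes one very common "the application just stopped working" scenario.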
This type of planning will allow you to mitigate risks that are increasing each and every day and to prevent, or at least minimize, downtime.
Wednesday, October 31, 2012
Sandy Takes its Toll ... But also on our Confidence?
As you can tell from my last few posts, I have had a renewed interest in the critical infrastructure area and in particular in how proper planning is a significant element of being prepared. Super-storm Sandy brought home some of those ideas. Many of us on the east coast had not faced such a major storm before. Here in the DC area we were lucky to take only a glancing blow; our friends further up the coast were much less lucky.
Certainly we can never be fully prepared for something as rare as Sandy, but there are lessons we can pull from the last couple of days, and I am sure there will be many more we can pull from the next few weeks. I did read an interesting article in the NY Times this morning related to planning, and a few items in particular stuck in my head.
I have talked about critical infrastructure as a "system of systems", tightly interwoven in some cases and in others loosely connected. One sentence from the article relates to this:
"As more of life moves online, damage to critical Internet systems affect more of the economy, and disasters like Hurricane Sandy reveal vulnerabilities from the sometimes ad hoc organization of computer networks."
Much like the interconnected systems of gas, electricity, transportation, finance, telecommunications and others, the Internet arose from the interconnection of very different systems built for very different reasons. As Internet services grew, so did the companies that provide them, which in turn led to geographic dispersion of capabilities and further interconnectedness through telecom and power systems. This growth naturally means greater opportunity for interruption, simply because the target space is greater. In theory it also means greater opportunity for high availability and reliability, but that only works when the specific service is built with that in mind. The moral here is that you need to ensure that the services you pick at least meet the reliability needs of the services you offer.
Another item that jumped out at me was raised in relation to the power situation.
"Power is the primary worry, since an abrupt network shutdown can destroy data, but problems can also stem from something as simple as not keeping a crisis plan updated."
So when should a crisis plan be updated? Certainly it should be looked at annually to ensure that the plan itself is in line with business needs, but awareness of the environment you are operating in should also cause you to consider whether the situation will have an impact on the business. Is a hurricane, or some other naturally occurring but foreseeable event, bearing down on facilities that you rely on, whether they are your own or those of service providers? Has the geopolitical climate changed such that the threat of cyber or physical terrorism against a facility has become a more significant risk? These are just some examples of situations that should have you pulling out your crisis plan to check whether it needs to be updated or altered.
Finally there was one element in this article that demonstrates the need for planning.
"Another downtown building ... had one generator in the basement, which was damaged by water. There is another generator, but it is on a higher floor. ... “We’ve got a truck full of diesel pulled up to the building, and now we’re trying to figure out how to get fuel up to the 19th floor.”"
It was great that they had planned for two generators, but a 19th-floor backup without a plan for getting the fuel to where it needs to be? When thinking about your plan, do not overlook the little things. It is great to have redundancy, but if the redundancy is reliant on other systems, make sure you are aware of that and have plans to address any potential gaps.
All of these ideas were raised by a very rare and dramatic event, but the underlying principles are the same whether the infrastructure is physical or cyber:
- Understand the business needs for operations in regular and emergency circumstances
- Understand the assets that you are reliant on and classify them into ones you have control of and those that are outsourced
- Create a Crisis Plan and test it to ensure it meets the business needs and is executable
- Review the plan on a regular basis, and when significant events occur be sure to consider their impact on the plan
Know what you have, know what you need, monitor to ensure steady state and be prepared for events that disrupt the steady state.
Thursday, October 25, 2012
Will Flame Scorch US Utilities?
Over the past couple of months I have spent a good deal of my time speaking with utilities and with companies that work with utilities, and attending conferences on the utility industry. This has all been done in conjunction with the work I have been doing in cyber-security over the last 20+ years. It has been an interesting couple of months: it has been a re-introduction to the whole idea of Critical Infrastructure Protection (CIP), one of the areas I was focused on a decade ago, and it has also allowed me to link some of the interesting aspects of what has happened in the last two years, with regard to cyber-attacks, back to CIP.
There has been lots of conjecture about attacks against the US utility infrastructure, and in fact ample evidence that there have been breaches at varying levels and with varying effects. I am not going to go down the path of highlighting these, as the web searches that will help you find them are easy enough to do. Yes, some of them are real, and based on some recent conversations, some of the ones that were "not cyber-attacks" were very likely exactly that. The bottom line is that the utility infrastructure is vulnerable and we need to do a better job of detecting and reacting to these vulnerabilities.
Now, all that being said, there is another side to this puzzle. Everyone has heard about Stuxnet and Flame; you can read past posts to get a refresher. I have even discussed what I feel is the most worrisome element of these, which is re-use. We have already seen some of that within the payloads of these systems themselves. We are seeing more of it in other payloads being used for similar purposes, including a "mini-Flame" that has been identified in the Middle East. The worrisome element here is not that the people who created these are re-using elements but that others are re-using them as well. Elements of Stuxnet have been found in recent financially targeted malware. Elements of Flame were seen in the attack against Aramco, the most valuable company in the world, which also suffered the broadest attack to date.
The Aramco attack should be the red flag for many, or at least I hope so. What Aramco showed us is a couple of things:
- The insider threat is real. The recent Verizon 2012 DBIR highlights the risk to intellectual property from insiders, along with the rise of hacktivism, which seems to have been another element of the attack
- Malware does not die, nor do its delivery mechanisms. Both of these elements continue to live for a long time - they just evolve.
- If your business is supporting cyber warfare, then make sure you, and your allies, are aware of how code can be re-used, so that you are not bitten back by your own weapons.
So how does all of this tie into US utilities? Well, Aramco did show us another thing - that there are those who are unfriendly to the US and its allies and who have capabilities that can deliver harm. They may need help to do it, but leveraging code re-use and the hacktivism that exists everywhere today creates a risk for all utilities, and for other large sectors of the critical infrastructure, that we need to pay attention to so we can mitigate it. The utility sector creates some additional concern because the past idea of utility security has been to build an "impenetrable" wall around the systems, since the systems themselves were designed before the threats of 21st-century cyber-capabilities were known. The issue they face today is that once someone gets through the door, into that secure environment, the damage can be swift and extensive, as evidenced at Aramco. Ensuring that organizations mitigate the risk by understanding their environment, the resources they must manage, and how their systems securely interact with others, inside and outside their domain, is critical to protecting the overall infrastructure.
Thursday, September 20, 2012
Attack Elements Showing Up Elsewhere
So it was a few weeks back that I last posted on the rash of newly discovered attacks, their methods and payloads. One of the cautions I tried to raise over the summer is that even though many people said this was a specific attack, targeted at specific environments, and that major vendors like Microsoft had reacted to shut down the certificate-based threat, there was still a risk.
The risk I brought up was that cyber criminals would take the basis of these attacks as a "cookbook" that would allow them to launch similar types of attacks on a whole new set of users. Today I came across an article published by MIT that confirmed my concern. The article highlighted that cyber criminals are using code from Stuxnet in attacks today and that the modular design of Flame makes it an even more attractive source for re-use.
So while we may think these much-discussed pieces of malware and attack mechanisms are no longer a threat, we need to be diligent in following the research and understand what is being done with the code and how it is being reused.
- Posted using BlogPress from my iPad
Friday, August 10, 2012
For those that thought it was over ....
June and July gave people lots of opportunity to talk about Flame, and I will bet all of you are tired of hearing about it - rightfully so, I would say. The reality is that Flame is not likely to affect you. I know a few people who will hate that line, but it is the truth.
The TRUE reality is that attack vectors and malware elements are not used once and then discarded - and that is why we have a problem in the world of cybersecurity. People see the headlines about Sykipot and Flame, then see mitigation mechanisms days later, and they feel that is the end of the story - it truly is not.
Sykipot had a number of variants that have done damage in the wild, and they have been seen over many, many months. Some would say that the similarities between Stuxnet, Duqu and Flame are indicative of malware reuse with some additions to the attack vectors.
Now we have another variant that leverages elements of Flame and attacks the financial sector and could also contain elements to attack other critical infrastructure elements. Read about Gauss here.
The Flame may be out, according to the pundits, but the embers are still causing havoc, and you need to be aware that the attack vector used is a dangerous one and that you need to understand your infrastructure to protect against attack. The malware side of these attacks will eventually be signatured, but until then you need to stop allowing strangers into your networks. In previous posts I have given some basic guidance on what you need to do, but it truly does start with understanding your infrastructure: managing the trust domains you use through the Root Certificate Authorities you trust; ensuring you have a strong policy for user authentication; and, when using certificates as part of that, having a good policy for key length, algorithms used and lifetimes, and then managing them properly.
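To make that certificate-policy point a bit more concrete, here is a rough sketch using the pyca/cryptography library that flags a certificate whose key length, signature hash or lifetime falls outside an assumed policy. The thresholds shown are illustrative assumptions, not a recommendation; use whatever your own policy dictates.

```python
# Rough sketch: audit one certificate against an assumed policy for key length,
# signature hash and lifetime. Thresholds are illustrative only.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

MIN_RSA_BITS = 2048          # assumed policy floor
WEAK_HASHES = {"md5", "sha1"}
MAX_LIFETIME_DAYS = 3 * 365  # assumed maximum lifetime for an end-entity cert

def audit_certificate(pem_bytes):
    cert = x509.load_pem_x509_certificate(pem_bytes)
    findings = []

    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey) and key.key_size < MIN_RSA_BITS:
        findings.append(f"RSA key too small: {key.key_size} bits")
    if isinstance(key, ec.EllipticCurvePublicKey) and key.curve.key_size < 224:
        findings.append(f"EC curve too small: {key.curve.key_size} bits")

    hash_alg = cert.signature_hash_algorithm
    if hash_alg is not None and hash_alg.name in WEAK_HASHES:
        findings.append(f"Weak signature hash: {hash_alg.name}")

    lifetime = (cert.not_valid_after - cert.not_valid_before).days
    if lifetime > MAX_LIFETIME_DAYS:
        findings.append(f"Lifetime too long: {lifetime} days")

    return findings

# Example usage:
# with open("server.pem", "rb") as f:
#     for finding in audit_certificate(f.read()):
#         print(finding)
```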
Those embers will burn as long as there is money to be made attacking other people so you need to protect yourself from getting burnt.
- Posted using BlogPress from my iPad
Friday, July 20, 2012
Is Power Grid security being given up for convenience?
I live in an interesting area just outside of Washington DC. We have the suburbs that are old and established with the big beautiful oaks, and then we have the growing suburbs that are sprouting out of old farm fields. The last few weeks have seen a rash of storms that have delivered devastating blows to the power supply at people's homes. Those in the newer suburbs, with buried cable, have been less affected than those in the beautiful old neighborhoods where wire is still strung amongst those beautiful old oaks that tend to fall and take out the overhead infrastructure. Of course these storms come at the worst times: summer thunderstorms, as in this case, hit during the hottest part of summer, so no power means no air conditioning in the DC heat; at the other end of the spectrum, a nor'easter in February takes out power when it is well below freezing. Welcome to living in the DC area.
Of course all of this has people talking about the power companies in terms of reliability and response. Things like "How can a company not be prepared for this type of situation - people without power for a week" have been heard frequently over the last two weeks. Well I am not one to beat up a power company for nature unleashing its fury. Nature is unpredictable and when a storm does happen, as in this case, it can be a very large undertaking to get things coordinated to remove trees and then restring wires etc.
Where I do have an issue is with the things that can be planned for over a long period of time. We have all seen articles on the hacking of the power grid in various magazines over the last few years - in c|net, in Scientific American - and you can even go to YouTube to see a video on how to do it. Congress, the National Security Agency and others have highlighted the fact that we have this vulnerability. The National Institute of Standards and Technology (NIST) has been working with industry to develop a stronger set of security standards for the SmartGrid to try to build a better grid.
BUT .....
We still have people in the industry who appear to think that the problem is not that bad. The North American Energy Standards Board (NAESB) authorizes two organizations to issue certificates for the Grid today - Open Access Technology International (OATI) and GlobalSign (yes, the same folks who had their website hacked earlier this year). Both OATI and GlobalSign feel it is OK to have long-life certificates within the infrastructure protecting the power grid. In fact, both have stated that 30-year certificate lifetimes are OK from a security perspective.
I find that amazing, as the criticality of this infrastructure and its impact on Defense, Homeland Security and the economy is well recognized. This is an infrastructure you want to protect. Part of the argument is the difficulty of updating, but then the OATI webCares CPS indicates an 8-year lifetime for Root certificates. GlobalSign does allow 30- and 40-year Root certificates in its Certificate Policy and goes as far as 5 years for end devices. They also allow SHA-1-hashed certificates with a 2048-bit RSA key. There does seem to be some contradiction in the GlobalSign CP in that it indicates NIST guidance is followed but is not all that specific on which guidance. Certainly today NIST does not recommend the use of SHA-1 for any certificate, and long-life certificates for Root CAs or any issuing CA are also not recommended, due to the rapidly evolving attack vectors.
So what we are left with is two companies that seem to think they can mitigate the risk of technology obsolescence. If we look at history we learn some very hard lessons. MD5 went from a suspected problem (1997) to a demonstrated attack in 8 years. Within 7 years of that first demonstrated attack (2005) there was a usable attack vector that allowed an attacker to introduce malware without the victim knowing - and apparently not knowing for a couple of years. So yes, one can replace certificates if someone sees an attack against the CA or the technology being used, but will that be too late? Will the logic bombs already be in place? If they are, can we find them in time? If we do not, what will happen? And what is being attacked here - industrial control systems - has been targeted very recently through existing vulnerabilities.
The risks are high here, so rather than playing with convenience, should NAESB not make it simple for all involved and strengthen these standards to reduce the risk? I would hope that if I asked the folks who went without air conditioning for a week in 100-degree heat whether they would risk losing power again, and maybe for much longer, they would react strongly. I wonder how people in hospitals and on Wall Street would see things.
Friday, July 13, 2012
Flame is STILL burning
In this case Flame continues to burn Microsoft. Microsoft has announced the termination of trust of 28 additional certificates within its infrastructure, in addition to the 3 that were immediately untrusted when Flame was first brought to light. This new announcement is significant as it highlights the importance of certificates within an enterprise as large as Microsoft, but it also highlights the interconnectedness of systems. Microsoft's announcement was based on its belief that the newly untrusted CAs are "… outside our recommended secure storage practices". What exactly that means is certainly up for discussion, but Microsoft itself states that this effort is being undertaken after its continued review of what happened with Flame. This likely means that these certificates were protected only as well as those known to be compromised, and based on the form of attack there is some level of certainty that these could be exploited as well.
This interconnectivity of systems is becoming a key element of security management. When I speak of interconnectivity I do not just mean network connectivity; I also mean trust connectivity. With today's growing base of interconnected devices - whether that is the traditional server/desktop/laptop, the simple extension to mobile devices such as smartphones and tablets, or, taking it to the next level, control systems such as those interconnected through networks like the SmartGrid or ones run by companies such as Tridium - we need to consider what happens when a security gap exists in one element of that infrastructure.
This, of course, is also not the end of the conversation. Even if we can find all those certificates and systems that Microsoft "untrusted" on all the platforms that we own and manage, this is one example of how the use of a vulnerable algorithm can create a significant and broadly impacting issue. We knew MD5, the hashing algorithm that was exploited in the original attack, was vulnerable in 2005. In fact, some would rightfully argue that we knew about it almost a decade before that, when flaws in the algorithm were first identified. At the time these flaws were not considered catastrophic, but between 2005 and 2008 attacks against MD5 were demonstrated multiple times. It was so well understood at the time that Microsoft published a recommendation not to use MD5. In 2009, though, Microsoft issued a new CA certificate using MD5 - a mistake on their part, but one that got through and created a significant problem.
MD5 is not the only vulnerable algorithm. Other elements of the system are also vulnerable, thankfully just not exploited yet. These include one of the most commonly used hashing algorithms, SHA-1, which has been theoretically shown to have a vulnerability (recognized by NIST in early 2005), although no practical attacks have been demonstrated. The National Institute of Standards and Technology (NIST), which provides guidance for the US Government on computer security requirements, published guidance in 2004 that SHA-1 be phased out of use in the US Government beginning in 2010, with the phase-out intended to be completed by the end of 2012.
Hashing algorithms are not the only weakness. Based on the strength and availability of computing systems, 80-bit cryptography is considered vulnerable. Of course, everyone will say that they do not use 80-bit keys, but what NIST is saying is that algorithms with certain key sizes only provide an effective security strength of 80 bits. This includes RSA-1024, DSA-1024 (with specific characteristics) and Elliptic Curve where keys are less than 224 bits. These specific sizes are being phased out of use within the US Government as they are considered vulnerable for their common purposes, and they will be completely phased out by 2013.
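As a rough illustration of those equivalences, here is a small sketch that maps an algorithm and key size to an approximate effective strength and flags anything below the 112-bit level the government is moving to. The breakpoints are simplified from the NIST comparable-strength tables, so treat the numbers as approximations rather than a substitute for the actual guidance:

```python
# Simplified sketch of NIST-style comparable-strength estimates; the table is
# abbreviated and illustrative, not a substitute for the published guidance.
def effective_strength(algorithm, key_bits):
    if algorithm in ("RSA", "DSA", "DH"):
        # Approximate comparable-strength breakpoints for integer-factoring /
        # finite-field algorithms
        if key_bits < 1024:
            return 0
        if key_bits < 2048:
            return 80
        if key_bits < 3072:
            return 112
        return 128
    if algorithm == "EC":
        return key_bits // 2   # e.g. a 256-bit curve gives roughly 128-bit strength
    raise ValueError(f"unknown algorithm: {algorithm}")

for alg, bits in [("RSA", 1024), ("DSA", 1024), ("EC", 192), ("RSA", 2048)]:
    strength = effective_strength(alg, bits)
    status = "PHASE OUT" if strength < 112 else "ok"
    print(f"{alg}-{bits}: ~{strength}-bit strength -> {status}")
```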
So yes Flame did turn up the heat on Microsoft but it also raises the overall issue of technical obsolescence of cryptographic algorithms and key sizes. This is not the first time this has happened but the major difference is that today the problem is bigger due to the interconnectedness of the systems. We now need to consider how to mitigate the threat posed here and this is where we can learn something from Flame. Flame was created as a data gatherer – the old adage of "know your enemy". We need to do the same thing and in this case the enemy is the use of vulnerable cryptographic algorithms and key sizes. Assess your environment to determine what is used and why and plan to replace those algorithms and those certificates built around vulnerable algorithms as quickly as possible.
And for those of you that think Flame is not an issue - take a look at some of my recent posts and as you read them think about Sykipot and how that has evolved over the last year. Workable exploits do not die - they evolve as our defenses do.
- Posted using BlogPress from my iPad