Monday, December 17, 2012

"But I wasn't a target!"

Being the father of three kids, I frequently heard the refrain ... "It wasn't me". Quite often that was an accurate statement, but every now and then ..... Whether it was Thing 1 or Thing 2 (or quite often Thing 3), it really did not matter - it happened and someone or something was injured in the process. The injury may have even been a side effect - the ball tossed, the catch missed, the glass knocked over or the eye hit. Not intentional, but it happened.

In today's world of malware and cyber-warfare, attacks and spying, denial-of-service and data ransoming, it is also true that you may not be the one being attacked but you may very well end up being a victim. This was the case for Chevron, which recently found Stuxnet in its network. Its investigation has not indicated any damage done, but the fact that Stuxnet was found in the network highlights the importance of being aware of not just what is in your network but also what is going on around you.

We have already talked about the concerns with malware that has been repurposed. What we are talking about in the Chevron case is malware gone wild. In either case, corporate IT personnel need to be aware of, and to some extent understand, what is being successfully used as attack vectors so that they can implement processes to properly mitigate the risk. You may not be a target but you may end up being a victim.

Of course being a victim is more than just ending up with malware on your system. If you are using a service provider for any services you may end up being a victim if their infrastructure falls victim to an attack, either direct or indirect.

In any of these cases it all goes back to planning. Have appropriate business continuity plans - not just plans to ensure services are properly configured and tested, but plans that allow for restoration and, if needed, relocation of services and data. Test these plans at least annually. Have service level agreements in place that encourage safe continuity-of-operations practices with your service providers. Ensure you have tools that monitor your infrastructure so you are aware of any potential gaps that need to be addressed, and so that changes to one element of the infrastructure do not impact other elements. A common example is when an IT organization or application team updates an SSL certificate but the business application owners are not aware of it. The application stops working and the application owners spend countless hours trying to determine the root cause. These types of situations highlight the need for plans to be broad enough to cover not just the infrastructure but the actual important elements of continuity of operations. Your tools should reflect this as well.
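As a concrete illustration of that last example, here is a minimal monitoring sketch in Python (standard library only). It is an assumption-laden illustration, not a product: the hostname, port and warning threshold are placeholders, and in practice the output would feed whatever alerting your tooling already has.

    # Minimal sketch: watch a server's TLS certificate for approaching expiry and
    # for unannounced changes so application owners are not caught by surprise.
    # HOST, PORT and WARN_DAYS are placeholders for illustration only.
    import hashlib
    import socket
    import ssl
    from datetime import datetime, timezone

    HOST, PORT = "app.example.com", 443   # hypothetical service endpoint
    WARN_DAYS = 30                        # arbitrary warning threshold

    def fetch_cert_info(host, port):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)   # raw certificate bytes
                parsed = tls.getpeercert()                # parsed fields
        fingerprint = hashlib.sha256(der).hexdigest()
        expires = datetime.fromtimestamp(
            ssl.cert_time_to_seconds(parsed["notAfter"]), tz=timezone.utc)
        return fingerprint, expires

    fingerprint, expires = fetch_cert_info(HOST, PORT)
    days_left = (expires - datetime.now(timezone.utc)).days
    print(f"{HOST}: sha256 fingerprint {fingerprint[:16]}..., expires in {days_left} days")
    if days_left < WARN_DAYS:
        print("WARNING: certificate close to expiry - notify the application owners")
    # Persist the fingerprint between runs; if it changes unexpectedly, someone
    # rotated the certificate without telling the application owners.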

This type of planning will allow you to mitigate the risks that are increasing each and every day and will allow you to prevent or at least minimize downtimes.

Wednesday, October 31, 2012

Sandy Takes its Toll ... But also on our Confidence?

As you can tell from my last few posts I have had a renewed interest in the critical infrastructure area and in particular how proper planning is a significant element of being prepared. Super-storm Sandy brought home some of those ideas. Many of us on the east coast had not faced such a major storm. Here in the DC area we were lucky to take only a glancing blow. Our friends further up the coast were much less lucky.

Certainly we can never be fully prepared for something as rare as Sandy, but there are lessons we can pull from the last couple of days, and I am sure there will be many more we can pull from the next few weeks. I did read an interesting article from the NY Times this morning related to thoughts on planning, and a few items in particular stuck in my head.

I have talked about critical infrastructure as a "system of systems", tightly interwoven in some cases and in others loosely connected. One sentence from the article relates to this:

"As more of life moves online, damage to critical Internet systems affect more of the economy, and disasters like Hurricane Sandy reveal vulnerabilities from the sometimes ad hoc organization of computer networks.

Much like the interconnected systems of gas, electrical, transportation, finance, telecommunications and others, the Internet arose from the interconnection of very different systems which were built for very different reasons. As Internet services grew, so did the companies that provide them, and this in turn led to geographic dispersion of capabilities and further interconnectedness through telecom systems and power systems. This growth naturally means greater opportunity for interruption because the target space is greater. In theory it also means greater opportunity for high availability and reliability, but that only works when the specific service is built with that in mind. The moral here is to ensure that the services you pick at least meet the reliability needs of the service that you offer.

Another item that jumped out at me was raised in relation to the power situation. 

"Power is the primary worry, since an abrupt network shutdown can destroy data, but problems can also stem from something as simple as not keeping a crisis plan updated.

So when should a crisis plan be updated? Certainly it is something that should be looked at annually to ensure that the plan itself is in line with business needs, but awareness of the environment you are operating in should also cause one to consider whether the situational environment will have an impact on business. Is a hurricane, or some other naturally occurring but foreseeable event, bearing down on facilities that you rely on, whether they are your own or those of service providers? Has the geopolitical climate changed such that the threat of cyber- or physical terrorism against a facility has become a more significant risk? These are just some examples of situations that should have you pulling out your crisis plan to ensure that it does not need to be updated or altered.

Finally there was one element in this article that demonstrates the need for planning. 

"Another downtown building ... had one generator in the basement, which was damaged by water. There is another generator, but it is on a higher floor. ... “We’ve got a truck full of diesel pulled up to the building, and now we’re trying to figure out how to get fuel up to the 19th floor.”"

It was great that they had planned for two generators, but a 19th-floor backup without a plan for getting the fuel to where it needs to be? When thinking about your plan do not overlook the little things. It is great to have redundancy, but if the redundancy is reliant on other systems then make sure you are aware of that and have plans to address any potential gaps.

All of these ideas are ones raised by a very rare and dramatic event, but the underlying principles are the same whether it is physical infrastructure or cyber infrastructure:

  • Understand the business needs for operations in regular and emergency circumstances
  • Understand the assets that you are reliant on and classify them into ones you have control of and those that are outsourced
  • Create a Crisis Plan and test it to ensure it meets the business needs and is executable
  • Review the plan on a regular basis and when significant events occur ensure to consider the impact on the plan
Know what you have, know what you need, monitor to ensure steady state and be prepared for events that disrupt the steady state.

Thursday, October 25, 2012

Will Flame Scorch US Utilities?

Over the past couple of months I have spent a good deal of my time speaking to utilities, companies that work with utilities and attending conferences surrounding the utility industry. This has all been done in conjunction with the work that I have been doing in cyber-security over the last 20+ years. It has been an interesting couple of months as it has been a re-introduction into the whole idea of Critical Infrastructure Protection (CIP), which was one of the areas I was focused on a decade ago, but also has allowed me to link together some of the interesting aspects of what has been happening in the last two years, in regards to cyber-attacks, with CIP.

There has been lots of conjecture as to attacks against the US utility infrastructure, and in fact ample evidence that there have been breaches at varying levels and with varying effects. I am not going to go down the path of highlighting these as you can do the web searches that will help you find them. Yes, some of them are real, and based on some recent conversations, some of the ones that were "Not cyber-attacks" were very likely exactly that. The bottom line is that the utility infrastructure is vulnerable and we need to do a better job of detecting and reacting to these vulnerabilities.

Now all that being said, there is another side to this puzzle. Everyone has heard about Stuxnet and Flame - you can read past posts to get a refresher. I have even discussed what I feel is the most worrisome element of these, which is re-use. We have already seen some of that within the payloads of these systems themselves. We are seeing more of that in other payloads being used for similar purposes, including a "mini-" Flame that has been identified in the Middle East. The worrisome element here is not that the guys who created these are re-using elements but the fact that others are also re-using them. Elements of Stuxnet have been found in recent malware targeting the financial sector. Elements of Flame were seen in the attack against Aramco, the most valuable company in the world, which also suffered the broadest attack to date.

The Aramco attack should be the red flag for many, or at least I hope it is. What Aramco showed us is a couple of things:

  • The insider threat is real. The recent Verizon 2012 DBIR highlights the threat to IP from insiders, along with the rise of hacktivism, which seems to be another element of the attack.
  • Malware does not die, nor do its delivery mechanisms. Both of these elements continue to live for a long time - they just evolve.
  • If your business is supporting cyber warfare then make sure you, and your allies, are aware of the re-use capabilities of the code so that you are not bitten by it later.
So how does all of this tie into US utilities? Well, Aramco did show us another thing - that there are those that are unfriendly to the US and its allies and they have capabilities which can deliver harm. They may need help to do it, but leveraging the code re-use elements and the hacktivism that exists everywhere today creates a risk for all utilities and other large sectors of the Critical Infrastructure - a risk we need to pay attention to so we can mitigate it. The utility sector does create some additional concern, as the past idea of utility security has been to build an "impenetrable" wall around the systems, since the systems themselves were designed before the threats of 21st century cyber-capabilities were known. The issue they face today is that once someone gets through the door, into that secure environment, the damage can be swift and extensive, as evidenced at Aramco. Ensuring that organizations mitigate the risk by understanding their environment, the resources that they must manage and how their systems securely interact with others, inside and outside their domain, is critical to protecting the overall infrastructure.

Thursday, September 20, 2012

Attack Elements Showing Up Elsewhere

So it was a few weeks back that I last posted on the rash of newly discovered attacks, their methods and payloads. One of the cautions I had tried to raise over the summer is that even though many people said that this was a specific attack, targeted at specific environments, and that major vendors like Microsoft had reacted to shut down the certificate-based threat, there was still a risk.
The risk I brought up was that cyber criminals would take the basis of these attacks as a "cookbook" of sorts that would allow them to launch similar types of attacks on a whole new set of users. Today I came across an article published by MIT that confirmed my concern. The article highlighted that cyber criminals are using code from Stuxnet in attacks today and that the design of Flame makes it an even more attractive candidate for re-use because of its modular design.

So while we may think these much-discussed pieces of malware and attack mechanisms are no longer a threat, we need to be diligent in following the research and understand what is being done with the code and how it is being reused.

- Posted using BlogPress from my iPad

Friday, August 10, 2012

For those that thought it was over ....

June and July gave lots of opportunity for people to talk about Flame and I will bet all of you are tired of hearing about it - and I would say rightfully so. The reality is that Flame is not likely to affect you. I know a few people who will hate that line but it is the truth.

The TRUE reality is that the attack vectors and malware elements are not used once and then discarded - and that is why we have a problem in the world of cybersecurity. People see the headlines about Sykipot and Flame, then days later see mitigation mechanisms, and they feel that is the end of the story - it truly is not.

Sykipot had a number of variants that have done damage in the wild, and they have been seen over many, many months. Some would say that the similarities between Stuxnet, Duqu and Flame are indicative of malware reuse with some additions in the attack vectors.

Now we have another variant that leverages elements of Flame and attacks the financial sector and could also contain elements to attack other critical infrastructure elements. Read about Gauss here.

The Flame may be out, according to the pundits, but the embers are still causing havoc. You need to be aware that the attack vector used is a dangerous one, and you need to understand your infrastructure to protect against attack. The malware side of these attacks will eventually be signatured, but until then you need to stop allowing strangers in your networks. In previous posts I have given some basic guidance on what you need to do, but it truly does start with understanding your infrastructure: managing the trust domains you use through the Root Certificate Authorities you trust; ensuring you have a strong policy for user authentication; and, when using certificates as part of that, having a good policy for key length, algorithms used and lifetimes, and then managing them properly.

Those embers will burn as long as there is money to be made attacking other people so you need to protect yourself from getting burnt.


- Posted using BlogPress from my iPad

Friday, July 20, 2012

Is Power Grid security being given up for convenience?

I live in an interesting area just outside of Washington DC. We have the suburbs that are old and established with the big beautiful oaks, and then we have the growing suburbs that are sprouting out of old farm fields. The last few weeks have seen a rash of storms that have delivered devastating blows to the power supply at people's homes. Those in the newer suburbs have been less affected, thanks to buried cable, than those in the beautiful old neighborhoods where wire is still strung amongst those beautiful old oaks that tend to fall and take out the overhead infrastructure. Of course these storms come at the worst times: in this case summer thunderstorms during the hottest part of the year, so no power means no air conditioning in the DC heat. The other end of the spectrum is a nor'easter in February which takes out power when it is well below freezing. Welcome to living in the DC area.

Of course all of this has people talking about the power companies in terms of reliability and response.  Things like "How can a company not be prepared for this type of situation - people without power for a week" have been heard frequently over the last two weeks. Well I am not one to beat up a power company for nature unleashing its fury. Nature is unpredictable and when a storm does happen, as in this case, it can be a very large undertaking to get things coordinated to remove trees and then restring wires etc.

Where I do have an issue is when it comes to the things that they are doing which can be planned for over a long period of time. We have all seen recent articles on the hacking of the power grid in various magazines over the last few years - in c|net, Scientific American, and you can even go to YouTube to see a video on how to do it. Congress, the National Security Agency and others have highlighted the fact that we have this vulnerability. The National Institute of Standards and Technology (NIST) has been working with industry to develop a stronger set of security standards for the SmartGrid to try and build a better grid.

BUT .....

We still have people in the industry that appear to think that the problem is not that bad. The North American Energy Standards Board (NAESB) authorizes two organizations to issue certificates for the Grid today - Open Access Technology International (OATI) and GlobalSign (yes, the same folks who had their website hacked earlier this year). Both OATI and GlobalSign feel it is OK to have long-life certificates within the infrastructure protecting the power grid. In fact both have stated that 30-year certificate lifetimes are OK from a security perspective.

I myself find that amazing, as the criticality of this infrastructure and its impact on Defense, Homeland Security and the economy is well recognized. This is an infrastructure you want to protect. Part of the argument is the difficulty in updating, but then the OATI webCares CPS indicates an 8-year lifetime for Root certificates. GlobalSign does allow 30- and 40-year Root certificates in its Certificate Policy and goes as far as 5-year certificates for end devices. They also allow SHA1-hashed certificates with a 2048-bit RSA key. There does seem to be some contradiction in the GlobalSign CP in that it indicates NIST guidance is followed but is not all that specific on which guidance. Certainly today NIST does not recommend use of SHA1 for any certificate use, and long-life certificates for Root CAs or any issuing CA are also not recommended due to the rapidly evolving attack vectors.

So what we are left with is two companies that seem to think that they can mitigate the risk of technology obsolescence. If we look at history we learn some very hard lessons. MD5 went from a suspected problem (1997) to a demonstrated attack in 8 years. Within 7 years of that first demonstrated attack (2005) there was a usable attack vector that allowed an attacker to introduce malware without the victim knowing - and apparently not knowing for a couple of years. So yes, one can replace their certificates if someone sees an attack against the CA or the technology that was being used, but will that be too late? Will the logic bombs already be in place? If they are, can we find them in time? If we do not, what will happen? And what is being attacked - industrial control systems - has been targeted very recently due to existing vulnerabilities.

The risks are high here, so rather than trading security for convenience, should NAESB not make it simple for all involved and strengthen these standards to reduce the risk? I would hope that if I asked the folks that went without air conditioning for a week in 100 degree heat whether they would risk losing power again, and maybe for much longer, they would react strongly. I wonder how people in hospitals and on Wall Street would see things.

Friday, July 13, 2012

Flame is STILL burning

In this case Flame continues to burn Microsoft. Microsoft has announced the termination of trust of 28 additional certificates within their infrastructure, in addition to the 3 that were immediately untrusted when Flame was first brought to light. This new announcement is significant as it highlights the importance of certificates within an enterprise as large as Microsoft, but it also highlights the interconnectedness of systems. Microsoft's announcement was based on their belief that the newly untrusted CAs are "… outside our recommended secure storage practices". What exactly that means is certainly up for discussion, but Microsoft itself states that this effort is being undertaken after their continued review of what happened with Flame. This likely means that these certificates were protected only as well as those known to be compromised, and based on the form of attack there is some level of certainty that these could be exploited as well.

This interconnectivity of systems is becoming a key element of security management. When I speak of interconnectivity I do not just mean network connectivity but also trust connectivity. With today's growing base of interconnected devices - whether that is the traditional server/desktop/laptop, the simple extension to mobile devices such as smartphones/tablets, or whether we take it to the next level with control systems such as those interconnected through networks such as the SmartGrid or ones run by companies such as Tridium - we need to consider what happens when a security gap exists in one element of that infrastructure.

This, of course, is also not the end of the conversation. Even if we can find all those certificates and systems that Microsoft "untrusted" on all the platforms that we own and manage, this is one example of how the use of a vulnerable algorithm can create a significant and broadly impacting issue. We knew MD5, the hashing algorithm that was exploited in the original attack, was vulnerable in 2005. In fact some would rightfully argue that we knew about it almost a decade before that when flaws in the algorithm were first identified. At the time these flaws were not considered catastrophic but between 2005 and 2008 attacks against MD5 were demonstrated multiple times. It was so well understood at the time that Microsoft published a recommendation not to use MD5. In 2009 though Microsoft issued a new CA certificate using MD5. A mistake on their part but one that got through and created a significant problem.

MD5 is not the only vulnerable algorithm. Other elements of the system are also vulnerable, thankfully just not exploited yet. These other areas include one of the most commonly used hashing algorithms, SHA1, which has been theoretically shown to have a vulnerability (recognized by NIST in early 2005), although no practical attacks have been demonstrated. The National Institute of Standards and Technology (NIST), which provides guidance for the US Government on computer security requirements, published guidance in 2004 that SHA-1 be phased out of use in the US Government beginning in 2010. The phase-out is intended to be completed by the end of 2012.

Hashing algorithms are not the only weakness. Based on the strength and availability of computing systems, 80-bit cryptography is considered vulnerable. Of course everyone will say that they do not use 80-bit keys, but what NIST is saying is that algorithms with certain key sizes only provide an effective security strength of 80 bits. This includes RSA-1024, DSA-1024 (with specific characteristics) and Elliptic Curve where keys are less than 224 bits. The specific sizes of these three algorithms are again being phased out of use within the US Government as they are considered vulnerable for their common purpose. They will be completely phased out by 2013.
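To make the "effective strength" idea concrete, here is a rough rule-of-thumb lookup in Python approximating the comparable-strength table in NIST SP 800-57 Part 1. It is a simplified sketch for illustration only, not authoritative guidance - consult the actual NIST publications for policy decisions.

    # Approximate mapping of public-key algorithm and key size to effective
    # security strength in bits (simplified from NIST SP 800-57 Part 1).
    def effective_strength(algorithm: str, key_bits: int) -> int:
        if algorithm in ("RSA", "DSA", "DH"):   # factoring / finite-field based
            if key_bits < 1024:
                return 0                        # below even 80-bit strength
            if key_bits < 2048:
                return 80                       # e.g. RSA-1024, DSA-1024
            if key_bits < 3072:
                return 112
            return 128
        if algorithm == "EC":                   # elliptic curve
            if key_bits < 224:
                return 80                       # the "less than 224 bits" case above
            if key_bits < 256:
                return 112
            return 128
        raise ValueError(f"unknown algorithm {algorithm!r}")

    for alg, bits in [("RSA", 1024), ("DSA", 1024), ("EC", 192), ("RSA", 2048)]:
        print(f"{alg}-{bits} -> roughly {effective_strength(alg, bits)} bits of strength")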

So yes Flame did turn up the heat on Microsoft but it also raises the overall issue of technical obsolescence of cryptographic algorithms and key sizes. This is not the first time this has happened but the major difference is that today the problem is bigger due to the interconnectedness of the systems. We now need to consider how to mitigate the threat posed here and this is where we can learn something from Flame. Flame was created as a data gatherer – the old adage of "know your enemy". We need to do the same thing and in this case the enemy is the use of vulnerable cryptographic algorithms and key sizes. Assess your environment to determine what is used and why and plan to replace those algorithms and those certificates built around vulnerable algorithms as quickly as possible.

And for those of you that think Flame is not an issue - take a look at some of my recent posts and as you read them think about Sykipot and how that has evolved over the last year. Workable exploits do not die - they evolve as our defenses do.

- Posted using BlogPress from my iPad

Tuesday, July 3, 2012

Do you want to be scared?

In 25-plus years of working in the data comm industry, the majority of that in the cybersecurity/data security realm, I have diligently stayed away from fear-mongering. My basic approach was that there are plenty of business reasons to take cybersecurity seriously, whether it is maintaining control of your investment, staying ahead of your competition, performing tasks more efficiently, or reducing the costs of shipping, paper and manpower, among many other benefits. There never has been a reason, from my perspective, to dangle the scythe of death over anyone's head.

Now there have been times when I have looked at cybersecurity from the scythe-of-death perspective. A number of years ago I worked with the US Government on Critical Infrastructure Protection, and there you need to look at cybersecurity from that perspective because if you get it wrong very bad things can happen. So over the last few weeks, with all of the discussion on Flame and its relation to Stuxnet and other attacks, I started to look at what this means from the bigger cybersecurity perspective.


Lots of people hear about these malware variants and when you talk to them their first response is "That won't affect me - it was targeted at Iran" or other Middle Eastern countries. The latter part of that statement is certainly true ... but .... let me pull out the scythe here. What Flame and Stuxnet ended up doing was writing a new chapter in the cyber-attacker's handbook. Certainly the creators did not intend for this, but through some carelessness the Pandora's box of cyber-warfare has been opened and in it is a very powerful toolkit. Note I did not say weapon. What Flame and Stuxnet have provided is an approach to attacks that is unique and inherently difficult to recognize. Certainly there are tools out there today that can recognize malware that is known about, but what Flame and Stuxnet introduce is a way to use the inherent trust of the Internet architecture to introduce malware to your environment that may not be discovered through those normal processes and checks.

Think of it this way - today people get upset when they discover that you can go on the web and find the "How to make a pipe bomb" instructions. What if you could find the same instructions for a nuclear weapon that was undetectable? A weapon that could be delivered to any city without anyone knowing about it because you inherently trusted the way it was being delivered and there were no tests to check against it? A scary proposition - but potentially not as catastrophic as what can be done with a Stuxnet like attack using the Flame approach to delivery. Think of what could happen if operators could not control the power grid, the water supply chemical composition, the natural gas pipelines or potentially the mechanisms used to transfer funds between banks and brokerage houses. Now imagine that all of those things went wrong on the same day. That is cyber warfare and that is the handbook that can be written with the existing toolkits that are out there today.

But that is the worst-case scenario, and things can be done to mitigate the risk. The National Institute of Standards and Technology (NIST) publishes documents which describe the appropriate ways to protect data, including what algorithms to use and what policies to have in place. These include things like using appropriate crypto and avoiding algorithms with known weaknesses. Flame took advantage of the fact that not everyone is following these guidelines; for that attack the attackers were able to spoof a Microsoft certificate that used an MD5 hashing algorithm.

So if you want to mitigate some risk look at what NIST has published (start at csrc.nist.gov) to see if there are things you can do better. And remember what one of my colleagues said after Flame was better studied ...

"Friends don't let friends sign with MD5" 
                                     ... Tim Sawyer

If you want an interesting look at a piece CBS 60 Minutes did on this topic then check out this video.

Wednesday, June 20, 2012

Collaboration - Walk into any partnership with open eyes

The title of this blog may seem to have little to do with security at first glance but the thought came to me as I continue to follow the developments in the Flame arena.

As always the discussion here is based on information that we know. There is always a chance that what we know has been planted so always consider this when planning actions - and I guess that ties into the whole point of this post.

I was reading an item on Flame that seemed to confirm some of the original thinking: that the recently found incarnation is directly related to the development of Stuxnet as well as Duqu (and likely others). This should not be earth-shattering to anyone, as the pointers to that linkage are many, including shared code, common targets and, when looking at the bigger picture, the fact that Flame was a data gatherer while Stuxnet included action elements. This follows the "know your enemy before acting" mentality.

The interesting thing for both Stuxnet and Flame is that discovery came only after one of the collaborators in developing the platform decided to take action with the platform outside the original scope. Stuxnet was discovered when it went beyond its target platforms and into the Internet after a poorly implemented code change. It now appears Flame was discovered after it was directed to another environment. Both these actions were unilateral actions taken by one of the parties involved in the development.

These types of actions should not surprise anyone either. When you have a vehicle such as this that has been successful for a period of time the temptation to manipulate it for your benefit is significant. What we need to think about though is the impact it had on the overall program. Did these actions raise the risk of other similar platforms being discovered or other actions being taken to reduce risk? Does this now impact how successful the original program is going to be in halting or delaying the original intent? We need to remember the goal here is to halt or delay nuclear weapons development so the stakes are high.

But in the general business environment the same thing can happen. I am not suggesting you cannot have good partnerships between companies that are "frenemies", but you do need to make sure your eyes are open. Of course it is more than just redirecting the partnership - when companies collaborate you also need to make sure that the shared environment is protected to the highest common denominator of security. It is no longer just your data at risk - it is also your collaborator's data, and that loss could pose bigger problems in the long run.

It appears that the Stuxnet-Duqu-Flame attacks have brought to light more issues than just the security issues. Yes, those security issues are many, including very important ones like managing your trust environment and evaluating what certificates and algorithms are in use and/or trusted in your environment, but we now also need to consider that this effort really became known only because of mistakes made by one partner in the trust relationship. That may be the bigger lesson - security is not just about what you do but also what those that you deal with do as well.

That is something to spend some time thinking about.


- Posted using BlogPress from my iPad

Friday, June 8, 2012

Flame Extinguished?

I will give Microsoft credit for reacting quickly to Flame and its use of faked Microsoft CA certificates. Microsoft quickly came out and moved the two MD5 certificates that were faked, and the SHA1 certificate that was abused, to its untrusted store. The fact that they did this through Microsoft Update, which was one of the transaction sets that was hijacked by these certs, is kind of funny, but that is an aside.

So did Microsoft solve the problem? The simple answer is NO!

Microsoft solved the problem that was created by their certificates - the problem of generating a CA certificate whose signature was applied using an algorithm that was known to have weaknesses and had been shown to be attackable in a very similar way the year BEFORE the certificate was generated. In putting their certs in the untrusted store they closed this door - but this is not the only door that exists.

What Flame did was highlight that this attack is not only feasible but something that is executable. Did this take a high degree of knowledge and expertise - of course. The MD5 attack was a bit different than that demonstrated in the past, but this was something that was going to happen. The original demonstration of an MD5 collision attack was done on a cluster of high-end IBM UNIX servers. A few short years later the attack was performed on a network of 200 Playstation 3s. Now this says a lot about technological advancement in processing power, for sure, but it also says a lot about the threat and how rapidly it grows. In 2005, when this was an active topic, Ron Rivest (the R of RSA) said "md5 and sha1 are both clearly broken", speaking in terms of collision resistance.


So what do we take away from this? Organizations, including groups like the CA Browser Forum, need to get very diligent about what CA certificates are in browsers, applications, and hardware devices. These need to be assessed for requirement, strength and validity periods, and a clear strategy needs to be put in place to understand what is there and how to replace what should not be there. If you look in your out-of-the-box browser today you will not only find MD5-based signatures but also MD2-based signatures. Yes, these have been around for some time, but if they have been replaced with a new infrastructure, why are we keeping the old ones around? We must also start to question the lifetime of the SHA1-based signatures. There are some CA certificates out there that are SHA1 with very long lifetimes - and some with weak keys (RSA-1024, for example).
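To make that assessment less abstract, here is a minimal audit sketch in Python. It assumes a recent version of the third-party cryptography package and a PEM export of your trust store (the file name and the 20-year lifetime ceiling are placeholders); a real review would also cover MD2 specifically, DSA and EC keys, and feed a proper report.

    # Walk a PEM bundle of trusted CA certificates and flag weak signature
    # hashes, small RSA keys and very long validity periods.
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import rsa

    BUNDLE = "trusted-roots.pem"        # hypothetical trust store export
    WEAK_HASHES = {"md2", "md5", "sha1"}
    MAX_LIFETIME_DAYS = 20 * 365        # arbitrary policy ceiling for illustration

    for cert in x509.load_pem_x509_certificates(open(BUNDLE, "rb").read()):
        findings = []
        try:
            hash_name = cert.signature_hash_algorithm.name
        except Exception:               # e.g. MD2 or a signature without a hash
            hash_name = "unrecognized"
        if hash_name in WEAK_HASHES or hash_name == "unrecognized":
            findings.append(f"weak or unrecognized signature hash: {hash_name}")
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
            findings.append(f"small RSA key: {key.key_size} bits")
        lifetime_days = (cert.not_valid_after - cert.not_valid_before).days
        if lifetime_days > MAX_LIFETIME_DAYS:
            findings.append(f"long validity period: about {lifetime_days // 365} years")
        if findings:
            print(cert.subject.rfc4514_string(), "->", "; ".join(findings))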


So it is time to get diligent and start to manage your environments - or the next flame may be one licking at your feet.

Monday, June 4, 2012

It did not take long ... FLAME on

OK - I promise no further Marvel references.


I do know it has been a while since I have updated this blog, but there have been a few changes going on so I sat back and took it all in first. That being said, I could not sit back and not comment on the latest discoveries that have come out of the FLAME discussions, given how much they touch on what I have been doing and what I am doing now.


So if you are reading this you likely know what FLAME is - or at least you know what has been published. I suspect that there is still a lot to discover but I wanted to highlight some very important things that come out of what we know so far.



The FLAME malware has been dissected over the last few days and one of the most disturbing finds that has come out, in my view, is the discovery of 3 certificates that appear to be rooted in the Microsoft Root CA.

The certificates in question are:

Microsoft Enforced Licensing Intermediate PCA (2a 83 e9 02 05 91 a5 5f c6 dd ad 3f b1 02 79 4c 52 b2 4e 70) - Issued by Microsoft Root Authority 
Microsoft Enforced Licensing Intermediate PCA (3a 85 00 44 d8 a1 95 cd 40 1a 68 0c 01 2c b0 a3 b5 f8 dc 08) - Issued by Microsoft Root Authority
Microsoft Enforced Licensing Registration Authority CA (fa 66 60 a9 4a b4 5f 6a 88 c0 d7 87 4d 89 a8 63 d7 4d ee 97) – Issued by Microsoft Root Certificate Authority 

The disturbing part of this is that the PCA certs were generated using MD5 as a hashing algorithm. This created an ideal environment to attack, since MD5 is known to have weaknesses.
Now, of course, we are not yet certain that it was MD5 that was the vector of attack, but there certainly is suspicion based on the purported parties behind FLAME (a Nation State is suspected), along with the fact that the delivery of code was done using these certificates. These facts, combined with the fact that MD5 attacks have been published since 2005, raise the suspicion level even higher.

The fact that the creators/designers took advantage of this attack to go after the MSUpdate service and the MS license registration process is powerful. This style of attack creates a number of vulnerabilities:
  • It raises the risk when performing the required MS registration of any product that may have been officially registered. Today it appears the attack was only against the Terminal Services environment, but the complete story may be different.
  • The attack creates a Man-in-the-middle (MITM) risk for any machine where these certs were injected.
  • The PCA certs have a broad use base defined within their extended key usage, including code signing and, possibly more worrisome, CRL signing. The extended uses for these certificates were critical in allowing the attack to happen (the code signing usage) and potentially in preventing discovery, given that they also have CRL signing capabilities.

The CRL signing risk is significant since, as long as a system has these certs in the trusted store, the CRLs are inherently untrusted. The timeframe for these certs to have been in place, and the certificates that may have been trusted through the path defined by these certs, also raise the potential of further downstream risk.
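As a small illustration of checking for exactly those usages, here is a hedged Python sketch using the third-party cryptography package; the file name is a placeholder, and in the Flame case you would point it at the certificates listed above.

    # Report whether a certificate carries code signing in its Extended Key
    # Usage and the cRLSign bit in its Key Usage extension.
    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    cert = x509.load_pem_x509_certificate(open("suspect-ca.pem", "rb").read())

    try:
        eku = cert.extensions.get_extension_for_oid(ExtensionOID.EXTENDED_KEY_USAGE).value
        can_sign_code = ExtendedKeyUsageOID.CODE_SIGNING in eku
    except x509.ExtensionNotFound:
        can_sign_code = False

    try:
        ku = cert.extensions.get_extension_for_oid(ExtensionOID.KEY_USAGE).value
        can_sign_crls = ku.crl_sign
    except x509.ExtensionNotFound:
        can_sign_crls = False

    print("code signing allowed:", can_sign_code)
    print("CRL signing allowed: ", can_sign_crls)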

This of course was a directed attack against the system using known vulnerabilities, but it also raises questions/concerns with processes within Microsoft. These concerns include:
- Use of MD5 for signature hashing
- Broad definition of key usage for given certificates
- Lifetime of intermediate CAs given the ever-changing technology environment

So outside of the obvious issues for anyone that was impacted directly by FLAME, this does highlight a number of weaknesses in managing environments. One's own environment may be greatly influenced by those you deal with. In this case an injection occurred, likely, through a software registration process. It was achievable due to poor management of signature algorithms and certificate usage settings. This attack demonstrates that these types of attacks are achievable and likely have been for at least 5 years.

So if you were not paying attention to what certificates were in your stores before, I hope you will now. You need to do the following (a small sketch of a known-bad check follows the list):
  • Look for known bad ones;
  • Evaluate what roots you trust, and why;
  • Look at what signing, hashing and key lengths are used; and
  • Look at what certificate lifetimes are indicated.
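For the first item, here is a small sketch of what "look for known bad ones" could mean in practice: compute each certificate's SHA-1 thumbprint and compare it against the three thumbprints listed earlier in this post. It assumes a recent version of the third-party cryptography package and a PEM export of your store (the file name is a placeholder).

    # Flag any certificate whose SHA-1 thumbprint matches the known-bad list
    # quoted earlier in this post.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    KNOWN_BAD_THUMBPRINTS = {
        "2a83e9020591a55fc6ddad3fb102794c52b24e70",
        "3a850044d8a195cd401a680c012cb0a3b5f8dc08",
        "fa6660a94ab45f6a88c0d7874d89a863d74dee97",
    }

    pem = open("my-cert-store-export.pem", "rb").read()   # hypothetical export
    for cert in x509.load_pem_x509_certificates(pem):
        if cert.fingerprint(hashes.SHA1()).hex() in KNOWN_BAD_THUMBPRINTS:
            print("KNOWN BAD certificate found:", cert.subject.rfc4514_string())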
This area of Certificate and Key management is becoming more critical. It no longer just touches how I secure my running shoe purchase on Zappos - it now affects how I get software and how I trust it, and in today's cloud-based world that is a potentially disconcerting issue.

Friday, May 11, 2012

Nice to see success

Many years ago I met a young man who trained with me at our dojo. He was a young guy who was driven and smart - one of those guys that you knew would succeed. Well, last night while we were getting supper ready my daughter showed me a web site for two musicians who go by the name Tritonal. Their homepage showed a photo of the two guys and she asked if I recognized either of them. At first I knew the one guy looked familiar but I could not place him. When I saw a second picture I knew right away who it was.

Reading how they got together was interesting, but it showed me two things .... with drive you will find your passion and you will manage to be successful with that in your life .... but also that the world is becoming a much smaller place thanks to the Internet. Everyone has heard the latter said many times, but to see it in action ... two guys with similar ideas and drive who go from disjoint lives in Virginia and Texas to traveling internationally and bringing their music to their fans - kind of inspiring.

The power of the Internet ... in real life.

Congrats Dave


- Posted using BlogPress from my iPad

Thursday, March 22, 2012

Adobe Moving Forward with Smartcard Usage ... or is it moving forward?

Over the last couple of days I have had numerous people point me to a post on Adobe's blog about PIV card usage. For those of you not familiar, the PIV card is the US government's implementation of an end-to-end specification for identity issuance to its employees and approved contractors. The standards that support PIV come out of the work that followed on from Homeland Security Presidential Directive 12 (HSPD-12), and they address the technical specifications for the card, including the specifications of the digital credentials on the card, as well as the process for issuance and most of the things that build around that. The PIV specification was then leveraged to implement a credential for non-Federal entities that may wish to interoperate with the US Federal government; this is called PIV-I, for PIV Interoperable. PIV and PIV-I credentials have been rolling out over the last few years, and the PIV-I market is growing quickly with interest from companies that provide products and services to the government, from state and local governments, and now from the healthcare arena.

The post by Adobe was very good in that it talked about usage - how these cards can be used to sign documents electronically and then provide a way to validate them - but there were a few things in there that struck the wrong chord. Let me explain .....

First off, the post talks about validating credentials per the US Federal Common Policy. Well, the US Federal Common Policy does not tell you how to validate a credential. It specifies the policy under which you would operate a PKI such that it could be trusted by the Federal agencies. NIST did create a set of tests, PKITS, that would let you know if your product could validate a certificate, and thereby a signature, properly through the Federal PKI architecture ... maybe that is what was being thought of.

But another bad chord .... the post goes on to say "A recommendation to make this easier is for all of the issuing certificate authority public key certificates to be stored on the smartcard and available to the OS+applications." The example they give has a signature that is tied through two bridges to the Common Policy. So if I am reading this correctly, I need to put the Common Policy Root, the Federal Bridge certificate, the Certipath certificate, the Root of the issuing architecture and the issuing CA certificate all on my card. The idea is that this is what I need to validate the signature. Well, yes, I need this data, but what of revocation data for that chain? Do I put that on there as well? A rhetorical question, since we need to get that live - so I need network connectivity. Well, if I have network connectivity, why not just use the data in the certificates in my trust chain, the issuer and its root, to discover and validate the path that is appropriate based on policy identifiers and business rules?
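To illustrate the "use the data in the certificates to discover the path" point, here is a hedged, discovery-only sketch in Python: it reads the Authority Information Access (AIA) extension from a signer certificate and fetches the issuer certificate it points to. It is not a validator - no signature, policy or revocation checking is done - and it assumes the third-party cryptography and requests packages, a placeholder file name, and a certificate that actually carries an AIA extension.

    # Follow the caIssuers pointer(s) in the AIA extension to fetch the next
    # certificate up the chain. Discovery only; validation is out of scope here.
    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID
    import requests

    cert = x509.load_pem_x509_certificate(open("signer.pem", "rb").read())  # placeholder

    aia = cert.extensions.get_extension_for_oid(
        ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    issuer_urls = [d.access_location.value for d in aia
                   if d.access_method == AuthorityInformationAccessOID.CA_ISSUERS]

    for url in issuer_urls:
        data = requests.get(url, timeout=10).content
        try:
            issuer = x509.load_der_x509_certificate(data)
        except ValueError:
            issuer = x509.load_pem_x509_certificate(data)
        print("fetched issuer:", issuer.subject.rfc4514_string(), "from", url)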

That was the idea behind the PKITS test - to do the path validation in real time using software that did it completely. Do I need this software on my desktop? The answer is no - numerous solutions also provide this capability on server-based systems using implementations of SCVP. There are desktop solutions that do it right, but that is not the only way.

The other issue that comes up is that the cross-certificates used between these CAs have shorter lifetimes than the cards, and certainly are not in sync with user updates, so how do I update these root, issuing and cross certificates on the card?

Yes Adobe you did the right thing by presenting a usability case that truly is needed - we just need to make sure that the system is truly usable in the end-to-end implementation. I think that is where this has fallen short.


- Posted using BlogPress from my iPad

Thursday, March 15, 2012

Some Thoughts from IDTrust 2012

I spent the last two days at the IDTrust Conference which was held at NIST in Gaithersburg. This conference started about 11 years ago as a PKI-centric conference, but over the years it has evolved into a broader discussion on identity. Ian Glazer did a great job of laying this out in his presentation early on the first day. This move from an almost pure PKI discussion to a broader identity discussion was seen even at the opening, with the initial presentation given by Jeremy Grant, who leads the NSTIC program, and it reinforced the desire to get industry to move ahead with innovative ways to improve the authentication discussion and move towards real implementations.

The discussions held over the two days were great. There was good focus on authentication but also very broad discussions around attributes and their role in improving the confidence levels of the parties involved in transactions. The two days did generate some interesting thoughts, three of which are discussed here.

There appears to be a growing need to handle the lexicon for attributes - this is something that I wrote about quite a while back. The context for my previous discussion was a broker for managing the lexicon - handling the differences between the varying attribute terms and definitions that are being used. This does require considerable cooperation between organizations but a managed central service that is participatory and leverages recognized standards group involvement should address the majority of the interoperability issues.

Identity management appears to be taking on a new scope. When we speak of identity management today we speak of things like registration for authentication credentials, usage of these credentials and maintenance. It does appear though that even within this there is some aspect of attribute management as part of the identity. Now there are some that feel that everything is an attribute, including your name, and I will not be debating that here, but whatever we cover as an attribute we must contextualize those attributes and their reliability, relevance and effectiveness, and consider how this may change over time. A simple example is something like address. Even today I can go to a store that has had a record of me from an online purchase and they will still have my address from 4 years ago, even though it is no longer relevant/accurate. Management of these elements of data, including weighting them, is becoming a critical element of the personal data economy. Companies need to know what is current and also what is more likely to be accurate when they access these elements.

A third, and final, thought for this post is the need that comes from the prior two points - how do we effectively manage the attribute lexicon and the data represented within it? One would assume that the data is the user's, but is the user the only one that can manage it? Do existing attribute brokers/holders such as EQUIFAX and Experian have some level of control or responsibility to handle the weighting or accuracy of the data? Do we provide an easy interface for the user to handle their data, and how do we link that to the brokers?

As you can see there was considerable discussion on attributes and attribute management during the sessions and in between them. There was also a lot more data and information and some of the presentations are available on the NIST/OASIS IDTrust 2012 site.


Let's get the discussions going and let's see if we can help move this yardstick forward some.

- Posted using BlogPress from my iPad

Tuesday, March 6, 2012

Is my smart-phone smart enough?

I read an interesting article this morning that came out of the RSA 2012 conference. Two researchers had found that cell phones leaked data through their transistors which could reveal private keys in use within the running application. One would think after seeing this headline that it was a case of poor implementation, but these researchers demonstrated this on multiple platforms.


Should we be worried? Is there now an easy way for people to get at your data? The research did show it is achievable to gain access to the keys that are protecting data. An overall successful attack would require multiple elements of course. The attacker needs to get the keys and then gain access to data, either over the air or through a hosted server. Again none of this is impossible but it certainly would be a coordinated attack. So should we be worried? Well if you or your employees are using your phone to protect sensitive data then maybe there is a reason here to start looking at protection mechanisms and procedures that would mitigate some of the risk.

- be aware of your surroundings when you use applications where sensitive data is accessed;
- limit the sensitivity of information that is stored on the device;
- start looking to phone vendors that have external validation of their devices or cryptographic implementations, whether that be a FIPS-style validation or Common Criteria;
- have a plan in place to update keys on a regular basis if you need to store sensitive data on your phone.

The news of this research is fresh, so there is still lots to learn about the risk and mitigations, but some of the things above are common-sense guidelines that will help to mitigate some of the risk.


- Posted using BlogPress from my iPad

Monday, February 27, 2012

Where do I start looking?

This past weekend I was lucky enough to get to hear some great conversations and presentations. It was part of my company's (Entrust) annual conference. The many conversations were with colleagues, partners and customers. The presentations that stood out were from customers, all saying what wonderful things we have done to help them, but two in particular were more general. These two were talks given by Michael Chertoff (former head of DHS) and John Adams (former head of CSE, sort of Canada's NSA for those that need a basic explanation of CSE). Both men have a wonderful breadth of experience and a great view of what is needed to better protect our nations.

Both men presented an interesting view of things and they certainly have the experience to be able to support their views. Their stories of how things were uncovered are beyond entertaining - the stories truly are frightening when it comes to what could have happened.

Given their experiences in very different environments they did have fairly common views as to what is needed:
- mitigation is key
- layering security is critical to achieve this
- there is no silver bullet
- you cannot protect the network - you must protect the data
- identity is the single most valuable asset
- security challenges start with the individual

When one starts to look at these elements, they can be broken down into items that apply specifically to businesses and items that carry across both businesses and individuals. One important element of that is education. We need to do a better job of educating people about the elements of security that they need to be aware of and address themselves. We do this fairly well in medium to large businesses: protecting data with strong passwords and changing them often; keeping security software current; and managing patch updates. What we do not consistently do is carry those ideas to end users in their homes. End users should not be using the same userid and password on all accounts. Grades of passwords are an effective way of reducing risk. One view that was shared was that 80% of security issues can be addressed by patch and password management. I am not sure that is measurable, but certainly we could mitigate a lot with these guidelines.

To reinforce the above idea, in his discussion former Secretary Chertoff presented an interesting analogy - for some environments security is like an M&M: it has a hard shell but some soft, good eating on the inside. This goes directly to the idea that if users do not effectively manage their security they can create a false sense of security. "I have Microsoft Security Essentials - I am good". This view certainly does not address the password breach issue.

The idea of common userids/passwords across multiple applications at different assurance levels also challenges the implementation of your identity as your most valuable asset. Today too many of us use our email address and password as the login mechanism for a variety of applications, including things like online shopping, access to medical records and other applications. This opens users to phishing attacks that expose more than just access to email accounts - they also expose this potentially sensitive and possibly damaging information. We need to do a better job at making people aware of these things.

So the title of this post was "Where do I start looking?". Well maybe we need to start with looking at what we do ourselves and how we teach our children and friends about what is important to do when it comes to computer security. I know my kids are aware - are yours?

- Posted using BlogPress from my iPad

Location:Ellis St,San Francisco,United States

Monday, January 30, 2012

Some Thoughts Generated at ShmooCon

This past weekend I attended ShmooCon. Depending on who you talk to, it is a white-hat style conference. I suspect, as with any gathering of 1800 computer security people, you may end up with some black hats and some greys, but generally speaking the sessions and discussions that I attended were white-hat oriented. It was a good couple of days. For personal reasons I could only be there Friday and Saturday, but even that short period of time generated lots of ideas.

Two ideas that stuck with me, even after thinking about them some more, are consumer-oriented security thoughts. The first is the idea that not all Certificate Authorities (CAs) are created, or operated, equally. I think this is clearly evident in the issues we have seen with some over the last few years, where we have had breaches of administrative accounts either due to the back-end implementation or due to poor implementation/operations of user administrative accounts (aka Registration Authorities, RAs).

There are many CAs out there that are well operated and that are diligent with external audits and security reviews. These CAs implement policies that are at least as strong as the level of assurance that they deliver to their end customers, and these policies are shown to be implemented properly through their external reviews. These CAs are listed in the same security store as other CAs that appear to not have the same level of policies, operations control or external review. In any browser, an Entrust or Verisign CA is treated the same as a DigiNotar, or at least they were before the DigiNotar breach. Is that beneficial to the consumer? There has to be a way to grade these CAs, outside of the EV versus standard certificate ideas. I am thinking of something à la the FISMA grading mechanisms used in the US Federal government. In the FISMA grading system agencies are penalized for gaps: the bigger the gap between operations and policy, the bigger the hit. The grading then looks like the Green, Yellow, Red rating system. If this style of system was implemented in conjunction with browser vendors, the browser could allow a user to define a tolerance level - I will accept yellows and greens without warning, unless there is another issue, and I want to see warnings for red or block red altogether. The variations between the levels could be worked through the CA Browser Forum, along with the determination of what triggers a downgrade or what is needed to bump a CA level back up.


The other thought is in a similar vein. Today when I authorize app access, whether on my phone or through some form of account connect - think Facebook Connect as an example - I see what the application is accessing and in some scenarios I have the option to be selective about whether I want the application to have all the privileges it is asking for. For example - do I want the app to post on my Facebook Wall? Or, do I want the app to have default access to my GPS location? This sort of flexibility may be lost on some, but I do appreciate it as I often wonder why an app requires certain access. Now granted, not all application platforms provide this flexibility, but I see more and more doing this. What is interesting is that when I access a website with Java code I do not get the same level of control. I can decide to only trust signed code, but given the issue I described above, is that enough? Do I want my code signed by a CA that I would not normally trust? Since Windows Update dumps the full Root trust list to me and I have to manually go through and clean it, there seems to be an opportunity here to do some better refinement. Again, this would require browser cooperation, but the end result could be powerful.


So the idea is twofold: "encourage" app developers to identify, in a standard way, likely in metadata, the permissions that the app requires, and with this definition give the user the option of choosing how to operate. Again, the browser vendors could default the settings but give the user the option of choosing whether they want to see the data and how they want to react to it - settings to allow "application operations for JAVA", for example. If the code has permissions metadata then display that and let the user choose. If not, then notify the user of the level of the CA (see above) that signed the code and make them aware of the risk.
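Since nothing like this exists today, here is a purely illustrative Python sketch of the decision a browser would be making - every name, grade and permission below is made up. It combines a user tolerance level, a (hypothetical) grade for the signing CA from the earlier green/yellow/red idea, and the permissions the code declares in its metadata.

    # Hypothetical policy check: decide whether to run, prompt for, or block
    # signed code based on user tolerance, CA grade and declared permissions.
    GRADE_ORDER = {"green": 0, "yellow": 1, "red": 2}

    def decide(user_tolerance, ca_grade, declared_permissions, user_denied):
        if GRADE_ORDER[ca_grade] > GRADE_ORDER[user_tolerance]:
            return "block"      # the CA is graded below what the user accepts
        if any(p in user_denied for p in declared_permissions):
            return "prompt"     # the code wants something the user has denied by default
        return "run"

    # Example: accept green and yellow CAs, never allow silent access to location.
    print(decide("yellow", "yellow", ["network", "local-storage"], {"location"}))  # run
    print(decide("yellow", "red",    ["network"],                  {"location"}))  # block
    print(decide("yellow", "green",  ["location"],                 {"location"}))  # prompt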


I am the first one to realize that the weakness in most transactions is the end user, but I believe that these types of implementations would increase awareness, as users are already seeing elements of these in other areas. Of course this will require cooperation between developers and browser companies to implement, and it is likely not a tomorrow thing, but I believe it is something that should be thought about in a broader forum.

Tuesday, January 17, 2012

Sykipot Update

As I mentioned in my last post, one of my concerns was the possibility that a hacker could leverage the PIN access and the card update capability of the ActivClient to introduce malware on the card. After some investigation it appears that, with the use of the Global Platform implementation, it would be an extremely complex feat to execute. I do not believe it is impossible, but that level of effort does not appear to have been taken, and it would only be capable of happening during an actual card update, which in most cases would be CMS-initiated. There does not appear to be anything in the data that has been released to indicate that there is a trigger for the action - so maybe one less concern.

Friday, January 13, 2012

Is this Sykipot something new?

Undoubtedly you have all seen the news of the alleged attack via Sykipot against US government smartcards. Of course the press has taken hold of this with all of its usual gusto, but is this really something new?

Well, yes, there are new elements to it - it appears to be the first Sykipot variant to specifically target a particular client and middleware used to access smartcards, for the purpose of utilizing the private keys for access to data. That being said, the actual attack vector is not new and has been looked at for many years. This attack has the same sort of path as any man-in-the-browser style attack: deliver command and control elements to the target; install a key logger to capture credentials; and, once having determined that the target is viable, deliver the elements necessary to execute a complete attack. The major issue here is that fundamentally this was once again initiated through spear-phishing to deliver the required infrastructure to build the attack and possibly leverage it.

We are once again facing a massive push against a technology that fundamentally is not at fault here. If we look back at some of the attacks like this that occurred last year, it was not the technology but the implementation and the processes around the technology that were leveraged in the attack. Fundamentally, it did not matter what the underlying technology was. This Sykipot attack, ten years ago, would have been a key logger capturing userids and passwords, and it just as likely could have been that today for many systems. However, because it is smartcards, it is now big news.

So is there really an issue? Well, quite possibly yes - and it could be big. Yes, it is a problem that the card can be used when inserted without the user knowing it is being used - this is of course a major issue. There is however a potentially larger issue, and the outcome of the investigations will determine if it is a real issue or not. The ActivClient does have a variant that is deployed to allow the local user to update their card. If in fact this Sykipot variant is hijacking the interaction with the ActivClient, is it possible that the card can then be infected with malware? The threat of malware on the card is likely the worst-case scenario. I know of no virus software that scans cards on insertion, and it could be possible that this malware could be transmitted to devices via the contact and contactless interfaces, which would mean delivery to many platforms, possibly without knowledge. Of course right now this is speculation, but hopefully it is one of the paths being investigated.

It will be interesting to see what comes out of the investigations and what gets publicized. For those interested in getting a base set of info on the attack, check out the Alienvault article.


- Posted using BlogPress from my iPad

Wednesday, January 4, 2012

FINALLY!

I am not sure if anyone has noticed, but the tag-line "Posted using BlogPress from my iPad" has been missing from posts for the last couple of months. In fact the reason for that is also part of the reason why there has been limited posting on tridentityideas over the last couple of months. The issue has been that BlogPress on my iPad has not worked effectively (or at all) since upgrading to iOS 5, until an app upgrade today. I am thankful that it is back, but the next time I have to go months without access to an app will likely mean I will be changing my blog site. Hopefully that does not happen, as mobile blogging is when I do most of my writing. I guess the new update and renewed access mean that I can now officially kick off the new year with some more serious work on my blogs.

- Posted using BlogPress from my iPad