Wednesday, January 16, 2013

Is the Energy Sector Really a Cyber Target?

For years we have heard about cyber warfare - whether it was characterized as a cyber Pearl Harbor or the cyber equivalent of 9/11. Over the last couple of years we have definitely seen an increase in targeted attacks. Some of them originated in Western nation states, while others came from Middle Eastern, Eastern European, or Asian nation states. We have even seen what appear to be purely cyber-criminal attacks that targeted resources to manipulate (banks and their transactions) as well as data to sell. The most recent case to come to our attention is the five-year odyssey now known as Red October.

What has been interesting is that some of these attacks have been built to be very targeted against industrial control systems. People are familiar with these systems if they have looked at Flame or Stuxnet. In the case of Stuxnet, it was very much part of a larger operation to leverage industrial control systems to halt the use of centrifuges. What many people do not realize is that these same control systems are implemented everywhere: power plants, manufacturing facilities, water filtration, gas pipelines - and the list goes on.

So what we have is a proven-attackable target deployed across a broad set of environments. What does it take to attack these systems? An understanding of what type of system is implemented, and then basically internet access to look up the command and control language used within that system. Some would say it is not that simple, and that is largely correct - an attacker still needs to reach the system, and these sit within environments protected by firewalls and the like.
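To illustrate how openly documented those command and control protocols are, here is a minimal sketch (Python standard library only) that reads two holding registers from a Modbus/TCP device. The device address, port, and register numbers are hypothetical placeholders; the point is simply that the protocol is publicly specified and carries no authentication of its own.

import socket
import struct

HOST, PORT = "192.0.2.10", 502  # placeholder device address, standard Modbus/TCP port

# MBAP header: transaction id, protocol id (0), remaining length, unit id
# PDU: function 0x03 (read holding registers), start address, register count
request = struct.pack(">HHHBBHH", 1, 0, 6, 1, 0x03, 0, 2)

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(request)
    response = s.recv(256)

# Skip the 7-byte MBAP header: function code, byte count, then register data
byte_count = response[8]
values = struct.unpack(">" + "H" * (byte_count // 2), response[9:9 + byte_count])
print("Register values:", values)

No exploit is needed here - anyone who can reach the device on the network can speak its language.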

That last statement is the false sense of security we seem to have lived behind for quite some time. DHS recently released a report indicating that roughly 40% of the cyber attacks it tracked were against the energy sector. One example was the discovery of advanced malware at two US energy plants late last year. Both attacks were apparently delivered through the same mechanism used to deliver Stuxnet (so not only are people re-using the code, they are re-using the methods). One can surmise that both plant attacks could have been prevented by following some very basic security procedures, including keeping software up to date and not carrying drives between enclaves without safety mechanisms in place.

It is this last point that becomes the slap in the face to all of us. Congress has repeatedly refused to mandate security requirements for critical systems. There is an attitude that the government should not be telling private industry what to do. I do not necessarily disagree with that sentiment in most cases, but we are not dealing with most cases here. There are many critical infrastructure segments, but let's focus on energy. If proper security protocols are not followed, attacks against the energy sector will continue to succeed, and to greater and greater degrees. Yes, that is bad for the energy sector because of reputation damage and actual financial loss, but guess what - I am using electricity right now to write this blog. You are using it to read it. Your bank is using it to perform the transactions that allow economic activity to flow. Hospitals are using it to keep people alive. I could go on. It is time we recognize what has been demonstrated to be true, and it is time to respond to it. If Congress cannot pass a "here is how you fix the problem" bill, then let's look to California and its data-loss legislation: pass a law that is not prescriptive about how to protect your infrastructure, but holds companies HIGHLY accountable for failing to protect it. There is just too much at risk.

Stepping off the soap box.

Friday, January 4, 2013

Happy New Year ... and you could still end up being a target

Well, it is a new year, and with that we can all expect to face new challenges.

That may sound doom-and-gloom-esque, but it is not intended to be. I truly believe that the problems we have faced, and will continue to face, as professionals can largely be mitigated through the thoughtful application of combined intelligence and careful planning. Let's be honest - most of the problems we saw over the last couple of years did not involve radical new technical advancements, but rather existing techniques applied in different ways, leveraging open doors that people simply forgot about or that were created by poor process implementation. Guess what - it seems 2013 will not be all that different.

The first "breach" news story out in 2013 is an attack on the Google channel. Did it start as a planned attack against that channel - unknown - but when we look at this breach we will see that it was poor process implementation that allowed it to happen.

So what are we talking about here? Google just announced that they had discovered certificates illegitimately issued under their name. What is different here is that the credentials were apparently issued not through a breach of a CA or its RA, but through poor process. The CA involved, Turktrust, issued two certificates in August 2011 with bits set that made them capable of signing other certificates - effectively looking like intermediate CAs. According to Turktrust, this was caused by test certificate profiles being loaded into the production environment. The two certificates were generated before anyone noticed the profile error. Once the error was noticed, the profiles were removed from production. What did not happen was the follow-up: no one went back through the logs to identify credentials issued under those profiles.
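As a concrete illustration of what made those two certificates dangerous, here is a minimal sketch, assuming the Python "cryptography" package and a local PEM file whose name is purely illustrative, that checks whether a certificate's basicConstraints extension has the CA flag set - the setting that lets a certificate sign other certificates.

from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Load the certificate to inspect (file name is an example only)
with open("suspect.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    bc = cert.extensions.get_extension_for_oid(ExtensionOID.BASIC_CONSTRAINTS).value
    if bc.ca:
        print("WARNING: CA=True - this certificate can sign other certificates")
    else:
        print("OK: basicConstraints present with CA=False")
except x509.ExtensionNotFound:
    print("No basicConstraints extension found")

An end-entity certificate issued to a subscriber should never come back with CA=True; a test profile that sets it does exactly what happened here.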

Now, the skeptical may say that this last omission was not an oversight but part of an intended process to get certificates issued that would allow "someone" to monitor all Google traffic coming from the domain, including secured Gmail traffic. That is certainly possible, but it is equally obvious that proper process was not followed, and if the process had been properly audited this would have been caught. The things that never should have happened:

  • Test profiles loaded into a production environment - this is easily solved through proper checking of server identification and not crossing platforms with profile files.
  • Generation of production certificates using test profiles - should not happen as the production system RAs should not even have test profile names available for choice.
  • No audit checks - once any issuance error is discovered, all issuance logs should be checked and every issued certificate verified to ensure it was issued per policy (see the sketch after this list).
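On the audit point, the check itself is easy to automate. Here is a minimal sketch, assuming a hypothetical issuance log exported as CSV with "serial" and "profile" columns (the file name, column names, and profile names are illustrative, not Turktrust's actual format), that flags any certificate issued under a profile not on the approved production list.

import csv

APPROVED_PROFILES = {"ssl-server-prod", "email-prod"}  # hypothetical production profile names

def find_out_of_policy(log_path):
    """Return (serial, profile) pairs for certificates issued outside policy."""
    flagged = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["profile"] not in APPROVED_PROFILES:
                flagged.append((row["serial"], row["profile"]))
    return flagged

for serial, profile in find_out_of_policy("issuance_log.csv"):
    print("Out-of-policy issuance: serial=%s profile=%s" % (serial, profile))

Run after any profile change or suspected error, a check like this would have surfaced the rogue certificates right away rather than more than a year later.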

Now, it is one thing for us to talk about what Turktrust did wrong, but the reality is that we, as users and relying parties, need to be able to mitigate our risk whether this was a set of errors or an intended process to create a backdoor to the Google channel. What this means is that we need to be able to quickly identify where we have certificates that may be at risk - in web servers, browsers, root stores, local Java stores, routers, VPN devices, network devices, or anywhere else. Personally, I went through and double-checked my new Android Jelly Bean install to make sure I was comfortable with its root stores. For organizations, however, this is a larger task, and looking at systems that monitor and manage your certificates is an easy and important way to mitigate risk - one that seems to be getting more and more critical given the number and variety of certificate attacks appearing.
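As a small example of that kind of monitoring, here is a minimal sketch using Python's standard ssl module that retrieves the leaf certificate a server presents and prints its issuer, so an operator can spot an unexpected CA in a chain. The host name is only an example.

import socket
import ssl

def get_issuer(host, port=443):
    """Return the issuer of the certificate presented by host:port as a dict."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDN tuples, e.g. ((('commonName', '...'),),)
    return {name: value for rdn in cert["issuer"] for (name, value) in rdn}

issuer = get_issuer("www.google.com")
print("Issuer:", issuer.get("organizationName"), "/", issuer.get("commonName"))

Scripted across an inventory of hosts and compared against a list of expected issuers, this is the start of the monitoring discussed above - commercial certificate management systems do the same thing at scale.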

Finally - this is not a "do not use certificates" or "CA providers are bad" type message - this is a "Take responsibility for your environment ... know who you trust and why ... and ensure that you understand when and why things change by monitoring the environment" message.

Maybe that last part is a good New Year's resolution.