Monday, December 9, 2013

Sacré bleu! Another CA Scam

This weekend Google security engineer Adam Langley (we will get to the irony of the last name in a bit) blogged about an agency within the French government and its misuse of its intermediate CA. The French cyber-defense agency, ANSSI, had improperly issued "duplicates" of certificates for some Google domains. It appears they went as far as to ensure that the certificates carried enough information to convince any user that the certificate was the legitimate site certificate, even though it was issued improperly. Google has started to work with the browser manufacturers to effectively block this intermediate, so look for browser updates over the next few days. The Ars Technica article is here.

This does raise some interesting thoughts. First off is the fact that Langley (has anyone else made the link to the other "famed" Langley location and the fact that ANSSI is the "cyber defense" agency in France?) was looking at these types of organizations, given the recent announcements from Google, Yahoo, Twitter and others about the push to rein in the NSA's monitoring programs. It appears that Google, and likely others, will be doing the checks that the operators of the Global Roots should have been doing all along: sampling the certificates issued by the intermediate CAs. This sampling usually only takes place after the car has gone off the cliff. I had to go through every issued certificate for our CAs after DigiNotar - "just to make sure". But sampling on an irregular basis would do wonders for issuance confidence.

So if we are to assume that the Root operators are not doing their jobs to protect the issuance process, what do we do? Well, options have developed and are developing further. One of the most promising is certificate pinning, which provides a way to link a site to a specific certificate; even if another certificate for that site from a valid CA appears, the connection will be rejected. It does seem to me that we should not have to do these things, but since we cannot rely on Root operators to protect their brand, we need to do things to protect ourselves.
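To illustrate the idea, a pinning check can be as small as comparing the SHA-256 fingerprint of the certificate a server presents against a value you recorded earlier. Here is a minimal Python sketch; the host and pinned value in the usage comment are placeholders, not real data:

```python
import hashlib
import ssl

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def connection_is_pinned(host: str, port: int, pinned_sha256: str) -> bool:
    """Fetch the leaf certificate the server presents and compare it to
    the fingerprint we pinned earlier. Any mismatch - even a 'valid'
    certificate issued by another CA - is rejected."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return cert_fingerprint(der) == pinned_sha256

# Usage (hypothetical pin recorded out-of-band):
# if not connection_is_pinned("www.example.com", 443, "9f86d0..."):
#     raise RuntimeError("certificate pin mismatch - refusing to connect")
```

Note that real deployments typically pin the public key (SPKI hash) rather than the whole certificate, so that routine certificate renewals do not break the pin.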

For those CA operators out there - make sure you either have good control over issuance OR put in place mechanisms to randomly audit issued credentials so you can provide a way to protect your brand.

Monday, November 25, 2013

Another case of what is old is new again

It has happened again and one has to ask what it takes for companies to learn from their mistakes.

I think everyone has read the articles on the latest Adobe breach and the disclosure of records of users. Of course we are not talking about a small number of users - we are talking 150 Million accounts, 38 Million of them active and 2.9 Million accounts with credit card information. The data that was taken included:

  • clear-text email addresses
  • hashed passwords
  • encrypted credit card information
  • clear-text hint lists
It is unknown how well protected the encrypted credit card information is, but certainly the clear-text email addresses/usernames and clear-text hint lists create a significant threat to users, especially when some of the hints were "Same as bank account".

The hashed password list is a significant issue, as it appears the passwords were simply hashed - no salt values were used, so identical passwords produce identical hashes and precomputed dictionary attacks become practical.

Of course this is not a good situation for Adobe customers or Adobe itself. Even worse for Adobe - this is not the first time this has happened, not even the first time recently. In 2012 the Connect conferencing forum back-end was hacked and a similar data trove taken, exposing approximately 150,000 users. The data taken: clear-text email addresses and MD5-hashed passwords with no salt values used. It is not as if that breach went unnoticed - the purported perpetrator released a screenshot of 230 of the user accounts.

So why has Adobe not fixed its back-end for storing customer data? Adobe knows that it is a target - these were not the first attacks against Adobe. In the same 2012-2013 period there was also the hack into the signing process that allowed malware to be signed using Adobe credentials, and the theft of source code for Acrobat, ColdFusion and Photoshop that eventually led to two well-known attacks against PR Newswire and the Washington State Court System.

We have talked about the simple processes before - be aware of what you are doing and using in your systems, so that when vulnerabilities in those same tools and processes are being exploited you can recognize it and implement changes to protect yourself. Adobe did not even do this for a vulnerability it had suffered before.

I am not suggesting here that Adobe needs to be put in the corner with the dunce hat on, but users who have accounts with Adobe should learn the lesson that Adobe did not: rather than exposing yourself to attacks through password reuse, use unique passphrases across your essential accounts. If Adobe cannot do anything to protect your data - you certainly can.

Wednesday, August 28, 2013

The Little Things

Yesterday I had an opportunity to catch up, over lunch, with a good friend and colleague. One of those lunches that is truly to catch up but also to see how business is going and to see where the next opportunity is. 

During the course of our conversation he commented on the lack of funding that agencies had dedicated to his area, and noted that even though he sees lots of heads nodding in understanding of the problems he addresses, they still do not seem to see it. To them it is a little detail that seems like it can wait.

The comment about the 'little detail' made me think about all the little things that are missed and cause problems today. I do not mean just in the cyber-security arena but in day-to-day life: the driver who does not look left and right when entering an intersection and causes an accident; the driver who is not attentive when backing out of a parking spot and destroys their passenger-side mirror on a post (saw that one yesterday after lunch); the parent who does not secure their firearm properly, only to have their 8-year-old shoot their grandmother; and there are many more.

Of course some of the impacts are trivial but others are clearly catastrophic in their effect. Cyber-security is not that different. Inattentive implementers may leave an opening that allows someone to get into the network where they should not be. Improper design or implementation can lead to that false sense of security and make your environment a haven for cyber-criminals or terrorists. Of course it is not just about the design and implementation, it is also about the planning, policy, people, audit, testing and operations. These things are all important. 

For my friend it is all about monitoring and managing identity. An expired credential, a credential that should not be on a system, or a credential that does not meet policy. All these seem small and easily managed on that one system - but who has one credential on one system? There are hundreds of systems with thousands of credentials in most environments, whether on your premises or in a cloud implementation. Managing that environment now requires some thought, planning and resources. Is it really a small thing now?

Be aware of the small things .... they can lead to the big problem if ignored or trivialized.

Thursday, July 11, 2013

M2M Making Buildings Greener

I read an interesting article the other day about "smart buildings". The concept is not new but recent advances have made it even easier to cost-effectively implement systems that allow more efficient control and even provisioning within these buildings.  The article stated ROI within 2 years.

Being a self-confessed geek I loved this thought. Think of remote management of building control systems, whether power, water, alarm, heating/cooling, entry, etc. The operational benefits are huge, and the potential reduction in personnel costs through automation is great.

Being a security guy, though, my personal alarms were going off, since the article did not speak of the need for effective security management. When I think of M2M the first things that come to mind are strong identification and authorization of systems, and secure channels to protect data from alteration in transit. The potential for abuse without effective controls is significant, and depending on the tenancy of the building the risk will vary greatly.
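To make "strong identification plus a secure channel" concrete, here is a minimal sketch of mutually authenticated TLS between building systems using Python's ssl module. The file names are placeholders for a deployment's own private CA and per-device credentials:

```python
import ssl

def harden(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """Baseline settings: modern protocol floor, peer must authenticate."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated peers
    return ctx

def m2m_server_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Server side of a machine-to-machine link: present this device's
    identity and require the client to present a certificate signed by
    our own CA. Paths are hypothetical."""
    ctx = harden(ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))
    ctx.load_cert_chain(cert_file, key_file)  # our identity
    ctx.load_verify_locations(ca_file)        # whom we trust
    return ctx
```

The design choice worth noting is CERT_REQUIRED on both ends - in an M2M world there is no human to notice an impostor, so the protocol has to refuse unauthenticated peers outright.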

So yes, I love the idea of smart buildings using M2M to improve operations and cost-effectiveness, but please ensure you think about the security implications in the context of the business drivers before you start.

Tuesday, July 9, 2013

The attack against Cyber-Identity continues

I have many friends who may be considered cynics. These are not all people running around with foil lining their hats - though a couple may just be in that category as well - but they are people who tend to believe first and maybe look into the verification aspect later.

Now, coming from a security guy that opening paragraph may seem odd, especially given the title of this post, but let me explain. I am not one to believe that things that happen are inherently altruistic or inherently exploitative; I instead tend to believe that at any point either may be true, and one needs to look at the context around something to see where on that "altruistic-exploitative" spectrum it may fall. The key point is that it rarely is black or white, and often there is some other colour injected into a circumstance that changes the hue. It is this base idea that I work from when I look at cyber-security as well. When we look at things from a singular context we often lose sight of the environment and, as such, make decisions that may end up masking the real issues.

When it comes to cyber-identity this is often very true, as when many people look at the term they immediately fall into the "login credential" mindset. Yes, the login credential may be a cyber-identity, but cyber-identity is so much more than that, and this is where organizations are missing the big picture and thereby leaving real risks unmitigated. A couple of recent examples of this are the vulnerability within the Android operating system and an SSH key compromise in Emergency Alert System (EAS) devices.

In the case of the Android vulnerability (documented by Bluebox) this was not a stealing of a credential but an attack that has some similarities to the Flame attack. The vulnerability has been around a while and effectively allows an attacker to modify signed code without the operating system being able to detect that the code has been altered. What does that have to do with identity, you ask? Well, the signature itself is intended to assure you that the code comes from its creator and has not been altered - in fact, to identify who created the code and has control of it. The reality is that with this vulnerability there are circumstances where we can no longer be assured of the identity of who produced the code/application.

In the case of the EAS devices, someone was able to obtain the SSH key that is embedded in the firmware of a certain set of a vendor's EAS devices. This effectively allowed them to take control of the devices and to create some interesting alerts - "Zombie Apocalypse". Steve Ragan wrote an interesting piece on it. This case highlights the threat of an embedded, permanent identity that apparently was shared across the devices, as it was baked into the firmware. As soon as this key became known, all devices with those firmware instances became vulnerable. This situation is one where both sides can debate the threat/risk of using a shared identity: it certainly makes manufacturing more efficient, but at the same time it opens the door to a wholesale firmware upgrade when an exposure happens. This is a great case of why the business need and the security need have to be looked at together - and when I say security need I am referring not just to that of the vendor but to the downstream effect, which in this case could be catastrophic.


What both vulnerabilities demonstrate is a totally different view of what identity is in the cyber-world, and the very different needs to consider when addressing mitigation. Both of these situations were avoidable from a pure technical perspective. The question becomes: when these systems were designed, how big was the threat at the time, and did that impact the threat-risk equation? We may never know, but it does suggest that maybe, just maybe, we need to be more diligent about revisiting our equations as systems cycle through the technological generations.

Saturday, February 23, 2013

Trust is the New Attack Vector

OK, it may not be the "new" attack vector, but it has become a popular one to exploit.

So what does one mean by "trust is an attack vector"? Let's go back to one of the simplest ways that has been used to garner information about people or companies - the phone call. Your office phone rings, and when you answer it the person on the other end immediately starts talking in terminology you are comfortable with and dropping names of people in your organization. At this point a majority of people will drop their guard, at least to some level. If the questions start to dig deeper and deeper into information that you think this person, whom you have begun to trust, should already know, then the guard begins to go back up. This is the simplest form of the "trust attack vector" and we commonly refer to it as social engineering.

In the electronic world it is a bit different, as there is no person to interact with, and secure protocols exist so we know the entity we are dealing with ... or do we? Over the last couple of years there have been a number of attacks that utilize this assumed trust to deliver malicious payloads by doing much the same as the "social engineer": providing enough information that the process they are dealing with trusts the transaction. In the online world that first trust transaction will usually involve getting in the door of the system or application.

So how does this really work? Trust in the online realm is based around a shared secret or a cryptographic operation - either I have a password to use, or there is a common shared cryptographic key or a public/private key pair that is used to establish the transaction. More and more systems are utilizing the cryptographic option, since passwords can be directly attacked and are hard to share when that is needed (of course symmetric key distribution can also be a challenge, but that is a side conversation). So today many servers and applications use public/private key pairs to establish trust. These key pairs are used to establish secure channels such as SSL/TLS or IPsec tunnels, and SSH likewise relies on public/private key pairs to authenticate hosts and users. If these systems are properly configured and use strong algorithmic choices and key sizes, the cryptographic aspect is very difficult to attack. In reality the cost of the attack is not worth the value, so attacking the cryptography is rarely seen (exceptions are cases of weak crypto, as seen in the Flame attack). So instead an attacker will go after a system to gain access to the keys themselves. This can be done by going after the Registration Authority that is the interface to key issuance (the Comodo attack) or by hacking into the system using other attack vectors to gain access to the keys, or to the process that uses the keys to sign data (the Adobe attack). Once they have this access they can generate attacks against other systems.

A recent example of this is the Bit9 attack. Recent data suggests that the attack was initiated with a SQL injection attack against an internet-facing web server which had been turned up with an old certificate. This attack planted a rootkit that was signed by a stolen certificate. Once inside Bit9, the attackers used Bit9's signing keys to sign their own malware. Bit9 customers that were targeted would believe that the code they were executing was trusted, as it was signed by the Bit9 keys. Of course this is just one example of keys and certificates being used to obfuscate the trust chain.
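As an aside, the injection half of that chain is the oldest fix in the book: never splice user input into SQL text. A sketch with Python's sqlite3 module - the table and columns are invented for the example:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    """Parameterized query: the driver sends the user input as data,
    so payloads like \"x' OR '1'='1\" never become SQL."""
    cur = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    )
    return cur.fetchone()

# Demo with an in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
```

An injection string simply fails to match any row, because it is compared as a literal name rather than interpreted as SQL.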

So trust is being used as an attack vector - what can be done about it? There are a number of things:
  1. Know what is in your network when it comes to the trust infrastructure. This means:
    1. Know what certificates and keys are being used and why
    2. Ensure that cryptographic assets that are used in your environment meet your policy for strength, lifetime and algorithmic uses
    3. Ensure every cryptographic asset has an owner assigned to it and that you can keep that data up to date
  2. Clean your Root stores. In any organization you will have a variety of Root stores. These Root stores are used by applications and the operating system to help build the trust chain. The reality of the situation is that off-the-shelf Root stores delivered in applications and operating systems have many more Roots installed than you will ever encounter. It is important for an organization to trim down those Root stores and maintain oversight of them to ensure that Roots that should not be in the stores are not introduced or re-introduced.
  3. Maintain a central view of the trust environment. The two pieces above can, in themselves, be challenging so it is important that you have central oversight of the environment, be able to recognize changes and then react accordingly.
As I have always said, there is no silver bullet for security other than disconnecting all external communications and interaction from a box. Outside of that the best actions are those that mitigate risk and two important steps to mitigating risks are knowing what is in your network now and knowing what gets introduced into your network. This is especially true when it comes to trust.

Wednesday, February 13, 2013

Some Thoughts from Suits & Spooks DC

It was an interesting two days at the end of last week. Enough "security professionals" to fill a room and then some at the Waterview Conference Center in Arlington, overlooking the Potomac River.  All of these people were brought together by Jeffrey Carr as part of his ongoing Suits and Spooks conference series. Jeffrey always has a great set of speakers and more often than not the bringing together of such diverse talents, backgrounds and personalities creates some intense discussions. Suits and Spooks DC was not any different.

There was a lot of discussion during the two days on the international aspects of cybersecurity: the ongoing risk of state-sponsored activities for intelligence collection and IP theft, along with the international efforts to reach agreement on cybercrime cooperation, as discussed by ITU representatives. We also had the opportunity to hear from people who were involved with some of the international cases, including the Russian government efforts in Georgia and Estonia as well as the recently published Red October attacks. Other sessions brought up the Duqu/Flame/Stuxnet series of attacks and shared some of the research done in investigating them. In all of the attack discussions it was clear that the speakers and the participants felt that the majority of large-scale attacks were not based on new vulnerabilities or new approaches but on existing attack vectors with some modifications. In many cases attacks succeeded through combinations of spearphishing and taking advantage of existing vulnerabilities such as SQL injection.

One of the other interesting aspects of the conference was an ongoing, and at times heated, discussion on the idea of cyber-vigilantism. Many at the conference felt that the government has not moved, and some felt is incapable of moving, fast enough to respond to cyberthreats. By the time the government is ready to take action, much damage is feared to have been done, and like those out buying zero-day vulnerabilities, the ship has already sailed. To address this issue some felt that cyber-vigilantism, in varying degrees, would allow organizations to respond in a near-immediate manner. The discussion involved former government and law enforcement personnel, senior people within private corporations and lawyers in attendance, as well as the general unidentified masses. Many valid points were brought up, but the thought that seemed to polarize most was that attacking an adversary without clear knowledge of who your adversary is would be a serious mistake. Not knowing who you are interacting with makes it impossible to develop an effective strategy, and without an effective strategy you are likely to simply instigate a cyber-arms race with you as a target. That being said, there did seem to be broad agreement that the private sector needs to act to ensure stability within its systems when attacked, and that action could or should be taken to mitigate the attack and stabilize the environment so that business operations can continue. This should be done in a manner which preserves evidence for future civil or criminal prosecutorial action or government involvement. It was a continued and, at times, interesting discussion.

One of the other presentations I thoroughly enjoyed, and felt was very informative from a business operations perspective, was by Josh Corman and David Etue. They quickly laid out a CxO-level view of how to look at cyber threats and how to weigh response investment. It piqued many attendees' interest and certainly warrants looking at further, as it is a methodology that, even in the shortened presentation, seemed to take the logical business view of cybersecurity.

The lessons that came out of this conference are interesting given the original conference premise - Cyber Offensive Strategies. I think many left the conference with the view that building your organization's cyber plan around the idea that "offense is the best defense" is not the best investment. Instead it was obvious that many attacks today relied on organizations missing the simple things. It was interesting that the conference started on the day that Bit9 announced their breach, and it appears from Bit9's own admission that theirs was a case of missing the simple thing of installing their own software on all their servers. The Bit9 attack itself is still being investigated, but the methodology of abusing the Bit9 code-signing service is again very familiar to those who saw the Adobe attack late last year.

So what is old is new again, and we must be diligent about our security planning and operations. We must know and understand what is in our networks and what we should trust. We should ensure we patch vulnerabilities when the appropriate patch is available and, in the meantime, mitigate against those vulnerabilities. We must pay attention to the attack vectors that are being used as part of our ongoing awareness, and then build appropriate actions into our plans. We must understand the priorities as they relate to our assets and resources, understand who is coming after them, and plan and defend proportionally. Those are the things that will help us mitigate the risks we have. If we want to extend that help, the best thing we can do is to share what happens to us and to share best practices for mitigating the risks. Think of it as paying it forward.

Monday, February 11, 2013

What is old is new again .... again

The timing could not be more interesting. Friday and Saturday I spent with a bunch of "security professionals" at Jeffrey Carr's Suits & Spooks. Of course one of the topics that came up was the Bit9 hack, which KrebsOnSecurity did a great job of highlighting. The hack was fresh, of course, but it also highlighted something else that was talked about frequently over the two days ... not all attacks are new. In fact, numerous discussions highlighted that most attacks are based on existing vulnerabilities that have not been patched or on existing techniques that still work.

Bit9 is still being looked at, but it appears that the attackers' goal was to gain access to the digital signing capabilities within Bit9 to sign their malware. The method would allow the signed malware to run unchallenged in a Bit9 customer's environment. Customers who thoroughly drank the Bit9 kool-aid may not even have had anti-virus running. It is interesting to note that the Bit9 blog had just posted an article on why a/v is not effective, yet it seems the malware was caught in one of their customers' environments by a/v software.

Of course the point here is that there is no one silver bullet. While whitelisting can be effective it is not the only answer. Anti-virus can find issues but on its own it leaves many gaps due to how vulnerabilities are identified and updates distributed. Security is about having a comprehensive plan targeted to your environment, utilizing process and tools that work together to mitigate the identified risks. Many of these ideas came out during the S&S conference and I will be posting some more on those thoughts in the next day or so.

Tuesday, February 5, 2013

The Future of Trust

We talk a lot about trust in the world of security. "Do we trust the code?" "Do we trust that the user is doing what they should?" "Do we trust that the email or website is safe?" But what do we mean by trust in these circumstances?

Trust was once one of those things that largely involved experience. It may be your experience or an acquaintance's experience, but it was based on experience. I put trust in a mechanic because my best friend recommended him based on his experience. My experience may change the degree of trust I have, but that initial trust is based on my friend's experience. I trust that my doctor will give me good advice when it comes to my healthcare because my experience tells me that he has not done anything to make me expect anything else.

In my mind trust has to do with expectations. Will the outcome of some event be what was expected and desired? When I receive an email from an address that indicates it is from a work colleague, will I discover that it actually is from that colleague, that they created and sent it to me, and that it has not been altered from the time they created it until the time I read it? Of course there are all kinds of elements to this idea of trust, but I believe that, fundamentally, trust comes down to the result of some action requiring me to "trust" something being what I expected to happen, given my belief in the factors around that trust decision.

Now this is where it gets interesting, as trust does come with "qualifiers". I may go to a restaurant based on a recommendation from a friend, but I may have a different expectation than when going to a restaurant I have visited in the past. This differing expectation may be the result of knowledge that I have different tastes or expectations as to quality than my friend. So my level of trust that I will have a GREAT meal may differ depending on why I chose this restaurant.

Of course these are very simplistic views of trust, largely based on known personal relationships. That is not the world we operate in today. Today, beyond the personal relationships, elements of trust are in just about every facet of our electronic life. Zappos' web servers trust me based on the fact that I know a username and password combination. Zappos raises the level of trust based on past successful transactions and knowledge that I demonstrate in the transaction process. I trust websites based on data presented to me about the SSL or TLS connection. The Hootsuite authentication server trusts the MyOpenID authentication service when I use MyOpenID to log on to my Hootsuite account. Whether it is machine to person, person to machine or machine to machine, there are elements of trust that affect us each and every day.

Of course businesses need to ensure that they are mitigating the risks associated with the trust they are putting into these transactions, based on many factors. These same businesses must also demonstrate to other businesses that they are implementing processes that will raise the trust level to one appropriate for the transactions. This may be in the form of strong authentication protocols, properly protecting data in transit and at rest, and effectively protecting the infrastructure from damage. A gap in the processes may allow bad transactions, a loss of data or a loss of service. A business that faces these exposures then faces the possibility of financial loss, brand damage or public exposure of the loss, which in turn has follow-on consequences.

Of course all of that is today, in a world which is vastly more impacted by technology than 100 years ago, or for that matter even 20 years ago. Now let's think about ten years from now ....

Today we have UAVs flying overhead, but ten years from now there will be UMVs (unmanned motor vehicles). What will be our expectation of the trust infrastructure then? I live in the DC area, and my expectation of manned vehicles is relatively low today, but at least I know someone is behind the wheel and can react. When these vehicles are unmanned, one will need to trust that the intelligence behind the vehicle will be able to react, but it will need reliable data from other vehicles, from highway signs and characteristics (Slow curve ahead ---- Steep hill ---- Bridge freezes before roadway) and possibly from some central facility for routing due to traffic, etc. The trust infrastructure here must be able to provide strong authentication and reliability of the data, and in many cases privacy of the data, as I may not want my home address sent in clear text across the airwaves.
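The "reliable data" requirement is, at its core, message authentication. As the simplest possible sketch, a shared-key HMAC lets a receiver detect any alteration in transit. A real vehicle network would use digital signatures from per-device keys rather than one shared secret; the key and messages here are invented for illustration:

```python
import hashlib
import hmac

def tag_message(key: bytes, message: bytes) -> str:
    """Sender attaches an HMAC tag computed over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def message_is_authentic(key: bytes, message: bytes, tag: str) -> bool:
    """Receiver recomputes the tag; any altered bit breaks the match."""
    return hmac.compare_digest(tag_message(key, message), tag)
```

A vehicle receiving "Slow curve ahead" with a valid tag can act on it; a forged or tampered sign broadcast simply fails verification and is discarded.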

We need to make sure that today we look at trust as the core element of what we do and what we are building. We have for too long added security, and the trust elements, to applications and business processes after the fact. These ideas of trust must be part of the base design principles. As we move forward with these new ideas of the automated world we will not be able to "learn from our lessons", as the impact of bad design decisions may be significant. Let's design security and trust in from the beginning.

Trust me on this


Wednesday, January 16, 2013

Is the Energy Sector Really a Cyber Target?

For years we have heard about cyber warfare - whether it was the characterization of a cyber Pearl Harbor or the cyber equivalent of 9/11. Over the last couple of years we have definitely seen an increase in targeted attacks. Some of them originated in Western nation states, while others originated in Middle Eastern, Eastern European or Asian nation states. We have even seen what appear to be pure cyber-criminal attacks that have targeted resources to manipulate (banks and their transactions) as well as data to sell. The most recent case that has come to our attention is the 5-year odyssey that is now known as Red October.

What has been interesting is that some of these attacks have been built to be very targeted against industrial control systems. People are familiar with these systems if they have looked at Flame or Stuxnet. Stuxnet in particular was part of a larger operation to leverage industrial control systems to halt the use of centrifuges. What many people do not realize is that these same control systems are implemented everywhere: power plants, manufacturing facilities, water filtration, gas pipelines, and the list goes on.

So what we have is a target in a broad environment space that has proven to be attackable. What does it take to attack these systems? An understanding of what type of system is implemented, and then basically access to the internet to get the command-and-control language that is used within the system. Some would say that it is not that simple, and that is largely correct, as the attacker still needs to get at the system, and these systems sit within environments that are protected by firewalls and the like.

That last statement reflects the false sense of security we seem to have lived behind for quite some time. DHS recently released a report indicating that 40% of cyber attacks were against the energy sector. One example was the discovery of advanced viruses/malware at two US energy plants late last year. Both attacks were apparently delivered through the same mechanism used to deliver Stuxnet (so not only are people re-using the code, they are re-using the methods). One can surmise that the two plant attacks could have been prevented by following some very basic security procedures, including keeping software up to date and not carrying drives between enclaves without safety mechanisms in place.

It is this last point that becomes the slap in the face to all of us. Congress has repeatedly refused to set security requirements for critical systems. There is an attitude that the government should not be telling private industry what to do. I do not necessarily disagree with that sentiment in most cases, but we are not dealing with most cases here. There are many critical infrastructure segments, but let's focus on energy. If proper security protocols are not followed, the attacks against the energy sector will continue to succeed, and to greater and greater degrees. Yes, that is bad for the energy sector because of reputation damage and actual financial loss, but guess what: I am using electricity right now to write this blog. You are using it to read it. Your bank is using it to perform the transactions that allow economic activity to flow. Hospitals are using it to keep people alive. Of course I could go on. It is time we recognize what has been demonstrated to be true, and it is time we respond to it. If Congress cannot pass a "here is how you fix the problem" bill, then let's look to California and its data loss bills and pass legislation that is not prescriptive about how to protect your infrastructure but holds companies HIGHLY accountable for failing to protect it. There is just too much at risk.

Stepping off the soap box.

Friday, January 4, 2013

Happy New Year ... and you could still end up being a target

Well it is a new year and with that we can all expect to face new challenges.

That may sound doom-and-gloom-esque, but it is not intended to. I truly believe that the problems we have faced, and will continue to face, as professionals can largely be mitigated through the thoughtful application of combined intelligence and careful planning. Let's be honest - most of the problems we saw over the last couple of years did not stem from radical new technical advancements but from the application of existing techniques in different ways, leveraging open doors that people simply forgot about or that were created by poor process implementation. Guess what - it seems 2013 will not be all that different.

The first "breach" news story of 2013 is an attack on the Google channel. Whether it started as a planned attack against that channel is unknown, but when we look at this breach we will see that poor process implementation allowed it to happen.

So what are we talking about here? Google just announced that it had discovered certificates illegitimately issued under its name. What is different here is that the credentials apparently were not issued through a breach of a CA or its RA, but through poor process. The CA involved, Turktrust, issued two certificates in August 2011 with bits set that made them capable of signing certificates - effectively turning them into intermediate CAs. According to Turktrust, this was caused by test certificate profiles being loaded into the production environment. The two certificates were generated before anyone noticed the profile error. Once the error was noticed, the profiles were removed from production. What did not happen: no one went back through the logs to identify credentials issued under those profiles.

Now the skeptical may say that this last omission could not have been an oversight, but was instead part of an intended process to get certificates issued that would allow "someone" to monitor all Google traffic coming from the domain, including secured Gmail traffic. That is certainly possible, but it is obvious in either case that proper process was not followed; had the process been properly audited, this would have been caught. The things that never should have happened:

  • Test profiles loaded into a production environment - easily prevented through proper checking of server identification and by never moving profile files across platforms
  • Generation of production certificates using test profiles - should not happen, as production RAs should not even have test profile names available to select
  • No audit checks - once any issuance error is discovered, all issuance logs should be checked and every certificate verified to ensure issuance was accomplished per policy.
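That last check is also the easiest to automate. Here is a minimal sketch of the idea - the log format, field names, and profile names are all invented for illustration, not Turktrust's actual system:

```python
# Hypothetical sketch: replay a CA's issuance log and flag any certificate
# issued under a test profile.  A real CA's audit log format will differ.

TEST_PROFILES = {"test-ee-profile", "test-subca-profile"}

def audit_issuance(log_entries):
    """Return the log entries whose certificate was issued under a test profile.

    Each entry is a dict such as:
    {"serial": "1A2B", "subject": "CN=example.com", "profile": "prod-tls-server"}
    """
    return [entry for entry in log_entries if entry["profile"] in TEST_PROFILES]
```

Every entry such a check returns is a revocation candidate. In the Turktrust case, running something like this right after the profiles were pulled from production would have surfaced the two rogue certificates immediately, instead of leaving them live for over a year.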

Now it is one thing to talk about what Turktrust did wrong, but the reality is that we, as users and relying parties, need to mitigate our own risk - whether this was a set of errors or an intended process to create a backdoor into the Google channel. That means being able to quickly identify where we hold certificates that may be at risk: in web servers, browsers, root stores, local Java stores, routers, VPN devices, network devices, or anywhere else. Personally, I went through and double-checked my new Android Jelly Bean install to make sure I was comfortable with its root stores. For organizations, however, this is a larger task, and systems that monitor and manage your certificates are an easy and important way to mitigate risk - one that seems to be getting more and more critical given the number and variety of certificate attacks that keep appearing.
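As a starting point for that kind of inventory, here is a rough sketch that fingerprints every PEM certificate found under a directory and diffs the result against a known-good baseline. The directory layout and the baseline set are assumptions for illustration, not any particular product's behavior:

```python
# Hypothetical sketch: inventory the certificates on a host by computing
# the SHA-256 fingerprint of each PEM certificate under a directory, then
# diffing against a baseline of fingerprints you have already vetted.
import base64
import hashlib
import re
from pathlib import Path

PEM_RE = re.compile(
    b"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----", re.S
)

def fingerprints(pem_bytes):
    """Yield the SHA-256 fingerprint (hex) of each certificate in a PEM bundle."""
    for blob in PEM_RE.findall(pem_bytes):
        der = base64.b64decode(b"".join(blob.split()))  # strip newlines, decode to DER
        yield hashlib.sha256(der).hexdigest()

def diff_store(store_dir, baseline):
    """Return fingerprints found on disk that are absent from the baseline."""
    found = set()
    for path in Path(store_dir).rglob("*.pem"):
        found.update(fingerprints(path.read_bytes()))
    return found - baseline
```

Anything `diff_store` returns is a certificate that appeared since the last audit and deserves a look. A real monitoring system would also cover non-PEM stores (Java keystores, Windows cert stores, device configs), but the diff-against-baseline principle is the same.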

Finally - this is not a "do not use certificates" or "CA providers are bad" type message - this is a "Take responsibility for your environment ... know who you trust and why ... and ensure that you understand when and why things change by monitoring the environment" message.

Maybe that last part is a good New Year's resolution.