Saturday, February 23, 2013

Trust is the New Attack Vector

OK, it may not be the "new" attack vector, but it has become a popular one to exploit.

So what does one mean when they say "trust is an attack vector"? Let's go back to one of the simplest ways that has been used to garner information about people or companies - the phone call. Your office phone rings, and when you answer it the person on the other end immediately starts talking in terminology that you are comfortable with and dropping names of people in your organization. At this point a majority of people will drop their guard, at least to some level. If the questions begin to probe for information that you think this person, whom you have begun to trust, should already know, then the guard begins to go back up. This is the simplest form of the "trust attack vector", and we commonly refer to it as social engineering.

In the electronic world it is a bit different: there is no person to interact with, and secure protocols exist so we know the entity we are dealing with ... or do we? Over the last couple of years there have been a number of attacks that utilize this assumed trust to deliver a malicious payload by doing much the same as the "social engineer": they provide enough information to convince the process they are dealing with to trust the transaction. In the online world that first trust transaction will usually involve getting in the door of the system or application.

So how does this really work? Trust in the online realm is based around a shared secret or a cryptographic operation: either I have a password to use, or there is a common shared cryptographic key or a public/private key pair that is used to establish the transaction. More and more systems are utilizing the cryptographic option, since passwords can be directly attacked and are hard to share when that is needed (of course symmetric key distribution can also be a challenge, but that is a side conversation). So today many servers and applications use public/private key pairs to establish trust. These key pairs are used to establish secure channels such as SSL/TLS or IPsec tunnels, and SSH connections are commonly authenticated with key pairs as well (the session itself is protected with negotiated symmetric keys). If these systems are properly configured and use strong algorithm choices and key sizes, the cryptographic aspect is very difficult to attack. In reality the cost of such an attack is rarely worth the value, so attacking the cryptography itself is rarely seen (exceptions are cases of weak crypto, such as in the Flame attack). So instead an attacker will go after a system to gain access to the keys themselves. This can be done by going after the Registration Authority that is the interface to key issuance (the Comodo attack) or by hacking into the system through other attack vectors to gain access to the keys, or to the process that uses the keys to sign data (the Adobe attack). Once they have this access they can generate attacks against other systems.
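To make the mechanics concrete, here is a minimal sketch of signature-based trust using Python's cryptography package and a bare key pair (real code signing chains certificates up to a trusted Root, but the trust decision reduces to the same verify operation). The point to notice is that the verifier proves only that the private key signed the bytes; whoever holds that key, owner or attacker, produces equally "trusted" output:

# Minimal sketch of signature-based trust (pip install cryptography).
# Illustrative only: real code signing uses certificates chained to a
# trusted Root, not a bare key pair.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# The publisher's key pair. Whoever holds the private key can produce
# signatures that every relying party will accept.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def is_trusted(blob: bytes, sig: bytes) -> bool:
    """Verification proves only that the private key signed the blob;
    it says nothing about whether the blob is malicious."""
    try:
        public_key.verify(sig, blob, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

code = b"legitimate application binary"
signature = private_key.sign(code, PSS, hashes.SHA256())
print(is_trusted(code, signature))             # True

# An attacker with the stolen private key signs whatever they like:
malware = b"malicious payload"
stolen_sig = private_key.sign(malware, PSS, hashes.SHA256())
print(is_trusted(malware, stolen_sig))         # also True

This is exactly why the keys, rather than the math, are the target: the verifier cannot tell a legitimate signature from one produced with a stolen key.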

A recent example of this is the Bit9 attack. Recent data suggests that the attack was initiated with a SQL injection attack against an internet-facing web server that had been stood up with an old certificate. This attack planted a rootkit that was signed by a stolen certificate. Once inside Bit9, the attackers used Bit9's signing keys to sign their own malware. Bit9 customers that were targeted would believe that the code they were executing was trusted, as it was signed by the Bit9 keys. Of course this is just one example of keys and certificates being used to obfuscate the trust chain.

So trust is being used as an attack vector - what can be done about it? There are a number of things:
  1. Know what is in your network when it comes to the trust infrastructure (a small inventory sketch follows this list). This means:
    1. Know what certificates and keys are being used and why
    2. Ensure that cryptographic assets used in your environment meet your policy for strength, lifetime and algorithmic use
    3. Ensure every cryptographic asset has an owner assigned to it, and keep that data up to date
  2. Clean your Root stores. In any organization you will have a variety of Root stores. These Root stores are used by applications and the operating system to help build the trust chain. The reality is that the off-the-shelf Root stores delivered with applications and operating systems contain many more Roots than you will ever encounter. It is important for an organization to trim down those Root stores and maintain oversight of them, to ensure that Roots that should not be in the stores are not introduced or re-introduced.
  3. Maintain a central view of the trust environment. The two pieces above can, in themselves, be challenging, so it is important that you have central oversight of the environment and are able to recognize changes and react accordingly.
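As a starting point for the inventory in item 1, a small script can pull certificate details directly off your own endpoints. A minimal sketch in Python (the host names are hypothetical placeholders, and the cryptography package is assumed to be installed):

# Minimal certificate-inventory sketch (Python 3, pip install cryptography).
# The HOSTS list is hypothetical; point it at your own servers.
import socket, ssl
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

HOSTS = [("internal-app.example.com", 443), ("mail.example.com", 443)]

ctx = ssl.create_default_context()
ctx.check_hostname = False          # we are inventorying, not validating
ctx.verify_mode = ssl.CERT_NONE

for host, port in HOSTS:
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)   # raw DER certificate
    cert = x509.load_der_x509_certificate(der)
    key = cert.public_key()
    key_size = key.key_size if isinstance(key, rsa.RSAPublicKey) else "n/a"
    print(f"{host}: subject={cert.subject.rfc4514_string()}, "
          f"expires={cert.not_valid_after}, "
          f"sig_alg={cert.signature_hash_algorithm.name}, "
          f"rsa_bits={key_size}")

Feeding the output into a simple report against your policy (minimum key size, approved algorithms, expiry windows) covers items 1.1 and 1.2; attaching an owner to each row covers 1.3.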
As I have always said, there is no silver bullet for security other than disconnecting all external communications and interaction from a box. Outside of that, the best actions are those that mitigate risk, and two important steps to mitigating risk are knowing what is in your network now and knowing what gets introduced into it. This is especially true when it comes to trust.

Wednesday, February 13, 2013

Some Thoughts from Suits & Spooks DC

It was an interesting two days at the end of last week. Enough "security professionals" to fill a room and then some at the Waterview Conference Center in Arlington, overlooking the Potomac River. All of these people were brought together by Jeffrey Carr as part of his ongoing Suits and Spooks conference series. Jeffrey always has a great set of speakers, and more often than not the bringing together of such diverse talents, backgrounds and personalities creates some intense discussions. Suits and Spooks DC was no different.

There was a lot of discussion during the two days on the international aspects of cybersecurity: the ongoing risk of state-sponsored activities for intelligence collection and IP theft, along with the international efforts on reaching agreement on cybercrime cooperation, as discussed by ITU representatives. We also had the opportunity to hear from people who were involved with some of the international cases, including the Russian government efforts in Georgia and Estonia as well as the recently published Red October attacks. Other sessions brought up the Duqu/Flame/Stuxnet series of attacks and shared some of the research done in investigating them. In all of the attack discussions it was clear that the speakers and the participants at the conference felt that the majority of large-scale attacks were not based on new vulnerabilities or new approaches, but on implementations of existing attack vectors with some modifications. In many cases attacks succeeded through combinations of spearphishing and the exploitation of existing vulnerabilities such as SQL injection.

One of the other interesting aspects of the conference was an ongoing, and at times heated, discussion on the idea of cyber-vigilantism. Many at the conference felt that the government has not moved, and some felt is incapable of moving, fast enough to respond to cyberthreats. By the time the government is ready to take action, much damage is feared to have been done and, like those that are out buying Day 1 vulnerabilities, the ship has already sailed. To address this issue some felt that cyber-vigilantism, in varying degrees, would allow organizations to respond in a near-immediate manner. The discussion involved former government and law enforcement personnel, those at senior levels within private corporations and lawyers in attendance, as well as the general unidentified masses. Many valid points were brought up, but the thought that seemed to polarize most was that attacking an adversary without clear knowledge of who your adversary is would be a serious mistake. Not knowing who you are interacting with makes it impossible to develop an effective strategy, and without an effective strategy you are likely to simply instigate a cyber-arms race with yourself as a target. That being said, there did seem to be broad agreement that the private sector needs to be able to act when attacked: action could, or should, be taken to mitigate the attack and stabilize the environment so that business operations can continue. This should be done in a manner that preserves evidence for future civil or criminal prosecutorial action or government involvement. It was a continued and, at times, interesting discussion.

One of the other presentations I thoroughly enjoyed, and felt was very informational from a business operations perspective, was by Josh Corman and David Etue. They quickly laid out a CxO-level view of how to look at cyber threats and how to weigh response investment. It piqued many attendees' interest and certainly warrants a further look, as it is a methodology that, even in the shortened presentation, seemed to take a logical business view of cybersecurity.

The lessons that came out of this conference are interesting given the original conference premise - Cyber Offensive Strategies. I think many left the conference with the view that building your organization's cyber plan around the idea that "Offense is the best Defense" is not the best investment. Instead it was obvious that many attacks today rely on organizations missing the simple things. It was interesting that the conference started on the day that Bit9 announced their breach, and it appears from Bit9's own admission that theirs was a case of missing the simple thing of installing their own software on all of their servers. The Bit9 attack itself is still being investigated, but the methodology of abusing the Bit9 code signing service is very familiar to those that saw the Adobe attack late last year.

So what is old is new again, and we must be diligent about our security planning and operations. We must know and understand what is in our networks and what we should trust. We should patch vulnerabilities when the appropriate patch is available and, in the meantime, mitigate against those vulnerabilities. We must pay attention to the attack vectors that are being used as part of our ongoing awareness, and build appropriate actions into our plans. We must understand our priorities as they relate to our assets and resources, understand who is coming after them, and plan and defend proportionally. Those are the things that will help us mitigate the risks we have. If we want to extend that help, then the best thing we can do is to share what happens to us and share best practices for mitigating the risks. Think of it as paying it forward.

Monday, February 11, 2013

What is old is new again .... again

The timing could not be more interesting. Friday and Saturday I spent with a bunch of "security professionals" at Jeffrey Carr's Suits & Spooks. Of course one of the topics that came up was the Bit9 hack, which KrebsOnSecurity did a great job of highlighting. The hack was fresh, of course, but it also highlighted something else that was talked about frequently over the two days ... not all attacks are new. In fact numerous discussions highlighted the fact that most attacks are based on exploiting existing vulnerabilities that have not been patched or using existing techniques that still work.

Bit9 is still being looked at, but it appears that the attackers' goal was to gain access to the digital signing capabilities within Bit9 in order to sign their malware. That would allow the signed malware to run unchallenged in a Bit9 customer's environment. A customer who had thoroughly drunk the Bit9 kool-aid may not even have had anti-virus running. It is interesting to note that the Bit9 blog had just posted an article on why a/v is not effective, yet it seems the malware was caught in one of their customers' environments by a/v software.

Of course the point here is that there is no one silver bullet. While whitelisting can be effective, it is not the only answer. Anti-virus can find issues, but on its own it leaves many gaps due to how vulnerabilities are identified and updates are distributed. Security is about having a comprehensive plan targeted to your environment, utilizing processes and tools that work together to mitigate the identified risks. Many of these ideas came out during the S&S conference, and I will be posting some more on those thoughts in the next day or so.

Tuesday, February 5, 2013

The Future of Trust

We talk a lot about trust in the world of security. "Do we trust the code?" "Do we trust that the user is doing what they should?" "Do we trust that the email or website is safe?" But what do we mean by trust in these circumstances?

Trust was once one of those things that largely involved experience. It may be your experience or an acquaintance's experience, but it was based on experience. I put trust in a mechanic because my best friend recommended him based on his experience. My experience may change the degree of trust I have, but that initial trust is based on my friend's experience. I trust that my doctor will give me good advice when it comes to my healthcare because my experience tells me that he has not done anything to make me expect anything else.

In my mind trust has to do with expectations. Will the outcome of some event be what was expected and desired? When I receive an email from an address that indicates it is from a work colleague, will I discover that it actually is from that colleague, that they created and sent it to me, and that it has not been altered from the time they created it until the time I read it? Of course there are all kinds of elements to this idea of trust, but I believe that, fundamentally, trust comes down to whether the result of some action requiring me to "trust" something is what I expected to happen, given my beliefs about the factors around that trust decision.

Now this is where it gets interesting, as trust does come with "qualifiers". I may go to a restaurant based on a recommendation from a friend, but I may have a different expectation than when going to a restaurant I have visited in the past. This differing expectation may be the result of knowledge that I have different tastes, or different expectations as to quality, than my friend. So my level of trust that I will have a GREAT meal may differ depending on why I chose this restaurant.

Of course these are very simplistic views of trust, largely based on known personal relationships, and that is not the world we operate in today. Today, beyond the personal relationships, elements of trust are in just about every facet of our electronic life. Zappos' web servers trust me based on the fact that I know a username and password combination. Zappos raises the level of trust based on past successful transactions and knowledge that I demonstrate in the transaction process. I trust websites based on data presented to me about the SSL or TLS connection. The Hootsuite authentication server trusts the MyOpenID authentication service when I use MyOpenID to log on to my Hootsuite account. Whether it is machine to person, person to machine or machine to machine, there are elements of trust that affect us each and every day.
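To ground the username/password case: what the server actually trusts is my ability to demonstrate knowledge of a shared secret. A minimal sketch, assuming a Python back end with illustrative parameters (this is not Zappos' actual implementation), of how a server might check that secret without ever storing it:

# Minimal sketch of shared-secret trust: the server stores only a salted,
# slow hash and re-derives it at login time. Parameters are illustrative.
import hashlib, hmac, os

def enroll(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest          # store both; the password itself is discarded

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)   # constant-time compare

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("a wrong guess", salt, stored))                  # False

The trust decision is simply "did the digests match"; everything beyond that (transaction history, demonstrated knowledge) is the server raising or lowering its confidence in what that match means.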

Of course businesses need to ensure that they are mitigating the risks associated with the trust they put into these transactions, based on many factors. These same businesses must also demonstrate to other businesses that they are implementing processes that raise trust to a level appropriate for the transactions. This may be in the form of strong authentication protocols, properly protecting data in transit and at rest, and effectively protecting the infrastructure from damage. A gap in these processes may allow bad transactions, a loss of data or a loss of service. A business that faces these exposures then faces the possibility of financial loss, brand damage or public exposure of the loss, which in turn has follow-on consequences.

Of course all of that is today, in a world which is vastly more impacted by technology than 100 years ago, or for that matter even 20 years ago. Now let's think about ten years from now ....

Today we have UAVs flying overhead, but ten years from now there will be UMVs (unmanned motor vehicles). What will our expectation of the trust infrastructure be then? I live in the DC area, and my expectation of manned vehicles is relatively low today, but at least I know someone is behind the wheel and can react. When these vehicles are unmanned, one will need to trust that the intelligence behind the vehicle will be able to react, but it will need reliable data from other vehicles, from highway signs and road characteristics (slow curve ahead, steep hill, bridge freezes before roadway) and possibly from some central facility for routing around traffic. The trust infrastructure here must be able to provide strong authentication and reliability of the data, and in many cases privacy of the data, as I may not want my home address sent in clear text across the airwaves.
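To make that concrete, here is a speculative sketch, using today's primitives, of what protecting a single vehicle-to-road message might look like. An AEAD cipher such as AES-GCM provides authenticity, integrity and privacy in one operation; the key provisioning, which is the real heart of the trust infrastructure, is assumed away here:

# Speculative sketch: authenticating and encrypting a road-to-vehicle
# message with an AEAD cipher (AES-GCM, pip install cryptography).
# Key distribution is the hard part and is waved away: assume the trust
# infrastructure has already provisioned a shared session key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # provisioned out of band
aead = AESGCM(key)

message = b"slow curve ahead, reduce speed to 35"
header = b"sign-id=4217"                    # authenticated but not encrypted
nonce = os.urandom(12)                       # must never repeat for a key

ciphertext = aead.encrypt(nonce, message, header)

# The receiving vehicle gets integrity, authenticity and privacy in one
# operation; any tampering with ciphertext or header raises InvalidTag.
plaintext = aead.decrypt(nonce, ciphertext, header)
print(plaintext.decode())

The primitive is the easy part. Deciding which signs, vehicles and central facilities get keys, and how a vehicle learns to trust them at highway speed, is the trust-infrastructure problem.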

We need to make sure that today we look at trust as the core element of what we do and what we are building. We have for too long added security, and the trust elements, to applications and business processes after the fact. These ideas of trust must be part of the base design principles. As we move forward with these new ideas of the automated world we will not be able to "learn from our lessons", as the impact of bad design decisions may be significant. Let's design security and trust in from the beginning.

Trust me on this