Tuesday, June 28, 2011

Report from NSTIC Privacy at MIT

I have attended the first two NSTIC conferences and I think I can safely say that things are ..... interesting. My first observation is that there is obviously still a lot of work to be done. In the governance and privacy areas, I firmly believe the one big task is to take what has been accomplished elsewhere, map those efforts to see where they intersect, and then decide whether any of it is useful for NSTIC. I say this because there is good work going on in a number of areas, but trying to map each effort to NSTIC individually, or to pick out the useful pieces one by one, could quickly become overwhelming. I hope that as responses to the NOI come in, a process like this will help ease the burden.

The conference itself was thought provoking. Two ideas struck me in particular: data ownership and pseudonymity versus anonymity. There were many others, but these two stayed with me.

On the ownership side there was much discussion about who owns data. Certainly it is easier to define ownership of some data elements, things like credit card numbers, social security numbers, birthdate, address, weight, height and so on, but what about other types of identifying data? When I buy something online at apple.com, is the fact that I bought something my data element, or does it belong to Apple? Certainly Apple needs to know whom to charge and what and where to ship, but once the transaction is complete, do they need to keep that data if I do not want them to? Should they be allowed to tell Verizon that I just bought a 3G iPad 2 with a Verizon chip in it? These "data breadcrumbs" are left by all kinds of transactions, and the question of who owns them is interesting.

But of course it is not just ownership - once I have ownership, how do I protect that data from improper use, or for that matter any use I do not want? This is an interesting privacy challenge, and does it step on things like tracking (web tracking is being looked at legislatively today)? Does it also step on business models? Experian, TransUnion and others keep data on me that they use to provide market targets to other service and product providers. What happens to those entities, and the downstream providers who use the information, if we change how those breadcrumbs get picked up?

The other interesting thread was pseudonymity versus anonymity. For those of us who believe we can be anonymous on the Internet, I present an excerpt from an LA Times blog on the possible exposure of LulzSec: "The A-Team said LulzSec's members were a product of the hacking culture found on the Website 4chan, which is rooted in anonymity, making some feel invincible. .... "The Internet by definition is not anonymous," the group said. "Computers have to have attribution. If you trace something back far enough you can find its origins.""

So do we accept that we will at best be pseudonymous? Does that lead to multiple identities, or multiple personae within a single identity? In either case it becomes critical to prevent linkage between those personae unless the linkage is driven by the identity owner. This idea is one I will be thinking about some more - it is definitely interesting.
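To make the linkage point a bit more concrete, here is a minimal sketch of one way an identity provider could hand each relying party a different, stable pseudonym for the same person, so that two sites cannot correlate their records just by comparing identifiers. The names and values are purely illustrative, not any particular product's API.

```python
import hmac
import hashlib

def pairwise_pseudonym(master_secret: bytes, user_id: str, relying_party: str) -> str:
    """Derive a stable pseudonym that is unique to one relying party.

    Two relying parties receive different values for the same user, so they
    cannot link accounts by comparing identifiers; linkage happens only if
    the identity owner (or the provider, at the owner's request) reveals
    the mapping.
    """
    message = f"{user_id}|{relying_party}".encode()
    return hmac.new(master_secret, message, hashlib.sha256).hexdigest()

secret = b"held-only-by-the-identity-provider"  # illustrative value
print(pairwise_pseudonym(secret, "alice", "shop.example.com"))
print(pairwise_pseudonym(secret, "alice", "health.example.org"))  # a different value
```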

There were many more great ideas shared and I would encourage anyone with interest to visit the NIST NSTIC site to follow the updates.

- Posted using BlogPress from my iPad

Sunday, June 26, 2011

Re-application of Technology

As I was thinking about the upcoming NSTIC Privacy Conference, my mind wandered to some of the technical challenges that exist. Some of these are things we have been discussing for a while, and they are directly related to privacy. One core idea is that in our online lives we have different degrees of relationships with other entities. Some relationships require a high degree of assurance as to who I am, accessing my health records online for example, while others do not (commenting on blogs is one of them). That being said, the question becomes: how do I maintain a single identity but use it differently in different places?

There are a couple of technical solutions that are out there today that are being discussed:

  • Backend Attribute Exchange (BAE) leverages SAML 2.0 and a cooperative architecture to build a system whereby a relying party can request further information about an authenticated entity using a set of standard protocols (a minimal sketch of this query pattern follows the list). The system works very well in an environment like first responders, where the community is well defined and the sources of attribute information are well recognized. I also think it can be extended to more general use, but a broader infrastructure for identifying Attribute Providers needs to be architected, along with a mechanism for predefined release within those providers. I do believe all the pieces are there, and I know some of this work is ongoing, so we may not be that far away.
  • uProve is driven largely by Microsoft, but there are a number of open working groups, including one on Claims Agents, looking at open source variants of elements of the system. The basic tenet of uProve, from an architecture perspective, is not that different from BAE. Granted, the underlying technology is very different, but the architecture of a relying party communicating with an end entity for authentication and then using a third party to validate claims is not that different from what BAE is achieving. 
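To make the BAE-style pattern a little more concrete, below is a minimal sketch of the kind of SAML 2.0 AttributeQuery a relying party could send to an attribute provider after authenticating an entity. It is deliberately simplified: real BAE exchanges are signed and carried over agreed transport profiles, and the issuer, subject and attribute names here are placeholders, not values from any actual deployment.

```python
import uuid
from datetime import datetime, timezone
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_attribute_query(issuer: str, subject_name_id: str, attribute_name: str) -> bytes:
    """Build an (unsigned, simplified) SAML 2.0 AttributeQuery for one attribute."""
    query = ET.Element(f"{{{SAMLP}}}AttributeQuery", {
        "ID": f"_{uuid.uuid4().hex}",
        "Version": "2.0",
        "IssueInstant": datetime.now(timezone.utc).isoformat(),
    })
    ET.SubElement(query, f"{{{SAML}}}Issuer").text = issuer
    subject = ET.SubElement(query, f"{{{SAML}}}Subject")
    ET.SubElement(subject, f"{{{SAML}}}NameID").text = subject_name_id
    ET.SubElement(query, f"{{{SAML}}}Attribute", {"Name": attribute_name})
    return ET.tostring(query)

# Placeholder names -- a real exchange signs the query and posts it to the
# attribute provider's agreed endpoint.
print(build_attribute_query("https://rp.example.gov",
                            "first.responder@example.gov",
                            "urn:example:attributes:certification-level").decode())
```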
There is a third one that comes to mind, and it relates to the idea of re-application of technology. ePassport systems are based on either Basic Access Control (BAC) or Extended Access Control (EAC) mechanisms for access to information on the ePassport. At Entrust, where I work, we have implemented both types of solutions in multiple countries and have been involved in the standards work around ePassports for a number of years. As the BAE and uProve technologies came to the surface, I began to think about how EAC shares the same basics in terms of architecture. The EAC architecture is more closed today because of its application, but with the ongoing definition of EAC 2.0 there are a lot more similarities. Can EAC 2.0 be extended further, so that its protocols reach beyond the chip-reader communication set and the same ideas apply to chip-relying party communication in general? Does this give the end entity greater control over information release, and do so in a self-contained environment? Today, information release through uProve and BAE has release notification at the attribute service, but could EAC bring that to the credential holder in a self-contained way?
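As a purely conceptual sketch of that last question (this is not the EAC protocol itself, which establishes rights through terminal and chip authentication with card-verifiable certificates), the point is that the release decision and the release record can live with the credential: the chip checks the requesting party's certified access rights before handing over a data group, and it keeps its own log.

```python
from dataclasses import dataclass, field

@dataclass
class Credential:
    """Conceptual stand-in for a chip that controls and records its own data release."""
    data_groups: dict                      # e.g. {"DG1": "MRZ data", "DG3": "fingerprints"}
    release_log: list = field(default_factory=list)

    def release(self, requester_id: str, certified_rights: set, requested: str):
        # In real EAC the chip verifies a card-verifiable certificate chain to
        # establish these rights; here they are simply passed in for illustration.
        if requested not in certified_rights:
            self.release_log.append((requester_id, requested, "denied"))
            raise PermissionError(f"{requester_id} is not authorized for {requested}")
        self.release_log.append((requester_id, requested, "released"))
        return self.data_groups[requested]

card = Credential({"DG1": "machine readable zone", "DG3": "fingerprint templates"})
print(card.release("border-control-terminal", {"DG1", "DG3"}, "DG3"))
# A relying party certified only for DG1 would be refused DG3, and the
# holder-side log records both outcomes.
```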

I am planning to explore this a bit more over the next two days at the NSTIC conference in Cambridge, so look for a follow-up.

Friday, June 24, 2011

Why PIV-I?

I wanted to follow up on the discussion of PIV-I that I started a couple of weeks back. That discussion centered on what PIV-I is. As is usually the case, the "what" has to be paired with a "why". Over the last couple of weeks I have also brought up a couple of other issues that tie in here, which can be summarized as "authentication architectures".

The last few months have seen a significant proliferation of attacks against authentication systems: the RSA SecurID breach; a defense contractor having its Active Directory administration services breached; and the Comodo attack against an administrative function. These attacks, and more, were intended to achieve one thing: to get deeper into systems than a normal attack would and potentially to extend the attack's reach beyond the initial target.

So how does this play to PIV-I?

PIV-I does a number of things: it defines a standard issuance process; it defines a standard token; and it defines a standard token interface. The PIV-I specification lets an organization readily see that another organization has implemented an authentication mechanism that is measurable. Once you know the mechanism is measurable, you can evaluate how it meets your requirements and decide whether you can trust those credentials, and for what purpose. A good example is US Government agencies looking to PIV-I as a way to work easily with external parties: contractors, suppliers, partners and other governments such as State and Local. A concrete instance is that DOD has created a list of commercial SSPs issuing PIV and PIV-I credentials that can be trusted within DOD. So there is a business reason to use PIV-I: a strong mechanism that allows digital interoperability at a high degree of assurance.

There is also a business reason for PIV-I based on the above-mentioned attacks. A PIV-I token is a FIPS-validated credential with strong protection mechanisms: too many wrong PIN entries and the token locks; private keys cannot leave the card; it is a PIN-protected device; and it has a biometric capability for additional assurance requirements. On top of these, the credential is issued through a process that is well defined, requires in-person proofing, and is separate from the relying party application. This last piece is the important element, as it provides a safety barrier in the case of an attack against the authentication system, as discussed in the last post. Another good read on the subject can be found here.

So PIV-I is an authentication mechanism that provides a well-defined issuance process and a strong credential for identity (physically and logically), and it mitigates some of the risk factors exposed by the attacks against authentication systems over the last several months.

Next will be a discussion on using PIV-I in a solutions architecture.

Tuesday, June 21, 2011

APT and Layered Authentication

I was recently speaking with someone about their infrastructure and an issue they were addressing. Their infrastructure is based around Active Directory: a standard implementation that uses AD to identify end entities, grant privileges and push policy. The issue is that they are faced with an Advanced Persistent Threat against this existing AD implementation. The question becomes: how does one move from the existing infrastructure to a new one while safely porting the existing end entity population (machines and persons) and implementing a strategy that mitigates the risk of ongoing and future APTs?

It seemed to me that the initial strategy has to be one of segregating the authentication infrastructure. In a traditional layered security approach we speak of separation of duties, roles and in some cases networks, but I infrequently see people treat the authentication infrastructure as a separate element - it is normally embedded in some other element, usually preceded by comments like "well, Microsoft gives me a CA embedded in AD". No knock against Microsoft, but we need to be careful about architectural implementations.

The authentication infrastructure is a key component of usability but also of defense. Knowing who is accessing a resource is critical, and in the case of an APT even more so, since the traditional ways to protect a resource may already be compromised. There was a very good example of this within the last year when a major technology/defense firm had its AD compromised. In the process, the attacker was able to act as an administrator of the certificate authority that was part of the AD implementation and to issue credentials that allowed broader access, possibly to a wider audience.

In light of these persistent attacks it is becoming critical that additional precautions be taken. Layering the authentication infrastructure is one element of this; it allows migration of the core elements with less impact on end entity credentials. Similarly, if the authentication infrastructure is compromised, as in the RSA breach, you can take a layered approach to credential replacement. This is more challenging if you do not know how deep the attack went after the credential breach, which is another reason to stress the importance of the authentication infrastructure. One step beyond that would be to look at layers of credentials within your infrastructure, allowing levels of access to resources based on risk and the type of credential presented .... but that is another topic.
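Just to illustrate the shape of that last idea (the names and tiers below are invented for the example, not a recommendation): map each credential type to an assurance level and each resource to a minimum level, and make that decision outside any single directory or domain controller.

```python
# Illustrative assurance levels per credential type and minimum levels per resource.
CREDENTIAL_ASSURANCE = {"password": 1, "otp_token": 2, "smart_card_pki": 3}
RESOURCE_MINIMUM = {"wiki": 1, "email": 2, "ca_admin_console": 3}

def access_allowed(credential_type: str, resource: str) -> bool:
    """Allow access only when the presented credential meets the resource's risk tier."""
    return CREDENTIAL_ASSURANCE.get(credential_type, 0) >= RESOURCE_MINIMUM.get(resource, 99)

assert access_allowed("smart_card_pki", "ca_admin_console")
assert not access_allowed("password", "ca_admin_console")  # a password alone is not enough
```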


- Posted using BlogPress from my iPad

Saturday, June 18, 2011

Is security less important to some companies?

I know the above question is a dangerous one. The answer, in a general sense, is yes: security is more important to some companies than to others. But my question is a bit more esoteric.

I read an article yesterday saying that while RSA has come out to say it will replace SecurID tokens for customers, it will take months for that to happen, and in fact only about a third of customers will have their tokens replaced.

Yes, I, like most people, understand that there are economic sensibilities here, and that there may even be uncertainty about the breadth of the breach and what was taken - but really - only a third will have their tokens replaced? If you are in the other two-thirds, what are you thinking? If it were me I would be thinking .... Is RSA certain I am not vulnerable? Will they warrant that ... with insurance?

Given the mixed messages coming from RSA on the breach overall (Jeffrey Carr did a couple of great pieces on this - see this one), one has to wonder about the strategy behind the decision on who gets their tokens replaced.

It will be interesting to see what follows from all of this. Token based OTP has been a ubiquitous element of multi-factor authentication for some time and one wonders if this whole RSA mess hurts the market or just RSA. Only time will tell.


- Posted using BlogPress from my iPad

Thursday, June 16, 2011

Perfect

I went out just before 6 AM for a five mile run .... I have a few gravel roads I can run on and took one of those today. 

As I ran, the geese were taking off from a field as light raindrops fell on my face. The sky was cloud covered to the west and a bit overhead, but to the east it was clear enough that you could see the sun coming up. I looked to the south and saw a near perfect rainbow ... end to end. I watched that rainbow for the next mile.

As I ran by the school on the hill the sun was now on my right ... I looked left and I could see my shadow on the green field by the school and arching over the school ... framing it ... the perfect rainbow.

I think I found new motivation to get out and run.

Tuesday, June 14, 2011

What is PIV-I?

I have been involved with credentialing in the Federal Government for many years, coming on multiple decades to be honest, and it has been an interesting ride. Over the last few years there has been a substantial change, starting with the signing of HSPD-12 in 2004. What HSPD-12 did was codify credential issuance within the Federal Government. It brought in not just a technical specification but also a process specification. The combination of these two elements was intended to address the differences between agencies' issuance processes as well as to allow interoperability of credentials, at a logical and physical level, between agencies.

As part of the extended process, the working groups within the Federal Identity, Credential and Access Management (FICAM) community also developed a set of requirements that allow organizations external to the Federal Government to create credentials based on a well-defined set of standards, from both a technical and an issuance perspective. This is what became PIV-I (PIV-Interoperable).

So what are the differences between a PIV card and a PIV-I card? I hate to give a bulleted list but I am not sure how else to do it so here goes: (Note the left side will always be PIV and the right PIV-I)

  • Identity Verification: While both PIV and PIV-I strongly identity-proof individuals, there is a slightly higher requirement for PIV. The PIV identity verification process requires that the end entity have a National Agency Check with Inquiries (NACI). 
  • 10 vs 2: For PIV cards, ten fingerprints are collected where possible, while for PIV-I only two are required to be collected.
  • FASC-N vs UUID: These are identifiers for the end entity, normally used as part of the access control decision for physical access; PIV cards carry a FASC-N while PIV-I cards use a UUID. It is possible that the government will move towards the UUID, but for now PIV stays with the FASC-N. The structure allows a PACS to quickly tell a PIV card from a PIV-I card.
  • Common Policy vs FBCA mapping: From a logical perspective, the trust anchor for PIV is the Federal PKI Common Policy Root. For PIV-I the trust anchor is the issuer's root, but the issuer's policy must be mapped to the Federal Bridge CA. This shows up in the different policy OIDs used in the certificates on the cards (a small sketch of checking those OIDs follows this list). The intent is to allow users under the Common Policy to trust holders of PIV-I cards through the linkage at the FBCA.
  • Content Signing: Both PIV and PIV-I cards contain signed objects. These objects are signed using certificates that carry specific indicators that they are to be used only for content signing. The indicators differ between PIV and PIV-I, providing yet another way to differentiate the cards.
  • Physical Layout: A PIV-I card MUST look different from a PIV card. 
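As a small illustration of how a relying party might tell the two apart programmatically, the sketch below reads the certificate policy OIDs from a card's certificate using the Python cryptography library. The OID values shown are placeholders; the authoritative values come from the Federal PKI Common Policy and the FBCA PIV-I certificate policies.

```python
from cryptography import x509

# Placeholder OID sets -- substitute the actual values from the Common Policy
# and FBCA PIV-I certificate policy documents.
PIV_POLICY_OIDS = {"2.16.840.1.101.3.2.1.3.X"}    # e.g. id-fpki-common-authentication
PIVI_POLICY_OIDS = {"2.16.840.1.101.3.2.1.3.Y"}   # e.g. the PIV-I hardware policy

def classify_card_cert(pem_data: bytes) -> str:
    """Return 'PIV', 'PIV-I', or 'unknown' based on certificate policy OIDs."""
    cert = x509.load_pem_x509_certificate(pem_data)
    try:
        policies = cert.extensions.get_extension_for_class(x509.CertificatePolicies).value
    except x509.ExtensionNotFound:
        return "unknown"
    oids = {p.policy_identifier.dotted_string for p in policies}
    if oids & PIV_POLICY_OIDS:
        return "PIV"
    if oids & PIVI_POLICY_OIDS:
        return "PIV-I"
    return "unknown"
```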
So as you can see there are differences, but the intent is to make the PIV-I card usable within an organization while also allowing it to interoperate with other organizations, both physically and logically. The publication of the specifications, together with a validation process, lets application vendors build systems, logical and physical, that use the cards, and lets relying parties determine their own policy on the use of PIV-I cards.

So who should think about PIV-I and why? I will leave that to another day.

Friday, June 10, 2011

Continued thoughts on NSTIC Conference

Today is day two of the NSTIC conference and most of today is an extension of the discussions that started to happen in the working groups yesterday afternoon. The discussions themselves are interesting and will lead to a consolidated set of ideas that will be published and then taken into account with the responses to the NOI. All of that will hopefully be the basis of a governance structure that will allow NSTIC to move ahead.

As I sit through these discussions I hear the familiar refrains: Government should not be involved - this should be a Private sector driven effort; government needs to handle liability and provide funding for pilots; the governance needs to be all inclusive with industry, government, relying parties, consumers, education ....; the steering committees need to be small - but represent all parties. Some of these certainly seem to be contradictory so it will be interesting to see where the arrow lands as it spins through the options.

I will note that when people say the market will decide how this will happen, I get a little nervous. FFIEC defined some very strong requirements for banking systems and authentication, as did HIPAA for healthcare systems; however, these requirements were later overshadowed by other events and were never fully implemented or enforced. The lack of implementation came from industry, and the lack of enforcement was a reaction by government.

We need to get better at this - we need to make rules that are implementable and enforceable, and we must hold feet to the fire to make it happen. The end goal is to make the environment more secure, but that does not happen unless someone starts, and it does not start unless there is a relying party need or a mandate from industry or the government. The reaction of FDIC's Bair is a good example of the right direction - we just need to see it implemented.

None of this will happen overnight, but, as I mentioned in previous posts, that should not stop people from moving forward; it should only help them understand that they need to be involved and to build systems that can grow and be flexible. Today's authentication solutions will likely not be tomorrow's. When I got my first RSA SecurID two decades ago I never thought that today I would be using a phone-based OTP not from RSA. In fact, after the last three months I am kind of glad it is not from RSA.

A Response to a Breach .... Is it enough?

As I was preparing to step into day two of the NSTIC conference, I came across an interesting article on the Citi breach that was announced yesterday: Citi Data Theft Points Up a Nagging Problem.

Now, I would have said the new breach at Citi, but as we know it was not new, and that of course raises a number of questions. When they discovered it, were they worried about the effect of the subsequent press on their reputation, so they kept quiet? Did they want to handle their customers first? Were they worried that they might be targeted further, possibly because they had not resolved the "how it happened" questions? There are lots of questions, and likely we will never know the answers.

What was interesting was Sheila Bair's response - that some banks need to strengthen their authentication mechanisms. Now, I am concerned that she said "some" banks, but if some large ones do move ahead then it is likely that market forces will convince the small and medium-sized banks to move the bar forward as well.

All of this is very interesting given my last couple of days: Wednesday's discussion of the US ideas on international cyberspace and the last two days working on the NSTIC governance structure. NSTIC is certainly relevant here in that it embodies the administration's idea that enhanced credentialing improves security, which in turn improves commerce. Certainly this is what the chairwoman of the FDIC sees as well, above and beyond the protection of consumers. Banks will continue to struggle until they get a handle on making online transactions and access more secure and reliable for their users. NSTIC is a medium- to long-term solution; I say that because it will take at least a couple of years to see broad implementations that will get the interest of banks. In the interim, what do banks do?

Well, one thing they can do is look to the government. Treasury Direct today implements enhanced security for its consumers through the use of an additional token. The token is inexpensive, easy to use and has even been implemented in Braille. Treasury has issued over a million of these, so yes, it is scalable. A near-term solution that can later interoperate with the NSTIC model - an idea worth thinking about.

Maybe Citi needs to look to Treasury for more than financial bailouts - seems Treasury also has innovative ideas for dealing with customers and keeping their transactions secure.


- Posted using BlogPress from my iPad

Thursday, June 9, 2011

Highlights From Day One of NSTIC Conference

Today was the first day of the NSTIC Governance Conference in Washington. The intent of today and tomorrow is to start generating discussion about the governance model for the broad NSTIC program; combined with the NOI released yesterday (see http://www.nist.gov/nstic/nstic-frn-noi.pdf), the belief is that a workable governance model should emerge.

Today started with introductions from Howard Schmidt and Jeremy Grant. Probably the biggest news was the multiple mentions of the newly publicized breach at Citi. Along with that, Jeremy threw out some interesting stats when he introduced NSTIC:
- when DOD moved from passwords to its Common Access Card for authentication, it saw a near immediate 46% drop in breaches
- last year 8.1 million US citizens were affected by ID theft, totaling $37 billion in losses.
Clear reasons why moving to stronger credentialing is important.

Howard made it clear that this is not an effort that means government credentialing of the citizenry. In fact, he was very clear to say, as does NSTIC, that the credentialing needs to come from the private sector, and that the government can help with some funding for pilots and by leading by example, such as the FICAM credentialing efforts and being an early adopter. This plays into something Jeremy also brought up: there need to be services that use the credential for interest to occur. In discussing this idea he invoked Metcalfe's Law, the idea that the value of a network (telecom in Metcalfe's case) is proportional to the square of the number of connected users. A graphic he showed, with the pinnacle being economic benefit based on trusted identities leading to enhanced security and improved privacy, reinforces the idea that the end goal here is to improve the capability to deliver services electronically. That, of course, requires services that trust the credentials and people using credentials that are available.

After these introductions the agenda switched to the discussion on governance. The latter part of the morning was focused on how different parts of the technology and application sphere built governance and saw people like Chris Louden discuss FICAM and Joni Brennan of Kantara discuss Kantara and the OIX linkage. Other speakers covered efforts within NACHA, SmartGrid and OMB.

The afternoon brought another element of the thought process, with Tom Smedinghoff kicking things off by discussing elements of the framework and how legal/policy/contracts and the technical side intersect and become somewhat co-dependent (my words - not Tom's). Other speakers included some interesting views on privacy from the ACLU, and other views on governance structure from the eCitizens Foundation and OASIS.

The afternoon then split into work groups and I will give a synopsis of that tomorrow after round 2 concludes.

All in all an interesting set of discussions and it is obvious that there is still much work to be done.


- Posted using BlogPress from my iPad

International Strategy in Cyberspace

I consider myself a fortunate person. Of course that starts with a wonderful and supportive family (it would be nice for the kids to get part-time jobs so they can contribute to that college education ..... but that aside), but beyond that I enjoy my work. It gives me some great opportunities, and I do not mean just the traveling; it gives me the opportunity to be involved with people who make a difference.

Today I was in a meeting of a group that I became involved with over a decade ago, when it was doing some great work on Critical Infrastructure Protection. That group morphed over the years, particularly after the creation of DHS, into one focused on economic security. That may not have sounded all that interesting to many people 7 or 8 years ago, but with the importance of the cyber world to our economy .... things are a bit more interesting now.

Today David Edelman from the National Security Staff was there to discuss the White House's document on International Strategy for Cyberspace. The paper had input from lots of people, including many I had worked with a decade ago, among them Dan Hurley from NTIA and Howard Schmidt, who back then was with Microsoft and of course is now at the White House. The document itself is interesting, and in it, and in the discussion, I saw a lot of the ideas that Richard Clarke (some would say one of Howard's predecessors) brought out in Cyber War.

I will likely mention this paper again in the coming week, as it is interesting, but I wanted to focus for a minute on one thing David brought up - the idea of the nationalistic Internet. In the paper the administration promotes an open and interoperable cyberspace; the nationalistic Internet is of course not that. When you think of the nationalistic Internet you can think of China and the restrictions it has placed around information and companies operating there. Another example is the recent discussion coming out of Iran about an Iranian Internet. This view of controlling the citizenry through what they have the opportunity to see is certainly not what was envisioned oh so many years ago, when people like Vint Cerf started to think about how to extend what was being done in closed communities out into a bigger world.

Now, there are definitely times when we want to know where information is coming from and who is gaining access to it, but generally speaking, having access to a broad set of information, so that we can evaluate sources and ideas around a topic before we make a decision, sounds like it would lead us to a better place. Restricting information, in most cases, is simply wrong. I say most cases because there are sets of information, things like child pornography, that should never be available.

Of course we also need to be careful, because the Internet is not solely about information; it has become a major commerce tool. Restricting access or sites at a national level changes some of that paradigm as well, and without broad access the overall success and growth of commerce will, in the long term, be limited.

The paper is a great starting point for the discussion on how to successfully use the Internet as a tool for growth, both economic and social. Tying this paper into what is being done in the US with NSTIC makes it even more interesting and I will talk about that over the next few days.


- Posted using BlogPress from my iPad