Monday, February 10, 2014

Target and Snowden ... two cases with the same root cause?

As more information comes out about the Target breach, one begins to wonder whether it, and the method Snowden used to access the information he disclosed, point to a fundamental misunderstanding of how "best practices" in information security actually get implemented.

I certainly agree that implementations of security architectures range from very bad to good when it comes to protecting access to information. I am also a believer in balancing ease of use with "appropriate" access to information, and I am not suggesting that one size fits all. What I am suggesting is that we have two significant examples here that point to what appears to be a glaring gap in what many would consider a fundamental principle of access to information: access should only be granted to the right people, at the right time, for the right reasons.

The two cases involve very different organizations, but at the core of their operations both share data, as both supplier and consumer; both interact with a broad range of outside organizations (in Target's case its suppliers, in the NSA's case its contractors); and both have legislative or contractual mandates to protect the data they hold.

What is interesting is that in both cases we had entities able to access more data than they needed. Yes, Snowden did go outside the box in some cases to gain the data, but that was not caught, which points to the possibility that things were being done correctly on the front end while the audit mechanisms and checks that should have validated the implementation were weak. In Target's case it is obvious that the external supplier they were dealing with was not restricted from systems that were irrelevant to the relationship.

These are not the first cases of significant data loss resulting from indirect access to data. Fundamentally, the RSA breach was the same thing ... in that case, data sat on a server that should not have been exposed in the manner it was. If a server does not have controls to restrict access appropriately, then do not keep exceedingly sensitive data on it.

Fundamentally, organizations need to get back to data categorization and map that to data access. Data access, however, is not just a yes-or-no decision based on the credential presented; it has to be considered in relation to that credential (its strength, assurance level, etc.), situational analysis (is something going on in the network, such as excessive data egress, that raises suspicion?), timing (is access being requested at an hour inconsistent with past behavior?) and other factors that may apply based on the environment or data sensitivity.
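
As an illustration only, here is a minimal sketch of how such a risk-based decision might combine those signals. The class, field names and thresholds are all hypothetical and would differ per environment; the point is simply that the credential is one input among several:

    # Hypothetical sketch of a risk-based access decision: credential strength,
    # network context and timing all feed the decision, not just the credential.
    # Names and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        credential_assurance: int   # e.g. 1 = password only, 3 = hardware token + PIN
        data_sensitivity: int       # from the data categorization exercise, 1..4
        egress_alert_active: bool   # is the network flagging excessive data egress?
        hour_of_day: int            # 0..23, local time of the request
        usual_hours: range          # hours this principal normally works

    def allow_access(req: AccessRequest) -> bool:
        # Start from the categorization: sensitive data demands stronger credentials.
        if req.credential_assurance < req.data_sensitivity:
            return False
        # Situational analysis: suspicious network activity tightens the rules.
        if req.egress_alert_active and req.data_sensitivity >= 2:
            return False
        # Timing: off-hours requests for sensitive data are refused pending review.
        if req.hour_of_day not in req.usual_hours and req.data_sensitivity >= 3:
            return False
        return True

    # Example: a password-only credential asking for highly sensitive data at 3 a.m.
    print(allow_access(AccessRequest(1, 4, False, 3, range(8, 18))))  # False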

This process of data categorization will surface system dependencies that need to be well documented in security and operational plans. Once those plans are implemented, there also needs to be a process of validation and audit/monitoring to ensure consistent operations.
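
That audit step can be partly automated. A hypothetical sketch, with made-up names and data, of comparing what actually happened (access logs) against what the documented dependencies say should happen:

    # Hypothetical sketch of the audit side: flag accesses that fall outside
    # the system dependencies documented in the security plan.
    documented_dependencies = {
        "external-supplier": {"billing-portal"},       # what the plan allows
        "payroll-app": {"hr-db", "ledger"},
    }

    access_log = [
        ("external-supplier", "billing-portal"),
        ("external-supplier", "payment-systems"),      # outside the documented scope
        ("payroll-app", "hr-db"),
    ]

    def find_violations(log, allowed):
        """Return accesses not covered by the documented dependencies."""
        return [(who, what) for who, what in log
                if what not in allowed.get(who, set())]

    for who, what in find_violations(access_log, documented_dependencies):
        print(f"AUDIT: {who} reached {what}, which is not in the security plan")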

Some would say, "This is not rocket science ... people know this." More people may be thinking about it now, but if they have known it, that knowledge certainly does not appear to be widely implemented in a complete fashion.