Friday, October 2, 2009

Where Compliance Went Wrong

Earlier this week I had the opportunity to give a presentation on the new Massachusetts privacy law, 201 CMR 17. The goal of the presentation was to give attendees a detailed understanding of what the law requires, the penalties for non-compliance, and a roadmap for cost-effective compliance. The presentation itself, however, is not the focus of this discussion. Rather, a question asked by an attendee is. During the presentation, one participant asked, "If I'm 100% compliant with 201 CMR 17, will I still be fined if there's a breach?" My short answer at the time was "Yes" (although it is still a little unclear how that fine will be determined). The question, however, got me thinking.

With regulations like Sarbanes-Oxley, GLBA and the HIPAA Security Standards, the focus is on input. What do I mean by that? Many of these laws require you to implement controls to make sure something bad doesn't happen. In some cases, organizations are left to determine the specifics of the controls, while in others, the control requirements are relatively detailed. The key is that these regulations tell organizations what they must do to stop bad things from occurring. I'm referring to this as an "input" focus because the regulations concentrate on what needs to go into a security program.

The converse to this is an "output" focused law. California Senate Bill 1386, the first well-known state privacy law, is an example. The law is very light on requiring organizations to implement specific controls. It is short and to the point: if you suffer a security breach that results in the disclosure of personal information, bad things will happen to you. The only "control" really mentioned is encryption, and even that is not required. This approach allows organizations to perform their own risk assessment and implement the controls they feel are necessary to reduce risk to an acceptable level.

While there is probably no perfect solution, the question asked during my recent presentation highlighted the major flaw in "input" focused regulations. With this type of regulation, compliance does not equal security. Recent security breaches at organizations previously identified as "compliant" highlight this problem. Unfortunately, the response to these events was to blame the auditor. I watched numerous discussions where people made the case that auditors should be held responsible should a "compliant" organization suffer a breach. I'm sorry, but that is just plain stupid. It removes decision-making responsibility from the organization and puts it in the hands of a third party who will do what is in their own best interests. That means, to a large degree, massive risk avoidance rather than reasonable risk management, and risk avoidance drives costs far out of proportion to the risk. Ask yourself this: if you were told you had to audit another company's security, and that you would be held responsible if they suffered a breach, what would you do?

Another problem with "input" focused regulation is that it forces organizations to focus on the specific regulatory requirements rather than on good overall security. In response, many organizations create checklists for compliance. They do the minimum to check off each item on the checklist and nothing more. Suffice it to say that this is not the best approach to security either. In effect, it gives the attacker a list of exactly what you are doing, and what you are not doing, to secure sensitive data. It's no wonder "compliant" organizations often suffer security breaches.

Back to the question I was asked: if I'm 100% compliant with 201 CMR 17, will I still be fined if there's a breach? To me, that sounds like a significant amount of input focus. I'll restate the question: if I complete everything on the 201 CMR 17 checklist, do I really need to worry about actually protecting personal information? The same question can be asked about other regulations.

Now, let's look at output focused laws like many of the state privacy rules. They simply say that organizations are required to protect personal information and, if they don't, bad things happen. Generally speaking, there are no requirements for periodic audits and there are no checklists for compliance. If you protect personal information, you win. If you fail to protect personal information, you suffer the consequences.

It seems obvious to me that the current method of checkbox security doesn't work well. All it has done is increase IT spending and the cost of external audits without any real gains in security. Perhaps more focus on achieving security goals and objectives, and less focus on a bunch of predefined controls, might be a good idea.