Tuesday, December 8, 2009

Security Fundamentals

It's been a while. I haven't posted anything in over a month which is actually a good thing. Things have been very busy and seem to be getting more so. I've just come off doing a series of security assessments for a variety of organizations and have come to a realization - the information security industry is broken.

Now before you get all upset and start bombarding me with hate mail, let me explain. Security professionals often talk about being proactive. It is better to put security in place before something bad happens than after. I agree entirely. While we say this however, we spend a huge portion of our time encouraging reactive thinking. Even when we are being proactive, we are being reactive. That might not seem to make sense but think about this.

In my home office I have a collection of computer security books. Some are focused on various certifications so I'll ignore those for the purpose of this discussion. Of the remaining, I count 25 books. (No, that's not all my books but most are in my office at work.) Of these 25 books, 23 are focused, in one way or another, on securing things by understanding how they can be compromised, with a few focused exclusively on discussing how to compromise. Only two of the 25 (that's 8%) look at security from an exploit- or attack-independent perspective. One of these focuses on establishing metrics for security and the other on designing security around detection rather than protection. Let's take this further. Almost all of the newsgroups and email lists I am a part of focus on the newest vulnerabilities, attacks or victims. Most of the podcasts I listen to talk about penetration testing, computer forensics or social engineering. As I see it, we spend the vast majority of our time learning how bad stuff could happen and then reacting to that knowledge. Hopefully, we are proactively reacting but we are reacting nonetheless. This creates a situation where "good" security can only be achieved by security experts who fully understand the threat landscape. Unfortunately, not all organizations have access to such people.

The other side of the security industry is the vendors of security technology. They often represent the ultimate in proactive action. They want to sell their products and rightly so. However, in doing so, they are often forced into a situation where they have the solution to a problem that may not exist (at least for any given customer), so they often try to show the customer why they have a problem and then how "technology A" solves it. This creates a situation where security product implementation may not really match up with actual risk. This means security spending is not in line with risk reduction and potentially leaves areas of significant risk unmitigated.

If the security industry has three sides, the third would be regulation. In my opinion, most security regulations have combined the worst aspects of reactive security with a misalignment of controls vs. risk. Some regulatory body writes a document that states, to varying degrees of detail, the controls that organizations need to put in place. Affected organizations then react to the regulation by implementing the mandated controls and completing their compliance checklist. They effectively replace security with compliance, assuming they are one and the same. Unfortunately, they are not. The result: excessive spending that may not be in line with actual risk and that doesn't actually accomplish the security goals of the regulation.

So what are we missing? In my opinion, what we are missing is a set of basic, fundamental security measures that are easily understood, that can be implemented in virtually every environment and that don't require reading hundreds or thousands of pages of highly technical documentation to understand. Furthermore, these measures cannot be tied to specific technologies. Basically, I'm thinking of some basic uses of common technology and some operational processes that "everyone" can use. Some things that come to mind are:

- Segmenting the network based on business requirements
- Applying access controls to network segments
- Ingress AND egress filtering on firewalls
- Logging ALLOWED inbound & BLOCKED outbound firewall traffic
- Basic data classification measures
- Security incorporated into change control procedures
- Implementation of basic hardening standards for core technologies

The list can get longer but hopefully you get the idea. By putting together some basic guidance, the average IT person who also must deal with security has a good place to start. They can create a technical and operational environment that supports security by design rather than having to try to layer security on top of an inherently insecure environment using the vendor or regulation-recommended technology of the day.
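To make the firewall-logging item concrete, here's a minimal sketch in Python. The iptables-style log lines, the field names and the assumption that eth0 is the external interface are all mine for illustration, not any specific product's format; the point is simply that the two traffic classes worth a human's attention are allowed inbound and blocked outbound.

```python
# Hypothetical firewall log lines; the iptables-style fields and the
# choice of eth0 as the external interface are assumptions.
LOG_LINES = [
    "ACCEPT IN=eth0 OUT= SRC=203.0.113.7 DST=192.0.2.10 DPT=443",      # allowed inbound
    "DROP IN=eth1 OUT=eth0 SRC=192.0.2.55 DST=198.51.100.9 DPT=6667",  # blocked outbound
    "ACCEPT IN=eth1 OUT=eth0 SRC=192.0.2.20 DST=198.51.100.1 DPT=80",  # routine allowed outbound
]

def worth_reviewing(line):
    """Keep allowed inbound and blocked outbound traffic; skip the rest."""
    allowed_inbound = line.startswith("ACCEPT") and "IN=eth0" in line
    blocked_outbound = line.startswith("DROP") and "OUT=eth0" in line
    return allowed_inbound or blocked_outbound

events = [line for line in LOG_LINES if worth_reviewing(line)]
for event in events:
    print(event)
```

Nothing fancy, and that's the point: this is the kind of basic, technology-independent measure an average IT shop can run as a nightly job without reading a thousand pages of documentation first.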

Thoughts?

Friday, October 2, 2009

Where Compliance Went Wrong

Earlier this week I had the opportunity to give a presentation on the new Massachusetts Privacy law or 201 CMR 17. The goal of the presentation was to give attendees a detailed understanding of what the law required, penalties for non-compliance and a roadmap for cost-effective compliance. The presentation itself however, is not the focus of this discussion. Rather, a question asked by an attendee is. During the course of the presentation, one participant asked "if I'm 100% compliant with 201 CMR 17, will I still be fined if there's a breach?" My short answer at the time was "Yes" (although it is still a little unclear how that fine will be determined). The question, however, got me thinking.

With regulations like Sarbanes-Oxley, GLBA and the HIPAA Security Standards, the focus is on input. What do I mean by that? Well, many of the various laws define that you must implement controls to make sure something bad doesn't happen. In some cases, organizations are left to determine the specifics of the controls while in others, the control requirements are relatively detailed. The key is that these regulations tell organizations what they must do to stop bad things from occurring. I'm referring to this as "input" focus because the regulations focus on what needs to go into a security program.

The converse to this is an "output" focused law. California Senate Bill 1386, the first well known state privacy law, is an example of this. The law is very light on requiring organizations to do anything as far as implementing controls. It is very short and to the point. If you suffer a security breach that results in the disclosure of personal information, bad things will happen to you. The only "control" really mentioned is encryption and even that is not required. This approach allows organizations to perform their own risk assessment and implement the controls they feel are necessary to reduce risk to an acceptable level.

While there is probably no perfect solution, the question asked during my recent presentation highlighted the major flaw in "input" focused regulations. With this type of regulation, compliance does not equal security. Recent security breaches where the organization was previously identified as "compliant" highlight this problem. Unfortunately, the response to these events was to blame the auditor. I watched numerous discussions where people made the case that auditors should be held responsible should a "compliant" organization suffer a breach. I'm sorry but that is just plain stupid. That removes the decision making responsibility from the organization and puts it in the hands of a third party who will do what is in their best interests. That means, to a large degree, massive risk avoidance rather than reasonable risk management. Risk avoidance then results in significant increases in cost that are way out of line with the actual risk. Ask yourself this: if you were told that you had to audit another company for security, and that if they suffered a breach you would be held responsible, what would you do?

Another problem with "input" focused regulation is that it forces organizations to focus on the specific regulation requirements rather than on good overall security. In response, many organizations create checklists for compliance. They will do the minimum to check off each item in the checklist and nothing more. Suffice it to say that this is not the best approach to security either. In effect, it gives the attacker a list of exactly what you are doing and what you are not doing to secure sensitive data. It's no wonder "compliant" organizations often suffer security breaches.

Back to the question I was asked: if I'm 100% compliant with 201 CMR 17, will I still be fined if there's a breach? To me, that sounds like a significant amount of input focus. I'll restate the question. If I complete everything on the 201 CMR 17 checklist, do I really need to worry about actually protecting personal information? The same question can be asked about other regulations.

Now, let's look at output focused laws like many of the state privacy rules. They simply say that organizations are required to protect personal information and if they don't, bad things happen. Generally speaking, there are no requirements for periodic audit and there are no checklists for compliance. If you protect personal information, you win. If you fail to protect personal information, you suffer the consequences.

It seems obvious to me that the current method of checkbox security doesn't work well. All it has done is increase IT spend and increase costs of external audit without any real gains in security. Perhaps more focus on achieving security goals and objectives and less focus on a bunch of predefined controls might be a good idea.

Wednesday, September 2, 2009

Snow Leopard Install Nightmare

Well, it happened. On Friday of last week I ran out to the Apple store and purchased Snow Leopard. More specifically, I purchased the Snow Leopard Box Set with iLife '09 and iWork '09. That was one lucky purchase but more on that later.

After a bite to eat I introduced my MacBook Pro to the Snow Leopard disk. They seemed to get along well for the first minute or two. Then the Snow Leopard installation routine asked me where it should install Snow Leopard. There is only one problem. NONE of my disks including the default "Macintosh HD" were identified as "bootable" and thus Snow Leopard wouldn't install. No options, no nothin'. I then went to my favorite troubleshooting tool - Google. I discovered that others were having the same problem but there was no definitive solution. Some people said it had to do with PGP Desktop so I removed PGP Desktop. That didn't work. Some said there was a backup file in the root directory of the hard drive that would cause the problem but that file didn't exist. I tried booting from the install disk. Fail! So what's next? Call Apple.

I got tech support on the line and told them what was going on and what I did. They politely asked me if they could put me on hold and then did so. They came back and had me boot from the install disk and attempt to repair the disk volume. No problems were detected (I had already checked but humored them nonetheless). They put me on hold again and came back with the secret, fine-print, little known fact. The upgrade from Leopard to Snow Leopard works for computers that had Leopard originally installed but not for computers that originally had Tiger. I was told that I needed to buy the full version and not the upgrade. I asked how much that would cost. I then mentioned that I was a little upset as I had already dropped $170 on the "box set". At this time I was somewhat relieved as we had found the solution. At the same time I was a little ticked off because I was going to have to drop even more money on this upgrade. The tech support guy heard me mention the box set and put me on hold again. It turns out that the box set is the full version and thus I had the right product and it still wouldn't install. They brought in another tech support guy to help. This one was a product specialist.

We did another troubleshooting dance, going round and round. We tried to install Leopard on top of Leopard with no success. OK, so it's not a Snow Leopard problem but something wrong with my laptop. We checked the partition information and found all was as it should be. We checked a few other settings associated with the disk and still found no problems. What was the final option? We had to re-partition the hard drive. Yep, that's right. We blew out the whole thing and installed from scratch. I guess it's a good thing that I purchased the "box set".

After a clean install I was able to use my Time Machine backups (completed earlier in the day) to restore my profile, applications, etc. After using the Mac for a couple of days now, everything seems to be working well. I'm still figuring out all the ins and outs of the new Exchange integration but otherwise, everything is working.

Total time to install Snow Leopard including restoring from Time Machine and installing iWork and iLife upgrades - about 6 hours.

Thursday, July 9, 2009

Massachusetts Privacy Law - Why EVERYONE should care

The Massachusetts Privacy Law (AKA 201 CMR 17.00) is on the horizon, with the deadline for compliance just six months away. While many states have instituted privacy laws, this one is a game changer and affects companies beyond those geographically located in Massachusetts. Why is that? I'm glad you asked. Here are some things you need to know:

Q: Who does the law apply to?

The law applies to any person or business who owns, licenses, stores or maintains personal information about a resident of the Commonwealth of Massachusetts. Keep in mind, this is not limited to Massachusetts-based companies. Technically, a company based in California that has personal information about a Massachusetts resident must comply.

Q: What is the purpose of the law?

The law establishes minimum standards for safeguarding personal information in both paper and electronic form.

Q: When does the law go into effect?
Organizations must be in full compliance with the law on or before January 1, 2010.

Q: What is “personal information”?
The law defines personal information as a first and last name or a first initial and last name in combination with any of the following:
- Social security number
- Driver’s license number
- State-issued identification card number
- Financial account number
- Credit or debit card number (with or without access code or PIN)

Q: What does this law require?
The law places a number of requirements on every person or organization (“covered entity”) that owns, licenses, stores or maintains personal information about a resident of the Commonwealth of Massachusetts. To comply with this law, covered entities must:

- Develop, implement, maintain and monitor a comprehensive, written information security program. Such a program must contain administrative, technical and physical safeguards to ensure the confidentiality of personal information.

- Designate one or more employees to maintain the comprehensive information security program.

- Identify and assess foreseeable internal and external risks

- Evaluate and seek to improve the effectiveness of existing safeguards on an ongoing basis, including: (1) performing ongoing employee (including temporary and contract employee) training, (2) verifying employee compliance and (3) implementing a means for detecting and preventing security system failures

- Develop security policies

- Impose disciplinary measures for violations of security program rules

- Prevent terminated employees from accessing records containing personal information.

- Take all reasonable steps to verify that any third-party service provider with access to personal information will protect it

- Limit the amount of personal information collected, the time such information is retained and the access to that information to the greatest extent possible.

- Identify paper, electronic and other records, computing systems, storage media (incl. laptops and portable devices) used to store personal information.

- Implement restrictions on physical access to personal information records, including a written procedure that defines the manner in which physical access is restricted.

- Perform regular monitoring to ensure that the security program is operating in the manner designed.

- Review the scope of security measures at least annually or whenever there is a significant change in business practices.

- Develop an incident response plan.

- Implement reasonably strong user authentication

- Implement access controls to restrict access to personal information

- Encrypt all transmitted records or files containing personal information that will travel across public or wireless networks

- Perform monitoring of systems for unauthorized use of or access to personal information

- Encrypt all personal information stored on laptops or other portable devices

- Provide firewall protection, up-to-date patching and up-to-date anti-malware signatures on all systems containing personal information that are connected to the Internet

- Conduct regular education and training of employees

Q: Wow, that's a long list. Can you summarize all of that?
Sure. Basically organizations need to:
- Perform an assessment to identify internal and external risks.
- Develop a formal, documented information security program based on the results of the risk assessment.
- Document the program via a suite of information security policies.
- Utilize strong authentication methods and strict access controls
- Ensure effective patch and configuration management
- Implement physical access controls
- Incorporate risk assessment into daily operations
- Perform regular internal audits to verify compliance
- Use secure, encrypted communications protocols
- Perform security monitoring and maintain an incident response program

Q: What can be done to comply with this law? What are the next steps?
First, and most importantly, it is critical to perform an initial compliance/risk assessment. During such a project you would ideally accomplish two simultaneous goals: assess your current security posture to identify any compliance gaps and perform the initial risk assessment dictated by 201 CMR 17.00.

Based on the outcome of the assessment, there are a number of initiatives that will commonly be required:
- Development of information security policies
- Development of an internal audit program
- Performance of periodic security reviews
- Penetration testing & web application security testing
- Development of an incident response plan
- Configuration of technologies to provide encrypted communications protocols
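On that last item, encrypted communications protocols in practice usually means TLS. As a present-day sketch using Python's standard ssl module (the function name is mine, and pinning the minimum protocol version is an illustrative policy choice, not a requirement of the law), here is what a properly configured client side looks like:

```python
import ssl

def secure_client_context():
    """TLS client context with certificate and hostname verification enabled."""
    context = ssl.create_default_context()
    # create_default_context() already requires valid certificates and
    # checks hostnames; stating it explicitly documents the intent.
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return context

# Wrap any outbound socket with this context before personal information
# crosses a public or wireless network.
```

The design point: encryption that doesn't verify who it is talking to protects against eavesdropping but not interception, so certificate validation stays on.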

One final thought. Keep in mind that the law specifically states that compliance will be evaluated taking into account the size of the business, the resources available, the amount of stored data and the need for confidentiality of both customer and employee information. Because of this, our recommendations to any customer will only be based on the outcome of a risk assessment.

All of that said, if you are a company that "does business" in the Commonwealth of Massachusetts, you should take a hard look at your security posture and your level of compliance with the requirements of this law. Failure to do so could mean failure to comply with the law and that could open you up to legal liability risks, public relations problems and a host of other nastiness that nobody wants.

Tuesday, July 7, 2009

10 Most Dangerous Infosec Mistakes

I have had the opportunity over the past couple of months to perform security assessments for a bunch of different organizations including hospitals, universities, manufacturing companies and real estate companies. While the results of these assessments are as unique as the companies for which they were performed, I have noticed some common trends. I thought it would be interesting to try to condense them down into a "top 10" list.

1. Ignoring web application security
2. Poor patch management (especially internal systems and workstations)
3. Lack of a risk basis for security decisions (or making decisions based on fear, uncertainty and doubt)
4. Relying solely on the perimeter for protection
5. Ignoring the operational aspects of security (e.g. IDS tuning, maintenance, incident response, etc.)
6. Poor password management (Is 8 characters with an upper, lower, numeric and special char. changed every 90 days really strong?)
7. Ignoring detection - focusing solely on attempts at protection
8. Failing to account for users (who will always find a way to break security)
9. Failing to implement a DMZ, allowing external access directly to the internal network
10. Focusing exclusively on completing regulatory "checkboxes" - compliance does not equal security
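Point 6 is easy to put numbers on. A back-of-the-envelope sketch (Python; the character-set sizes are illustrative assumptions) comparing the classic 8-character complexity policy against a longer, simpler passphrase:

```python
import math

def entropy_bits(charset_size, length):
    """Search-space entropy of a password chosen uniformly at random."""
    return length * math.log2(charset_size)

# The classic policy: 8 characters drawn from roughly 94 printable
# ASCII characters (upper, lower, digits, specials).
complex_8 = entropy_bits(94, 8)        # about 52 bits

# A 20-character all-lowercase passphrase with no complexity rules.
passphrase_20 = entropy_bits(26, 20)   # about 94 bits
```

Length beats complexity here, and that's before accounting for the predictable patterns people actually use to satisfy complexity rules (capital first letter, trailing digit, "!" at the end), which shrink the real search space well below the theoretical number.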

As you read this list, ask yourself, is this you? Have you adequately tested the security of your web applications? Have you conducted a web application penetration test that is complete and comprehensive? If not, how do you know your web applications are secure?

What about patch management? Cross site scripting, email-based links and malicious Javascript make your end users direct targets. If an end user workstation gets compromised the attacker can continue their attacks from within your network perimeter. What will they be able to do? Are you expecting your firewall and other perimeter devices to provide protection in this scenario? Is your network resistant to attack from within? Have you implemented proper network segmentation and implemented strong access controls between internal segments?

If an attacker were to get in, are you ready? Do you have sufficient detective capabilities to identify the attack in its early stages or will you wait until a partner, customer or other third party notifies you of the breach? If you notice the attack, do you have a formal, approved incident response plan in place? What are your incident response goals? Do you want to conduct a forensics investigation or simply get the system back up and running? What about notifying law enforcement?

What about regulatory compliance and risk? Does your security plan focus exclusively on meeting regulatory compliance requirements or are you making security decisions based on assessed risk? One way will cost a lot and achieve little with respect to actual risk reduction. The other reduces costs, achieves compliance in the face of a dynamic regulatory landscape and reduces business risk to an acceptable level. Which are you doing?

I have put together a podcast that covers each of the items on this top 10 list so if you are interested, give it a listen. If you want to discuss this in more detail, please reach out to me. Also, don't forget to follow me on twitter - http://www.twitter.com/nwnsecurity - and on facebook (kevinfiscus).

Take care!

Kevin



Tuesday, June 30, 2009

Cisco Security Advisories

Wow! Late June saw a lot of action on the Cisco front when it comes to security advisories. Over the course of 3 days, June 24th, 25th and 26th, there were 10 advisories. Of these, 2 were new while the other 8 were updates to advisories published on March 25th of this year. The two new advisories were focused on vulnerabilities in Cisco's physical security technologies.

Cisco Physical Access Gateway
The first is a denial of service vulnerability in the Cisco Physical Access Gateway product. An attacker sending specially crafted packets can create a memory leak. If this happens, connected door hardware (card readers, locks, etc.) may not function causing the door to remain locked or to remain open. Products affected include software versions prior to 1.1. There are no workarounds however free software updates are available. Additional detail about this vulnerability can be found at - http://www.cisco.com/warp/public/707/cisco-sa-20090624-gateway.shtml

As a penetration tester, physical security is paramount and the closer I can get to your critical assets, the better. If I can open your locked doors using some crafted packets, I can access your facility. At a minimum, I can plug in to your network and that's bad. The flip side of this is also bad. If I can cause your doors to lock and stay locked, I can stop security guards from making their rounds and stop people from getting to work. If these locks are used for egress as well as ingress, there may be life safety issues as well.

Cisco Video Surveillance Stream Manager
The next advisory relates to the Cisco Video Surveillance Stream Manager firmware for the Cisco Video Surveillance Services Platforms and Cisco Video Surveillance Integrated Services Platforms. A crafted packet can cause a reboot of the system. There is also a vulnerability in the Cisco Video Surveillance 2500 Series IP Camera that could allow an authenticated user to view any file on a vulnerable camera. Cisco has released free software to remediate these vulnerabilities. Detailed information can be found at - http://www.cisco.com/warp/public/707/cisco-sa-20090624-video.shtml

If you have deployed these technologies then you must have determined that video surveillance is an important component of your security program. The DoS vulnerability in Cisco Video Surveillance Stream Manager could result in an extended DoS condition which would effectively blind you. The vulnerability in the cameras could allow a non-privileged user to gain privileged access.

Updates
As I stated previously, the remaining issues are updates to previous advisories. They all relate to Cisco IOS and include a vulnerability with Mobile IP and Mobile IPv6, a cTCP DoS vulnerability, a Session Initiation Protocol DoS vulnerability, a crafted UDP packet vulnerability that affects several IOS features, vulnerabilities with WebVPN and SSLVPN, a privilege escalation vulnerability in Cisco IOS secure copy and a crafted TCP packet vulnerability that affects multiple IOS features. Links to additional information have been included below.

http://www.cisco.com/warp/public/707/cisco-sa-20090325-tcp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-webvpn.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-udp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-sip.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-scp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-ctcp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-ip.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-mobileip.shtml

Summary
Now that I've thrown a bunch of information at you, I want to put all of this in terms of risk and risk assessment. When I started writing this post, I had a couple of statements about patching and installing updates along the lines of "Please patch". I pulled them out because I realized that I was going against the normal advice that I give. Should you patch or upgrade your software to fix these problems? The short answer is probably yes but that is not the whole story. Simply deploying each and every patch because the vendor says it is a problem is not risk-based security.

I recommend that you take a look at these vulnerabilities. If you are using this technology and if you are affected by the vulnerability, you need to ask yourselves "what would the impact be to the business if....". For example, the vulnerability with the Cisco IP Cameras would allow an authenticated but non-privileged user to gain privileged access. Do you have non-privileged users? If not, does it make sense to install the patch? If you do have non-privileged users, what would happen if they gained privileged access? What do you believe is the likelihood that that particular vulnerability could be expanded to allow unauthenticated users access?

I understand that this seems more complicated than simply deploying the patch and in some cases, it may be, but consider an environment with 500 cameras. How many man-hours will be required to push out the patch to that many cameras? If it takes 1 hour to deploy each patch, that's 500 man-hours. Assuming a simple per man-hour cost of $100, that is a cost to deploy the patch of $50,000. Would the potential impact of the vulnerability cost the organization more or less than $50,000? What about other technologies? Deploying a patch on a purpose-built device like a camera may have little downside but what about deploying a similar patch to a critical application or a core operating system? Way back when SQL Slammer came out I had a customer who relied on SQL databases. They were unaffected by the worm but the patch caused days of downtime because it broke other applications.
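That arithmetic generalizes into a tiny cost-vs-risk comparison. A sketch in Python; every dollar figure and rate below is a made-up illustration, not data:

```python
def patch_cost(devices, hours_per_device, hourly_rate):
    """Rough labor cost of pushing a patch across a device fleet."""
    return devices * hours_per_device * hourly_rate

def annualized_loss(single_loss, annual_rate):
    """Classic ALE: expected loss per incident times incidents per year."""
    return single_loss * annual_rate

cost = patch_cost(devices=500, hours_per_device=1, hourly_rate=100)  # 50000
# Hypothetical: a camera compromise costing $200,000, expected once
# every five years.
ale = annualized_loss(single_loss=200_000, annual_rate=1 / 5)        # 40000.0
patch_now = ale > cost  # False at these illustrative numbers
```

The formula is simple; the hard part, as always, is estimating the loss and the likelihood honestly rather than reaching for the answer you wanted.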

I know I am over simplifying things but the concept is sound. When it comes to security, it is important to understand the potential negative impact to the business, the likelihood of a problem and the costs/risks associated with remediation.


Monday, June 15, 2009

Netgear Vulnerability - Disclosure Notification

Today the members of the Bugtraq mailing list received a notification about a vulnerability in a Netgear DG632 router. The text of this notification reads as follows:

Product Name: Netgear DG632 Router
Vendor: http://www.netgear.com
Date: 15 June, 2009
Author: tom@tomneaves.co.uk
Original URL: http://www.tomneaves.co.uk/Netgear_DG632_Remote_DoS.txt
Discovered: 18 November, 2006
Disclosed: 15 June, 2009

I. DESCRIPTION
The Netgear DG632 router has a web interface which runs on port 80. This allows an admin to login and administer the device's settings. However, a Denial of Service (DoS) vulnerability exists that causes the web interface to crash and stop responding to further requests.

II. DETAILS
Within the "/cgi-bin/" directory of the administrative web interface exists a file called "firmwarecfg". This file is used for firmware upgrades. A HTTP POST request for this file causes the web server to hang. The web server will stop responding to requests and the administrative interface will become inaccessible until the router is physically restarted.

While the router will still continue to function at the network level, i.e. it will still respond to ICMP echo requests and issue leases via DHCP, an administrator will no longer be able to interact with the administrative web interface.

This attack can be carried out internally within the network, or over the Internet if the administrator has enabled the "Remote Management" feature on the router.

Affected Versions: Firmware V3.4.0_ap (others unknown)

III. VENDOR RESPONSE
12 June, 2009 - Contacted vendor.
15 June, 2009 - Vendor responded. Stated the DG632 is an end of life product and is no longer supported in a production and development sense, as such, there will be no further firmware releases to resolve this issue.

IV. CREDIT
Discovered by Tom Neaves


This posting elicits so many comments that I don't really know where to start so I've tried to break things down a bit to make the discussion easier to follow.

Vulnerability Severity
First I'd like to address the perceived severity of this vulnerability. It is a denial of service attack against a platform that is most commonly used in home or SOHO environments. This combination means that most mid-sized or larger organizations will likely pay little attention to it. This is a mistake for a few reasons. First, and speaking to small, medium and large organizations, you may not use SOHO devices but your users do and they, in turn, connect to your network. This may be via VPN (through the SOHO router), directly using various protocols, or simply by bringing their laptop from their vulnerable home environment to your "protected" work environment. In short, problems with these devices affect you directly.

Now you may say that this is "just" a DoS problem and isn't going to have an effect. At a minimum, it can affect the productivity of your users. Let's consider an extreme situation - you have a user who is working on a proposal that needs to be turned in on a specific date and time. If their router gets nailed, it may delay the proposal, causing you to lose the business. From a more day-to-day perspective, many organizations allow teleworking. If teleworkers cannot connect to the business network, their productivity will be impacted, costing your organization money. If your corporate network, or even a single switch on that network, went down, you'd ensure the problem was addressed ASAP. From the perspective of a teleworker, a DoS attack against their SOHO router is exactly the same as the loss of a corporate switch.

Finally, DoS attacks have a way of turning into other types of attacks - possibly allowing the attacker to gain control over the SOHO router. If that happens, it results in a whole new set of problems for businesses whose users rely on these systems.

Vulnerability Disclosure
OK, now that we've covered why we should care about this kind of problem, I want to talk a bit about vulnerability disclosure. In this case, the problem was identified in November of 2006 and was posted to Bugtraq on June 15th of 2009 - 2 years and 7 months later. This means that people had been using vulnerable equipment all that time without knowing it or having any ability to do something about it. That, to me, is a problem. Even now, because the product is "end of life," there is no fix available other than purchasing a new box - again, a scary situation.

I don't want to come across as an alarmist. The issues affecting this particular device are unlikely to represent a significant security risk to any organization, but this problem of vulnerability disclosure is multiplied throughout the industry across virtually every single technology in existence, making it a situation that is relevant to everyone. Now comes the point where I outline a bunch of questions with few answers:

Security researchers spend time "hacking" technology. By hacking, I'm referring to the term in its original context, not the media-driven "attacker" definition. These researchers hack technology for a variety of reasons. It may be part of an authorized penetration test, or they may work for a company with a vested interest in vulnerability identification (e.g. a security product vendor). Often, they just do it for fun. In any case, they discover a vulnerability. This leads them to the first question. Do they notify the vendor, disclose the problem to the public, sell it to the highest bidder...?

If they notify the vendor, they may get a positive response. They may also get brushed off and ignored. They may be criticized or, in rare situations, they may be the target of legal action for "illegal reverse engineering" or some similar "crime." In any of these cases, vendor notifications resulting in positive, short-term action by the vendor are not the majority, leaving the users of the technology vulnerable and ignorant.

If they decide to publish the problem on the Internet, making it available to the public, they will likely be contacted by the vendor and/or the vendor's lawyers. The public may be able, if they are made aware of the problem, to implement some controls to reduce their risk level, but the security researcher is also putting their discovery in the hands of the bad guys. In effect, they take off the white hat and put on a grey one.

There are also organizations out there who will buy vulnerabilities. This puts some money in the pockets of the researchers but comes really close to "black hat" work.

This is a bit of a sticky situation and gets more so when you look at the concept of pay. Many vulnerability researchers are either independent consultants or do this type of work as a hobby in addition to their normal 9-to-5. They spend countless hours performing quality analysis for some vendor for free. When they do identify something and turn it in to the vendor, the vendor in turn has the opportunity to make their product better. Shouldn't the researchers receive some compensation for the QA work that should have been done by the vendor in the first place? Like I said, this is a sticky subject.

Attack Vectors
The last thing I want to talk about is the way this vulnerability can be attacked. According to the article: "This attack can be carried out internally within the network, or over the Internet if the administrator has enabled the "Remote Management" feature on the router." This seems to imply that if "Remote Management" has not been enabled, the attack is not exploitable from the Internet. While I haven't tested it, I suspect that cross-site scripting (XSS) and cross-site request forgery (CSRF) attacks could also be used to reach the device even if direct external access is not possible.
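To illustrate the CSRF idea - and only the idea, not the actual DG632 exploit; the router address and CGI path below are hypothetical placeholders - here is a minimal sketch of how a page hosted on the Internet can reach a device that is only reachable from inside the LAN:

```python
# Sketch of the CSRF concept only. The router IP and path are
# hypothetical placeholders, not the real DG632 interface.
def build_csrf_page(router_ip="192.168.0.1", path="/cgi-bin/admin"):
    """Return HTML that, when merely viewed by a browser *inside* the
    LAN, makes that browser send a request to the router's internal
    address - no "Remote Management" or external access required."""
    target = "http://%s%s" % (router_ip, path)
    return (
        "<html><body>\n"
        "<!-- The victim's browser sits inside the LAN, so this request\n"
        "     reaches the router even though the Internet cannot. -->\n"
        '<form id="f" action="%s" method="POST"></form>\n'
        "<script>document.getElementById('f').submit();</script>\n"
        "</body></html>" % target
    )

page = build_csrf_page()
```

The point is simply that the victim's browser, not the attacker, originates the request, which is why disabling remote management alone does not necessarily close the door.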





Tuesday, June 9, 2009

New Podcast

I have just taken my first step into podcasting. In this case, I recorded a discussion about the 10 most dangerous security mistakes I see organizations making when I conduct security assessments. Unfortunately, these mistakes occur far too often and make real security exceptionally difficult. I'll apologize in advance for the quality of the audio editing. I'll get better. I promise.

Wednesday, June 3, 2009

Cyber Security Czar

The White House continues to talk about creating a cyber security coordinator or cyber security czar. Comments from the administration talk about the threat of cyber attacks on our critical infrastructure and of cyber terrorism. My first response - FINALLY! Finally someone is taking the threat seriously. That said, I do not understand what a cyber security czar would actually do.

Many of the articles I have read on the subject talk about the administration's "plan to keep government and commercial information on the Internet safe from cyber criminals or terrorists." They talk about forming partnerships with state and local governments as well as with the private sector and about focusing on training and education. These are all good things but they won't really help make things more secure.

The only way for the federal government to make things more secure is to pass laws or set policy that everyone must comply with. Education won't do it because there are too many organizations that, as a result of ignorance, incompetence or arrogance, won't do the right things. That's why the government has put in place regulations like HIPAA, 21 CFR Part 11, GLBA and Sarbanes-Oxley. That's why the credit card companies put in place PCI DSS. The problem is that these measures just don't work. The minute you think you have created a checklist of minimum mandatory requirements, you have really created the only list of requirements that many organizations will follow. You have also given the bad guys a template of what your security will look like. In effect, you have made security weaker.

Federal regulations and PCI have taken the wrong approach when it comes to cyber security. This is because they are trying to define the controls that need to be put in place. Unfortunately, information technology is too complex, companies are too diverse and the threat landscape is too dynamic for that to work. The minute you define a control requirement, bad guys find a way around it. The regulations then have to change, but that process takes far too much time. Control-focused regulatory requirements also create the impression that compliance equals security. That is simply not the case.

So, if the current regulatory requirements are the wrong approach, what is the right one? I'm glad you asked. The short answer is - responsibility. Organizations should be held responsible for the results of their security measures, not for the measures themselves. If an organization has really bad security but never suffers a breach or compromise, is there a problem? If an organization is 100% compliant with all regulations but suffers a compromise that discloses credit card numbers and results in identity theft for thousands, did being compliant help? The answers to these questions should be self-evident. Unfortunately, our current regulatory climate would praise the second organization while punishing the first.

My thoughts are simple. Hold organizations accountable for effectively securing sensitive data - or more specifically, data that, if disclosed, altered or destroyed, would negatively affect others. Many of the various state privacy laws do this. Let organizations secure their environments as they see fit, but hold them accountable for failure. It doesn't take a cyber security czar or a Cybersecurity Act of 2009 to do this. It takes a couple of things. First, it requires a national definition of what "sensitive" data is. This list doesn't need to be that large but would include personal information that could be used for identity theft, personal medical records, personal financial records (including credit card data), classified government information and the like. Once this definition has been established, two things need to happen: a law needs to be passed that would result in penalties should these data be disclosed, and a law needs to be passed (perhaps an expansion of the Computer Fraud and Abuse Act) that makes it a crime to access these data without authorization.

What would the effect of this be? Of course I don't know for sure but here's my guess. Nothing would happen initially until the first few public cases of fines or other penalties levied against organizations who let their sensitive data be compromised were on the nightly news. This would serve as a wake up call (hopefully) making organizations approach security as a matter of risk management rather than as a checkbox that needs to be checked. Would it work? I don't know but I believe it would be better than what is happening today.

Tuesday, April 28, 2009

Cyber Security Kill Switch Bill

On April 1st, Sen. John D. Rockefeller IV (D-W.Va.) and Sen. Olympia Snowe (R-Me.) introduced Senate Bill 773, also known as the Cybersecurity Act of 2009. This bill seeks to provide increased protections to United States critical infrastructure from the threat of a cyber attack. In the abstract, I think that is a good idea. The specific implementation of this bill, however, has some problems. I won't go through the entire bill, but I'll provide some highlights.

This bill calls for a Cybersecurity Advisory Panel that will be created by the President to track the state of security of critical infrastructure. The panel will report to Congress "not less frequently than once every two years".

The bill calls for the creation of a cybersecurity dashboard that can track the state of security of all critical infrastructure assets in real time.

It calls for the creation of regional cybersecurity centers who will "transfer" standards to the private sector with a focus on small to mid-sized businesses. These regional centers will also have funding to make loans to small businesses to promote enhanced security.

The bill tasks NIST with creating much more robust standards, including standards for secure coding and software configuration.

It also calls for the implementation of a "secure" DNS solution.

The bill will also make it illegal for "any individual to engage in business in the United States, or to be employed in the United States, as a provider of cybersecurity services to any Federal agency or an information system or network designated by the President, or the President's designee, as a critical infrastructure information system or network, who is not licensed and certified under the program."

The bill establishes the "Department of Commerce as the clearinghouse of cybersecurity threat and vulnerability information to Federal Government and private sector owned critical infrastructure information systems and networks." In this role, the Dept. of Commerce "shall have access to all relevant data concerning such networks without regard to any provision of law, regulation, rule, or policy restricting such access".

The bill requires that "within 1 year after the date of enactment of this Act, the President, or the President's designee, shall review, and report to Congress, on the feasibility of an identity management and authentication program, with the appropriate civil liberties and privacy protections, for government and critical infrastructure information systems and networks."

The bill gives the President the authority to "declare a cybersecurity emergency and order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network" and to "order the disconnection of any Federal Government or United States critical infrastructure information systems or networks in the interest of national security".

You will note that throughout these summary points, the phrase "critical infrastructure" is stated frequently. Whether any specific organization or network is "critical infrastructure" will determine if it falls under the purview of this bill. According to the bill, the President will determine what makes up "critical infrastructure".

I'm going to avoid any purely political analysis of this bill. Whether you think the federal government should be responsible for cybersecurity of private industry is for you to decide. I want to comment on whether this bill will achieve the desired objectives of making our critical infrastructure more secure. In short, I believe that not only will our critical infrastructure not be more secure, it will in fact become less so.

I have done a lot of work with regulated companies; health care, pharma, financial, retail, etc. While the regulations covering them are different, I have seen one common factor - something I call the checklist syndrome. When organizations are forced to comply with a security regulation, they start by developing a checklist of what is required. They then audit themselves to determine where their gaps are. Finally, they work to fill the gaps and believe themselves secure. In taking this approach, I have seen more than one organization intentionally ignore a good security measure because it was not required for compliance. I have seen companies re-word strong policies so they will only apply to the subset of their systems required for compliance. Guess what?!?!? The attackers also have access to the compliance standards. Setting up a security program that is precisely and only what is required for compliance is giving the attackers a picture of your strengths and weaknesses.

Imagine if we were required by law to secure our homes. The law states that we must lock all doors and windows when we are away. It also requires that sensors be installed on all ground floor doors and windows to detect unauthorized access. It also requires that motion sensors be installed on the ground floor. In theory, homes adhering to these standards would be more secure but let's now assume that people forced to comply with this do so by adhering to the principles of checklist syndrome. They do exactly what is listed in the law and consider themselves secure. Attackers, knowing this, take advantage of the lack of exterior lighting (not required by the law) to provide cover as they climb to the second story, break a window and steal what they can find. Because they never go to the first floor, they never trip the sensors but the victims are still victims. I understand that my example is overly simplistic but Hannaford Supermarkets was "PCI DSS compliant" up until they suffered a massive compromise.

Aside from the checklist syndrome problems, there are a number of other problems I see with this bill.

The Cybersecurity Advisory Panel is only required to report every two years. How much changes in the world of information security in two years? The state of cybersecurity could go from "outstanding" to "epic fail" in two days under the right circumstances, let alone two years.

Requiring cybersecurity professionals to be licensed and certified sounds nice but has some dramatic potential impacts. Who will create the certification and licensing process, and how much will it cost? If infosec people need to invest time and money into this licensing/certification, what will that do to all of the existing certification organizations (e.g. SANS, (ISC)2, ISACA, etc.)? As these licensing requirements get put in place, those who are licensed will become more valuable and thus will demand higher pay. Many organizations already struggle to maintain infosec expertise on staff. This may make it far more difficult. What about consulting firms who do security-related but not security-specific work? Is deploying Active Directory or a Cisco router security work? What about a firewall? Will the folks who do this type of work also need to be cybersecurity certified?

I'm entirely in favor of better security but I'm not sure this is the way to go about it. If it were up to me and I was inclined to draft legislation about cybersecurity, I think I might keep it far more simple:

1) Thou shalt incorporate risk assessment into strategic and operational business decisions.

2) Thou shalt make security decisions based on assessed risk such that (a) unacceptable risk shall be mitigated and (b) acceptable risk shall be documented.

3) Thou shalt establish proper standards for security; the standards shall be implemented based on assessed risk and thou shalt regularly audit for compliance with established standards.

4) Thou shalt establish roles and responsibilities for promoting security, and one such role shall be responsible for maintaining a current understanding of security threats, techniques, technologies and trends.

5) Security shall be prioritized equally with performance and functionality and a lack of any of these shall not be accepted.

6) Failing to implement reasonable controls to protect against commonly understood threats is negligence.

7) Security is not a technology issue; rather it is a business issue that involves technology, people, policy and process. Similarly, technology does not provide the full security solution.

8) Computing hardware and software is extremely complex and will have vulnerabilities. Expect them and build security around that fact.

9) Security is not about protection only; rather it involves protection, detection, response and recovery, the ratios of which are determined by assessing risk.

10) Security can never be 100% effective as long as people are involved.
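Commandments 1 and 2 above reduce to a very small amount of logic. The sketch below is purely illustrative: the 1-to-5 scales and the threshold of 12 are arbitrary assumptions, and real risk assessment involves far more judgment than arithmetic.

```python
# Illustrative sketch of commandment 2: mitigate unacceptable risk,
# document accepted risk. The scales and threshold are arbitrary.
def triage(risks, threshold=12):
    """Split (name, likelihood, impact) tuples into mitigate/accept
    lists by their likelihood x impact score (each on a 1-5 scale)."""
    mitigate, accept = [], []
    for name, likelihood, impact in risks:
        score = likelihood * impact
        (mitigate if score >= threshold else accept).append((name, score))
    return mitigate, accept

risks = [
    ("unpatched SOHO routers used by teleworkers", 4, 4),
    ("laptop theft from a locked office", 2, 3),
]
to_mitigate, to_document = triage(risks)
```

The value isn't in the arithmetic; it's in forcing every identified risk into exactly one of the two buckets so that "accepted" risk is a deliberate, documented decision rather than an oversight.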

Another approach would be to make getting compromised illegal. Perhaps that would get organizations to pay security proper attention. (wink, wink, nudge, nudge)

Wednesday, February 4, 2009

But we're in a recession?!?!?!?

WPA has been cracked. Twitter and other "web 2.0" technologies have been hacked. Payment processor Heartland Payment Systems was recently compromised. Regulatory compliance requirements continue to increase while the range and scope of threats continue to grow. What's the matter? Don't the bad guys know we are in a recession and my budget for security has been cut?

The fact is that during times of economic trouble security requirements don't decrease, they increase. Organizations may scale back hardware upgrades or the implementation of a cool new technology, but they simply cannot choose to ignore security. A compromise in a strong economy is bad. A compromise in a weak economy, where profits are lower and competition is greater, could make the difference between a business that succeeds and one that fails. So what can organizations do to maintain security and regulatory compliance while at the same time reducing costs?

Recent industry activity has shown that organizations are doing a few things to meet these seemingly conflicting requirements. Many organizations are looking to automation and outsourcing. These approaches allow organizations to do more with less. "Security as a service" can allow organizations to take advantage of high levels of expertise without the high employee overhead. Replacing highly manual and labor-intensive processes with technology can reduce those costs further. Managed security services look to play a big role in the coming year.

Many organizations are looking to blend physical and logical security. Technologies such as smart cards and proximity cards can provide "single sign-on" to the building, the data center, the network and applications, eliminating the need to manage and maintain multiple solutions.

Larger organizations are also looking to centralize a sometimes distributed security infrastructure. Moving security technologies into a central data center can reduce administrative costs significantly.

So what can be done?

First, organizations need to understand where their security strengths and weaknesses are. They need to not only understand their risk of compromise, they need to identify areas where consolidation, centralization, automation and outsourcing would result in better security at a lower cost. If organizations have already addressed their security concerns, they should focus on testing their solutions to validate effectiveness.
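A first pass at understanding those strengths and weaknesses can be as simple as a set difference between the controls an organization has decided it needs and the controls it actually runs. The control names below are made up purely for illustration:

```python
# Illustrative first-pass gap analysis: required controls vs. deployed
# controls. All control names here are made-up examples.
required = {"patch management", "log review", "endpoint anti-virus",
            "perimeter firewall", "security awareness training"}
deployed = {"endpoint anti-virus", "perimeter firewall", "web filtering"}

gaps = sorted(required - deployed)    # controls still to implement
extras = sorted(deployed - required)  # candidates for consolidation/review
print(gaps)    # ['log review', 'patch management', 'security awareness training']
print(extras)  # ['web filtering']
```

The "extras" list is just as interesting as the gaps: controls nobody decided were required are often the ones worth consolidating, centralizing or outsourcing to cut cost.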

One final thought. In a troubled economy it is important that organizations assess the financial stability of their security technology vendors. The failure of a security company could result in organizations relying on unsupported technologies. That would be a problem if we were talking about a firewall. It would be a disaster if we were talking about technologies, like anti-virus and intrusion detection, that require constant updates from the vendor. While technology replacement may not be high on the list of priorities for many organizations, replacing security technology from troubled vendors may be a requirement.

First Posting - Quick Overview

Welcome to the NWN Security blog site. I hope to use this site and its related Twitter account to distribute updates about NWN's Security Testing, Assessment and Response practice. Rather than talking about what this blog will contain, I'll just start and hope you get the idea.

As many of you know, NWN has created a new practice that focuses exclusively on security testing, security assessments, regulatory compliance, incident response and computer forensics - thus "Security Testing, Assessment and Response" or STAR. For those of you not familiar with what we do, I'll give you an overview.

"Security Testing" focuses mainly on reviewing security from an attacker's perspective. This includes things like vulnerability scanning, war dialing, war driving, social engineering, physical security and full penetration testing. Basically, we try to break in to customer networks to test their security.

"Security Assessment" tests to operate from a more trusted perspective. We work with our customers reviewing the configuration of systems and devices, their network architecture, Active Directory, security technology, security policies and security operations to determine overall security effectiveness. Assessments can also take the form of formal audits where NWN collects evidence of proper security and provides our customers with PASS/FAIL grades.

"Incident Response" involves identifying and confirming the attack or compromise, containing the problem, cleaning up the mess and finally, restoring normal business operations. It can include formal computer forensics investigations, either in conjunction with law enforcement or not.

Any and all of these services can be directly related to regulatory compliance (e.g. PCI, SOX, GLBA, HIPAA, 21 CFR Part 11, etc.) or they can be based on industry standards such as the ISO 27000 series.

Well, that's about all for now. Check back periodically for more updates. I hope to get to this on at least a weekly basis. If you have any questions, concerns, comments or need anything from me, don't hesitate to reach out.

Thanks,

Kevin