Let’s talk about frameworks, security edition

Published on: August 13, 2006    Categories: Frameworks

It’s been an exciting week, hasn’t it?

The Rails vulnerability touched off quite a firestorm of commentary on the security of web application frameworks (and, by extension, applications developed with them), so let’s bring back the frameworks series for one last hurrah and take a look at security.

What do we mean by “secure”?

This may sound like a strange question to ask, but it’s an important one. A common misconception is that an application is “secure” if it doesn’t have any “bugs”. Setting aside the fact that this just switches out one vaguely-defined term for another, let’s consider what it would take for an application developed with a framework to be completely “secure”. At the very least, each and every one of the following would have to be absolutely “bug-free”:

- The application’s own code
- The framework’s code
- The language the framework is written in, and its standard library
- The database, and the libraries the application uses to talk to it
- The web server, and whatever connects it to the application
- The operating system all of this runs on

Also, all of those components, when configured as they are for the application, have to interact in a way that’s 100% “bug-free”.

It’s unfortunate, but the truth is that a completely bug-free stack involving all the components listed above has never existed in the history of the world, and never will exist. So “bug-free” has to go as a definition of “secure”. Similarly, any related definition, like “performs without deviation from the intended behavior”, is out.

Which is a good thing, because it makes us think about security in much clearer terms: as a series of trades.

Consider an example: one common vulnerability in web applications is SQL injection, where an attacker gets your application to run whatever database queries they want (including INSERT, UPDATE or even DELETE or DROP) by taking advantage of common errors in how the application constructs and escapes the SQL statements it executes.
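To see the mechanics, here’s a minimal sketch in Python using the standard-library sqlite3 module; the users table and the attacker’s input are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a1"), ("bob", "b2")])

user_input = "nobody' OR '1'='1"  # hypothetical attacker-controlled value

# Vulnerable: pasting user input straight into the SQL string.
# The quoting inside the input rewrites the query's logic.
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())  # every row comes back

# Safer: a placeholder lets the database driver handle quoting.
print(conn.execute("SELECT * FROM users WHERE name = ?",
                   (user_input,)).fetchall())  # []
```

The parameterized version treats the entire input as a single value to compare against, no matter what characters it contains.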

Now, there’s a very simple and very easy way to prevent 100% of all known, unknown and as-yet-undeveloped SQL injection attacks: all you have to do is stop using a database.

No, really.

This is what’s really going on when we talk about “securing” something: we’re evaluating a bunch of possible trades we can make, and whether they’re worth making. In some cases, not using a database is an acceptable trade (if, for example, you’re talking about a site which is fairly small and simple, and can reasonably be maintained by someone who just edits files directly). But for most of the really interesting things you can do on the web, not having a database just isn’t an option.

So you make a different trade: you decide that the benefits of having a database outweigh the costs and risks of having to ward off SQL injection.

And for other features you have to consider similar trades:

- Accepting user-submitted content and displaying it opens you up to cross-site scripting.
- Letting users upload files opens you up to hosting and serving malicious content.
- Keeping users logged in across requests opens you up to session hijacking and forged requests.

There are lots of potential attack vectors for web applications, and a number of them can be completely avoided simply by not having a particular type of feature. The problem is that those tend to be the really interesting and useful features, so most of the time you end up taking on extra cost and risk to get the benefits of those added features.

So what do we mean by “secure”?

This leads into a much more complex and nuanced definition of “secure” which can vary wildly on a case-by-case basis. So long as you’re accepting user-submitted content and displaying it in some form, there is a risk — possibly a very small risk, possibly a very large risk — that you’ll fall victim to a cross-site scripting attack. So long as you’re using a database, there is a risk that you’ll fall victim to SQL injection. For every useful feature you have, you’ve made a trade and taken on the risk of bad things happening because of that feature. A rough definition of “secure”, then, might be that you’re aware of the trades you’ve made and of their consequences, and that for any trades which expose you to risks you’ve taken reasonable steps to minimize those risks.

When the fall is all that’s left…

It’s also important to note that, sooner or later, something bad is going to happen. It’s inevitable. This isn’t an excuse, and it isn’t a reason to throw your hands up in the air and say “why bother”; it’s just a statement of fact. How you respond after the breach is just as important as how you prepared before it (and, in fact, contingency plans for that situation need to be part of your precautions: if you don’t have a “what to do if we get hacked” procedure in place, stop what you’re doing and work one out), because sooner or later you will make a mistake and someone will manage to capitalize on it. Whether you learn from the mistake and use it to re-assess the trades you’re making will determine whether it happens again.

I thought we were going to talk about frameworks?

OK. Back to the actual topic :)

When we talk about “frameworks”, we’re really just talking about tools which are designed for a particular purpose: easing and simplifying the process of developing web applications.

Now, tools are notoriously tricky things; a knife, for example, is a very useful tool — it can be used to prepare food, open packages, carve wood, all sorts of things. But it can also be used to accidentally cut your thumb off. I’ve got a scar on my left hand from almost doing that exact thing once when I was cutting some boxes.

So tools (surprise!) involve trades. You get a certain amount of utility (like being able to cut the tape that seals a box shut), but you also get a certain amount of risk (like being able to perform inadvertent amateur surgery). Whether you use the tool or not depends on whether you’re willing to trade a certain amount of safety for a certain amount of utility (and, more importantly, on whether you have the competence and discipline to use the tool in ways that minimize the risks).

And frameworks are, in a way, the utility knives of the web-development world. They tend to have lots of little attachments and pointy bits that come in handy for different situations, but you can still cut yourself or poke your eye out if you don’t know what you’re doing or don’t pay attention. So whenever you use one you’re taking on a certain amount of risk, and hopefully you’re aware of that and know how to take steps to minimize that risk.

The dilemma of scope

Which puts framework developers into a nasty position: they want to provide useful tools, but they know that there are people who will use those tools without realizing the risks or knowing how to minimize them. And given the explosive popularity of frameworks as tools for people who don’t really know much about web programming, or even programming in general, that problem starts looming large (language designers also face this to a certain extent, and it contributed a lot to PHP’s abysmal security reputation: there were bad things in PHP, yes, but there were much worse things being done with PHP by people who didn’t know any better).

So framework developers have to draw a line somewhere; obviously the framework should not expose any vulnerabilities directly (the Rails bug involved the ability to trick Rails into executing code from places it shouldn’t have, something that clearly shouldn’t have happened), but how far should the framework go toward saving application authors from themselves? Making the framework more “secure” in this “I’m sorry Dave, I can’t let you do that” sense means (surprise!) trading away some flexibility.
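As a sketch of what that trade can look like in practice (the names here are invented for illustration, not any particular framework’s API), a template layer can escape everything by default and give authors an explicit way to opt out when they really do want raw markup:

```python
import html

class SafeString(str):
    """Hypothetical marker: the author vouches that this value is
    already safe, so the renderer should leave it alone."""
    pass

def render_value(value):
    # Secure by default: everything gets escaped unless the author
    # explicitly opted out by wrapping the value in SafeString.
    if isinstance(value, SafeString):
        return str(value)
    return html.escape(str(value))

print(render_value("<b>untrusted</b>"))            # &lt;b&gt;untrusted&lt;/b&gt;
print(render_value(SafeString("<b>trusted</b>")))  # <b>trusted</b>
```

The default protects authors who forget to think about escaping; the wrapper preserves flexibility, at the price of an extra, deliberate step.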

Different frameworks are going to make different trades. And just as you need to be aware of the trades you yourself have made, you need to be aware of the trades the framework has made.

Let’s take cross-site scripting as an example; a large percentage of XSS attacks can be prevented by a combination of carefully validating user input and carefully escaping that input any time you display it. So when you’re evaluating a framework you need to ask a few questions:

- Does it give you tools for validating user input, and how easy are they to use?
- Does it escape content automatically when you display it, or does it leave that up to you?
- When it does something automatically, can you override it in the cases where you legitimately need to?
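The escaping half is sketched above; the validation half can be as simple as a whitelist check on the way in. A minimal sketch (the pattern and the function are made up for illustration):

```python
import re

# Hypothetical policy: usernames are letters, digits and underscores only.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,30}$")

def clean_username(raw):
    # Whitelist validation: reject anything that doesn't match,
    # rather than trying to strip out known-bad patterns.
    if not USERNAME_RE.match(raw):
        raise ValueError("invalid username: %r" % raw)
    return raw

print(clean_username("alice_94"))  # passes through unchanged
try:
    clean_username("<script>alert(1)</script>")
except ValueError as err:
    print(err)  # invalid username: '<script>alert(1)</script>'
```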

If you never ask these sorts of questions, then you’re taking your life into your hands; you won’t know the answers, and you won’t know what trades the framework is making — which means you won’t know what sorts of risks you’re exposing yourself to.

Falling down

And it’s equally important to ask what the response will be when there’s a breach, because there will be one eventually; even OpenBSD, arguably the most secure software system you or I can lay our hands on, eventually had a remote exploit.

So find out whether the framework’s developers have a security policy. Find out how they disclose vulnerabilities, and what their patching policies are. Find out whether they live up to their policies. Find out whether they learn from their mistakes. Part of your security will be in their hands, and you need to know whether that’s a trade you can make.

Don’t be afraid to ask lots of questions, especially hard questions. Those of us who have to answer them may squirm a little or get annoyed once in a while, but in the end we’ll all be better off for it.

Class dismissed

That’s all I’ve got for now; security is a topic that deserves a lot of attention, and lots of books and articles have been written about it; some of them are even pretty good. Go out and find some, and start reading and thinking and asking questions. A good place to start would be almost anything recent from Bruce Schneier, whose books and blog dragged me, kicking and screaming, into a better understanding of how to think about security (and whose willingness, more than once, to respond to his critics by saying that they were right is a large part of why I admire him). But the more you read and find out, the better off you’ll be.