Sniffle

Published: December 18, 2006. Filed under: Accessibility, JavaScript, Usability.

The other day, the Dojo blog announced the public beta of Renkoo, an “evite killer” which relies heavily on the asynchronous features of Dojo and “Comet”, the name that’s been given to the use of long-lived HTTP connections to provide instant updates of state in event-driven web applications.

Except I can’t check out Renkoo in my browser of choice — Safari — because I get automatically redirected to their “unsupported browser” page. They have a brief write-up on their blog of why Safari support is problematic, though I can’t help thinking that a social application which can’t work in Safari is starting out with the odds stacked against it. Safari has a pretty small slice of the overall browser market, but in the context of a social application which will depend on building networks of people, market share in the general population is almost meaningless: what’s important is browser market share among the critical adopters who will make or break the network. If you develop social software and you haven’t already read danah boyd’s “Cluster Effects and Browser Support”, stop right now and go read it. I’ll wait.

Anyway. I understand why Renkoo has problems with Safari. From what I read on their blog, it appears they have one of the few legitimate cases I’ve ever seen for sniffing out browsers and excluding them: the Comet model doesn’t appear to really be workable in Safari right now. In the long run I think they’re screwed if they can’t find a way to work in Safari, but they’re really in a rock-and-a-hard-place situation and I don’t envy them one bit.

Updated January 2, 2007: It now appears that Renkoo have ditched the browser-sniffing on every page, and an item in their documentation says that the only thing they don’t offer in Safari now is the Comet-based version of the invites interface. That’s a big improvement, and it’s nice to see they’ve dealt constructively with the issue.

But lately I’ve been seeing a major renaissance of browser sniffing in cases where it is, frankly, unjustifiable. Browser sniffing is dead, and has been for years.

The first death of browser sniffing

When the Web first really took off, the only significant player in the browser market was Netscape Navigator, whose rise and fall form a sort of Ozymandias for the Internet age. There were other browsers, though, and generally you could tell them apart reliably: they mostly had unique user-agent strings which were sent with every request, and by looking for the HTTP User-Agent you could figure out which browser you were talking to and make some assumptions about what it supported.

Netscape’s user-agent string was easy to spot, because it included the string “Mozilla”; early Netscape user-agent strings looked like this:

Mozilla/3.0 (OS/2; U)

And when Internet Explorer came on the scene, it was easily recognized by having “MSIE” in the user-agent string, and all was well with the world. Netscape and Microsoft worked hard to cram new unique features into their browsers, but the user-agent strings made them easy to tell apart, right? Well, not really. Very early in its history, IE started including the word “Mozilla” in its user-agent string; for example, here’s the string from IE 1.5:

Mozilla/1.22 (compatible; MSIE 1.5; Windows NT)

So every IE release since 1.5 has done this, complicating attempts to reliably tell IE and Netscape apart and making web design even more difficult, since IE and Netscape were, at the time, anything but “compatible” with each other.

If we web geeks had been smart, we would have stopped sniffing user-agent strings then and there. But of course people kept right on sniffing, developing ever more complex sets of rules to tell apart browsers whose user-agent strings began borrowing more and more liberally from each other. For the longest time, Opera sent a string almost identical to IE’s, originally to get around sniffers which locked out non-IE clients. Safari’s default user-agent string includes the phrase “like Gecko” — Gecko being the rendering engine of Mozilla and its descendants — though it most certainly isn’t a Gecko browser. Today, through some native support and some creative extensions, Opera, Safari and Firefox all have facilities available for changing the user-agent string to anything you like and can effectively masquerade as any major browser on the market.
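To see how badly that breaks things, here’s a sketch of the sort of naive detection function this article is arguing against, fed a couple of user-agent strings representative of what Opera and Safari have actually sent (the exact version tokens vary from release to release):

function detectBrowser(userAgent) {
    // The classic approach: look for magic substrings.
    if (userAgent.indexOf('MSIE') != -1) {
        return 'Internet Explorer';
    }
    if (userAgent.indexOf('Gecko') != -1) {
        return 'Mozilla/Firefox';
    }
    return 'unknown';
}

// Opera, masquerading as IE, is misidentified:
detectBrowser('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; en) Opera 9.02');
// -> 'Internet Explorer'

// Safari, whose string contains "like Gecko", is misidentified too:
detectBrowser('Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/418.9 (KHTML, like Gecko) Safari/419.3');
// -> 'Mozilla/Firefox'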

In other words, it’s been about a decade (IE 1.5 shipped in early 1996) since the last time user-agent sniffing offered a straightforward way to figure out which browser a user has. But that hasn’t stopped people from trying.

The second death of browser sniffing

In the modern era of the Web — assumed for purposes of this article to have begun in 2003, the year the first versions of Safari and the-browser-now-known-as-Firefox appeared — sniffing is even more problematic because writing a new web browser has become ridiculously easy, thanks to embeddable open source rendering engines. Embedding a rendering engine is, of course, nothing new; AOL for Windows embedded the Internet Explorer rendering engine for years, and Windows, through the various incarnations of its component object model, has long exposed IE for use by other applications. But that was one engine on one platform.

In recent years, we’ve seen two solid cross-platform, open-source rendering engines come to maturity: Gecko and KHTML (which has since given birth to the Mac-specific WebKit application framework at the heart of Safari and several other Mac browsers). And embedding seems to get easier with each passing year, thanks to increasingly prolific bindings for a number of languages and desktop environments (for example, consider this demo from last year, which builds a Gecko-based web browser in only a few minutes, using Ruby Gecko bindings and the GNOME desktop’s interface builder).

Browsers which embed Gecko or KHTML (or which use WebKit on OS X) have rendering capabilities pretty much identical to those of Firefox, Konqueror and Safari — all fully modern, state-of-the-art browsers in their own right (in fact, new browsers which embed these engines are often more capable, since they’re built on more recent snapshots of the rendering engines than the release versions of better-known browsers). The result: there are browsers out there with user-agent strings you’ve never seen before, but which are just as capable of handling advanced CSS and JavaScript as the browsers you’ve heard of. Excluding them via user-agent sniffing, then, is self-defeating: your goal is to keep out browsers which can’t handle advanced functionality, not browsers which can.

When what you’re doing is actively countering the goals of your site, it’s time to stop.

He’s dead, Jim

So sniffing the user-agent string to find out whether a browser can handle what you want to throw at it has died not once, but twice: the first time because major browser developers intentionally obfuscated their user-agent strings, and the second time because new, perfectly capable browsers have their own strings which won’t be in your list. Maintaining an up-to-date list of every user-agent string of every capable browser, including new browsers which embed Gecko, KHTML or WebKit, is basically a hopeless task.

And yet…

And that’s just the tip of the iceberg: if I wanted to, I could dig up dozens, probably hundreds, of high-profile sites which are all too happy to drive away potential traffic by engaging in browser sniffing. Just today, I saw a story on Slashdot about the Boston Globe literally telling someone to stop using Opera, another perfectly capable web browser with an advanced rendering engine, Presto, at its heart.

For as long as I can remember, the Mozilla project has maintained a “technology evangelism” sub-project aimed at moving sites away from the horrors of IE-only sniffing, but it appears that the only real response has been to move to “IE or Firefox only, and maybe Safari”; that’s not an improvement. That’s treating a superficial symptom instead of curing the disease. And this weekend I came across “Gecko is Gecko”, which tries to educate developers on the existence of embedded Gecko in browsers that aren’t Firefox. One of their suggested solutions — sniffing for “Gecko” in the user-agent string instead of “Firefox” — is a minor improvement (and has the added benefit of giving Safari a free pass), but again this just treats a symptom.

Let me say this as clearly as I can: sniffing user-agent strings and comparing them against a list of “approved” or “known good” browsers is dead. Dead as a doornail. Yank the sniffing scripts out of your toolbox and throw them away. These days, you’re far more likely to be excluding capable but (to you) unknown browsers than you are to be excluding 1.x editions of Netscape Navigator.

Sniff the right way

One solid alternative is feature sniffing. This is the other solution mentioned by the “Gecko is Gecko” folks, and it’s vastly superior: instead of looking at the user-agent string and redirecting if it’s not in your “approved” list, you instead use JavaScript to test for the features you want (and, these days, browser sniffing is much more about JavaScript — particularly AJAX capabilities — than anything else). This is a concept that’s been kicking around for years, and which has had one or two high-profile articles written about it (including one by JavaScript guru Stuart Langridge, helpfully linked by the “Gecko is Gecko” site).

The nice thing about feature sniffing is that it mostly just works: a browser will lie out of both sides of its mouth about whether or not it’s IE, but its JavaScript engine won’t lie about whether it supports getElementById. There are a couple of small wrinkles in this rosy picture (notably Safari which, last I checked, exposes a method called preventDefault on DOM events, even though it doesn’t actually do what preventDefault is supposed to do), but I’d be willing to bet it’s at least as effective, percentage-wise, as sniffing for the “big four” (IE, Firefox, Safari and Opera), and it has the advantage of being future-proof: when the browser market changes, user-agent sniffing scripts have to be laboriously updated, but so long as you’re testing for actual features there’s nothing to update; getElementById isn’t going to be renamed any time soon.
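A minimal sketch of the idea (the “status” element id here is purely an assumption for the demo):

// Test for the features themselves, never for the browser's name.
if (document.getElementById && document.createElement) {
    var target = document.getElementById('status'); // hypothetical element for this demo
    if (target) {
        // This browser can handle basic DOM scripting; layer on the effects.
        var note = document.createElement('p');
        note.appendChild(document.createTextNode('Fancy effects enabled.'));
        target.appendChild(note);
    }
}
// A browser lacking these methods simply skips the enhancement;
// nobody gets bounced to an "unsupported browser" page.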

And to really hammer it home, keep in mind that this is by far the most effective way to implement AJAX effects. Versions of IE prior to 7 expose XMLHttpRequest in a slightly different fashion from other browsers, so the browser-sniffing method relies on being able to absolutely differentiate IE so you can use the correct invocation. Of course, lots of other browsers like to masquerade as IE in their user-agent strings, so that’s out the window: trying to use IE-style invocation in non-IE browsers will just throw JavaScript errors at your users, and then they’ll complain (if you’re lucky) or take their money and their ad-viewing eyeballs somewhere else (if you’re not). Meanwhile, a few short lines of JavaScript which determine where the XMLHttpRequest object lives are all that’s needed to effectively work out how to do AJAX.
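Those few short lines look roughly like this; it’s the widely-circulated feature-testing pattern, sketched from memory rather than copied from any particular library:

function createXMLHttpRequest() {
    // Most browsers (and IE 7) expose a native constructor.
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest();
    }
    // Older IE exposes the same functionality through ActiveX.
    if (window.ActiveXObject) {
        return new ActiveXObject('Microsoft.XMLHTTP');
    }
    // No AJAX support at all; the caller should degrade gracefully.
    return null;
}

Any code which needs AJAX just calls createXMLHttpRequest() and checks for null, with no user-agent string in sight.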

Even better: just stop sniffing

That’s still only a partial solution, though. The Platonic ideal is what’s come to be known as “progressive enhancement” (or, for the buzzword-inclined, “Hijax”): instead of building an advanced interface and then worrying about how to exclude browsers that can’t handle it, first build a simple interface that works everywhere, and then layer the fancy effects on top of it (using techniques like feature sniffing if necessary). Jeremy Keith has done some great evangelism for this technique, including coining the “Hijax” name and providing one of the best write-ups of how it should work.
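As a rough sketch of the pattern (assuming a hypothetical form with the id “invite-form”, a single “email” field and a “status” element for responses, and reusing the createXMLHttpRequest function from the earlier example): the form posts normally everywhere, and capable browsers quietly upgrade it.

window.onload = function() {
    // Feature-test first; in a browser without these, the plain form still works.
    if (!document.getElementById) return;
    var form = document.getElementById('invite-form'); // hypothetical id
    if (!form) return;
    form.onsubmit = function() {
        var xhr = createXMLHttpRequest(); // factory from the previous example
        if (!xhr) return true; // no AJAX support: let the normal submission happen
        var field = form.elements['email']; // hypothetical field name
        var status = document.getElementById('status'); // hypothetical target element
        if (!field || !status) return true;
        xhr.open('POST', form.action, true);
        xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        xhr.onreadystatechange = function() {
            if (xhr.readyState == 4 && xhr.status == 200) {
                status.innerHTML = xhr.responseText;
            }
        };
        xhr.send('email=' + encodeURIComponent(field.value));
        return false; // cancel the full-page submission; the AJAX version takes over
    };
};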

I’m fortunate to work for a company which “gets it”, and which lets me and my co-workers build things this way; we’ve been working on a redesign of ljworld.com which will hopefully launch in the not-too-distant future, and progressive enhancement has been the name of the game. Pretty much all of the JavaScript I’ve written for it just hooks into Nathan’s semantic HTML and adds fancy effects when possible. If you don’t have JavaScript available, you still get all the content. A couple of other projects on the horizon are developing in much the same way. For example, one project (which will remain unnamed for the moment) involves some relatively complex interactions; they were built first as simple multi-step HTML forms (fill out one part, submit, move on to the next), and only later was JavaScript added in which will — for capable browsers — collapse that into a single page with fancy AJAX effects.

For the unfortunates

Of course, not everyone is so lucky, and I recognize that. Some designers and developers are stuck working at places where old-school user-agent sniffing is still mandated from On High, because something that kind of worked in 1996 must be really great to have in 2006, right? Anyone who’s caught in that situation has my deepest sympathy. And a suggestion.

If you have absolutely no other choice besides user-agent sniffing, do it the way Yahoo does it, and feel free to drop that name in discussions with corporate bigwigs; an article on some guy’s blog won’t carry much weight, but the field-tested techniques of one of the Internet’s biggest companies will hopefully get their attention. Yahoo uses a system called “graded browser support”, in which every browser falls into one of three categories:

A-grade: identified, tested, capable browsers, which get the full experience, advanced effects and all.

C-grade: identified, antiquated browsers known to be incapable, which get the core HTML content and nothing more.

X-grade: unknown, fringe or brand-new browsers, which Yahoo hasn’t tested.

The key here is how Yahoo treats the X-grade browsers: most places lump them in with C-grade and either serve a watered-down interface or refuse to serve anything at all. Yahoo, however, assumes that unknown browsers are capable, an assumption more likely to be correct in these modern times, and goes so far as to say that

Unlike the C-grade, which receives only HTML, X-grade receives everything that A-grade does. Though a brand-new browser might be characterized initially as a X-grade browser, we give its users every chance to have the same experience as A-grade browsers.

If you absolutely must sniff based on user-agent strings, this is the philosophy to adopt; the pool of ancient browsers which can’t handle modern features is known and documented, so unknown browsers are much more likely to be new and — thanks to the availability of advanced, embeddable rendering engines — more than capable of dealing with any fancy effects you want to use.
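In code, that philosophy inverts the usual whitelist. Here’s a minimal sketch; the patterns below are purely illustrative, not Yahoo’s actual lists (their real system is a hand-maintained browser-and-platform matrix):

// The only list worth maintaining is the short, stable list of
// known-incapable (C-grade) browsers; everything else, including
// browsers you've never heard of, is assumed capable.
var C_GRADE_PATTERNS = [
    /^Mozilla\/[1-3]\./,  // Netscape 1-3 era (also catches very old IE, which reported low Mozilla versions)
    /MSIE [1-4]\./        // Internet Explorer 4 and earlier
];

function grade(userAgent) {
    for (var i = 0; i < C_GRADE_PATTERNS.length; i++) {
        if (C_GRADE_PATTERNS[i].test(userAgent)) {
            return 'C'; // known incapable: serve core HTML only
        }
    }
    return 'A/X'; // known good or unknown: serve the full experience
}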

And once you’ve introduced your management to this style of thinking, I heartily encourage sending them the entire article on Yahoo’s grading system; the analogies they use for explaining accessibility and progressive enhancement (particularly the color versus black-and-white TV one) are top-notch, and may help you make further inroads toward adoption of modern development techniques.

But in the long run, user-agent sniffing is dead, and with each passing day the old-school forms of sniffing offer less and less utility and more and more opportunities to accidentally drive away people who would otherwise have been revenue-generating users. Switching to the inclusive Yahoo-style model is a good first step, but it’s just a stopgap in the march toward more modern and more powerful techniques like feature sniffing and progressive enhancement; if you’re doing user-agent sniffing here and now, at the end of 2006, your new year’s resolution should be to stop before the end of 2007.