The future of web standards

Published: December 17, 2007. Filed under: Pedantics, Philosophy, Web standards.

The world of standards-based web design and development has been undergoing something of a shake-up these past few days; Andy Clarke’s “CSS Unworking Group” seems to have opened the floodgates to expressions of dissatisfaction with the current method of progress (or lack thereof) in developing and standardizing new features for web developers and designers. Alex Russell’s “The W3C Cannot Save Us” and my friend and former colleague Jeff Croft’s “Do we need a return to the browser wars?” continue the theme, as does Stuart Langridge’s “Reigniting the browser wars”, which popped up as I was finishing the first draft of this post.

Ultimately, I think this boils down to two problems.

The first problem, not to put too fine a point on it, is that progress in developing new standards is glacial at best. HTML went from initial concept to version 4.01 in less than a decade, but has stayed pinned at 4.01 since before the turn of the millennium (XHTML is no better; XHTML 1.0 was deliberately identical to HTML 4.01 except for the XML syntax, and XHTML 1.1 really didn’t add much, focusing mostly on reorganization and modularization). Similarly, CSS has been sitting at version 2 of its spec since 1998; CSS 2.1 is still no more than a “Candidate Recommendation”.

The second problem is that innovation on the Web today is largely taking place through leveraging non-standardized or even proprietary technologies: Flash is popping up in all sorts of unexpected places, Microsoft and Adobe are both working on next-generation “rich internet application” runtimes, and the biggest buzzword of them all — “AJAX” — is based around a formerly Microsoft-only technique (XMLHttpRequest) which has since found its way into other browsers.
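As an aside, XMLHttpRequest’s proprietary origins are visible in the way scripts have to instantiate it. The following is a minimal, illustrative sketch of the cross-browser dance every AJAX library of this era performs; the createRequest and fetchText helpers and their names are hypothetical, not taken from any particular library:

```javascript
// XMLHttpRequest began life as a Microsoft ActiveX control, so older
// versions of Internet Explorer need an ActiveXObject fallback, while
// other browsers (and IE7) expose a native XMLHttpRequest constructor.
function createRequest() {
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest(); // Mozilla, Safari, Opera, IE7
    } else if (window.ActiveXObject) {
        return new ActiveXObject("Microsoft.XMLHTTP"); // IE6 and earlier
    }
    return null; // no AJAX support available
}

// Hypothetical helper: fetch a URL asynchronously and pass the response
// body to a callback once the request completes successfully.
function fetchText(url, callback) {
    var request = createRequest();
    request.onreadystatechange = function() {
        if (request.readyState === 4 && request.status === 200) {
            callback(request.responseText);
        }
    };
    request.open("GET", url, true);
    request.send(null);
}
```

That one proprietary object, retrofitted into every other browser, arguably underpins the entire AJAX boom, which says something about how innovation actually propagates on the Web today.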

The combination of these two problems leads to a very real worry: that whatever chance there might have been for a truly interoperable Web will vanish, with useful content and applications disappearing into the proprietary walled gardens which provide the only genuine opportunity for new features and capabilities.

The two problems — the slow pace of standardization and innovation through as-yet-non-standard features — are obviously intertwined. The question at hand, then, is how to solve them together: how to produce a process whereby standards bodies can respond quickly to innovative ideas, and wherein innovators will encourage standardized, interoperable implementations of their ideas.

While I don’t have a solution, I do have some thoughts on some of the problems with the discussion so far, and some ideas on where to look for a successful model to follow. So let’s dive in.

The false dilemma of standardization models

One of the major issues in resolving this problem is that much of the discussion so far has essentially assumed a false dilemma, namely that there are only two ways to run a standards body:

  1. A closed-door, pay-to-play system, as the W3C is perceived to be.
  2. A howling mob which runs by consensus of the participants.

Of course, these aren’t the only two options, but quite a few people (Daniel Glazman, for example) seem to behave as if they are. The first thing we need to do, then, is throw that out and recognize that there’s actually a fairly broad continuum of options in between these two extremes; in other words, what we should be looking for is a balance between the input of people who use and develop for the Web, and people who develop browsers and attendant technologies.

Finding the balance

This brings us to a new question: how do we find the proper balance between the competing interests of Web vendors and Web users/developers? Personally, I think the answer is to look at the available history: the world of web standards is not breaking new ground in needing to strike this sort of balance, and there is already a long and rich history of groups working through precisely this process, a history anyone interested in reforming web standards should be studying.

I’m primarily thinking of open-source software development, which has been through this already many times over: there’s plenty of open-source software out there which needs to appeal both to “average joes” and to multinational corporations, and even a few examples of projects which have successfully done so. The Linux kernel is one such; its model is neither “closed-door” nor “howling mob”, but rather something in between. Linux is not a dictatorship, but it is not a democracy; Linus Torvalds and his trusted “lieutenants” ultimately retain control over the project, but input from any interested party is accepted and taken into consideration. And though not all opinions are equal, the process of deciding which opinions are given more weight than others seems to be largely pragmatic and meritocratic.

The result is that large corporations can participate and contribute without turning Linux into a closed-door effort, and smaller/independent developers can participate and contribute without turning Linux into a howling mob. That’s quite an accomplishment, and it’s something that should not be overlooked by anyone who wants to reform web standards.

And the same is true in many other successful open-source projects: Perl, Python and Ruby, for example, all have open development processes, but ultimately remain under the control of a single “BDFL” and/or a few “lieutenants”. And the success of these projects across a broad spectrum of their target markets shows that this process can work extremely well: a general pattern of open input and discussion, with a few experienced and trustworthy folks who steer the process and have the power to make final decisions when things would otherwise get bogged down, manages to avoid the downsides of both extremes of the false dilemma outlined above.

W3C’s on first, WHAT’s on second

Of course, this raises the question of whether the WHATWG, which has been working steadily on refining, improving and extending various standards for several years, will be able to accomplish the same thing in the web-standards world that Linux and other successful open-source projects have accomplished in the software-development world.

At this point I don’t honestly know, and I don’t think anyone really does, though there’s no shortage of opinions. People far smarter and more experienced than I have come down on both sides, with some feeling that the WHATWG is the right way to go and others feeling that it’s a fool’s errand. There are some encouraging signs, though.

Some folks will inevitably point to the recent tempest in a teapot over media codecs as an example of the WHATWG being too beholden to corporate interests, but I’m not convinced (though, to be fair, I’m also not convinced that the HTML spec should be in the business of telling people which media formats to use); unlike many people who began frothing at the mouth when they heard about it, I actually followed the discussion, and so I don’t really see anything to complain about. The spec no longer specifically recommends Ogg Theora, but it still leaves implementors on the hook to provide support for unencumbered, interoperable media. And in the long run the “unknown unknown” (to borrow a phrase from the Tao of Donald Rumsfeld) that this represents may prove worse for large companies than the known unknown of implementing Theora; the current language of the draft is somewhat akin to a dentist telling you that the painful tooth will have to come out someday.

At any rate, it’s still too early to tell whether the WHATWG will work, especially for the case of CSS; the WHATWG’s work to date has largely focused on HTML and the DOM, as indicated by the current WHATWG draft specification.

The Microsoft bogeyman

The largest perceived obstacle to any reform of web standards is the big question mark surrounding Microsoft. Though Microsoft is a member of the W3C and a participant in various working groups, its implementations of the resulting standards have ranged from lackluster to laughable, and this, combined with Internet Explorer’s dominant market share in the browser world, naturally leads to a concern that any effort which doesn’t have a firm commitment from Microsoft will inevitably fail.

But at this point, honestly, I’m not convinced that Microsoft is relevant. At least, not in the way everyone seems to think they’re relevant.

If you’ve never read Joel Spolsky’s article “Fire and Motion”, do so now, because what I’m about to argue won’t make much sense if you haven’t. Go ahead and read the whole article, because it’s good, but the thing to take away from it is that a large part of Microsoft’s dominance is due to the phenomenon Joel so eloquently describes by analogy to infantry combat tactics:

The competition has no choice but to spend all their time porting and keeping up, time that they can’t spend writing new features. Look closely at the software landscape. The companies that do well are the ones who rely least on big companies and don’t have to spend all their cycles catching up and reimplementing and fixing bugs that crop up only on Windows XP. The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Microsoft. People get worried about .NET and decide to rewrite their whole architecture for .NET because they think they have to. Microsoft is shooting at you, and it’s just cover fire so that they can move forward and you can’t, because this is how the game is played, Bubby.

Now, stop and look at Internet Explorer 7 (er, I mean, “Windows Internet Explorer 7”). IE7 is, despite what Microsoft’s PR people would like you to believe, the result of Microsoft being pinned down under covering fire from the rest of the industry. Mozilla, Apple and Opera were pelting them with fire from all directions — tabbed browsing, better privacy, better security, all sorts of features people actually cared about — and Microsoft found itself on the receiving end of its own strategy. And, just as attempts to constantly keep pace with Microsoft resulted in lots of software vendors turning out sub-par products, Microsoft’s own response to the unfamiliar need to play catch-up resulted in a sub-par browser.

The same thing is actually happening to Microsoft on several fronts right now.

Microsoft is pinned; they’re stuck trying to catch up to what everybody else is already doing, while the competition just keeps piling on new features and new technologies. They’re not dead yet, of course; far from it: IE is still the dominant browser and Windows is still the dominant operating system. But there’s definitely been a sea change in the industry: Microsoft, once the unstoppable juggernaut, is vulnerable and has to play catch-up to maintain its dominance. To borrow a phrase from Lewis Carroll, they’re having to run as fast as they can just to stay where they are.

And so Microsoft really isn’t relevant to the future of web standards; any compelling new development that comes from the rest of the industry will be just another form of fire and motion, and Microsoft will have no choice but to keep pace, regardless of whether they participated in the process.

Where to go from here

I don’t really know. Right now I’m just making some observations and engaging in a little analysis of the problem, because that’s what I do. I do think, though, that the major points I’ve outlined above are all important to any actual solution.

But beyond that I’m not sure where to go. For now I’m keeping my eye on the WHATWG (as I’ve been doing for several years, because for the most part they’re the ones who are actually getting things done), but something better may certainly come along. I think an important next step for the folks who are dissatisfied with the status quo is to decide what to do. Andy Clarke has proposed some ideas, but amusingly derides the idea of operating by consensus whilst proposing things that could only work if accepted by consensus (and largely seems to fall into the false dilemma outlined above). It’s all well and good to know that a different direction is desired, but it’s equally important to know which direction you want to go in. So I’ll close with a thought to that effect from one of my philosophical heroes, G.K. Chesterton (from the opening chapter of his wonderful book Heretics):

Suppose that a great commotion arises in the street about something, let us say a lamp-post, which many influential persons desire to pull down. A grey-clad monk, who is the spirit of the Middle Ages, is approached upon the matter, and begins to say, in the arid manner of the Schoolmen, “Let us first of all consider, my brethren, the value of Light. If Light be in itself good—” At this point he is somewhat excusably knocked down. All the people make a rush for the lamp-post, the lamp-post is down in ten minutes, and they go about congratulating each other on their unmediaeval practicality. But as things go on they do not work out so easily. Some people have pulled the lamp-post down because they wanted the electric light; some because they wanted old iron; some because they wanted darkness, because their deeds were evil. Some thought it not enough of a lamp-post, some too much; some acted because they wanted to smash municipal machinery; some because they wanted to smash something. And there is war in the night, no man knowing whom he strikes. So, gradually and inevitably, to-day, to-morrow, or the next day, there comes back the conviction that the monk was right after all, and that all depends on what is the philosophy of Light. Only what we might have discussed under the gas-lamp, we now must discuss in the dark.