
On “magic”, once again

Published on: July 23, 2009    Categories: Django, Programming

So it seems Giles Bowkett is upset about the use of the word “magic”. I’m happy to agree with the general consensus from various fora that the specific article he’s complaining about is, well, pretty much content-free. I could read that post over and over and still have no idea what actual things the author liked about Django or didn’t like about Rails. But I’ve pretty much learned to ignore content-free hype, and that’s what I did in that case.

I’m also quite happy to grant that not all programming languages do the same things in the same ways. Usually, I’m even willing to overlook the sorts of predictable comments which have so endeared Lisp and Smalltalk programmers to the world in general (see: “Lisp had that forty years ago”, “Smalltalk is the only real object-oriented language”, “this would be so much better with macros and tail recursion”, “we invented refactoring, you insensitive clod”, etc.).

But that word, “magic”. It keeps coming up over and over again. Now, I’ve written (at some length) about this topic, and generally endorsed the viewpoint that the sorts of things typically described as “magic” are often just applications of simple principles or techniques, and so really just require some background knowledge to understand and use effectively. And, really, I think the advantages or disadvantages of such “magic” are — except in extreme cases — largely subjective and have more to do with experience and preference than anything else.

I think that’s not too far from what Giles was trying to say, but I could be wrong. So I’ll see if I can explain where I’m coming from.

For my first trick…

Django, of course, famously underwent a “magic removal” (immediately preceding the 0.95 release) which was hugely backwards-incompatible and is a motherlovin’ pain to migrate through if you happen to be upgrading really old Django installs (see: my life at the moment). What was this “magic”?

Well, let’s say you want a blog application which, in the grand tradition, we’ll call blog. You want a way to store, retrieve and represent entries, so — since you’ve got this object-relational mapper lying around — you want to use a model class to do this. In any Django release since 0.95, you’d have a directory (a Python module) named blog, in it a file named models.py and in that a class named Entry. Want to interact with it? from blog.models import Entry and away you go. Simple.
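Concretely (with field names of my own invention, purely for illustration), that looks something like:

    # blog/models.py -- any Django from 0.95 onward
    from django.db import models

    class Entry(models.Model):
        title = models.CharField(max_length=250)
        body = models.TextField()
        pub_date = models.DateTimeField()

    # Anywhere else in your code:
    from blog.models import Entry
    recent = Entry.objects.all()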

In Django releases prior to 0.95, you have a directory (a Python module) named blog, in it a directory (also a Python module) named models, in that directory a file named blog.py and in that file a class named Entry. Oh, and the models module needs to explicitly export that in its __all__ declaration. Want to interact with it? Well, the obvious thing would be, say, from blog.models.blog import Entry, which is a bit redundant but still serviceable. Except that won’t work.

What you actually want is from django.models.blog import entries. You can get at the Entry class through that (entries.Entry), but what you almost always want is the collection of other stuff in entries, which is to say all the helpful functions for retrieving entries from your database, exceptions related to entries, constants related to entries, etc., none of which you ever actually wrote and all of which were generated on-the-fly rather than, say, being inherited from some suitably-generically-written parent class.
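For the morbidly curious, here’s a rough reconstruction; I’m working from memory of the pre-0.95 API, so treat the module and helper names as approximate:

    # blog/models/blog.py -- Django prior to 0.95
    from django.core import meta

    class Entry(meta.Model):
        title = meta.CharField(maxlength=250)
        body = meta.TextField()

    # blog/models/__init__.py
    __all__ = ['blog']

    # And the import you'd actually use, pulling in a module
    # full of code nobody ever wrote:
    from django.models.blog import entries
    latest = entries.get_list()
    first = entries.get_object(pk=1)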

Wait, what?

Understanding why it worked that way is easier if you have some knowledge of the prehistory of Django; originally, the Django ORM was, literally, a glorified code generator. You’d feed it a definition of your data model, and it would churn for a bit and then spit out a Python module, containing your model and a bunch of utility functions and other stuff. Then you’d stick that somewhere on your Python import path and go to town. And if you needed to make changes, no problem: just re-generate the code.

Based on advice from the outside world, that went away before the first public release of Django. Sort of. What actually happened was that instead of generating a module you’d save somewhere on your filesystem, the ORM just generated that whole module of code on the fly, kept it in memory and hacked sys.modules to make it importable. This is not as hard as it sounds and can be a neat parlor trick (ladies and gentlemen, I have no .py files up my sleeve! Now watch carefully…). So, really, it was still a glorified code generator, it was just wearing a mask and a fake moustache and hoping nobody would notice.
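If you’ve never seen it, a stripped-down version of the trick goes something like this; this is a toy illustration of the general technique, not the actual old Django machinery:

    import sys
    import types

    # Generate some source code on the fly...
    source = "def get_list(): return ['first entry', 'second entry']"

    # ...pour it into a module object that exists only in memory...
    entries = types.ModuleType('entries')
    exec(source, entries.__dict__)

    # ...and register it so a perfectly ordinary import will find it.
    sys.modules['entries'] = entries

    from entries import get_list
    print(get_list())   # ['first entry', 'second entry']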

This was “magical” in the sense that a whole bunch of stuff you’d never written appeared in a module you’d never created and was supposed to be used in place of the code you had actually written. And although the level of dynamic programming and cleverness involved was, to borrow Giles’ phrase, “Palookaville, Omaha bullshit” compared to the types of things you see in, say, Lisp or Smalltalk, it was pretty much universally condemned.

But — and this is important — it wasn’t condemned because dynamic programming or cleverness are bad in themselves; it was condemned because, well, there are situations where that stuff’s appropriate and situations where it’s not. More on that in a moment.

In the center ring…

Many circuses and travelling shows involve a particular feat wherein one or more performers set up some thin poles and set plates or other round-ish objects spinning upon them, balancing them and keeping them rotating fast enough that they stay — seemingly in defiance of natural law — perched atop the poles rather than crashing to the ground. Keeping in mind the fact that all analogies for programming are inherently bullshit, this can be considered an analogy for programming: you’ve got some plates, and you have to keep them spinning or you get crashes.

Some languages, libraries and/or frameworks require you to keep track of a larger number of plates than others. For example, in C you have to manage memory yourself: remember to free anything you’ve malloc‘d when you’re done with it, and remember what you’ve already freed so you don’t try to follow a dangling pointer down the rabbit hole. There are programmers who consider this to be a character-building experience. Personally, I just look at it as adding a bunch of plates that I have to keep spinning.

Other languages, libraries and/or frameworks try to reduce the number of plates you have to keep track of. For example, many languages have now come around to the idea of automatic memory management and garbage collection; this has, by and large, been a boon to programmers everywhere (see: jwz, “Java doesn’t have free(). I have to admit right off that, after that, all else is gravy.”). In my experience, reducing the number of plates you have to keep track of makes programming simpler and programs more robust. So this is generally a good thing.

Round and round she goes

What does this have to do with “magic”? Well, sometimes programmers decide to do clever things, often in the name of convenience. They set things up so that, say, some variables will be automatically defined and populated without you having to do anything. Or so that certain modules of code are automatically loaded and made available to you without lots of tedious import statements. Or so that the return value from A ends up rendered by B, based on some convention of similarity of names between the two. Or so that a two-line class declaration ends up producing lots of members which were never explicitly defined in that class or any of its ancestors.
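In Python, that last trick usually means a metaclass. A minimal, entirely hypothetical sketch (modern Python 3 syntax):

    # The two-line class at the bottom never defines get_list() or
    # get_count(), but ends up with both.
    class AutoHelpers(type):
        def __new__(mcs, name, bases, attrs):
            cls = super().__new__(mcs, name, bases, attrs)
            cls._registry = []
            cls.get_list = classmethod(lambda c: list(c._registry))
            cls.get_count = classmethod(lambda c: len(c._registry))
            return cls

    class Entry(metaclass=AutoHelpers):
        title = 'Hello'

    print(Entry.get_count())   # 0 -- a method nobody ever wrote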

These sorts of things are, in certain circles, referred to as “magic”. They are, without a doubt, quite clever in their way. And in many cases they do, without a doubt, offer some level of convenience to the programmer who uses them.

But they can also introduce whole new batches of plates that you’ve got to keep an eye on. And they won’t be plates that you personally set spinning, and sometimes they’re not plates that you can easily watch or run over to if they start getting wobbly. They’re just a bunch of plates that got added to your act, and they’ve got to keep spinning or else they’ll crash and ruin the show, and figuring out why that happened or how to prevent it may prove rather difficult (see: Brian Kernighan, “debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?”).

In my opinion, designing good languages, libraries and/or frameworks consists of finding a balance between providing clever, convenient things and asking programmers to keep too many plates spinning at the same time. When so many clever things are going on that it’s hard to track all those plates, people tend to call it “magic”. When so few clever things are going on that everyday tasks get cumbersome, people tend to call it “dumbed down”.

To paraphrase Jeff Goldblum…

Every language/library/framework has its equivalent of “magic”. Python, for example, has functions and classes as first-class values, (non-mutable) closures, the decorator pattern baked into the language, generators, comprehensions, properties, special methods you can implement to enable language-level syntactic constructs, dynamic code generation and all sorts of other stuff. These features are extremely important in Python, but use of these features often appears “magical” to people coming from languages which don’t have them (like, say, Java).
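A contrived few lines, just to put faces to some of those names (a decorator, a property and a special method hooking a language-level construct):

    def shout(func):                        # functions are values...
        def wrapper(*args, **kwargs):       # ...and closures are cheap
            return func(*args, **kwargs).upper()
        return wrapper

    class Entry:
        def __init__(self, title):
            self._title = title

        @property                           # computed attribute access
        def title(self):
            return self._title.strip()

        def __len__(self):                  # hooks the built-in len()
            return len(self._title)

    @shout
    def greet(name):
        return 'hello, %s' % name

    print(greet('giles'))          # HELLO, GILES
    print(len(Entry(' Magic ')))   # 7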

And every language/library/framework has its own conventions on when and how to use the “magic” it offers. For example: I am not a Lisp guru, but I’ve seen plenty of people who are point out that you should never write a macro when a plain old function will suffice. Sure, you’ve got all that fantastic dynamism that lets you bend the language in clever ways, but cleverness for cleverness’ sake is not a virtue; it’s just spinning up a bunch of extra plates, for no other reason than “because I can”.
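Translated into Python terms, the same advice might look like this; the first version works, but it’s an extra spinning plate for no payoff:

    # Cleverness for its own sake: generating a function from a string...
    namespace = {}
    exec("def double(x): return x * 2", namespace)
    double = namespace['double']

    # ...when a plain old function does the same job with no extra plates.
    def double(x):
        return x * 2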

To put it concisely, I think that’s what a lot of critiques of “magic” really boil down to: the notion that just because you can do something doesn’t mean you should. Accepting this can be tough, but I think it’s a necessary part of becoming a good programmer. If you don’t have a certain sense of discipline and respect for the power you’re wielding, well, you might end up cloning dinosaurs and getting eaten by them when the power goes out.

Of course, there’s quite a lot of room for subjectivity here. If you grew up, metaphorically speaking, in a family of acrobatic plate-spinners, you probably don’t mind handling a bunch of extra plates in your act, because you’ve been spinning lots of plates your whole life. If you didn’t, well, you might take a different view of things. And I think we also have to account for taste; there are, I know, people who can keep dozens of plates spinning but who choose not to, because that’s not what they like to do. And I’m sure there probably are people who can’t but would love to if only they could.

Get to the point

I’m not entirely certain that there is just one point here. It’s a bad thing to reflexively call stuff “magic” just because it does something you don’t yet understand. But it’s equally a bad thing to use “magic” just because you can. The right thing, if there is one, consists of a willingness to learn how stuff works even if it’s unfamiliar or seems complicated, but also consists of learning to solve problems in a way that’s clear and useful to people who’ll read and use your code.

Often, that means learning and abiding by the customs and conventions of the particular programming community you’re part of. Python and Ruby, though surprisingly similar as languages, have communities of programmers whose norms are significantly different from each other. And even within a language there can be large variations (and, as with so many other language features, Lisp has arguably been doing that longer than anything else).

It also means that long after you’ve mastered the syntax, concepts, patterns, libraries and frameworks of the languages you work with, you’ll still be working to master the discipline (in multiple senses of the word) of programming. In fact, if you do it right you’ll always be working on that part: you’ll always be learning to find the right amount of “magic” in both the languages you use and the programs you write, and sooner or later you’ll have to accept that it will never be a fixed quantity.

And don’t listen to these modern pundits who think you can rush into all of that; ten years is nowhere near enough time.