Bullets of vaguely silvery hue
There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.
If “The Mythical Man-Month” — or at least the popular distillation of it into a single pithy saying — is the most famous thing the late Fred Brooks ever wrote, then “No Silver Bullet”, the essay quoted above, must surely be the most infamous, or perhaps the most argued about. It drew enough criticism, even in the pre-Web era, that Brooks felt the need to write a followup (“‘No Silver Bullet’ Refired”) further explaining his points, and engaging with and arguing back against critics, before the decade of his initial prediction had even fully run out.
Which is a shame, because I think “No Silver Bullet” may have been the best thing Brooks ever wrote.
So I’d like to spend some time today laying out my own thoughts on “No Silver Bullet”, and exploring some things that — even if they might not be true silver bullets in the Brooks-ian sense — are silvery enough to be worth bringing up and talking about.
But before I say anything else, and before you read any more of this post, I want to urge you to pause here, and go read it for yourself. Don’t just read the Wikipedia summary of it, or someone else’s summary of it, don’t ask some half-baked “AI” to make up lies about it for you. Go get a copy of it and read the whole thing. It’s not terribly long. The 1995 “twentieth anniversary” edition of The Mythical Man-Month includes both it and the “Refired” followup, and is worth spending a few dollars on to have in your personal library. Once you’ve read it, you can come back to this post.
It’s important to understand that the essay was written in the mid-1980s, when computing hardware was routinely delivering huge gains in power (see Moore’s Law); “No Silver Bullet” attempts to answer the question of why there were not corresponding huge gains in software productivity.
A rough summary of Brooks’ argument is that the difficulty of software development can be split into two types: accidental difficulty and essential difficulty. Since you probably didn’t take my advice and actually go read the full essay, here’s how he defines those terms:
Following Aristotle, I divide them into essence—the difficulties inherent in the nature of the software—and accidents—those difficulties that today attend its production but that are not inherent.
An example of “accidental” difficulty might be a low-level programming language where the programmer must spend an inordinate amount of time manually managing resources, such as blocks of memory. Changing to a higher-level language where the compiler and/or the runtime automatically manage these resources and perform all the necessary “bookkeeping” would improve productivity.
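As a small illustration of my own (not one of Brooks’): in Python, the difference between manual resource bookkeeping and letting the runtime handle it looks like this.

```python
# Accidental difficulty: the programmer must do the cleanup
# bookkeeping correctly on every code path.
f = open("data.txt", "w")
try:
    f.write("hello")
finally:
    f.close()  # forget this on any path and the resource leaks

# The same task with the bookkeeping pushed into the language:
# the context manager guarantees cleanup, and that whole class of
# "accidental" work disappears from the programmer's plate.
with open("data.txt", "w") as f:
    f.write("hello")
```

Nothing about *what* the program does has changed; only the incidental labor of doing it has been reduced. That is exactly the kind of gain Brooks classifies as attacking accidental difficulty.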
The “essential” difficulty, meanwhile, comes from factors like complexity (especially as systems scale up in the number of components and interactions between them); change over time; and the difficulty of properly visualizing and thus conceptualizing/reasoning about software systems.
Brooks then points out that depending on how much of overall software difficulty is essential, it may not be possible to achieve large gains solely from reducing accidental difficulty:
How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.
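The arithmetic behind that claim is Amdahl’s-law-style reasoning, and it is worth making explicit (a sketch, with illustrative fractions of my own choosing):

```python
def max_speedup(accidental_fraction: float) -> float:
    """Upper bound on productivity gain if ALL accidental effort is
    eliminated while essential effort is left untouched."""
    return 1 / (1 - accidental_fraction)

# Even if half of all effort is accidental, eliminating every bit of
# it only doubles productivity:
print(max_speedup(0.5))  # 2.0

# A 10x (order-of-magnitude) gain is only reachable if accidental
# work is at least 9/10 of the total, exactly as Brooks says:
print(max_speedup(0.9))  # ~10
```

The essential fraction acts as a hard floor: no amount of tooling applied to the accidental side can push the total below it.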
A different, stronger formulation of the idea comes not long after that (the emphasis here matches what appears in my copy):
The essence of a software entity is a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions. This essence is abstract, in that the conceptual construct is the same under many different representations. It is nonetheless highly precise and richly detailed.
I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.
If this is true, building software will always be hard. There is inherently no silver bullet.
Though it’s worth noting Brooks didn’t intend to be pessimistic or entirely to rule out improvements in productivity over time, just that he believed they would be the result of more difficult incremental progress rather than constant huge leaps forward (as with hardware):
The first step toward the management of disease was replacement of demon theories and humours theories by the germ theory. That very step, the beginning of hope, in itself dashed all hopes of magical solutions. It told workers that progress would be made stepwise, at great effort, and that a persistent, unremitting care would have to be paid to a discipline of cleanliness. So it is with software engineering today.
The middle portion of the essay deals with some historical advances, and some ideas that were trendy in the mid-80s as possible major advances (including things like object-oriented programming, and “artificial intelligence” in various forms) but that Brooks didn’t believe would yield huge gains.
The final section is, I think, one of the most important and also most-overlooked: a listing of potential attacks on essential difficulty which Brooks believed were promising. The first suggestion is just, when possible, to avoid building software, and buy it instead. Buy-versus-build decisions are still topical nearly forty years later. The next two are things that eventually got swallowed up under the “Agile” branding: rapid prototyping/refining of requirements, and incremental development (in Brooks’ phrase “grow, not build, software”). Finally, he suggests training and cultivating software design skills. I have some disagreements with Brooks on the existence of “great” designers as a class apart from ordinary mortals (much as I have issues with the concept now known as the “10x developer”, introduced in Chapter 3 of The Mythical Man-Month), but I appreciate that he apparently believed good software design, at least, was a teachable skill.
Brooks was right
A lot of people have argued against “No Silver Bullet” in the (now) 37 years since it was initially published. The nine-years-later “Refired” followup — which is also worth reading — dealt with a lot of the early ones and attempted to further clarify what Brooks was originally getting at.
But the simple fact is Brooks was right: although we have made progress on accidental difficulty in software, and even some on essential difficulty (mostly through improved practices and design), progress in software development as a whole has not been anything like the gains upon gains seen in hardware.
In fact, not only have we obviously not matched the Moore’s Law gains in hardware, we haven’t even come close. And the hard part of software remains, as Brooks explained back in 1986, the “specification, design, and testing” of the abstract “conceptual construct” that is a software system. Actually typing code into an editor is the last, least-time-consuming, and probably least important step of the entire software development process (if you get to that step and it doesn’t feel that way, something has almost certainly gone wrong much earlier).
This is true even in teams that follow the kinds of practices Brooks suggested. Even when you’re prototyping rapidly and gathering feedback and iterating quickly and “growing” a system, the time the team spends doing things that are not typing-code-into-an-editor is still going to significantly outweigh the time spent on tasks that are. And one of the great ironies is that the better you get at the process of software development, the more those non-code-writing tasks will come to dominate (not the only reason, but one important reason: the better your practices, the more likely it is that your average code change is small).
Why it matters
As you climb up the ladder of software development job titles, you should be measured less and less on your own personal productivity, and more and more on the impact you have on the productivity of your colleagues. Understanding why software development is difficult, and where meaningful gains are actually to be found, is thus a hard requirement for career growth and advancement.
For example, a currently popular fad for increasing productivity is a bunch of stuff that falls under the umbrella term “developer experience” (or “DX” or “DevEx” or other supposedly-productivity-saving abbreviations). But here’s the dirty secret: “DX” is almost entirely about attacking accidental difficulty, which means it’s a diminishing-returns game. That doesn’t mean you shouldn’t do it — there is a state of the art for this stuff and many teams will see significant initial gains from trying to get closer to it — just that, well, it’s not going to be a silver bullet.
If you can take, for example, the time needed to bootstrap a new backend web service with all the standard bells and whistles and reduce it from, say, a week or more of mostly manual work to, say, an afternoon (or less) of mostly watching automation run, that is a large improvement. And you absolutely should consider investing some of your more senior developers’ time into things like that.
But you should also remember an important question: how often do you perform that task? And what happens before it, and after it? How do you arrive at the decision to start a new service? How do you decide what it will do, what it will look like, what it will need? The setup is something that happens once per codebase. And no matter how slick and snazzy and fast that experience is, when it’s done it dumps responsibility back on the developer’s shoulders and says “OK, you’ve got a codebase initialized, now what are you going to do with it?”
This is a key lesson of “No Silver Bullet”. Having (to stick to the example at hand) snazzy DX for bootstrapping developer environments and new codebases and such is a good thing. But an even more important thing is having people who know about and can implement and follow good practices and good software architecture and design. If you have the people with the knowledge and skills to do that, the DX will probably happen sooner or later as a side effect. If you go out and get some fancy DX stuff but don’t work on the people and the practices and the design skills, your results will not be so good.
We are, however, as an industry, not anywhere near as good as we ought to be at communicating and teaching good practices and good design and architecture. Which is another thing that should be a big focus area for people climbing the job-title ladder, but that stuff mostly isn’t cool or shiny, while “DX” tooling is. So it’s easy to get distracted. I am at least as guilty of that as anyone else I know (and my current title is “Principal”, though I still am very much growing into it). Improving practices and design by even a small amount is likely to yield larger, and longer-term, gains than any amount of shiny “DX” tooling, persisting even across job changes — knowing particular tools well doesn’t always transfer well, but knowing how to design good software does.
I promised you some silvery things
I deliberately used the term “diminishing returns” above, because I believe that’s a good way to think about software productivity. Brooks happily admitted that, when they first came along, higher-level programming languages (and in the 1980s, many such languages were still very low-level by the standards of the 2020s, or even of some earlier decades) did enable significant productivity gains by freeing programmers from the manual tedium older languages had imposed. But once you’ve eliminated some big initial chunks of accidental difficulty, the gains that are left to be had from that approach are never again going to be as big or as easy to obtain.
And I think, for the most part, that’s been the arc of software development as a whole, and of individual sub-fields within it. For example, my own sub-field of backend web development got some huge gains in the late 1990s/early 2000s, courtesy of things like PHP’s programming and deployment model displacing (and putting to shame) most of what had come before. It got another round from the rise of the much simpler and more opinionated rapid-development frameworks like Rails (and Django!), which mostly threw out the flexibility-as-an-end-in-itself — and the corresponding pile of abstraction layers — of popular “enterprise” tools in favor of a clearer and more in-your-face “you will do it this way” approach. But after that it cooled off, and we have not seen gains of that size since. I wonder whether we ever will again, though at least for the Python world I have some thoughts that are going to need their own separate post.
But not getting gigantic Moore’s-Law type gains doesn’t mean no gains at all. So I’m going to mention two things that, while they might not be true silver bullets (in the sense of enabling order-of-magnitude overall gains in productivity), are still (in my opinion and experience, which of course are objectively correct and universal) of sufficiently silvery hue to be worth looking into.
Docker

Note that here I am using the label “Docker” as a shorthand for the much larger ecosystem of containers, containerization, etc., which includes both the original implementation specifically named “Docker” and competing implementations which are not named “Docker” but are largely interoperable with it.
I was a holdout on Docker. Partly because at this stage of my career I’m not an ardent early adopter of many things, and prefer to wait and see which stuff survives its initial hype cycle. And partly because a lot of my early exposure to it was alongside Kubernetes — which, if there’s such a thing as an anti-silver bullet, may be a worthy candidate. Maybe instead of a type of bullet, it’s like a Looney Tunes cartoon where one character swaps another’s gun for something that’ll blow up in the shooter’s face when the trigger is pulled. That’s how I feel about Kubernetes.
But Docker? Docker is great. Start with the obvious thing: it’s the first “virtual machine” (or VM-alike) technology I’ve ever used that actually mostly works. When I got into this business, having a “local development environment” meant running Apache and the application code and a database server and probably also a cache server instance, and having to know how to install and manage those things on your own machine. Then along came VMs, and running something like `vagrant up` and walking away for an hour and coming back to find it had failed, for reasons that were impossible to figure out because it worked for the person sitting next to you who ran the same command on the same model of laptop running the same operating system.
Now? Docker. It’s not perfect, and it doesn’t get to 100% success, but I would guess that for the things I work with at my day job (which all ship one or more `Dockerfile`s and a `docker-compose.yml` as standard) the “it worked first try” rate is easily approaching an order of magnitude higher than I ever saw anyone achieve with Vagrant, let alone with the old-school non-virtualized setups.
It’d be worth adopting just for that. But it also gets you far higher consistency across local and deployed environments, and the ability to just try things in ways that were never really possible before. I used to groan and make faces if I needed to set up a local database, let alone something more complex like, say, a local Kafka cluster (foreshadowing!). Now? Pretty much every well-maintained tool these days ships at least a standard `Dockerfile`, and more complex things ship a `docker-compose.yml`. You can just `docker run` or `docker compose up`, and a staggering percentage of the time it works. And then you can shut it down and trash it when you’re done. No longer do you have to worry six months later about why your laptop is starting three databases and an enterprise Java cluster every time it boots up (or why Homebrew upgrades take all day because of the sheer amount of stuff you had to install).
It’s getting to the point where I think having an easy “run this Docker or docker-compose file” story is probably table stakes for any software project that wants to be taken seriously. I don’t know who exactly was first to get there in the web-framework world, but I first saw really good integration over in C#/.NET land, where the `dotnet` command-line interface can Dockerize a codebase for you without you needing to do much work at all. Every time I see someone talk about doing a simple `dotnet publish` to get their app containerized, I get jealous and spend the rest of the day wishing Django had something similar.
Anyway, I think Docker is a pretty silvery piece of tech. Especially the docker-compose stuff, because for all the faults of YAML (and there are many), opening up that YAML file and reading “here are the things that this service is made of” comes tantalizingly close to attacking essential difficulty. One part of which, Brooks noted, is the difficulty of visualizing a software system: trying to solve that by introducing different levels of resolution, from something like a `docker-compose.yml` file down to an editor/IDE view of the structure of a particular unit of code, is starting to really get somewhere.
Message queues

I am deliberately not using the word “events” here, because what I am talking about is not going all-in on event-driven/microservice architectures. So let’s call them messages for now.
When I was a kid, cartoons on TV taught me that “knowing is half the battle”. But in software, sometimes knowing is the thing you need to battle against: design principles like the Law of Demeter are based on the realization that it’s not always good for different parts of a software system to know about each other. The more knowledge components have of each other, the more tightly they’re coupled and the harder the whole system is to understand and work on.
For example, a while back I got into an argument about “service layer” abstractions in Django applications, and someone brought up the example of an order system where, say, cancelling the `Order` requires a bunch of work in other components, too: it also has to check whether it got as far as a `Shipment` and cancel or reroute that, potentially cancel a `Payment` or create and issue a `Refund`, etc. The point being made was “This is too much complexity to put into a single `cancel()` method of the `Order` model!” Which is true, but the problem was the followup: “Therefore, I’ll put it all into a single `cancel()` method of an `OrderService` class instead.” Doing that doesn’t actually solve the problem (`OrderService` needing to know about, and know how to manipulate, `Shipment`s and so on), it just moves the problem around a bit.
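To make the objection concrete, the coupled version might look something like the following. Every name here is hypothetical; this is a paraphrase of the argument, not anyone’s actual code.

```python
class Order:
    def __init__(self):
        self.status = "open"
        self.shipment = None   # direct knowledge of Shipment
        self.payment = None    # direct knowledge of Payment

class Shipment:
    def __init__(self):
        self.status = "pending"

class Payment:
    def __init__(self, settled=False):
        self.settled = settled
        self.status = "authorized"

class OrderService:
    """One method that has to know the internals of every other
    component involved in cancellation."""

    def cancel(self, order: Order) -> None:
        order.status = "cancelled"
        # Each branch below is knowledge of some other component:
        if order.shipment is not None:
            order.shipment.status = "cancelled"
        if order.payment is not None:
            if order.payment.settled:
                order.payment.status = "refunded"
            else:
                order.payment.status = "cancelled"
```

The coupling hasn’t gone away; it has all been concentrated into `OrderService.cancel()`, which still has to know about, and manipulate, every other component.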
So what would solve the problem? Decoupling those things from each other. There are multiple ways to do this, but one I suggested at the time, because I’ve had success with it, was to introduce some sort of message bus, and just have the `Order` adjust its own fields and then emit an “order cancelled” message. Then all the other components — shipments and payments and so on — can watch for that message and react appropriately.
And this doesn’t have to be hugely complex or involve tons of infrastructure or going all-in on “event-driven” architecture. There are plenty of ways to just get a message queue and use it entirely within a single codebase, no “microservices” needed. Django, for example, includes and provides public API for a simple synchronous message bus (which Django calls “signals”). Redis, which a lot of people use already for caching, has message-queue functionality. And there are specialized/dedicated tools and services out there: RabbitMQ, Kafka, SNS/SQS, and many more, with libraries that make it easy to connect to and use them.
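A toy synchronous message bus in plain Python — my own sketch of the pattern, not Django’s actual signals API — shows how little machinery the idea requires:

```python
from collections import defaultdict
from typing import Any, Callable

# A minimal synchronous message bus: senders and receivers share only
# a message name, never references to each other.
_subscribers: dict[str, list[Callable[..., Any]]] = defaultdict(list)

def subscribe(message: str, handler: Callable[..., Any]) -> None:
    _subscribers[message].append(handler)

def publish(message: str, **payload: Any) -> None:
    for handler in _subscribers[message]:
        handler(**payload)

# The order code only publishes; it knows nothing about shipments.
# (The handler below stands in for a hypothetical shipments component.)
cancelled_shipments = []
subscribe("order_cancelled",
          lambda order_id: cancelled_shipments.append(order_id))

publish("order_cancelled", order_id=42)
print(cancelled_shipments)  # [42]
```

Real systems would use Django signals, Redis, or a dedicated broker for this, but the shape of the decoupling is the same: the publisher and the subscribers never hold references to each other.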
The more I think about it and work with them, the more I think message queues are pretty silvery. Not perfect, but still a significant improvement on what you get without them, which is probably why so many systems and frameworks in so many other fields of programming have some form of event-based or event-driven model. Backend web development currently seems to be in the process of discovering that.
The big gain on accidental difficulty from message queues is, as mentioned, that you can have a lot of components which all cooperate without having to directly call into or even know about each other: just put a message in the queue and let whatever cares about that message type read and react to it. This in turn reduces the amount of coupling in the system and the amount of bookkeeping of each other that the components have to do. But again it also comes tantalizingly close to attacking essential difficulty: using message queues doesn’t directly make an entire system easier to visualize or reason about as a whole, but the removal of direct coupling does facilitate reasoning about individual components in isolation, which is a smaller and hopefully easier task.
In closing, some bullet points
Software development is hard. It’s been hard for a long time. It’s going to stay that way for the foreseeable future. Yes, even with your favorite “AI” code assistant, because even if it actually does by some chance spit out reasonable code, that’s not and never was the hard part of software development.
“No Silver Bullet” gives us a useful framework for thinking about how and why it’s hard, and for understanding where and how we can achieve gains in productivity.
And despite the number of people who’ve spent literal decades arguing Brooks was wrong, we still have not seen the kinds of consistent huge gains in software productivity that we’ve seen in hardware, which is a strong indicator he was right about there being some essential difficulty that’s much harder to reduce away.
This doesn’t mean progress isn’t possible, or that we shouldn’t ever attack accidental difficulty. Reducing accidental difficulty can provide significant up-front gains; they just quickly fall off as essential difficulty, inevitably, comes to dominate.
Understanding the root causes of difficulty in software development, and being able to evaluate where productivity gains are likely to be available, is something every developer should work on as they grow and advance through their career. The same goes for understanding ways to attack essential difficulty.
Some tools or technologies or techniques that mainly attack accidental difficulty are still really useful and worth adopting. That goes double for ones that help you to make code or processes simpler or clearer or easier to reason about, because that’s approaching essential-difficulty territory.
We should all get a lot better than we are at good software design and architecture, and at teaching it to our up-and-coming colleagues.