That’s not Popper!
In an entry posted the other day, Aaron Swartz expounded on a general observation that the “scientificness” (if I may coin a word) of a theory or discipline is roughly inversely proportional to the number of times the word “science” occurs in its name. Good examples include “creation science” and “Scientology”. This is certainly relevant material, and there are quite a few good, recent books on the subject (many of which, if you’re looking for an author to get into, have been written by Michael Shermer). But I couldn’t help cringing at Aaron’s characterization of Sir Karl Popper as
[A]n enemy of science who tried to insist that science never actually made any progress, that we never learned anything more about the world.
Time to dust off the old philosophy degree and do some expounding of my own, because while I don’t necessarily agree with where Popper ended up, I think he’s one of the most important and most unfairly maligned philosophers of science of the twentieth century. To understand why, though, we have to back up a bit.
A brief history of knowledge
There are lots of branches of philosophy — metaphysics, logic, ethics, political philosophy, social philosophy, philosophy of language, philosophy of mind, etc. — and each one deals with a particular topic or set of topics. Philosophy of science, which is one that Popper was especially concerned with, deals with, obviously enough, science. But really understanding Popper and the question at hand requires us to dip into epistemology, which is the branch of philosophy concerned with knowledge itself. Epistemology generally looks to answer a few basic, but extremely important, questions:
- What is the nature of knowledge?
- Where does knowledge come from?
- How can we know that what we take to be our “knowledge” is correct?
Epistemology is, along with metaphysics and ethics, among the oldest branches of philosophy, largely because the questions it asks have never really been answered, or even changed much. It is also the branch best equipped to study the essential problem of modern philosophy (a somewhat misleading label: “modern” philosophy technically covers roughly the last four hundred years, and is generally considered to have begun with René Descartes): how can we “know” that we know anything?
In modern philosophy there are two great schools of epistemology: rationalism, the school of Descartes, emphasizes deductive reasoning from self-evident or otherwise rigorously provable first principles, while empiricism, which began, approximately, with Sir Francis Bacon, emphasizes inductive reasoning based on experience and observation. Good science typically draws from both schools, but is heavily empirical by nature, so any discussion of scientific method or reliability needs to be firmly rooted in empirical epistemology.
The problems of science, part one
From a philosophical perspective, the gravest problem facing science — specifically, the scientific method — is an issue of logic which was best explained by the 18th-century Scottish philosopher David Hume, and is generally known as the problem of induction. To fully understand the problem, let’s talk about a fictitious animal, the East Siberian Hairless Marmoset.
Suppose for a moment that this animal has just been discovered, and is relatively elusive. It lives on the frozen tundra of, well, East Siberia. And there’s something unusual about it: it has no hair. So how does it survive in the frigid climate of Siberia? You, being a good scientist, formulate a hypothesis: marmosets are mammals, and thus warm-blooded. And warm-blooded animals can generate more heat by burning calories at a higher rate, so maybe the East Siberian Hairless Marmoset avoids hypothermia by having an incredibly fast metabolism. And, of course, you test your hypothesis by going to East Siberia and scavenging around to find some hairless marmosets and conduct tests on them. After six weeks, you’ve captured and tested twelve marmosets and all of them had extremely fast metabolism, but winter is setting in and you can’t stay out on the tundra any longer. So you go home. Case closed.
Well, not really. You’ve seen twelve marmosets at a particular time of the year, and they all had high metabolism. But that doesn’t necessarily mean their metabolism is the secret to their survival. Maybe they eat a whole lot during the short warm season, and then hibernate. Or maybe in the winter they burrow really deep into the ground where it’s warmer (snow is an insulator). And maybe the metabolism is explained by summer being marmoset mating season — their little hearts were racing for a different reason.
In other words, you haven’t really verified your hypothesis, because you haven’t conducted enough tests or enough types of tests. All you’ve done is generalize from a sample of twelve horny marmosets observed in summer, and that doesn’t verify anything except that you’re pretty strange. To get solid verification of your hypothesis you’d need to conduct more research on more marmosets under more different conditions.
But there would still be a problem: no matter how many times you go back to test marmosets, no matter how many different times of the year you go digging them up out of the tundra, you can never really be certain that you’ve explained how they survive without any hair. You may feel progressively more certain, and other scientists may be progressively more inclined to accept your results, but in order to prove your hypothesis you’d need to examine every moment of the lives of all the East Siberian Hairless Marmosets that have ever lived, are alive now, or ever will live. That’s the only way to be sure, but you can’t do that because you don’t have a time machine to jump into the past or the future and test marmosets that have already died or that haven’t been born yet.
Now, that’s an extremely contrived example, but it illustrates the general problem: science generally proceeds by a process of inductive generalization; in other words, given a set of observations, a scientist constructs a hypothesis which fits those observations, and then conducts a few more observations — usually carefully designed — in order to verify the hypothesis. In this way, a scientist can go from observations of a limited set of circumstances to a general rule about all similar circumstances, and with each successive observation which conforms to the hypothesis, the scientist will say that it has been further confirmed.
Except it’s never totally confirmed. Science deals largely with what philosophers call universally quantified statements; in other words, statements which claim to cover everything of a particular type. But observation and experiment are usually inadequate to establish the absolute truth of such a statement, because the only way to fully verify a universally-quantified statement is to conduct observations of every possible thing it covers. The fact that every place we’ve ever been to has gravity which conforms to the inverse-square rule doesn’t necessarily mean that every part of the universe does; there’s no logical reason why there couldn’t be, say, four cubic meters of space in Galaxy NGC 7217 which conform to an inverse-cube rule. Until we’ve been to every single point of our universe’s space-time (which isn’t possible), we can’t truly say that the inverse-square law has been verified. All we can say is that it’s held up every time we’ve tested it, which is something altogether different.
But typically at some point we say that the number of observations is “enough” and that we can safely generalize — we’ll assume from then on that the hypothesis does accurately cover all the possible situations, even if we never get around to observing them all. We do this because we decide that, even though it’s possible a future observation will contradict our hypothesis, it’s not particularly likely. In other words, we fall back to the rule of inductive generalization: given a large number of observations of something following a particular rule, it’s safe to assume that all observations would show that something to be following that rule. And science moves forward.
But there’s a problem with that: the rule of inductive generalization has no formal support in logic. The only thing we can say in support of it is that we’ve used it a lot of times, and so far it’s worked out well. But that’s a no-no, because that means we’re using an inductive generalization as support for the rule of inductive generalization; that’s using an assumption to prove itself, also known as circular reasoning or (formally) begging the question.
David Hume laid this out in an utterly devastating manner, leaving science with a huge gaping problem: in order to work, science needs to be “verifiable” — in other words, you need to be able to make inductive generalizations based on repeated observations. But inductive generalization, as a rule of reasoning, is not verifiable.
This, in a nutshell, is the problem of induction.
The problems of science, part two
After the problem of induction, science’s biggest issue is the problem of demarcation — how to separate science from non-science. There are lots and lots of things which go around calling themselves “science” or “scientific”, but how do we know if they really are? One of the most popular criteria has always been verifiability; accepted scientific theories make claims which can be verified through testing.
But as we’ve just seen, verification doesn’t work because for the most interesting things science deals with — universally-quantified statements such as “all electrons have negative charge” — true verification isn’t possible. Do you know of any way to test the charge of every electron that’s ever existed or ever will exist?
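For the formally inclined, the electron claim can be written out in first-order notation (the predicate names here are my own, chosen purely for illustration):

```latex
% "All electrons have negative charge" as a universally quantified statement:
\forall x \,\bigl(\mathrm{Electron}(x) \rightarrow \mathrm{NegativeCharge}(x)\bigr)
% Verifying this outright means checking NegativeCharge(x) for every
% electron x that has ever existed or ever will, which is impossible.
```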
But this means that “verifiability” isn’t a usable criterion for weeding out the non-science, because if we really want to talk about empirical verification then we have to throw out an awful lot of things that really ought to be categorized as science. And because of the problem of induction, we can’t lower the bar of verifiability to require fewer tests, because inductive generalization from a limited set of tests doesn’t really “verify” anything.
That’s a lot to digest, and I still haven’t gotten to the real point here, which is what Popper said and didn’t say about science. But by now you’re probably wondering whether anyone really cares about the difference between “all our observations have been consistent with the theory so far” and “all possible observations will be consistent with the theory”. It’s a very fine distinction, and for the most part it doesn’t matter a whole lot; in general, the scientific method of verification by observation, and accepting a theory after a certain (varying) threshold of successful observations, works. It works really, really well. But the fact that it’s worked up until now doesn’t logically imply that it always will work, which gets us right back into the problem of induction again. So people who want science to be on the firmest possible epistemological footing care very much about all of this, and are willing to spend their lives working on the problem to find ways out of it. Except, so far, nobody has.
Except maybe Karl Popper. Let’s talk about him now.
What Popper didn’t say
Popper never said “that science never actually made any progress, that we never learned anything more about the world.” If we were to qualify the second part of that to read “that we never absolutely established the truth of empirically-tested universal statements about the world”, then maybe we could fit those words into Popper’s mouth, but he wasn’t the originator of that idea; he mostly took it as proven, just like pretty much everyone else had for a couple hundred years. From a logical standpoint, the critique of induction is unimpeachable, and to a philosopher it presents a very real problem for establishing the reliability of scientific method.
What Popper did say
Well, he said a lot of things. The guy lived a long time, and was a prolific thinker. But he did advance a possible solution to the problem of induction, by turning it on its head. I’ve been talking about “universally quantified” statements, statements which make blanket claims about all possible instances of a given thing. The term “universally quantified” comes from a branch of logic called the first-order predicate calculus, which defines ways of formalizing certain types of statements and rules for reasoning with them. In the first-order predicate calculus, the (approximate) opposite of a universally quantified statement is an existentially quantified statement, which only says that at least one instance of a particular thing exists. An existentially quantified statement can be proved by finding just one instance of the particular thing it talks about. And, importantly for us, a universally quantified statement can be disproved by proving the existentially quantified statement which is its negation.
In other words, we may never be able to prove that high metabolism is the survival strategy of every East Siberian Hairless Marmoset, but we can disprove it by finding just one marmoset that has low metabolism and still manages to survive.
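Written in the predicate calculus just described (again, the predicate names are mine, for illustration), the hypothesis and the single counterexample that would sink it look like this:

```latex
% The hypothesis: every East Siberian Hairless Marmoset survives via
% high metabolism.
\forall x \,\bigl(\mathrm{Marmoset}(x) \rightarrow \mathrm{HighMetabolism}(x)\bigr)
% Its negation is existentially quantified; one low-metabolism marmoset
% that survives anyway proves it:
\neg\,\forall x \,\bigl(\mathrm{Marmoset}(x) \rightarrow \mathrm{HighMetabolism}(x)\bigr)
\;\equiv\;
\exists x \,\bigl(\mathrm{Marmoset}(x) \wedge \neg\,\mathrm{HighMetabolism}(x)\bigr)
```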
So Popper advocated what has come to be known as the “falsifiability criterion” for drawing a line between science and non-science: when we look at a theory, we should hunt for things we could observe or test which would disprove it. If we can find some, then the theory is more likely to be scientific than otherwise. And if there is no conceivable observation we can make, or test we can conduct, which would disprove the theory, then the theory isn’t scientific.
It’s important not to fall into a common misconception here: Popper never tried to say that falsifiability was the only criterion for separating science from non-science, just that it was one criterion and that it was logically much more sound than verifiability (because falsifiability doesn’t run up against the problem of induction). In philosophical jargon, Popper said that falsifiability was a necessary condition for being scientific, not a sufficient condition for being scientific (a necessary condition is one which must be met; a sufficient condition is one which, if met, renders any other conditions irrelevant. In other words, there may be more than one necessary condition for something to be proven, but meeting any one of its sufficient conditions is enough to prove it).
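In the same notation (a sketch of my own, not Popper’s formalism), the necessary/sufficient distinction comes down to the direction of the implication:

```latex
% Let S(t) mean "theory t is scientific" and F(t) mean "t is falsifiable".
% Popper's claim: falsifiability is necessary for being scientific,
S(t) \rightarrow F(t)
% or equivalently, by contraposition, an unfalsifiable theory is unscientific:
\neg F(t) \rightarrow \neg S(t)
% What he did NOT claim is the converse, i.e. sufficiency:
F(t) \rightarrow S(t)
```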
Popper himself was largely concerned with two theories which claimed, more or less, to be scientific: Marxist social theory and Freudian psychological theory. Popper had plenty of personal experience of both, and eventually claimed that both were demonstrably unscientific, because there were no possible observations or tests which a Marxist or Freudian would accept as disproving those theories; for example, no matter how many psychiatric cases you present which appear to contradict Freud, a Freudian can always claim that you’re merely looking at the surface, and missing unobservable subconscious effects which would bear out Freud’s theories. Popper referred to this as the “immunization” of a theory, and had a great distaste for it (though he admitted that ad-hoc immunization was not necessarily always a bad thing; an ad-hoc immunization of Newton’s theory of gravitation eventually led to the discovery of the planet Neptune, at which point the original contradiction — motion in the orbit of Uranus which deviated from Newton’s predictions — disappeared).
Popper also espoused a general belief, which he formalized somewhat through a system of analyzing the logical consequences of theories, that scientific theories which take more “risks” by making more, and more easily falsifiable, predictions are also more interesting and ultimately more useful than theories which do not, largely because such theories — provided they stand up to attempts at falsification — tend to add much more to our available knowledge when they become accepted.
An enemy of science?
So hopefully at this point it’s clear that, whatever else he might have been, and whatever else you might think about his ideas, Karl Popper was about as far as possible from being an “enemy of science”. Like many philosophers who’ve struggled with the problem of induction and the problem of demarcation, he was deeply committed to helping science by putting it on firmer logical footing, even if philosophers are about the only people who really care about these problems.