Science and What Exists

To make the transition to Einstein’s universe, the whole conceptual web whose strands are space, time, matter, force, and so on, had to be shifted and laid down again on nature whole.

—Thomas Kuhn

One problem metaphysicians have been dealing with for, well, forever, is the unfortunately necessary intertwining of metaphysics and epistemology. Metaphysics is the philosophical study of what exists; epistemology is the philosophical study of knowledge. And it’s trivial to point out that the best we can do in detailing what exists is to rely on our best epistemology: we can’t talk about what we know about without talking about what (and how) we know. If we know about quarks, it’s not simply the case that quarks exist, but that we figured out that they exist. Our catalogue of items in the universe is inherently tied to our knowledge of those items.

Why is this problematic? Well, many metaphysicians are very conscious and conscientious about keeping existence separate from knowledge of existence. Much of the problem can be traced back to the venerable Bishop Berkeley, who posited that everything in the universe is actually mind-dependent for its very existence — it’s not, Berkeley thought, just that the computer screen in front of you is merely hidden from view when you close your eyes, but that this lack of observation actually means the computer screen is not really there when your eyes are closed. Problems with this theory forced Berkeley to say that God observes everything at all times, and so there’s no worry about things blinking in and out of existence with the blink of an eye. God never blinks. But regardless of the absurdity of this centuries-old bit of philosophy, the aftershocks have stayed with us. There’s something very compelling, apparently, about the idea that our minds have metaphysical power — that minds can create some of reality.

The great irony is that the best scientifically-minded philosophers of the 20th Century, while trying to shore up the mind-independence of the external world, actually gave proponents of mind-dependence a strong foothold in the metaphysical debate.

Naturalized epistemology — the brainchild of W.V.O. Quine, though it was clearly anticipated hundreds of years earlier by David Hume — takes science to be the paragon of knowledge-farming; the discipline whose results we are most certain about. Naturalism, though, if we accept it, forces us also to acknowledge the following: We can’t make judgements about the world from some point of privileged access outside of science. That is, there is no way to step outside science and see what there is in the world; we don’t get a clearer picture of quarks without science — science itself tells us about quarks, and without science this piece of ontological furniture would not be accessible to us whatsoever. Our metaphysical house, chock full of interesting furniture, wouldn’t merely look somewhat different without science; it would be a bare, dirt-floored cabin with very little of interest in it.

This leads to a very tantalizing point. Science often changes its mind, and in such episodes of change what we take to be our ontology (our catalogue of things that exist) changes as well. For instance, once upon a time science told us that there was a substance called phlogiston that is released from things when they are burned. This substance — a consequence of a good scientific theory that explained several phenomena related to chemistry — was taken by scientists (and the informed public) as existing in the world. If science is our best arbiter of what exists, then, at the time during which science told us that phlogiston existed, there’s a strong sense in which it actually existed. Science, remember, tells us what there is, and there’s no privileged perspective outside of science from which to figure out our metaphysics. It turned out, however, that the phlogiston theory of chemistry ran into serious problems, and was more or less wholesale replaced by the oxygen theory of Lavoisier. In this new theory, there was no place for phlogiston. At this point, science told us that phlogiston does not exist.

There are (at least) two conclusions that can be drawn from this, each of which I will encapsulate using the Kuhnian metaphor at the top of this entry:

Standard Naturalism: The whole of science forms a conceptual web from whose vantage point we survey the world. There is no spot outside of the web from which to survey the world. We can change science by changing some part of the web — this amounts to changing our ideas about an unchanging world. The world is independent of our ideas about it, even as we discover new ways to look at what exactly is in it. For instance, we were simply wrong about the existence of phlogiston. It never existed.

Kuhnian Mutant Naturalism: A scientific theory is a conceptual web that uniquely lays upon the world, giving it its shape. When a new theory is developed, an entirely new web is made. There is still no place outside of the web from which to survey the world, but we can shuck off the entire web in favor of a new one. The world is partly dependent for its existence on our ideas about it — whichever web we throw onto the world actually gives the world its shape. When we change our ideas, we change the world. For instance, phlogiston actually did exist while scientists were working with phlogiston theory. When Lavoisier came up with a new chemical theory, the world actually changed — phlogiston disappeared, and in its place oxygen and other items filled our metaphysical cupboards.

Many have noted that Kuhn’s version of naturalism makes him an anti-realist in the Kantian vein. We won’t get into the thickets of Kantian metaphysics here, but, in short, he believes that our ideas are not merely a pre-condition for theorizing about things, but that theorizing indeed is a pre-condition for the very existence of things. Contrary to this, standard naturalism usually goes hand in hand with common-sense and scientific realism, wherein, as Philip Kitcher notes: “Trivially, there are just the entities there are. When we succeed in talking about anything at all, these entities are the things we talk about, even though our ways of talking about them may be radically different.”

One reason Kuhn is led to his odd metaphysics is because of his implicit description theory of reference. On a description theory, the only way to correctly refer to an entity is to have its unique description in mind; but if a scientific revolution changes the description associated with a key scientific term, then the old description no longer refers. This leads Kuhn to the idea that competing scientific paradigms are incommensurable. It also motivates his metaphysics. If a term once referred and now it does not, all on the basis of our changing descriptions, then by some inferential jump one could think that this correlation was causal; i.e., that our changing descriptive thoughts cause a change in the world.

We’ll examine description theories and the philosophy of language in an upcoming post. Stay tuned…

Nonmonotonic Logic and Stubborn Thinking

I was struck recently by some similarities between the psychology of stubborn thinking and the history of science and logic. It’s not just individuals that have trouble changing their minds; entire scientific, logical, and mathematical movements suffer from the same problem.


When people think about logic (which I imagine is not very often, but bear with me on this), they probably think about getting from a premise to a conclusion in a straight line of rule-based reasoning — like Sherlock Holmes identifying the perpetrator with infallible precision, carving his way through a thicket of facts with the blade of deduction.

Here’s a sample logical proof that would do Holmes proud.

Birds fly.
Tweetie is a bird.
Therefore Tweetie flies.

We have here a general principle, a fact, and a deduction from those to a logical conclusion.

The problem is that the general principle here is just that: general. It is generally the case that birds fly; some birds, however, do not fly at all. (Indeed, there is arguably no general principle that universally applies: even the laws of physics are fallible. Cf. Nancy Cartwright’s wonderful How the Laws of Physics Lie.) Tweetie could be an ostrich or an emu, or Tweetie could have lost his wings in a window-fan accident, or Tweetie could be dead.

You could shore up your general principle in order to try to make it more universal: Birds that aren’t ostriches, emus, wingless, or dead, fly. But this sort of backpedaling is really an exercise in futility. As decades of research in artificial intelligence, particularly through the 1990s, showed us, the more you expand your general principle to cover explicit cases, the less of a general rule it becomes, and the more you realize you have to keep covering more and more explicit cases, permutations upon permutations that will never end. (E.g., even in the case of death, Tweetie might be able to fly. He could be dead, but also in an airplane at 20,000 feet. Would you amend your general principle to cover this case? It would be a strange sort of “scientific” law that stated “Birds fly, except dead birds that aren’t in airplanes.”)

A brilliant solution to this sort of problem was found via the creation of nonmonotonic logic, a logical system that is what they call defeasible — that is, it allows for making a conclusion that can be undone by information that eventually emerges to the contrary. So the idea is that a nonmonotonic system allows you to conclude that Tweetie flies via the logic above, but also allows you to change that conclusion if you then find out that Tweetie is, in fact, e.g., dead.
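The flavor of this can be sketched in a few lines of Python. (This is a toy illustration only, not a real nonmonotonic formalism such as default logic or circumscription; the function and fact names are made up for the example.)

```python
# A toy sketch of defeasible inference: apply the default rule
# "birds fly" unless a known exception (a "defeater") is among the facts.
# All names here are illustrative, not drawn from any logic library.

def concludes_flies(facts):
    """Return True if, given the current facts, we may (defeasibly)
    conclude that Tweetie flies."""
    defeaters = {"is_dead", "is_emu", "is_ostrich", "is_wingless"}
    if "is_bird" not in facts:
        return False  # the default rule doesn't even apply
    # The default conclusion stands only if no defeater is known.
    return facts.isdisjoint(defeaters)

facts = {"is_bird"}            # all we know: Tweetie is a bird
print(concludes_flies(facts))  # default conclusion holds: True

facts.add("is_dead")           # new information arrives
print(concludes_flies(facts))  # the conclusion is retracted: False
```

The second call shows the defining nonmonotonic behavior: adding a premise can remove a conclusion. In classical (monotonic) logic this can never happen — enlarging the premise set only ever enlarges the set of conclusions.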

This may not seem like a big deal, since this is how a rational human is supposed to react on a regular basis anyway. If we find out that Tweetie is dead, we are supposed to no longer hold to the conclusion, as logical as it may be, that he flies. But for logicians it was huge. The old systems of logic pinned us helplessly to non-defeasible conclusions that may be wrong, just because the logic itself seemed so right. But now logicians have a formal way of shaking free of the bonds of non-defeasibility.


The history of science is rife with examples of this principle-clinging tenacity from which it took logic millennia to escape. A famous case is found in astronomy, where the concept persisted for more than a dozen centuries that the Earth was at the center of the universe. As astronomy progressed, it became clear that to describe the motion of the planets and the sun in the sky, a simple model of circular orbits centered around the Earth would not suffice. Eventually, a parade of epicycles was introduced — circles upon circles upon circles of planetary motion spinning this way and that, all in order to explain what we observed in the Earth’s sky, while still clinging to the precious assumption that the Earth is centrally located. The simpler explanation, that the Earth was in fact not the center of all heavenly motion, would have quickly done away with the detritus of clinging to a failed theory, but it’s not so easy to change science’s mind.

In fact, one strong line of thought, courtesy of Thomas Kuhn, has it that the only way for scientists to break free from such deeply entrenched conceptions is nothing short of a concept-busting revolution. And such revolutions can take years to gather enough momentum in order to be effective in mind-changing. (Examples of such revolutions include the jarring transition from Newtonian to Einsteinian physics, and the leap in chemistry from phlogiston theory to Lavoisier’s theory of oxidation.)

Down to Earth

If even scientists are at the mercy of unchanging minds, and logicians have to posit complicated formal systems to account for the ability to logically change one’s mind, we should be prepared in our daily lives to come up against an immovable wall of opinions, despite what the facts tell us.

Indeed, it isn’t very hard to find people who have a hard time changing their minds. Being an ideologue is the best way of sticking to an idea despite evidence to the contrary, and ideologues are a dime a dozen these days. What happens in the mind of an ideologue when she is saving her precious conclusion from the facts? Let’s revisit Tweetie. (You can substitute principles and facts about trickle-down economics or global warming for principles and facts about birds, if you like.)

Ideologue: By my reasoning above, I conclude that Tweetie flies.

Scientist: That is some nice reasoning, but as it turns out, Tweetie is dead.

Ideologue: Hmmm. I see. Well, by “flies” I really mean “flew when alive”.

Scientist: Ah, I see. But, actually, Tweetie was an emu.

Ideologue: Of course, of course, but I mean by “flies” really “flew when alive if not an emu”.

Scientist: But so then you’ll admit that Tweetie didn’t actually fly.

Ideologue: Ah, but he could have, if he had had the appropriate physical structure when he was alive.

Scientist: But your conclusion was that Tweetie flies. And he didn’t.

Ideologue: Tweetie was on a plane once.

Scientist: But isn’t that more a case of Tweetie being flown, not Tweetie flying?

Ideologue: You’re just bogging me down in semantics. In any case, Tweetie flies in heaven now. Case closed.