Frege Was From Venus

If you spend any time mucking around in the philosophy of language, you’re going to run headlong into Gottlob Frege at some point. Frege, round about the turn of the 20th century, was a key figure in the emerging fields of logic and the philosophy of mathematics, but he may well be best remembered for his contributions to the theory of meaning.

What is Meaning?

The basic question that any philosophy of language must address is this: What can we say about the meaning of a word (and — what perhaps amounts to the same thing — the meaning of a sentence)?

A first stab at analyzing this is to say that the meaning of a word is just what it points to — what it designates or refers to. For instance, the word (or name, in this case) “Herbie” refers to my cat, Herbie. (Make sure to get your head around the difference between a word and an object referred to by that word. We’ll have a post about this “use/mention” distinction soon. For now, just stay alert to the use of quotation marks to distinguish a word from its associated object.) The word “Herbie” points to the creature that is at the time of this writing tapping my leg with his paw, trying to get me to play with him. (I’ll be right back…)

[Diagram: the word “Herbie” referring to Herbie the cat]

We can apply the same analysis to numbers. The ink-on-paper numeral “7” that you might write down in your checkbook or on a math test, refers to the actual number 7, which for the sake of argument we’ll take to be some object out there in the universe somewhere. Similarly, and perhaps easier to comprehend, the words “seven”, “siete”, “sept”, and “sieben” all refer to the number 7 as well (in English, Spanish, French, and German, respectively).

[Diagram: “7”, “seven”, “siete”, “sept”, and “sieben” all referring to the number 7]

If this is the right picture, it would give us a convenient way to explain how “seven” and “siete” both mean the same thing: It’s because both words refer to the same thing.

Reference Ain’t Enough

If this were all there is to meaning, then “12” and “7 + 5” would mean the same thing, because they both refer to the number 12.

But as Kant famously pointed out (in his analytic/synthetic, a priori/a posteriori distinctions), these two words/phrases might well mean different things.

To see why, let’s look at the following statement: “12 = 12”.

Compare that with this statement: “7 + 5 = 12”.

The first statement doesn’t say much except that a thing is always identical with itself. The second says something significantly new about 12 (that it’s the sum of 7 and 5).

If this is true, then “12” and “7 + 5” do not have the same meaning; and if this is the case, then there has to be more to meaning than the idea of reference. You can see this difference more clearly if you look at these in a different context.

“I know that 12 = 12.” One can know this without knowing anything about addition.

“I know that 7 + 5 = 12.” To know this, one has to know something about addition.

This becomes even clearer with a more complex mathematical fact.

“I know that 812,285,952 = 812,285,952.”

“I know that 24,789 x 32,768 = 812,285,952.”

Anyone can utter the first sentence knowing no more than that everything is equal to itself. But to assert the second sentence with any certainty, you’d have to have done some complex calculation (or had a calculator do it for you). The second statement is meaningful in a way that the first is not.
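The asymmetry can be made concrete in a few lines of Python (an illustration of my own, not anything from Frege or Kant): checking the bare identity needs no arithmetic at all, while checking the product forces you to actually do the multiplication.

```python
def knows_identity(n):
    """'n = n' is knowable without any computation:
    every value is trivially equal to itself."""
    return n == n

def knows_product(a, b, claimed):
    """'a x b = claimed' is knowable only by actually
    performing the multiplication."""
    return a * b == claimed

# True, trivially -- no arithmetic required
print(knows_identity(812_285_952))

# True, but only after the machine computes 24,789 x 32,768
print(knows_product(24_789, 32_768, 812_285_952))
```

The two checks return the same answer, but only the second one encodes any arithmetical knowledge, which is the point of the contrast above.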

The Morning Star and The Evening Star

The more usual example philosophers of language use is (happily for most of you) not mathematical.

The ancient Greeks, looking at the dark sky above them, noticed two very bright stars. One came up shortly after the sun went down in the evening, and was brighter than any other star around it; the other star came up shortly before the sun came up in the morning and was similarly bright. They named these two stars: “The Morning Star” and “The Evening Star”.

Well, maybe you saw this coming, but it turns out that these two stars were actually the same object: Venus. (Of course, not even a star after all, but a brightly reflective planet.) So here’s the referential picture the ancient Greeks had:

[Diagram: the Greeks’ picture, with “The Morning Star” and “The Evening Star” referring to two separate stars]

A few centuries later, astronomers gave us this picture instead:

[Diagram: “The Morning Star” and “The Evening Star” both referring to Venus]

Now, if reference is all there is to meaning, then these two sentences would have the same meaning:

“The Morning Star is the Morning Star.”

“The Morning Star is the Evening Star.”

Because, if reference is all we consider, both sentences translate to this single sentence:

“Venus is Venus.”

[Diagram: both sentences reducing, by reference alone, to “Venus is Venus”]

But clearly these sentences have very different meanings — the first sentence is obvious to anyone, even those without any knowledge of astronomy; the second sentence is something that one would only know by virtue of synthesizing some significant piece of astronomical knowledge, namely that “The Morning Star” and “The Evening Star” both refer to the same heavenly body: Venus.

Frege’s Solution

So hopefully you’ll agree that reference can’t be all there is to meaning.

Frege’s idea was that while reference is important to meaning, there is another important dimension to meaning as well, which he called sense. He called the sense of a term the “mode of presentation” of the referent. So while “the Morning Star” and “the Evening Star” both refer to the same thing, they have different senses: the sense of “the Morning Star” is something like “the bright star that rises in the early morning”, while the sense of “the Evening Star” is something like “the bright star that rises in the early evening”. Same reference; different sense.
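One way to picture Frege’s two-dimensional scheme is as a small data structure, sketched here in Python (the representation is my own illustration, not Frege’s formalism): each term carries both a sense and a referent, so two terms can agree on the second while differing on the first.

```python
# Each term carries Frege's two components: a sense
# (mode of presentation) and a referent (the object picked out).
lexicon = {
    "the Morning Star": {
        "sense": "the bright star that rises in the early morning",
        "referent": "Venus",
    },
    "the Evening Star": {
        "sense": "the bright star that rises in the early evening",
        "referent": "Venus",
    },
}

a = lexicon["the Morning Star"]
b = lexicon["the Evening Star"]

# Same referent: "the Morning Star is the Evening Star" is true...
print(a["referent"] == b["referent"])  # True

# ...but different senses, which is why that identity is informative.
print(a["sense"] == b["sense"])        # False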

On this scheme, when we say “the Morning Star is the Evening Star”, we’re comparing senses, not references, and this is why it’s a statement of new knowledge (synthetic, à la Kant) and not just an obvious truth (analytic, à la Kant). “The Morning Star is the Morning Star” compares two terms that have not only the same reference but also the same sense, and this is semantically uninteresting.

Sense Without Reference

One interesting consequence of Frege’s philosophy of language is that it turns out that not everything with a sense has a reference.

“The novel written by Richard Nixon” has a sense — it presents an idea to us in a clearly understandable way — but has no reference — Nixon never (as far as I know) actually wrote a novel. So in fact the meaning of a sentence might not have to rely at all on reference. “The novel written by Richard Nixon is long and boring” has a meaning even though the subject of the sentence doesn’t exist. We’ll take up this interesting idea in a future post.

Philosophy in 3 Minutes

Just ran across these little gems on YouTube — overviews of important philosophers done in three minutes, with excellently bad animation!

Here’s one on Aristotle that’s good (although it’s almost four minutes long, tsk tsk):

And one on Locke:

And a good one on Kant’s ethics:

Science and What Exists

To make the transition to Einstein’s universe, the whole conceptual web whose strands are space, time, matter, force, and so on, had to be shifted and laid down again on nature whole.

—Thomas Kuhn

One problem metaphysicians have been dealing with for, well, forever, is the unfortunately necessary intertwining of metaphysics and epistemology. Metaphysics is the philosophical study of what exists; epistemology is the philosophical study of knowledge. And it’s trivial to point out that the best we can do in detailing what exists is to rely on our best epistemology: We can’t talk about what we know about without talking about what (and how) we know. If we know about quarks, it’s not simply the case that quarks exist, but that we figured out that they exist. Our catalogue of items in the universe is inherently tied to our knowledge of those items.

Why is this problematic? Well, many metaphysicians are very conscious and conscientious about keeping existence separate from knowledge of existence. Much of the problem can be traced back to the venerable Bishop Berkeley, who posited that everything in the universe is actually mind-dependent for its very existence — it’s not, Berkeley thought, just that the computer screen in front of you is merely hidden from view when you close your eyes, but that this lack of observation actually means the computer screen is not really there when your eyes are closed. Problems with this theory forced Berkeley to say that God observes everything at all times, and so there’s no worry about things blinking in and out of existence with the blink of an eye. God never blinks. But regardless of the absurdity of this centuries-old bit of philosophy, the aftershocks have stayed with us. There’s something very compelling, apparently, about the idea that our minds have metaphysical power — that minds can create some of reality.

The great irony is that the best scientifically-minded philosophers of the 20th Century, while trying to shore up the mind-independence of the external world, actually gave proponents of mind-dependence a strong foothold in the metaphysical debate.

Naturalized epistemology — the brain child of W.V.O. Quine, though it was clearly anticipated hundreds of years earlier by David Hume — takes science to be the paragon of knowledge-farming; the discipline whose results we are most certain about. Naturalism, though, if we accept it, forces us also to acknowledge the following: We can’t make judgements about the world from some point of privileged access outside of science. That is, there is no way to step outside science and see what there is in the world; we don’t get a clearer picture of quarks without science — science itself tells us about quarks, and without science this piece of ontological furniture would not be accessible to us whatsoever. Our metaphysical house, chock full of interesting furniture, wouldn’t merely look somewhat different without science; it would be a bare, dirt-floored cabin with very little of interest in it.

This leads to a very tantalizing point. Science often changes its mind, and in such episodes of change what we take to be our ontology (our catalogue of things that exist) changes as well. For instance, once upon a time science told us that there was a substance called phlogiston that is released from things when they are burned. This substance — a consequence of a good scientific theory that explained several phenomena related to chemistry — was taken by scientists (and the informed public) as existing in the world. If science is our best arbiter of what exists, then, at the time during which science told us that phlogiston existed, there’s a strong sense in which it actually existed. Science, remember, tells us what there is, and there’s no privileged perspective outside of science from which to figure out our metaphysics. It turned out, however, that the phlogiston theory of chemistry ran into serious problems, and was more or less wholesale replaced by the oxygen theory of Lavoisier. In this new theory, there was no place for phlogiston. At this point, science told us that phlogiston does not exist.

There are (at least) two conclusions that can be drawn from this, each of which I will encapsulate using the Kuhnian metaphor at the top of this entry:

Standard Naturalism: The whole of science forms a conceptual web from whose vantage point we survey the world. There is no spot outside of the web from which to survey the world. We can change science by changing some part of the web — this amounts to changing our ideas about an unchanging world. The world is independent of our ideas about it, even as we discover new ways to look at what exactly is in it. For instance, we were simply wrong about the existence of phlogiston. It never existed.

Kuhnian Mutant Naturalism: A scientific theory is a conceptual web that uniquely lays upon the world, giving it its shape. When a new theory is developed, an entirely new web is made. There is still no place outside of the web from which to survey the world, but we can shuck off the entire web in favor of a new one. The world is partly dependent for its existence on our ideas about it — whichever web we throw onto the world actually gives the world its shape. When we change our ideas, we change the world. For instance, phlogiston actually did exist while scientists were working with phlogiston theory. When Lavoisier came up with a new chemical theory, the world actually changed — phlogiston disappeared, and in its place oxygen and other items filled our metaphysical cupboards.

Many have noted that Kuhn’s version of naturalism makes him an anti-realist in the Kantian vein. We won’t get into the thickets of Kantian metaphysics here, but, in short, he believes that our ideas are not merely a pre-condition for theorizing about things, but that theorizing indeed is a pre-condition for the very existence of things. Contrary to this, standard naturalism usually goes hand in hand with common-sense and scientific realism, wherein, as Philip Kitcher notes: “Trivially, there are just the entities there are. When we succeed in talking about anything at all, these entities are the things we talk about, even though our ways of talking about them may be radically different.”

One reason Kuhn is led to his odd metaphysics is because of his implicit description theory of reference. On a description theory, the only way to correctly refer to an entity is to have its unique description in mind; but if a scientific revolution changes the description associated with a key scientific term, then the old description no longer refers. This leads Kuhn to the idea that competing scientific paradigms are incommensurable. It also motivates his metaphysics. If a term once referred and now it does not, all on the basis of our changing descriptions, then by some inferential jump one could think that this correlation was causal; i.e., that our changing descriptive thoughts cause a change in the world.

We’ll examine description theories and the philosophy of language in an upcoming post. Stay tuned…

Choosing a Kantian Maxim

Explaining anything about Immanuel Kant’s philosophy in a short blog post is a daunting and perhaps foolish task, but I am nothing if not undaunted and foolish.

I’d like here to address a particular problematic aspect of Kant’s ethical philosophy (and don’t let the terminology scare you off — it’s not as difficult as it’s about to sound): How is one supposed to go about applying Kant’s categorical imperative by universalizing a personal maxim?

Kant’s categorical imperative is the only pure (he had a thing about purity) moral law he could come up with, and it boils down to this: “Act only on that maxim by which you can at the same time will that it should become a universal law.” A maxim is a personal “ought” statement, like “I ought to save that puppy from that oncoming truck”. A universal law is generated from a maxim by applying it to the entire rational population. E.g., “Every rational person ought to save puppies from oncoming trucks.” And Kant’s categorical imperative asks us to use this process every time we wish to make an ethical choice: Come up with a personal maxim for the situation; universalize that maxim; and see if that universal law is something that should be followed by every rational person in every such situation.
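The three-step procedure just described can be caricatured in a few lines of Python (a deliberately naive sketch of my own; Kant, of course, gave no algorithm, and all the real philosophical work hides in the coherence test):

```python
def universalize(maxim):
    """Step 2: turn a personal maxim ('I ought to ...') into a
    universal law by extending it to every rational person."""
    return maxim.replace("I ought to", "Everyone ought to", 1)

def passes_categorical_imperative(maxim, is_coherent_as_universal_law):
    """Steps 1-3: take a personal maxim, universalize it, and check
    whether the resulting law could coherently be willed as binding
    on all rational agents. The coherence test itself must be
    supplied by the moral reasoner -- that is where Kant's actual
    arguments (like the self-defeating-lies argument below) live."""
    return is_coherent_as_universal_law(universalize(maxim))

print(universalize("I ought to save that puppy from that oncoming truck"))
# Everyone ought to save that puppy from that oncoming truck
```

The string substitution is a joke, of course, but it makes vivid that universalization is the mechanical part of the procedure; everything contentious happens in deciding whether the universalized law is one a rational agent could will.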


Let’s go through an example of Kant’s process. Let’s say you’re faced with an instance where lying would be expedient. Here, then, is your personal maxim for the situation:

Maxim: I ought to lie in order to get out of a jam.

And then Kant asks you to universalize it:

Universal Law: Everyone ought to lie in order to get out of a jam.

According to Kant, this universalized version of your personal maxim shows us that your maxim is in fact immoral. Even though your maxim may seem harmless, and is certainly beneficial to you in the short term, by extending its reach to the whole of humanity, there arises something very bad. If we look at a world where everyone lies in every dicey situation, well, this is a world that is in trouble. And, thus, according to Kant, you should never lie. Period. No exceptions.

Lying to Nazis

This position leads to some obvious problems.

Say you’re in 1940 Germany, and you are harboring your Jewish neighbor in your attic, in order to protect her from the Nazis, who would like to find and kill her. Now imagine that the Nazis knock on your door and ask you: “Are you hiding any Jews in your attic? We’d like to kill them if you are.” The relevant moral question here, of course, is what do you do? Perhaps, as Kant thought, lying is a bad thing, but if you tell the truth in this situation, it will lead to your neighbor’s unwarranted death, which certainly seems worse, on the face of it.

Let’s look in a little detail at how Kant might have examined this situation. His logic went something like this:

  • If it’s okay for you to lie, then (according to the universalization of this maxim) it’s okay for everybody to lie.
  • But if everyone lies, then no one will ever believe anything anyone says.
  • And, thus, lies would become completely ineffectual.
  • Therefore, lying is a rationally inconsistent activity — it leads to its own conceptual destruction.

This rational inconsistency is at the heart of Kant’s claim that lying is immoral — he thinks that ethics has to be based on irrefutable, logical principles in order for it to be anything besides an argument over opinions. A concept that leads to its own self-destruction certainly shows us that there is something inherently wrong with it. And so lying, in virtue of this, is immoral.

Choosing Your Maxim

But let’s look more closely at the procedure of picking your maxim in the lying example.

I should lie in order to help someone.

Is this a good candidate for a personal maxim? Well, no, not really. It’s certainly not generally applicable to moral situations. For instance, one could pretty easily argue that lying in order to help a mad bomber who is about to kill a thousand innocent people is probably not a very ethical thing to do.

I should lie in order to keep someone safe.

No, this has the same problem… what if you’re lying in order to keep the mad bomber from being arrested? This is arguably not a moral thing to do.

I should lie in order to save a life.

We’re getting better, but we still have the same problem lurking. If your lie is to save the life of an evil person, it’s at least arguable that the lie is not the morally right thing to do.

So let’s include something in our maxim to account for the idea that you are lying to protect someone innocent:

I should lie in order to save an innocent person from death at the hands of an evil person.

What happens if we universalize this maxim?

Everyone should always lie in order to save an innocent person from death at the hands of an evil person.

This is not bad, actually, but there’s still the Kantian objection of conceptual self-destruction lurking: If we always lie to evil people who want to kill innocent people, the evil people will start to catch on, and thus the lies will become self-defeating.

In fact, the example of lying is one of the best for Kant’s system — when he applies his system to other sorts of moral cases, it all starts to go to hell. But with lying, he has found a case where there is something internally irrational about the endeavor, when applied universally. But I’d like for a moment to talk about a general problem with Kant’s procedure. How, exactly, do you go about choosing your maxim?

The Problem of Specificity

One major problem here is that of specificity of the maxim you choose.

You could make your maxim very general:

I should lie to strangers.

This is just about the most general maxim you could use here; and certainly this isn’t universalizable. Not only would you not want to universalize it (everyone should lie to every stranger would be an odd moral rule!), but it harbors the same problem of lies being self-defeating.

What about if you go to the other extreme, and choose a very specific maxim?

I should lie in order to save the life of the Jewish person hiding in my attic in 1940 Germany from the Nazis who will kill her.

This is about as specific as you can get with your maxim. And actually this is pretty well universalizable, because by universalizing it you don’t lose much specificity — your universalized law is still quite specific and actually probably a good moral rule:

Everyone should lie in order to save the life of the Jewish person hiding in Alec’s attic in 1940 Germany from the Nazis who will kill her.

(You might generalize the universal law here a bit more: Everyone should lie in order to save the life of the Jewish person hiding in his or her own attic in 1940 Germany from the Nazis who will kill that Jewish person. Still, this is arguably easy to accept as a good universal law.)

The issue here is that very specific maxims are easy to universalize, while very general ones are not. And this is a problem because very specific maxims will usually be very uninteresting as the basis of moral tenets, while very general ones will usually be interesting.

Imagine instead of a moral law like “Murder is wrong”, we had a law that said “Murdering Joe Smith on August 24, 1968, because he applied the wrong postage to a letter, is wrong”. Other ethicists would mercilessly laugh us out of the business. Our law may be true, but is not very interesting.

So the only way to use Kant’s procedure to generate a sound moral rule is by picking a maxim that is so specific that it is morally mundane.

Other Problems With Kant

There are a million and one problems for Kantian ethics (although there are a million and two Kantian ethicists in the philosophical community today). But perhaps the most obvious concern with Kant’s ethics is that it doesn’t (in fact, explicitly so) account for the ends of one’s actions. Most of us are disposed to say that killing a mad bomber in order to save a thousand innocent lives is a moral action, regardless of the fact that it involves killing someone. Kant disagrees, saying we can’t rely on a good outcome (saving a thousand lives) as the basis of our ethics.

He’s got a point. What if you decide to kill the mad bomber, but by a fluke of luck you actually wind up wounding him instead, and he escapes, only to kill ten thousand people the next day? That fluke of luck turns you from a hero into a villain. This idea of moral luck is a fascinating topic on its own, but for our purposes here, it does cast Kant’s hardcore position in a somewhat better light. If good outcomes are dependent on luck, then perhaps a genuinely moral decision shouldn’t depend on its outcome — perhaps a good act is good no matter what the outcome.

Famously, a school of moral philosophy called utilitarianism (or more generally consequentialism) sprang up in direct opposition to this perspective. We’ll talk about some of its pluses and minuses in a future post.