Vague Objects

Allow me to introduce my cat, Pinky.

Pinky the Cat

My cat, Pinky, has one semi-detached hair.

The metaphysical question at hand is this: Is the semi-detached hair a part of Pinky or not?

Any way you slice it, there’s some vagueness here. The more usual thought in philosophy is that the world is perfectly unvague — the world is utterly precise (the loose hair either does or does not belong to Pinky), everything just is whatever it is, and whatever vagueness humans encounter is simply a matter of human imprecision. Either our knowledge-generating faculties or our language faculties (or both, if there’s a difference) are imperfect, and incapable of discovering/representing the perfection of the world.

But there’s another possibility: The world itself is a vague place, and, even if we had perfect knowledge-generating faculties, we’d still struggle with issues of vagueness, because those issues are embedded in the fabric of nature.

So, let’s agree that there is indeed some vagueness at play, and ask: Is this vagueness actually in the world, or is it in our language/thoughts about an unvague world?

Unvague Cats; Vague Language/Thought

If the vagueness is just in our language, and not in the world, then there is a fact of the matter as to whether or not Pinky has that loose hair as a part of itself. If Pinky does indeed own that hair, then “Pinky” picks out the cat-like mass along with the loose hair.

Which cat is Pinky?

Which cat is Pinky? The one with the loose hair, or the one without?

As Michael Morreau sees it, this actually generates a metaphysical problem:

If vagueness is all a matter of representation, there is no vague cat. There are just the many precise cat candidates that differ around the edges by the odd whisker or hair. Since there is a cat,… and since orthodoxy leaves nothing else for her to be, one of these cat candidates must then be a cat. But if any is a cat, then also the next one must be a cat; so small are the differences between them. So all the cat candidates must be cats. The levelheaded idea that vagueness is a matter of representation seems to entail that wherever there is a cat, there are a thousand and one of them, all prowling about in lockstep or curled up together on the mat. That is absurd. Cats and other ordinary things sometimes come and go one at a time.

Pinky and Blinky

Pinky and Blinky: Two different cats that share the same (mostly) space.

If the world is not vague, then both of these are perfectly unvague cat objects, and if one is a cat then there’s every reason to say that they both are. In fact there are thousands (billions? trillions?) of cats here, all walking around in one lump. So on the world-is-not-vague side, we have the repercussion of “Pinky” picking out one specific cat out of many taking up mostly the same space: Winky, Glinky, Zinky, Inky, Kinky, etc.

Vague Cats

So, let’s try the world-is-vague approach instead. On the world-is-vague side, there’s just one cat, but that cat is itself vague. There’s no metaphysical fact of the matter as to whether or not that loose hair counts as a part of Pinky. But that loose hair doesn’t suddenly create two unvague cats: Pinky and Blinky.

What would be problematic about a vague world like this?

Perhaps the biggest problem would be representational. If Pinky is a vague cat, then we have no chance of ever compiling the perfect representation of him. (The perfect representation would include a representation of that loose hair, if it’s a part of Pinky; and it would not include that hair if it’s not a part of Pinky. But if it’s vaguely attached to Pinky, our representations will fail in one direction or the other.) Those prone to thinking that representations should strive for perfection will be most unhappy with this state of affairs.

A related problem crops up in the philosophy of language. Language philosophers like to think that names (like “Pinky”) pick out unique, unvague objects (like Pinky). But if Pinky is himself vague, then the name “Pinky” can’t unambiguously refer to Pinky. This is particularly problematic for anyone harboring vestiges of a description theory — if that loose hair may or may not belong to Pinky, then we have a problem coming up with a complete description, wherein that hair plays a part (or not).

What would be the payoff for accepting vague cats into our ontologies? For one thing, the non-proliferation of Pinky’s tightly bound brother cats. (There is no need, if Pinky is vague, to posit the existence of Blinky, Winky, Glinky, et al., existing in nearly the same space as Pinky.)

It also buys us a platform to talk intelligibly about such metaphysical conundrums as the Sorites paradox. If heaps, like cats, are vague — as opposed to just our knowledge of heaps being vague — we can escape some of the problems inherent in talking about heaps changing over time.
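One formal gloss on worldly vagueness is degree-theoretic: parthood comes in degrees rather than being all-or-nothing. This is only one of several approaches, and the sketch below (with entirely invented numbers) is just a toy illustration, not a claim about how such a theory must be built:

```python
# Degree-theoretic parthood: 1.0 = definitely a part of Pinky,
# 0.0 = definitely not, intermediate values = no fact of the matter.
pinky_parts = {
    "left ear": 1.0,
    "tail": 1.0,
    "semi-detached hair": 0.5,  # vaguely attached: neither in nor out
    "the mat he sits on": 0.0,
}

def definitely_part(x):
    """True only for determinate parts; borderline cases come out False
    without thereby being determinate non-parts."""
    return pinky_parts[x] == 1.0

print(definitely_part("tail"))                # True
print(definitely_part("semi-detached hair"))  # False (but not definitely NOT a part)
```

On a picture like this there is just one (vague) Pinky, rather than a crowd of precise cat candidates differing over the hair.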

We’ll be talking about the Sorites paradox in a future post.

For now, take some comfort in the idea that your knowledge of the world isn’t inherently imperfect. The world itself is inherently imperfect.

Of course, knowing that might make you uncomfortable again. Sorry.


References

Morreau, Michael. “What Vague Objects Are Like,” Journal of Philosophy 99, 2002.

Frege Was From Venus

If you spend any time mucking around in the philosophy of language, you’re going to run headlong into Gottlob Frege at some point. Frege, round about the turn of the 20th century, was a key figure in the emerging fields of logic and the philosophy of mathematics, but he may well be best remembered for his contributions to the theory of meaning.

What is Meaning?

The basic question that any philosophy of language must address is this: What can we say about the meaning of a word (and — what perhaps amounts to the same thing — the meaning of a sentence)?

A first stab at analyzing this is to say that the meaning of a word is just what it points to — what it designates or refers to. For instance, the word (or name, in this case) “Herbie” refers to my cat, Herbie. (Make sure to get your head around the difference between a word and an object referred to by that word. We’ll have a post about this “use/mention” distinction soon. For now, just stay alert to the use of quotation marks to distinguish a word from its associated object.) The word “Herbie” points to the creature that is at the time of this writing tapping my leg with his paw, trying to get me to play with him. (I’ll be right back…)

Reference - Herbie

We can apply the same analysis to numbers. The ink-on-paper numeral “7” that you might write down in your checkbook or on a math test refers to the actual number 7, which for the sake of argument we’ll take to be some object out there in the universe somewhere. Similarly, and perhaps easier to comprehend, the words “seven”, “siete”, “sept”, and “sieben” all refer to the number 7 as well (in English, Spanish, French, and German, respectively).

Reference 7

If this is the right picture, it would give us a convenient way to explain how “seven” and “siete” both mean the same thing: It’s because both words refer to the same thing.

Reference Ain’t Enough

If this were all there is to meaning, then “12” and “7 + 5” would mean the same thing, because they both refer to the number 12.

But as Kant famously pointed out (in his analytic/synthetic, a priori/a posteriori distinctions), these two words/phrases might well mean different things.

To see why, let’s look at the following statement: “12 = 12”.

Compare that with this statement: “7 + 5 = 12”.

The first statement doesn’t say much except that a thing is always identical with itself. The second says something significantly new about 12 (that it’s the sum of 7 and 5).

If this is true, then “12” and “7 + 5” do not have the same meaning; and if this is the case, then there has to be more to meaning than the idea of reference. You can see this difference more clearly if you look at these in a different context.

“I know that 12 = 12.” One can know this without knowing anything about addition.

“I know that 7 + 5 = 12.” To know this, one has to know something about addition.

This becomes even clearer with a more complex mathematical fact.

“I know that 812,285,952 = 812,285,952.”

“I know that 24,789 x 32,768 = 812,285,952.”

Anyone can utter the first sentence without any more knowledge than ‘everything is equal to itself’. But to say the second sentence with any sort of certainty, you’d have to have done some complex calculations (or had a calculator do them for you). There’s something about the second statement that is differently meaningful than the first.
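The asymmetry between the two statements is easy to see mechanically: the first identity requires no computation at all, while the second can only be verified by actually doing the multiplication. A quick check in Python:

```python
# The trivial identity: nothing to compute beyond reflexivity.
assert 812_285_952 == 812_285_952

# The informative identity: verifying it requires real calculation.
product = 24_789 * 32_768
assert product == 812_285_952
print(product)  # 812285952
```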

The Morning Star and The Evening Star

The more usual example philosophers of language use is (happily for most of you) not mathematical.

The ancient Greeks, looking at the dark sky above them, noticed two very bright stars. One came up shortly after the sun went down in the evening, and was brighter than any other star around it; the other star came up shortly before the sun came up in the morning and was similarly bright. They named these two stars: “The Morning Star” and “The Evening Star”.

Well, maybe you saw this coming, but it turns out that these two stars were actually the same object: Venus. (Of course, not even a star after all, but a brightly reflective planet.) So here’s the referential picture the ancient Greeks had:

Morning Star / Evening Star Greeks

A few centuries later, astronomers gave us this picture instead:

Reference Venus

Now, if reference is all there is to meaning, then these two sentences would have the same meaning:

“The Morning Star is the Morning Star.”

“The Morning Star is the Evening Star.”

Because by just considering reference those two sentences translate to this one sentence:

“Venus is Venus.”

Reference Sentences

But clearly these sentences have very different meanings — the first sentence is obvious to anyone, even those without any knowledge of astronomy; the second sentence is something that one would only know by virtue of synthesizing some significant piece of astronomical knowledge, namely that “The Morning Star” and “The Evening Star” both refer to the same heavenly body: Venus.

Frege’s Solution

So hopefully you’ll agree that reference can’t be all there is to meaning.

Frege’s idea was that while reference is important to meaning, there is another important dimension to meaning as well, which he called sense. He called the sense of a term the “mode of presentation” of the referent. So while “the Morning Star” and “the Evening Star” both refer to the same thing, they have different senses: the sense of “the Morning Star” is something like “the bright star that rises in the early morning”, while the sense of “the Evening Star” is something like “the bright star that rises in the early evening”. Same reference; different sense.

On this scheme, when we say “the Morning Star is the Evening Star”, the informativeness comes from the difference in senses, not the references, and this is why it’s a statement of new knowledge (synthetic, à la Kant) and not just an obvious truth (analytic, à la Kant). “The Morning Star is the Morning Star” pairs two terms with not only the same reference but the same sense. And that is not semantically interesting.
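One way to make the two-dimensional picture concrete is as a small data structure in which each term carries both a sense (its mode of presentation) and a referent. This is a toy model of my own devising, not Frege’s formal apparatus:

```python
# Each term maps to (sense, referent): Frege's two dimensions of meaning.
terms = {
    "the Morning Star": ("the bright star that rises in the early morning", "Venus"),
    "the Evening Star": ("the bright star that rises in the early evening", "Venus"),
}

def same_reference(a, b):
    return terms[a][1] == terms[b][1]

def same_sense(a, b):
    return terms[a][0] == terms[b][0]

# "The Morning Star is the Evening Star" is informative precisely because
# the two terms agree in reference but differ in sense.
print(same_reference("the Morning Star", "the Evening Star"))  # True
print(same_sense("the Morning Star", "the Evening Star"))      # False
```

On a purely referential theory only the second column would exist, and the two identity statements would collapse into “Venus is Venus”.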

Sense Without Reference

One interesting consequence of Frege’s philosophy of language is that it turns out that not everything with a sense has a reference.

“The novel written by Richard Nixon” has a sense (it presents an idea to us in a clearly understandable way) but no reference, since Nixon never (as far as I know) actually wrote a novel. So the meaning of a sentence might not have to rely on reference at all. “The novel written by Richard Nixon is long and boring” has a meaning even though the subject of the sentence doesn’t exist. We’ll take up this interesting idea in a future post.

SHRDLU on Film

The glory days of Artificial Intelligence — those heady days when researchers thought genuine intelligence and perhaps even consciousness would transcend the bonds of human brains and sit comfortably on computer hardware — well, it was a fun time to be alive. And check out this gem:

http://www.csail.mit.edu/videoarchive/history/aifilms

An awesome historical collection of AI videos housed at MIT.

This one’s my favorite:

 

A silent, grainy, low-res film of Terry Winograd’s awesome SHRDLU in action. For those of you youngsters out there, SHRDLU was really quite a remarkable achievement in computer programming. And the fact that Winograd’s first release of the software was in 1968 makes it all the more remarkable. The idea was to create a program that could parse real English sentences and respond intelligently to them. Winograd realized that he couldn’t tackle the problem in an unlimited domain (that is, his program wasn’t going to be able to talk about everything a human might be able to), so he limited the program’s world to a virtual table top with some blocks on it, and a virtual robotic arm that could pick up and move those blocks. Within this tiny domain, SHRDLU was really very impressive. Frankly, I’m still impressed by the program today, even though it’s more than 40 years old.

Here’s a sample excerpt from a human-SHRDLU interaction:

Person: WHAT DID THE RED CUBE SUPPORT BEFORE YOU STARTED TO CLEAN IT OFF?
Computer: THE GREEN PYRAMID.
Person: THERE WERE FIVE BLOCKS TO THE LEFT OF THE BOX THEN.
Computer: NO, ONLY FOUR OF THEM: THE RED CUBE, TWO LARGE GREEN CUBES AND A LARGE RED BLOCK.
Person: PUT THE BLUE PYRAMID ON THE BLOCK IN THE BOX.
Computer: OK. (does it)
Person: IS THERE ANYTHING WHICH IS BIGGER THAN EVERY PYRAMID BUT IS NOT AS WIDE AS THE THING THAT SUPPORTS IT?
Computer: YES, THE BLUE BLOCK.
Person: DOES A STEEPLE
Computer interrupting: SORRY, I DON’T KNOW THE WORD “STEEPLE”.
Person: A “STEEPLE” IS A STACK WHICH CONTAINS TWO GREEN CUBES AND A PYRAMID.
Computer: I UNDERSTAND.
Person: ARE THERE ANY STEEPLES NOW?
Computer: NO.
Person: BUILD ONE.
Computer: OK. (does it)
Person: CALL THE BIGGEST BLOCK “SUPERBLOCK”.
Computer: OK.
Person: HAVE YOU PICKED SUPERBLOCK UP SINCE WE BEGAN?
Computer: YES.
Person: WHY DID YOU DROP IT?
Computer: BY “IT”, I ASSUME YOU MEAN THE SUPERBLOCK.

Really impressive language processing.
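To get a feel for why a restricted domain made this tractable, here’s a heavily simplified blocks-world sketch. It is emphatically not Winograd’s code (SHRDLU was written in Lisp and was vastly more sophisticated, with real parsing, planning, and dialogue memory); the names and toy “grammar” below are invented purely for illustration:

```python
# A toy blocks world: each block maps to whatever it is resting on.
world = {"red cube": "table", "green pyramid": "red cube", "blue block": "table"}

def put(block, target):
    """Handle a command like 'put the green pyramid on the blue block'."""
    # Refuse to move a block that has something resting on top of it.
    if any(support == block for support in world.values()):
        return f"I need to clear off the {block} first."
    world[block] = target
    return "OK."

def what_supports(block):
    """Answer a question like 'what is the green pyramid on?'."""
    return world.get(block, "I don't know that block.")

print(put("green pyramid", "blue block"))  # OK.
print(what_supports("green pyramid"))      # blue block
```

Because the whole universe of discourse is a handful of blocks and a table, every noun phrase has a small, closed set of possible referents — which is exactly what let SHRDLU resolve pronouns and definite descriptions so convincingly in the transcript above.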

AI researchers (and philosophers) at the time were a little too impressed with SHRDLU, and thought that genuinely intelligent computers were surely close at hand. In fact, Winograd himself, after wrestling for years with SHRDLU and computational language processing, came to the conclusion that AI researchers were generally far too optimistic about their achievements, at least regarding the supposed intelligence of their creations. Winograd wrote:

“Most current computational models of cognition are vastly underconstrained and ad hoc; they are contrivances assembled to mimic arbitrary pieces of behavior, with insufficient concern for explicating the principles in virtue of which such behavior is exhibited and with little regard for a precise understanding.” [Winograd, 1987]

This was a cold shower for a lot of optimistic AI researchers, and a bolster to philosophers who opposed functionalism. But this is a topic for another post.

Happy AI film viewing!