Cartesian Skepticism

Welcome to the blog’s first foray into epistemology: the philosophical study of knowledge. Today we will be talking about René Descartes, who remains ensconced in infamy for two feats: creating a system of geometry that would annoy high school students for hundreds of years to come, and presaging “The Matrix”. Much as I actually liked high school geometry, I’d like to talk here about the Cartesian skepticism of the external world that made so many science fiction movies possible.

For those of you who haven’t yet read Descartes’ famous Meditations on First Philosophy (usually referred to simply as the Meditations), what are you waiting for? Here’s an old English translation to get you started. There are also approximately a billion print versions available on Amazon, in case you want a more contemporary translation, along with the ability to scribble in the margins.

The Meditations start with Descartes recounting the none-too-astounding realization that he had been wrong about some things as a youngster.

Several years have now elapsed since I first became aware that I had accepted, even from my youth, many false opinions for true, and that consequently what I afterward based on such principles was highly doubtful; and from that time I was convinced of the necessity of undertaking once in my life to rid myself of all the opinions I had adopted, and of commencing anew the work of building from the foundation, if I desired to establish a firm and abiding superstructure in the sciences.

So his project in the Meditations was very much foundational. Descartes wanted to tear down all things that passed for knowledge, in order to find a kernel of certainty, from which he would build back up a magnificent structure of infallible knowledge. Those of you who remember high school geometry might be having nightmarish flashbacks at this point, remembering how the subject was built up from just a few, allegedly very certain axioms. The axioms were the firm, unassailable foundation upon which the science of geometry was built. Descartes had similar plans for every other science and in fact every human epistemological endeavor.

His method was, simply enough, to sit comfortably in his pajamas and begin doubting everything that he possibly could doubt. The first victim of his skepticism was his senses. “All that I have, up to this moment, accepted as possessed of the highest truth and certainty, I received either from or through the senses. I observed, however, that these sometimes misled us; and it is the part of prudence not to place absolute confidence in that by which we have even once been deceived.” A pretty reasonable place to start doubting things. After all, there are a million and one ways in which we are regularly deceived by our senses: optical illusions abound, hallucinations occasionally crop up, and physical ailments of the eyes and brain can cause misperceptions.

But there’s an even more radical skepticism that can crop up from this line of thought. What if it’s not just the case that the senses deceive, but that they don’t exist at all? Take this picture of the human knowledge machine:

[Image: Beach ball on the inner screen]

On this picture (which, I think, is a pretty sound depiction of what philosophers of that age thought, and indeed is still how a lot of people picture the mind), the only reliable access to knowledge is via an inner screen that has projected upon it images of the external world. The screen here is inside the brain/mind, and the little person viewing the screen is one’s consciousness. If the senses exist, then sometimes they project something misleading on the inner screen, and this gives rise to optical illusions and hallucinations. But on this picture, a skeptic could go so far as to say that the senses might be fictional. If all we have access to is this inner screen, then we just can’t be sure from where its images come. Maybe they come from the senses, and maybe they don’t. Of course, given that there were really no computers or any decent science fiction at the time, the only 17th-century source that would be powerful enough to accomplish this illusory feat would be God. But since God is supposed to be omnibenevolent, and would therefore not deceive us in this way, Descartes conjured up a reasonable facsimile of sci-fi for the time, and said that perhaps there is an evil demon who deceives each of us in this way.

[Image: Evil Demon at work deceiving us]

Well, that’s a lot of doubt, and a lot of the world’s furniture that has suddenly become dispensable. Stones, trees, and cats might not exist. Neither might other people, for that matter. Descartes found himself at this point in an extremely solipsistic position. He might be the only person in the universe. And this person might not even have a body.

At this point, Descartes took some certainty back from the skeptical vortex into which he was falling. He might not have a body, but if he was indeed being deceived by some evil demon, then there had to be something there to be deceived: a thinking thing. “I am, I exist,” he concluded. And each time he thinks this (or anything else, for that matter), his existence is assured.

At this point, we could veer off into metaphysics and the philosophy of mind, and discuss the ontological corollary to this barely optimistic offramp of the Cartesian skeptical superhighway: Dualism. According to Descartes’ theory, the mind is not necessarily connected to a body; that is, it is logically possible for a mind to exist without a brain.

But let’s save this subject for another post. Now, let’s examine where Cartesian skepticism has taken us, epistemologically.

Skepticism of the external world is a very strong philosophical position. It is really quite difficult to debate a skeptic on matters of epistemology, because the default reply of “but can you really know that the external world exists?” is very defensible. Try it out for yourself:

Me: This iPhone is great.
You: If it exists.
Me: What do you mean? I’m holding the thing in my hand!
You: You think you are. Maybe you’re dreaming.
Me: I know the difference between a dream and reality.
You: You think you do. But maybe you’re in a dream, and in that dream you dream that you’re awake, but really you’re still just dreaming.
Me: Oh, come on. That leads to an absurd infinite regress of dream states.
You: Well, it’s still possible. And anyway, you could be living in a computer simulation. Or you could be crazy and hallucinating all of this. In any event, you can’t know for sure that you’re holding an iPhone in your hand. You can know that you have an image of holding an iPhone in your mind. Therefore your mind exists. Does that make you feel better?

And you have won the debate!

The Way Out

So do we have to just give in to the skeptic? Is there no hope for those of us who would like to assume the existence of stones, trees, and cats? Real ones… not just images of them in our minds.

Well, yes, there is. It’s called Naturalized Epistemology (or just “naturalism”), and it was foreshadowed by David Hume way back in 1748. I’ll quote a lengthy passage, because it’s so beautifully crafted:

For here is the chief and most confounding objection to excessive scepticism, that no durable good can ever result from it; while it remains in its full force and vigour. We need only ask such a sceptic, What his meaning is? And what he proposes by all these curious researches? He is immediately at a loss, and knows not what to answer. A Copernican or Ptolemaic, who supports each his different system of astronomy, may hope to produce a conviction, which will remain constant and durable, with his audience. A Stoic or Epicurean displays principles, which may not be durable, but which have an effect on conduct and behaviour. But a Pyrrhonian cannot expect, that his philosophy will have any constant influence on the mind: or if it had, that its influence would be beneficial to society. On the contrary, he must acknowledge, if he will acknowledge anything, that all human life must perish, were his principles universally and steadily to prevail. All discourse, all action would immediately cease; and men remain in a total lethargy, till the necessities of nature, unsatisfied, put an end to their miserable existence. It is true; so fatal an event is very little to be dreaded. Nature is always too strong for principle. And though a Pyrrhonian may throw himself or others into a momentary amazement and confusion by his profound reasonings; the first and most trivial event in life will put to flight all his doubts and scruples, and leave him the same, in every point of action and speculation, with the philosophers of every other sect, or with those who never concerned themselves in any philosophical researches. When he awakes from his dream, he will be the first to join in the laugh against himself, and to confess, that all his objections are mere amusement, and can have no other tendency than to show the whimsical condition of mankind, who must act and reason and believe; though they are not able, by their most diligent enquiry, to satisfy themselves concerning the foundation of these operations, or to remove the objections, which may be raised against them.

So the idea (if you had a hard time navigating the old-school English) is that genuinely adopting skepticism of the external world leaves one in the unenviable position of nothing mattering. It is not a stance from which one can do any productive theorizing about science, philosophy, or, well, anything except one’s own mind. (And even that bit of theorizing will stop at the acknowledgement of one’s inner screen accessible to consciousness.)

Do we have a stance from which we can do productive theorizing about things? Assuming that science is generally correct about the state of the world is a good start! After all, science has some of the smartest people in the world (if they and the world exist) applying the most stringent thinking and experimentation known to humanity. And science assumes the existence of things like stones, trees, and cats — things that exist in the world, not merely as ideas in our minds.

Here’s one of the more interesting perspectives on subverting skepticism, from Peter Millican at Oxford:

The gist of the video is that there are two ways to argue every issue. In the case of skepticism of the external world, you can argue, like a naturalist, that you know that stones, trees, and cats are real, and therefore that you know there is an external world; or, like a skeptic, you can argue that you don’t know there is an external world, and therefore that you don’t know that stones, trees, and cats exist. From a strictly logical point of view, the two strategies are equally plausible, and in both cases you have to assume something in order to reach your desired conclusion. So do you want to assume that you don’t know there’s an external world, or would you rather assume that you know that stones, trees, and cats exist? Your choice.
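
To put the point in bare-bones logical form (this is my own formalization of the gist, not anything lifted from the video; K(x) just abbreviates “I know that x”), both sides accept the same conditional and simply run it in opposite directions:

```latex
% Shared conditional: if I know that stones, trees, and cats exist,
% then I know that there is an external world.
K(\mathrm{stones}) \rightarrow K(\mathrm{world})

% The naturalist argues by modus ponens:
\frac{K(\mathrm{stones}) \rightarrow K(\mathrm{world}) \qquad K(\mathrm{stones})}
     {K(\mathrm{world})}

% The skeptic argues by modus tollens:
\frac{K(\mathrm{stones}) \rightarrow K(\mathrm{world}) \qquad \neg K(\mathrm{world})}
     {\neg K(\mathrm{stones})}
```

One philosopher’s modus ponens, as the old saying goes, is another philosopher’s modus tollens.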

If you choose the skeptical path, I hope you’ll choose to pass your solipsistic time entertaining dreams of this blog.

On Definitions in Philosophy

When trying to define a term, we generally think of providing a set of necessary and sufficient conditions: a recipe for including or excluding a thing in a particular category of existence. For instance, an even number (definitions tend to work best in the mathematical arena, since there they can be as precise as possible) is definable as an integer that, when divided by 2, does not leave a remainder. Given this definition, it is easy to ascertain whether or not a given number is even: divide it by two and see if it leaves a remainder. If it does, then it’s not even; if it doesn’t, then it is. We have here a clear test for inclusion or exclusion in the set of even numbers.
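
For the notation-minded, here is that definition written out formally (my rendering; nothing fancy is going on):

```latex
% An integer n is even iff it is twice some integer,
% i.e., iff dividing it by 2 leaves no remainder.
n \text{ is even} \;\iff\; \exists k \in \mathbb{Z}\ (n = 2k) \;\iff\; n \bmod 2 = 0
```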

Outside of mathematics, things get trickier. (Inside mathematics, things can be tricky as well. Imre Lakatos’ excellent book Proofs and Refutations details some of the problems here. If you are mathematically and philosophically inclined, this is a must-read book.)

In the Philosophical Investigations, Ludwig Wittgenstein famously discusses the travails of defining the term “game”. Is there a set of necessary and sufficient criteria that will let us neatly split the world into games and non-games? For instance, do all games have pieces? (No, only board games have these.) Winners and losers? (There are no winners in a game of catch.) Strategy? (Ring-around-the-rosie has no strategy.) Players? (Well, since games are a particularly human endeavor, it would be an odd game that had no human participants. But, of course, some games have only one player.) There seems to be no single set of characteristics that spans everything we’d like to call a game. Wittgenstein’s solution was to say that games share a “family resemblance” — “a complicated network of similarities overlapping and criss-crossing”. A great many games have winners and losers, and so share this family trait; and then there are games that have pieces, and this is another trait that can be shared. Many (but not all) of the games with pieces also have winners and losers, and so there is significant overlap here. Games with strategy span another vast swath of the game landscape, and many of these also have winners and losers, many of which also have pieces. But not all. And so a network of resemblances between games emerges — not a single boundary that separates games from non-games, but a set of sets that is overlapping and more or less tightly connected.
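
A toy set-theoretic picture (my own, and far simpler than anything Wittgenstein offers) may help show how traits can overlap pairwise without any single trait running through everything:

```latex
% Three "games", each possessing two of three traits:
A = \{\text{winners}, \text{pieces}\}, \quad
B = \{\text{pieces}, \text{strategy}\}, \quad
C = \{\text{strategy}, \text{winners}\}

% Every pair shares a trait, yet no trait is common to all three:
A \cap B = \{\text{pieces}\}, \quad
B \cap C = \{\text{strategy}\}, \quad
A \cap C = \{\text{winners}\}, \quad
A \cap B \cap C = \emptyset
```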

This is a brilliant idea, but one that often leaves analytical philosophers with a bad taste in their mouths. If you try to formalize family resemblances (and analytical philosophers love to formalize things), you run up against the same problems as you had with more straightforward definitions. Where exactly do you draw the line in including or excluding a resemblance? Games are often amusing, for instance. But so are jokes. So jokes share one resemblance with games. But jokes are often mean-spirited. And so are many dictators. And dictators are often ruthless. As are assassins. So now we have a group of overlapping resemblances that bridges games to assassins. And if you want to detail the conditions under which this bridge should not take us from one group of things (games) to the other (assassins), you are back to specifying necessary and sufficient conditions.

Wittgenstein, I imagine, would have laughed at this “problem”, telling us that we just have to live with the vague boundaries of things. Which is all well and good, but is easier said than done.

Knowledge

The defining of knowledge gives us a great example of definitions at work and their problems. For those of you who haven’t been indoctrinated in the workings of epistemology, it turns out that a good working definition for knowledge is that it is justified true belief.

[Image: Is knowledge justified true belief?]

I take it to be as axiomatic as can be that something has to be believed to be known. If you have a red car but you don’t believe that it’s red, you don’t have knowledge of that fact. But, clearly, belief isn’t sufficient for knowledge. For instance, if I believe that my red car is actually blue, I still don’t have any knowledge of its actual color. So we have to bring truth into the picture. If I believe that my car is red, and it is actually red, I’m certainly closer to having a bit of knowledge. But, again, this isn’t sufficient. What if my wife has bought me a red car that I haven’t seen yet, and I believe it’s red only because I had a dream about a red car last night? Do I have knowledge of my car’s color? I’d say not. We need a third component: justification. If I believe that my new car is indeed red because I’ve seen it with my own eyes (or analyzed it with a spectrometer, if the worry of optical illusions bugs you), then we should be able to say I do indeed have a bit of knowledge here.
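
Spelled out as a definition (this is just the standard tripartite analysis, with S standing for the knower and p for the proposition known):

```latex
% The classical "justified true belief" analysis of knowledge:
S \text{ knows that } p \;\iff\;
\begin{cases}
  (1)\ p \text{ is true,} \\
  (2)\ S \text{ believes that } p, \\
  (3)\ S \text{ is justified in believing that } p.
\end{cases}
```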

In 1963, Edmund Gettier came up with a clever problem for this definition — one that presents a belief that is justified and true, but turns out to not be knowledge. Here is the scenario:

  • Smith and Jones work together at a large corporation and are both up for a big promotion.
  • Smith believes that Jones will get the promotion.
  • Smith has been told by the president of the corporation that Jones will get the promotion.
  • Smith has counted the number of coins in Jones’ pocket, and there are 10.

Smith is therefore justified in believing the following statement:

(A) Jones will get the promotion and Jones has 10 coins in his pocket.

This next statement follows logically from (A), so Smith is also justified in believing it:

(B) The person who will get the promotion has 10 coins in his pocket.
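
For those who like the inference made explicit, the move from (A) to (B) is just existential generalization (the predicate names below are mine), together with Gettier’s assumption that justification carries over obvious logical consequences:

```latex
% From (A) to (B) by existential generalization:
\frac{\mathrm{Promoted}(\mathrm{jones}) \;\land\; \mathrm{TenCoins}(\mathrm{jones})}
     {\exists x \,\big(\mathrm{Promoted}(x) \;\land\; \mathrm{TenCoins}(x)\big)}
```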

But it turns out that the president is overruled by the board, and Smith, unbeknownst to himself, is actually the one who will be promoted. It also turns out that Smith, coincidentally, has 10 coins in his pocket. Thus, (B) is still true, it’s justified, and it is believed by Smith. However, Smith doesn’t have knowledge that he himself is going to get promoted, so clearly something has gone wrong. Justification, truth, and belief, as criteria of knowledge, have let an example of non-knowledge slip into the definitional circle, masquerading as knowledge.

More Games

Let’s get back to the problem of defining games, and say that, contrary to Wittgenstein, you’re sure you can come up with a good set of necessary and sufficient conditions. You notice from our previous list of possible necessary traits that games certainly have to have players. Let’s call them participants, since “player” is something of a loaded word here (a player presupposes a game, in a way). And now you also take a stand that all games have pieces. Board games have obvious pieces, but so, you say, do other games. Even a game of tag has objects that you utilize in order to move the game along. (In this case, you’re thinking of the players’ actual hands.) So let’s add that to the list, but let’s call it what it is: not pieces so much as tools or implements. And perhaps you are also convinced that all games, even games of catch, have rules. Some are just more implicit and less well-defined than others. So let’s stop here, and see where we are. We have participants, implements, and rules.

And now we begin to see the problem. If we leave it at that, our definition is so loose as to allow under the game umbrella many things that aren’t actually games. A group of lab technicians analyzing DNA could fall under the conditions of having participants, implements, and rules. But if we tighten up the definition, we run the risk of excluding real cases from being called games. For instance, if we tighten the definition to exclude our lab workers from the fun by saying that games also have to have winners and losers, we immediately rule out as games activities like catch and ring-around-the-rosie.
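
Here is a toy sketch (in Python, with activity descriptions invented purely for illustration) of how the loose three-condition definition over-generates:

```python
# A deliberately loose definition of "game": anything with participants,
# implements, and rules counts.

def is_game(activity):
    """Return True if the activity meets all three loose conditions."""
    return all(activity[key] for key in ("participants", "implements", "rules"))

tag = {"participants": True, "implements": True, "rules": True}
dna_analysis = {"participants": True, "implements": True, "rules": True}

print(is_game(tag))           # True -- as intended
print(is_game(dna_analysis))  # True -- the lab technicians sneak in too
```

Tighten the test (say, by also demanding winners and losers) and the lab technicians are barred, but catch and ring-around-the-rosie get thrown out with them.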

Lakatos coined two brilliant phrases for these definitional tightenings and loosenings: “monster-barring” and “concept-stretching”. Monster-barring is an applicable strategy when your definition allows something repugnant into the category in question. You have two options as a monster-barrer: do your utmost to show how the monster doesn’t really satisfy your necessary and sufficient conditions, or tweak your definition to keep the monster out.

Concept-stretching allows one to take a definition and run wild with it, applying it to all sorts of odd cases one might not have previously thought to. For instance, perhaps we should expand entry into the realm of games to include our intrepid DNA lab workers. What would that mean for our ontologies? And what would it mean for people who analyze games? And for lab technicians?

Philosophers love to define terms; they also love to find examples that render definitions problematic. It’s a trick of the trade and a hazard of the business.

Philosophy Video Roundup

When I was a young philosopher, I had to trudge 117 miles through a blizzard just to get to a library to read a dry journal article. If the library was closed, I trudged the 117 miles back to my house and turned on the radio in hopes of hearing something interesting. Such was the extent of “multimedia” in those days. Nowadays, through the magic of the Internet, philosophy is available in video form, at your beck and call, 24 hours a day. You’ve all got it too easy.

To make it even easier, I went through the entire Internet and pulled a few philosophical gems from its unclean innards. Here are some famous philosophers, explaining their thoughts in their own words.

Willard Van Orman Quine

Quine was one of the giants of 20th Century analytical philosophy. As much at home in metaphysics as in the rarefied air of hardcore logic, Quine wrote on a variety of subjects, and his 1951 “Two Dogmas of Empiricism” (originally published in Philosophical Review 60, and reprinted in Quine’s essay collection From a Logical Point of View) is still a mainstay of the field.

Here is Quine being interviewed about his work. (The interview is in five parts on YouTube. Parts 2-5 are available here: Part 2, Part 3, Part 4, Part 5.)

Daniel Dennett

People seem to either love or hate Dennett. I think he’s brilliant, personally, even when he happens to be wrong! Dennett specializes in the philosophy of mind, and he’s famous for his views on consciousness (especially that there happens to be nothing intrinsically magical or mysterious about the phenomenon). This lack of mysticism has not endeared him to a certain segment of the philosophical community (as his open atheism has not endeared him to an entire other community).

He’s got a great lecture called “The Magic of Consciousness”, but it seems to have vanished from YouTube. I don’t know whether that means he’s trying to sell the video elsewhere, but if one is curious enough, a quick Google search reveals a couple of sources where one can still watch it.

There are a ton of Dennett videos on YouTube, but here’s a good one to get started with, from a lecture he gave at TED. (Watch out: the volume for the intro music is absurdly loud.)

John Searle

Searle is another philosopher of mind, famous for his Chinese Room thought experiment, which, he supposed, proved that merely running the right program could never amount to genuine understanding or consciousness, and hence that “strong AI” was misguided in its efforts to simulate or recreate the mind. He and Dennett would probably disagree about most things in the philosophy of mind. Here is Searle late in his career, still talking about consciousness, but explicitly saying he’s sick of the Chinese Room.

There are a bunch of videos from this event online.

Here is some Searle from further back in time, in a more digestible chunk:

Hilary Putnam

People in contemporary philosophy don’t talk about Hilary Putnam quite as much as they used to, but there was a time not very long ago when you couldn’t crack open a philosophy journal without finding his papers referenced just about everywhere. Putnam philosophized mostly about science and math, but also wrote on language, mind, metaphysics, and epistemology. Here he is in an interview on the philosophy of science. (As with the Quine video, this one is in five parts on YouTube: Part 2, Part 3, Part 4, Part 5.)

Bertrand Russell

Russell was another giant of 20th Century analytical philosophy. His work in logic and in the philosophy of language, even though eventually found wanting, was so groundbreaking and important that his legacy is assured. He is also well known for fostering the work of a young Ludwig Wittgenstein.

Here is Russell interviewed in 1959. (In three parts: Part 2, Part 3.)

Some Others…

Ian Hacking on the philosophy of math:

Peter Singer on what we eat:

Judith Jarvis Thomson on normativity:

Let us know if you find any other gems out there…

The Peanut Butter and Jelly Debate

Arguing Over Nothing: A regular feature on the blog where we argue over something of little consequence, as if it were of major consequence. Arguing is philosophy’s raison d’être, and the beauty of an argument is often as much in its form as its content.

Today, we argue about the proper way to make a peanut butter and jelly sandwich. Jim argues for a radical, new approach, while I side with a more standard approach to the endeavor.

Each philosopher is granted 500-750 words to state his/her case, as well as 250-500 words for rebuttal. The winner will be decided by a poll of the readers (or whoever happens to have admin privileges at the appropriate time).


Jim: Arguing for the bowl method

The purpose of a peanut butter and jelly sandwich, the purpose of any sandwich, I suppose, is to provide a quick bit of sustenance. There are ‘sandwich artists’ in the world, but I have trouble imagining such people working in the medium of peanut butter and jelly. Therefore, the sooner the sandwich is made, the sooner its purpose can be met. By the time one has opened two jars and secured two utensils (surely we can both agree that cross-contamination of the ingredients should not occur within the jars), much time has already been lost and invested. From that point, mixing the two ingredients in a bowl is the best and most efficient way of creating the sandwich. This is so for, primarily, two reasons.

First, peanut butter, even the creamiest sort, is not so easily spread on bread. I will grant that toasted bread provides a more durable spreading surface, but, again, the sandwich is made for a quick repast, so toasting is often overlooked or bypassed. Inevitably, large divots are raised or even removed from the bread by even the most experienced spreader. Once that has been accomplished, if it can be accomplished at all, the jelly must be attended to. Securing jelly from the jar with a spreading knife is a feat best left to the young and others with plenty of idle time on their hands. Repeated stabbings into the jar will secure, at best, scant amounts of jelly. It is, obviously, better to use a spoon. However, as is clear to even the dullest imagination, spreading with a spoon leaves much to be desired, literally, as the result tends to be scattered hillocks of jelly, between which are faint traces, like glacial retreatings, of ‘jelly flavor’. Were one to use a spoon for jelly retrieval and a knife for jelly spreading, that is yet another utensil to clean.

The second reason against separate spreads, and so for one bowl of the mixture, is a corollary of the above. When one makes a peanut butter and jelly sandwich, one is looking to taste both in, ideally, every bite. Given the condition of the bread on the peanut butter side and the pockets of flavor interspersed with the lack thereof on the jelly side, one is lucky to get both flavors in half the bites taken.

By mixing both peanut butter and jelly in a bowl prior to application, both of the concerns above are fully redressed. The peanut butter, by virtue of its mixing with jelly, becomes much more spreadable for two reasons: it is no longer as thick and it is no longer as dry. A thinner and moister substance is always much easier to spread. Furthermore, because of the aforementioned mixing, both flavors will be available in every bite taken. The end result is a much more delicious, easily made (and so efficient), quick meal. As an added bonus, one’s fingers end up with less mess, since only one slice of bread needs attending to, and so one’s fingers are exposed to mess once rather than twice, as with the other method.

While there is the bowl left to clean, in addition to the utensils, what has not been removed from the bowl is easily rinsed. The peanut butter-jelly mix, given its thin and moist nature, is almost always able to be fully removed from the bowl and transferred to the bread. What is not so transferred, whether by design or not, is, by the previously mentioned nature, easily washed or wiped away in disposal.

The bowl is clearly the way to go when making a peanut butter and jelly sandwich.


Alec: Arguing Against the Bowl Method

I will grant your utilitarian premise on sandwich making (“the purpose of any sandwich, I suppose, is to provide a quick bit of sustenance”), though I will point out that aesthetics could have a valid role to play in this debate. If your PB&J-from-a-bowl sandwich is singularly visually unappetizing (as I imagine it might be) then it will not provide any sustenance whatsoever, but will end up in the trash can instead. Also, note that your utilitarianism here could lead to the creation of a “sandwich” that is made by tossing the ingredients in a blender and creating a PB&J smoothie the likes of which would be eschewed by any rational hungry person.

But I digress.

You claim that peanut butter — even the creamy variety — is difficult to spread on bread. I have two points to make regarding this claim. First, I haven’t had difficulty spreading peanut butter on bread since I was 12. Perhaps you should have your motor skills tested by a trained kinesiologist. I grant you that spreading a chunky peanut butter on a thin, wispy white bread can be problematic; but a smooth peanut butter on a hearty wheat bread? Not problematic at all. Second, you have pointed to no scientific research showing that mixing peanut butter and jelly in a bowl makes the result easier to spread than plain peanut butter. I remain skeptical on this point. And even if it is easier to spread, the labor involved in mixing it with jelly in a separate bowl might be far more work than it is worth in the end.

The knife/jelly problem is a thorny one, indeed, as you have noted. Trying to extricate an ample amount of jelly from a jar with a knife is difficult and annoying. You claim that: “Were one to use a spoon for jelly retrieval and a knife for jelly spreading, that is yet another utensil to clean.” However, you have overlooked the obvious: one can use the knife from the peanut butter to spread the jelly that has been extricated with the spoon. Here is some simple math to show how utensil use plays out in both of our scenarios:

You: 1 knife for dishing peanut butter + 1 spoon for dishing jelly + 1 bowl for mixing.

Me: 1 knife for dishing and spreading peanut butter + 1 spoon for dishing jelly, and reuse the knife for spreading jelly.

So we are equal on our utensil use, and you have used an extra bowl.

And on the subject of this extra bowl, it will be readily admitted by all that a knife with peanut butter on it is annoying enough to clean, while an entire bowl with peanut butter on it is proportionately more annoying to clean. (Again, you claim that a peanut butter / jelly mixture is easier to clean than pure peanut butter, but the research on this is missing. Surely you will allow that a bowl with some peanut butter on it is not a simply rinsed affair.) Plus there’s the environmental impact of cleaning an extra bowl each time you make a sandwich. Add that over the millions of people who make peanut butter and jelly sandwiches each day, and you’ve got a genuine environmental issue.

Creating a peanut butter and jelly sandwich my way also leads to an easier-to-clean knife. After spreading the peanut butter on one slice of bread, you can wipe the knife on the other slice of bread to remove upwards of 90% of the residual peanut butter (Cf. “Peanut Butter Residue in Sandwich Making,” Journal of Viscous Foods 94, 2008, pp. 218-227.) This makes cleanup far easier than in your scenario, and results in potential environmental savings as well.

You do make two solid points. First, your PB&J mixture is potentially much more homogeneous than the usual sandwich arrangement, resulting in a more equitable PB-to-J ratio per bite. Here I can only revisit my aesthetic claim that eating a standard PB&J sandwich is more appealing than the greyish mixture you propose we slather on bread. Second, your sandwich creation process is indeed potentially less messy on the fingers than mine. To this I have no defense. Into each good life some jelly must fall.


Jim: Rebuttal

I must say, I find many of your points and counterpoints intriguing. All wrong, of course, but still intriguing. Let’s go through them, one at a time, and see where you go astray.

1) I grant both the utilitarian and aesthetic aspects of the sandwich. There are some truly beautiful sandwiches out there; few of them, however, are made at home and made solely of bread, peanut butter, and jelly. The maker of such a sandwich is often working in a limited environment, with a limited medium, under a time crunch; otherwise, utility be damned and let the sandwich artist sing. As for the smoothie sandwich, I doubt, as surely do you, that the sole goal of the creator (of the sandwich) is to ingest those ingredients as soon as possible. Barring a lack of teeth or the presence of an extremely tight throat, such an option is insane.

2) While I appreciate a gentle jibe as much as the next fellow, to imply that I lack the wrist strength to apply peanut butter to bread is going a bit far. Ad hominem attacks should have no place in philosophical discourse. It is not impossible to spread peanut butter on bread, and I will happily grant you the point that it is much easier to do so on ‘hearty wheat bread’. My point was and is that it is easier still if, to use a turn of phrase, the wheels have been greased a bit, and it is my contention that a peanut butter and jelly mixture does just that. However, you are correct that I have no scientific data to back that up, and I was under the impression that science need not enter civil discussion. Common sense, mere intuition even, seems to suggest that if jelly is easier to spread than peanut butter (and who would contest that?), then surely a mixture of peanut butter and jelly would be easier to spread than peanut butter simpliciter.

3) I fear I only have enough space left to deal with your point concerning the extra cleaning of a bowl. I did take a bit of latitude with that argument and will concede it to you, with but one addendum. In almost every home, or at the very least in a great many homes, I would guess that the dishes are not washed one at a time, but rather several at once, and rarely immediately after use. If the utilitarian nature of the PB&J sandwich is granted, time is at a minimum, and I suspect clean-up will have to wait for a more opportune moment. While an extra bowl is required during the creation of the sandwich, I do not think that one extra bowl needing to be washed would extend such washing time unduly.

Nonmonotonic Logic and Stubborn Thinking

I was struck recently by some similarities between the psychology of stubborn thinking and the history of science and logic. It’s not just individuals that have trouble changing their minds; entire scientific, logical, and mathematical movements suffer from the same problem.

Logic

When people think about logic (which I imagine is not very often, but bear with me on this), they probably think about getting from a premise to a conclusion in a straight line of rule-based reasoning — like Sherlock Holmes finding the perpetrator with infallible precision, carving his way through a thicket of facts with the blade of deduction.

Here’s a sample logical proof that would do Holmes proud.

Birds fly.
Tweetie is a bird.
Therefore Tweetie flies.

We have here a general principle, a fact, and a deduction from those to a logical conclusion.

The problem is that the general principle here is just that: general. It is generally the case that birds fly, but some birds do not fly at all. (Indeed, arguably no general principle applies universally: even the laws of physics are fallible in this sense. Cf. Nancy Cartwright’s wonderful How the Laws of Physics Lie.) Tweetie could be an ostrich or an emu, or Tweetie could have lost his wings in a window-fan accident, or Tweetie could be dead.

You could shore up your general principle in order to try to make it more universal: birds that aren’t ostriches, emus, wingless, or dead fly. But this sort of backpedaling is really an exercise in futility. As several decades of research in artificial intelligence, up through the ’90s, showed us, the more you expand your general principle to cover explicit cases, the less of a general rule it becomes, and the more you realize you have to keep covering more and more explicit cases, permutations upon permutations that will never end. AI researchers came to call this the qualification problem. (E.g., even in the case of death, Tweetie might be able to fly. He could be dead, but also in an airplane at 20,000 feet. Would you amend your general principle to cover this case? It would be a strange sort of “scientific” law that stated “Birds fly, except dead birds that aren’t in airplanes.”)

A brilliant solution to this sort of problem was found via the creation of nonmonotonic logic, a logical system that is what logicians call defeasible — that is, it allows for drawing a conclusion that can be undone by information that eventually emerges to the contrary. (The name comes from the fact that, unlike in classical logic, the set of conclusions does not grow monotonically as premises are added; new information can force old conclusions to be retracted.) So the idea is that a nonmonotonic system allows you to conclude that Tweetie flies via the logic above, but also allows you to change that conclusion if you then find out that Tweetie is, say, dead.

This may not seem like a big deal, since this is how a rational human is supposed to react on a regular basis anyway. If we find out that Tweetie is dead, we are supposed to no longer hold to the conclusion, as logical as it may be, that he flies. But for logicians it was huge. The old systems of logic pinned us helplessly to non-defeasible conclusions that may be wrong, just because the logic itself seemed so right. But now logicians have a formal way of shaking free of the bonds of non-defeasibility.
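
Here is a minimal, hand-rolled sketch (plain Python, not any actual nonmonotonic-logic library) of what defeasibility looks like in practice: the default conclusion stands until a defeating fact shows up, at which point it is withdrawn.

```python
# "Birds fly" treated as a default rule: conclude that a bird flies
# unless something we currently know defeats that conclusion.

DEFEATERS = {"is an ostrich", "is an emu", "is wingless", "is dead"}

def flies(individual, known_facts):
    """Defeasibly conclude that `individual` flies, given what we know now."""
    is_bird = (individual, "is a bird") in known_facts
    defeated = any((individual, d) in known_facts for d in DEFEATERS)
    return is_bird and not defeated

facts = {("Tweetie", "is a bird")}
print(flies("Tweetie", facts))     # True: the default conclusion holds

facts.add(("Tweetie", "is dead"))  # new information arrives...
print(flies("Tweetie", facts))     # False: the old conclusion is withdrawn
```

Notice the nonmonotonicity: adding a fact shrank the set of things we are willing to conclude, which is exactly what classical logic never allows.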

Science

The history of science is rife with examples of this principle-clinging tenacity, from which it took logic millennia to escape. A famous case is found in astronomy, where the idea persisted for more than a dozen centuries that the Earth was at the center of the universe. As astronomy progressed, it became clear that a simple model of circular orbits centered on the Earth would not suffice to describe the motion of the planets and the sun in the sky. So a parade of epicycles was introduced — circles upon circles upon circles of planetary motion spinning this way and that, all in order to explain what we observed in the Earth’s sky while still clinging to the precious assumption that the Earth is centrally located. The simpler explanation, that the Earth was in fact not the center of all heavenly motion, would have quickly done away with the detritus of a failing theory, but it’s not so easy to change science’s mind.

In fact, one strong line of thought, courtesy of Thomas Kuhn, has it that the only way for scientists to break free from such deeply entrenched conceptions is nothing short of a concept-busting revolution. And such revolutions can take years to gather enough momentum to be effective in changing minds. (Examples of such revolutions include the jarring transition from Newtonian to Einsteinian physics, and the leap in chemistry from phlogiston theory to Lavoisier’s theory of oxidation.)

Down to Earth

If even scientists are at the mercy of unchanging minds, and logicians have to posit complicated formal systems to account for the ability to logically change one’s mind, then in our daily lives we should be prepared to come up against an immovable wall of opinions, no matter what the facts tell us.

Indeed, it isn’t very hard to find people who have a hard time changing their minds. Being an ideologue is the best way of sticking to an idea despite evidence to the contrary, and ideologues are a dime a dozen these days. What happens in the mind of an ideologue when she is saving her precious conclusion from the facts? Let’s revisit Tweetie. (You can substitute principles and facts about trickle-down economics or global warming for principles and facts about birds, if you like.)

Ideologue: By my reasoning above, I conclude that Tweetie flies.

Scientist: That is some nice reasoning, but as it turns out, Tweetie is dead.

Ideologue: Hmmm. I see. Well, by “flies” I really mean “flew when alive”.

Scientist: Ah, I see. But, actually, Tweetie was an emu.

Ideologue: Of course, of course, but I mean by “flies” really “flew when alive if not an emu”.

Scientist: But so then you’ll admit that Tweetie didn’t actually fly.

Ideologue: Ah, but he could have, if he had had the appropriate physical structure when he was alive.

Scientist: But your conclusion was that Tweetie flies. And he didn’t.

Ideologue: Tweetie was on a plane once.

Scientist: But isn’t that more a case of Tweetie being flown, not Tweetie flying?

Ideologue: You’re just bogging me down in semantics. In any case, Tweetie flies in heaven now. Case closed.