The Doctrine of Double Effect

Sometimes doing the right thing involves a morally bad consequence. For instance, if someone is about to murder your family, and the only way to stop him is to kill him yourself, it certainly seems that the right thing to do is to kill the murderer. And yet the morally bad consequence of killing someone is still in play here.

It’d be great for moral philosophers if we could adopt simple moral rules that apply in every situation, like “thou shalt not kill”. But situations like the above make it clear that the world is seldom so kind to those of us who would plumb the depths of ethical reality. So, if you’re looking to create a coherent moral system, you’d better be able to explain why it is that you are justified in killing a murderer who is intent on killing you and your family. Under what circumstances is killing okay?

Perhaps if we view the killing in this situation as a regrettable consequence of doing the right thing… That is, perhaps the moral action of saving your family — even if it results in the killing of someone — is the real action that you are undertaking. And perhaps the killing of the murderer is a tangential, unavoidable, bad moral consequence. In this analysis, we might be able to work things out to the effect that you are not a killer — you are a family-saver whose actions led (regrettably) to an unintended killing.


Aquinas, back in the 13th century, considered a similar situation, and came up with four conditions that he thought must be met for an action with a tangential bad moral consequence to count as moral:

  1. The Nature-of-the-Act Condition. The action itself cannot be morally wrong.
  2. The Means-End Condition. The bad effect must not be the means by which the good effect is achieved.
  3. The Right-Intention Condition. The intention must be the achieving of only the good effect with the bad effect being only an unintended side effect. The bad effect may be foreseen, but not desired.
  4. The Proportionality Condition. The good effect must be at least as morally good as the bad effect is morally bad.
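Just to make the bookkeeping explicit, here is a toy sketch of the four conditions as a checklist in Python. (The field and function names are my own inventions, purely for illustration; the hard philosophical work is deciding what values to plug in, not running the check.)

```python
from dataclasses import dataclass

@dataclass
class CandidateAction:
    # The four fields mirror Aquinas's four conditions (names are mine).
    act_itself_is_wrong: bool      # 1. Nature-of-the-Act
    bad_effect_is_means: bool      # 2. Means-End
    bad_effect_is_intended: bool   # 3. Right-Intention
    good_outweighs_bad: bool       # 4. Proportionality

def double_effect_permits(a: CandidateAction) -> bool:
    """All four conditions must hold for the action to be permissible."""
    return (not a.act_itself_is_wrong
            and not a.bad_effect_is_means
            and not a.bad_effect_is_intended
            and a.good_outweighs_bad)

# Aquinas's reading of the self-defense case: the act is "saving your family";
# the killing is foreseen but neither intended nor the means to the good effect.
saving_family = CandidateAction(False, False, False, True)
print(double_effect_permits(saving_family))  # True
```

Notice that the whole dispute below is over how to fill in the first two fields, not over the checklist itself.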

If Aquinas’ analysis is on the money, then you can save your family with a clear moral conscience, despite the fact that you wound up killing someone in order to do it.

Unfortunately, in the case of killing the murderer, we hit a pretty significant problem right off the bat with condition one. The action itself here seems to be one of killing. Isn’t this almost definitionally morally wrong? To get himself out of this fix, Aquinas argues that the actual action undertaken here is saving one’s family, and that the killing is the bad but unintended side effect: “Accordingly, the act of self-defense may have two effects: one, the saving of one’s life; the other, the slaying of the aggressor.” I’m not sure I buy that, but let’s step through the other conditions…

Actually, condition two seems problematic as well. Indeed, the saving of your family’s lives seems to be a direct consequence of you killing the murderer. But Aquinas would argue that actually the bad effect of killing the murderer somehow comes later in the chain of cause-effect than the good effect of saving your family. Honestly, this seems like complete bullshit to me, but let’s keep riding this train to the station and see where we end up.

Condition three seems really to get at the heart of the matter. You don’t first and foremost intend to kill the murderer; you intend to save your family. Perhaps this is really the keystone of moral goodness. If you don’t intend to kill the murderer, then you’re not committing murder yourself. But if killing the murderer is something that has to happen in order for you to save your family, then so be it.

Condition four is also conceivably well-met by our case. Saving your family, ceteris paribus, is arguably at least as morally important in the positive as killing the murderer is in the negative.

Abortion and Euthanasia

The Catholic Church has used Aquinas’ thoughts on double effect to weigh in on two weighty moral issues of our time: abortion and euthanasia.

Many have argued that even if abortion is immoral, it is morally permissible to perform an abortion to save the life of the mother. The Church, contrary to this, has argued that saving the life of the mother in this sort of case would fail to meet both criteria one and two above.

But you can apply this same reasoning to the case of self-defense above. I’ll leave it to the reader to cogitate on this further. (Hint: If saving-your-family is the true and moral act in the first case, then why isn’t saving-the-mother the true and moral act in this case? In both cases, then, the killing would be consequent to the saving.)

The Church meant to draw a distinction between plain abortion and, for instance, performing a hysterectomy on a pregnant woman with uterine cancer. In the case of our cancerous woman (so goes the Church’s logic), the result of the hysterectomy would be an abortion, but the actual intention of the doctors is to save the woman from cancer, not to kill her fetus. This is a nifty bit of face-saving, but, again, isn’t the real intention of the doctors in the abortion case to save the woman’s life? And thus the abortion is secondary to the life-saving, and should be morally acceptable.

There’s a similar Church line taken on euthanasia. A doctor killing a patient with an overdose of morphine is (argues the Church) unacceptable, because it fails conditions one and two again. That is, even if the desired end-result is that of mercy, getting to that end via a morally bad act (killing) is wrong.

However, the Church does allow doctors to administer a potentially lethal dose of morphine when the intention is to relieve pain. That is, if the act in question is the morally good one of pain relief, then the unintended consequence of death is morally acceptable.

We’ll leave it to another day to discuss the absurdity of the presumed immorality of euthanasia, but note again that these two situations are really not that different. No doctor (or no doctor I’ve ever met, anyway) outright intends to kill her patients. They intend to ease suffering, and they know that death is often the ultimate and only suffering-ender that will work in some unfortunate circumstances.

Trolley Cases and Double Effect

Are you up to speed on philosophical trolley problems? If not, take a quick look at our primer on the subject. In fact, it was the publishing of two recent books on trolley problems in philosophy that got me thinking about double effect for this post. (Both are excellent little books, by the way, and well worth a read: Would You Kill the Fat Man, by David Edmonds, and The Trolley Problem, or Would You Throw the Fat Guy Off the Bridge? by Thomas Cathcart.)

Some will use the doctrine of double effect to justify their intuitions about trolley cases. For instance, in the standard case, a driver of a train with no brakes can either continue down his track and kill five unsuspecting workers, or divert the train down a spur and kill one unsuspecting worker. It turns out that most people believe that killing the one worker is the right thing to do in this situation. And often people will cite utilitarian reasoning here: ‘Well, one life isn’t as valuable as five, so it’s the right thing to kill one if you can save five.’

But if we change the circumstances of our thought experiment, the utilitarian justification loses some weight. Say the only way to save the five workers is to push a heavy object in front of the train. But the only object heavy enough is a fat man who happens to be above the tracks on a bridge. Would it be the right moral thing for you to push the fat man off the bridge and let the train run over him, saving the five lives further down the tracks? Well, it turns out that the general moral intuition here is that it’s actually not the right thing to do. And, if this intuition is correct, utilitarianism fails here. But the doctrine of double effect could be used to explain things! In the first trolley case, you don’t intend to kill the one worker on the spur. And your action isn’t really killing that worker — the action is saving the five workers by steering the train down a different track. The killing of the one worker that results from your action is regrettable, but is not the intended effect of the whole affair. But in the case of the fat man, you have to take direct action against the one person in order to save the five. Your action is directly killing the fat man.

As with the above analyses, I think there’s something actually amiss here. If you put an intermediate step in between your action and the fat man dying, that wouldn’t make it any more or less acceptable. There has got to be another analysis that we can apply.

And, in the spirit of cliffhanger serial short movies from the golden age of Hollywood, I’ll leave you with the promise that we’ll explore this different analysis in a future post…

Choosing a Kantian Maxim

Explaining anything about Immanuel Kant’s philosophy in a short blog post is a daunting and perhaps foolish task, but I am nothing if not undaunted and foolish.

I’d like here to address a particularly problematic aspect of Kant’s ethical philosophy (and don’t let the terminology scare you off — it’s not as difficult as it’s about to sound): how is one supposed to go about applying Kant’s categorical imperative by way of universalizing a personal maxim?

Kant’s categorical imperative is the only pure (he had a thing about purity) moral law he could come up with, and it boils down to this: “Act only on that maxim by which you can at the same time will that it should become a universal law.” A maxim is a personal “ought” statement, like “I ought to save that puppy from that oncoming truck”. A universal law is generated from a maxim by applying it to the entire rational population. E.g., “Every rational person ought to save puppies from oncoming trucks.” And Kant’s categorical imperative asks us to use this process every time we wish to make an ethical choice: Come up with a personal maxim for the situation; universalize that maxim; and see if that universal law is something that should be followed by every rational person in every such situation.
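The universalization step itself is mechanical enough to sketch as a toy string transformation (purely illustrative, and my own framing, not Kant's; the hard part, of course, is evaluating the resulting law, not generating it):

```python
def universalize(maxim: str) -> str:
    """Turn a personal 'I ought to ...' maxim into its universal law.

    A toy illustration of Kant's procedure, not a serious formalization.
    """
    prefix = "I ought to "
    if not maxim.startswith(prefix):
        raise ValueError("a maxim should be a personal 'I ought to ...' statement")
    return "Every rational person ought to " + maxim[len(prefix):]

print(universalize("I ought to save that puppy from that oncoming truck"))
# Every rational person ought to save that puppy from that oncoming truck
```

As we'll see below, everything interesting (and everything problematic) happens in the step this sketch leaves out: deciding whether the universalized law is one every rational person could consistently follow.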


Let’s go through an example of Kant’s process. Let’s say you’re faced with an instance where lying would be expedient. Here, then, is your personal maxim for the situation:

Maxim: I ought to lie in order to get out of a jam.

And then Kant asks you to universalize it:

Universal Law: Everyone ought to lie in order to get out of a jam.

According to Kant, this universalized version of your personal maxim shows us that your maxim is in fact immoral. Even though your maxim may seem harmless, and is certainly beneficial to you in the short term, by extending its reach to the whole of humanity, there arises something very bad. If we look at a world where everyone lies in every dicey situation, well, this is a world that is in trouble. And, thus, according to Kant, you should never lie. Period. No exceptions.

Lying to Nazis

This position leads to some obvious problems.

Say you’re in 1940 Germany, and you are harboring your Jewish neighbor in your attic, in order to protect her from the Nazis, who would like to find and kill her. Now imagine that the Nazis knock on your door and ask you: “Are you hiding any Jews in your attic? We’d like to kill them if you are.” The relevant moral question here, of course, is what do you do? Perhaps, as Kant thought, lying is a bad thing, but if you tell the truth in this situation, it will lead to your neighbor’s unwarranted death, which certainly seems worse, on the face of it.

Let’s look in a little detail at how Kant might have examined this situation. His logic went something like this:

  • If it’s okay for you to lie, then (according to the universalization of this maxim) it’s okay for everybody to lie.
  • But if everyone lies, then no one will ever believe anything anyone says.
  • And, thus, lies would become completely ineffectual.
  • Therefore, lying is a rationally inconsistent activity — it leads to its own conceptual destruction.

This rational inconsistency is at the heart of Kant’s claim that lying is immoral — he thinks that ethics has to be based on irrefutable, logical principles in order for it to be anything besides an argument over opinions. A concept that leads to its own self-destruction certainly shows us that there is something inherently wrong with it. And so lying, in virtue of this, is immoral.

Choosing Your Maxim

But let’s look more closely at the procedure of picking your maxim in the lying example.

I should lie in order to help someone.

Is this a good candidate for a personal maxim? Well, no, not really. It’s certainly not generally applicable to moral situations. For instance, one could pretty easily argue that lying in order to help a mad bomber who is about to kill a thousand innocent people is probably not a very ethical thing to do.

I should lie in order to keep someone safe.

No, this has the same problem… what if you’re lying in order to keep the mad bomber from being arrested? This is arguably not a moral thing to do.

I should lie in order to save a life.

We’re getting better, but we still have the same problem lurking. If your lie is to save the life of an evil person, it’s at least arguable that the lie is not the morally right thing to do.

So let’s include something in our maxim to account for the idea that you are lying to protect someone innocent:

I should lie in order to save an innocent person from death at the hands of an evil person.

What happens if we universalize this maxim?

Everyone should always lie in order to save an innocent person from death at the hands of an evil person.

This is not bad, actually, but there’s still the Kantian objection of conceptual self-destruction lurking: If we always lie to evil people who want to kill innocent people, the evil people will start to catch on, and thus the lies will become self-defeating.

In fact, the example of lying is one of the best for Kant’s system — when he applies his system to other sorts of moral cases, it all starts to go to hell. But with lying, he has found a case where there is something internally irrational about the endeavor, when applied universally. But I’d like for a moment to talk about a general problem with Kant’s procedure. How, exactly, do you go about choosing your maxim?

The Problem of Specificity

One major problem here is that of specificity of the maxim you choose.

You could make your maxim very general:

I should lie to strangers.

This is just about the most general maxim you could use here; and certainly this isn’t universalizable. Not only would you not want to universalize it (“everyone should lie to every stranger” would be an odd moral rule!), but it harbors the same problem of lies being self-defeating.

What about if you go to the other extreme, and choose a very specific maxim?

I should lie in order to save the life of the Jewish person hiding in my attic in 1940 Germany from the Nazis who will kill her.

This is about as specific as you can get with your maxim. And actually this is pretty well universalizable, because by universalizing it you don’t lose much specificity — your universalized law is still quite specific and actually probably a good moral rule:

Everyone should lie in order to save the life of the Jewish person hiding in Alec’s attic in 1940 Germany from the Nazis who will kill her.

(You might generalize the universal law here a bit more: Everyone should lie in order to save the life of the Jewish person hiding in his or her own attic in 1940 Germany from the Nazis who will kill that Jewish person. Still, this is arguably easy to accept as a good universal law.)

The issue here is that very specific maxims will be easy to universalize, while very general ones won’t. And this is a problem because very specific maxims will usually be very uninteresting as the basis of moral tenets. Very general ones will usually be interesting.

Imagine instead of a moral law like “Murder is wrong”, we had a law that said “Murdering Joe Smith on August 24, 1968, because he applied the wrong postage to a letter, is wrong”. Other ethicists would mercilessly laugh us out of the business. Our law may be true, but is not very interesting.

So the only way to use Kant’s procedure to generate a sound moral rule is by picking a maxim that is so specific that it is morally mundane.

Other Problems With Kant

There are a million and one problems for Kantian ethics (although there are a million and two Kantian ethicists in the philosophical community today). But perhaps the most obvious concern with Kant’s ethics is that it explicitly refuses to account for the ends of one’s actions. Most of us are disposed to say that killing a mad bomber in order to save a thousand innocent lives is a moral action, regardless of the fact that it involves killing someone. Kant disagrees, saying we can’t rely on a good outcome (saving a thousand lives) as the basis of our ethics.

He’s got a point. What if you decide to kill the mad bomber, but by a fluke of luck you actually wind up wounding him instead, and he escapes, only to kill ten thousand people the next day? That fluke of luck turns you from a hero into a villain. This idea of moral luck is a fascinating topic on its own, but for our purposes here, it does cast Kant’s hardcore position in a somewhat better light. If good outcomes are dependent on luck, then perhaps a genuinely moral decision shouldn’t depend on its outcome — perhaps a good act is good no matter what the outcome.

Famously, a school of moral philosophy called utilitarianism (or more generally consequentialism) sprang up in direct opposition to this perspective. We’ll talk about some of its pluses and minuses in a future post.

Trolley Problems

The so-called trolley problems form a set of ethical thought experiments meant to delve into our intuitions about killing, letting die, rights, and obligations.

Driver’s Two Options

The problems come in many forms, but here is the original version. There is a train (or trolley, but who the hell thinks about trolleys anymore) with failed brakes, about to barrel down upon and kill five unsuspecting rail workers. The driver can continue down this track, or steer to the right onto a spur where there is one unsuspecting rail worker awaiting certain doom. What should the driver do?

[Figure: Trolley - Driver's Two Options]

The intuition that is generally thought to be prompted by this is: the driver should steer to the right, killing one but saving five. It’s a numbers game wherein, other things being equal, one should kill as few people as possible. Killing one person, it is thought, is better (or less horrible) than killing five.

Of course, one may take issue with this intuition in any number of ways. For instance, there’s the “other things being equal” clause, which we’ll address shortly. (As a preview, imagine that the one worker is close to discovering a cure for cancer, and the five are shiftless hooligans. Perhaps in such a case the numbers game changes, and the utility of the one outweighs the utility of the five. More on this soon.) But to get at a more subtle problem with the case, let’s examine another trolley problem — one without any trolleys.

Judge’s Two Options

This time, imagine a judge faced with the following dilemma. A serial killer has been killing people for months, and everyone is getting understandably nervous. A vigilante group takes five innocent people hostage, and says to the judge: “if you don’t catch the killer and sentence him immediately to be executed, we will kill all five hostages.” The judge, not knowing who the killer is, has the following choice: do nothing and let the five innocent people die, or sacrifice an innocent person as a scapegoat to appease the vigilantes, thus killing one but saving five.

The intuition meant to be provoked here is that the judge has no moral right to sacrifice an innocent person’s life, regardless of any good consequences that act might have. So, in this case, as opposed to the initial trolley problem, the supposed moral is that it is not acceptable to save five by killing one.

So now we have two cases where killing one person would save five other lives, but in one case the killing of the one seems to be morally acceptable, and in the other the killing of the one seems to be morally unacceptable. What is the morally significant difference between these cases?

Killing versus Letting Die

Perhaps the difference is between killing and letting-die. In the case of the judge, she is not actually killing the five hostages (the vigilantes will do the killing), she is letting them die. If she were to sentence the one innocent person to execution, that would be much more of a case of direct killing. In the original trolley case, the driver has the choice between directly killing five or directly killing one. You might argue that faced with such a choice, the only morally significant factor is the numbers. But the judge is faced with a different situation, wherein she can either kill one or let five die. The numbers add up differently here, perhaps.

But perhaps not.

What happens if we eliminate the driver in the trolley case? Our train is driverless and brakeless, and barreling towards our five workers. A bystander is standing by a switch in the tracks, and can either do nothing, letting the five workers die, or throw the switch and send the train to the right, killing the one worker on the spur. What should the bystander do?

[Figure: Trolley - Bystander's Two Options]

The intuition here is generally that the bystander should throw the switch and kill the one, saving the other five. But wait — our judge was supposed to let the five hostages die, so as to avoid killing one. Why is our bystander obligated to kill one in order to save five, when the circumstances seem so similar?

Well, you could argue that the bystander’s case isn’t different at all from the judge’s case, and that, therefore, he should not throw the switch. What if the bystander had three options: throw the switch one way and kill the one, do nothing and let the five die, or throw the switch the other way and kill himself?

[Figure: Trolley - Bystander's Three Options]

Is the bystander morally obligated to throw the switch and kill himself? It would certainly be nice of him, but this would generally be regarded as the act of a Super Samaritan, going above and beyond the normal obligations of morality. But if our bystander is not obligated to save five lives by sacrificing his own life, then perhaps he is not obligated to pay this price with someone else’s life. That is, perhaps the bystander in the two-options case should indeed, like the judge, let the five die, rather than sacrifice one in order to save five.

The Medicine

We’re getting further away from our initial reasoning in the first trolley case, in which we thought numbers were the primary factor. (I.e., if you have a choice between saving one life and saving five, you should generally choose to save five.) But now we’ve seen some cases in which we should choose to save one instead of five. Could it be that in general the numbers aren’t the relevant moral factor?

Here’s another trolley case to consider (another one without any trolley). Six people all need a special drug in order to live. You have enough to treat either the one (who needs all of the medicine you have) or the other five (who each need a fifth of the medicine you have). What should you do?

This is another case that, on the face of it, harkens back to our original trolley case. It seems as if, everything else being equal, you should probably save the five instead of the one (let’s call the one “David”), because surely the numbers matter here. Of course, there could be special circumstances involved, and here we have to return to the “everything else being equal” clause that I promised to talk about earlier. Perhaps David has a far greater utility than the five — perhaps he is a cancer researcher, while the five are ne’er-do-wells. Or perhaps the five are all evil — murderers or Nazis or CEOs or what have you — while David is a relatively good person. Or perhaps the five are all old and otherwise sick and fairly near death, while David is young and vibrant. Or perhaps there is a more hybrid socio-moral reason to choose to save David over the five: perhaps you are David’s parent, or David’s doctor, or you have signed a binding legal contract to give your medicine to David. These are all justifiable moral factors that break the “everything else being equal” clause here, and would morally allow you to give the medicine to David.

But what if you were simply David’s friend, and had no other reason to give him the medicine than that you want to? Would this make it into the list of justifiable moral reasons to save David instead of saving the five? Well, generally the intuition is that it is indeed not such a reason. You have no moral or contractual obligation to save David, you just want to save him. And generally this isn’t thought to be a good moral reason to act.

But maybe this is wrong. Suppose now that the drug is owned by David, not you. Would you try to persuade him to give his medicine to the five others? Should you? I should think not. David values his life more than the five strangers’ lives, and no amount of utilitarian mathematics would convince him otherwise (“come on, David — five lives are worth five times the value of your life, and so you should give the five your medicine…”). And David is certainly not violating anyone’s rights by keeping his own medicine — none of the five has any claim to the drug. It would be an act of supreme Samaritanism to give up his own medicine to save others.

But given this new analysis, perhaps in our previous case we were too hasty in throwing “I want to give David my medicine” into the category of morally unacceptable reasons. Perhaps valuing David’s life is a morally acceptable reason for saving his life to the detriment of five others. It is still the case that none of the five has any claim to the drug. (Nor does David, of course.) It’s my drug. But perhaps my valuing David’s life is enough to eclipse the concern about the numbers here.

Weighing human lives

What, then, about the original case where you have no special concerns for any of the parties involved? Perhaps the numbers still aren’t an important concern here. John Taurek (from whom I took this example) claims just this, and says we should simply flip a coin. Heads: we save David. Tails: we save the other five. This way, each of the six people has a 50-50 chance of living. Taurek’s point is that we can’t measure the value of human lives — at least not in the way that we can measure the value of, say, jewelry. And so, left without this sort of measure, and without any other factors that would count towards breaking the “everything else being equal” deadlock (such as friendship), we should fall back to a random choice. Of course, like many philosophers, he goes a bit too far with his zealotry. He says there’s no difference in a case where you’d be weighing 50 lives against one; I suppose he’d go to the extent of saying there’s no difference in a case where you’d be weighing 5,000,000 lives against one, or 5,000,000,000 against one. But clearly this is just insanity. Just because you can’t weigh a human life’s value in the same way as a necklace’s doesn’t mean there’s no way to measure its value at all. And it certainly doesn’t mean that 5,000,000,000 lives can’t be seen as more valuable than one.
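For what it's worth, here is the simple expected-value arithmetic behind the aggregative objection to Taurek's coin flip. (A toy Python check, using exactly the utilitarian bookkeeping Taurek rejects; that rejection is his whole point, so this is an illustration of the objection, not a refutation of him.)

```python
# David needs all the medicine; the five each need a fifth of it.
# Policy 1 (Taurek): flip a fair coin between saving David and saving the five.
# Policy 2: always save the five.
p_heads = 0.5  # heads: save David (1 life); tails: save the five (5 lives)
coin_flip_expectation = p_heads * 1 + (1 - p_heads) * 5
always_save_five = 5

print(coin_flip_expectation)  # 3.0 expected lives saved under the coin flip
print(always_save_five)       # 5 lives saved by always saving the five
```

The coin flip gives every individual the same 50% chance of survival, at the cost of two expected lives; whether that trade is even a meaningful quantity is precisely what's at issue.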


Perhaps the correct account of trolley cases must examine avoidability. Take for instance a new non-trolley trolley case: The Surgeon’s Two Options. A surgeon has six patients, five of whom will die very soon without various organ transplants, and one of whom has a broken toe but is otherwise vital and healthy. By an extreme coincidence, the patient with the broken toe has exactly the right blood and tissue types to match all of the five other patients, and thus would be as perfect a transplant match as could be without being a relative. The surgeon is thus presented with two choices: harvest the organs of the patient with the broken toe, and thus save the lives of the other five patients; or merely fix the patient’s toe and let the other five patients die.

In this case, if the surgeon harvests the organs, she has avoidably violated the rights of the patient with the broken toe. That is, she could have not taken the organs, and thus not violated the patient’s right to life, liberty, and the pursuit of metabolism.

In the original trolley case, whichever decision is made, someone will die. But it will be unavoidable. There’s nothing the driver could do to stop the killing. And if he decides to take the spur and kill the one worker instead of the five, there’s nothing about his decision that could have been impacted by the worker’s wishes. In the surgeon’s case, she could simply have asked the toe patient if he minded having his organs harvested, and the matter would have been perfectly clear.

On one reading of the surgeon’s case, the numbers don’t count, simply because rights are being avoidably violated. On a similar reading of the trolley case, the numbers do count, simply because there are no other morally relevant factors. (And, despite what Taurek claims, the numbers are indeed morally relevant.)

Killing versus Letting Die versus Withdrawing Aid

There’s one more thing we should look at regarding the killing versus letting-die discussion: the grey area between the two, withdrawing aid. It will take us to an interesting place, in the end.

One take on the difference between killing and letting-die is that killing is an act of doing, and letting-die is an act of allowing. (You might have picked up on the strangeness of an act of allowing. That is, you might think these things reside in different metaphysical categories; i.e., you don’t act in order to allow something to happen — in fact, you have to not act in order to allow something to happen. But I think there’s an implicit action in deciding not to act. More on this soon.) And if this is the proper analysis, then we can apply a similar analysis to the original trolley case and the surgeon’s two options. The driver could just stay on the main track, allowing the train to do what it would have done on its own; and this could be seen as an act of letting-die. If letting-die is a less serious moral offense than actively killing, then perhaps letting five die is still less egregious than killing one. In the case of the surgeon, we have the same issue: letting five die might be less morally egregious than killing one, and thus you’d have your moral decision.

But what about murkier cases of withdrawing aid? Take, for example, this: You are swimming with a friend, and she starts to drown. You start to rescue her, but she is so scared and disoriented that she begins to pull you down with her. You realize that you will both die if you don’t disengage from your rescue attempt. You abort the attempt, and she dies. Did you kill your friend, or allow her to die? Well, you have certainly acted, by pushing your friend off of you, but does this really rise to the level of killing? Perhaps you want to say that your action was one of withdrawing aid, which you might well argue is less morally egregious than an act of killing.

We can fruitfully look here to Judith Jarvis Thomson’s famous thought experiment of the violinist. You have been kidnapped by a radical music-lovers group, and you wake up in a hospital bed next to a world-famous violinist. You are told that the violinist needs your kidneys in order to survive, and so has been hooked up to you while you were unconscious. The question is whether or not unhooking yourself from the violinist is murder. (The original case is meant to show us something about the ethics of abortion.) You might argue, as in the last example, that this is a case of withdrawing aid rather than that of outright killing the violinist.

What if, in a similar scenario, while you ponder what to do, the violinist’s arch-enemy sneaks into the room and disconnects you? This is as much a case of withdrawing aid as the last one, but it may strike you differently somehow. It seems more like killing than when you disconnect the violinist yourself.

My take is that these cases are both acts of killing. But when you disconnect yourself, it’s a justified killing. That is, you have rights that have been violated, and it is thus a right you have to disconnect yourself. That said, I think it’s still an act of killing — justified or not, we should call it what it is. The violinist’s enemy does not have the right to kill him, and so this is not a justified killing, though a killing it obviously still is.

The Proper Analysis

Is the trolley problem solvable in every variation via the same reasoning? I doubt it. Hundreds have tried, of course, and perusing the literature is a fascinating pastime for those who are curious. But I do think that, as in many of the cases above, the proper analysis will usually involve an examination of the rights involved, and that this will often take the moral high-ground above any arguments regarding killing, letting-die, or anything similar. We’ll take a closer look at rights-based systems of ethics in future posts.


References

McMahan, Jeff. (1993) “Killing, Letting Die, and Withdrawing Aid”. Ethics 103.

Naylor, Margery Bedford. (1988) “The Moral of the Trolley Problem”. Philosophy and Phenomenological Research 48.

Taurek, John. (1977) “Should the Numbers Count?” Philosophy & Public Affairs 6.

Thomson, Judith Jarvis. (1971) “A Defense of Abortion”. Philosophy & Public Affairs 1.

Thomson, Judith Jarvis. (2008) “Turning the Trolley”. Philosophy & Public Affairs 36.

Kantian Ethics in a Nutshell

One of my students just wrote the following brilliant observation of Kantian Ethics:

“In my opinion, to be moral in Kant’s eyes is to be miserable.”



The Stoics

When I was in grad school, the Stoics were roundly ignored by my professors, and never mentioned by my fellow students. I had run across Epictetus at one point — a Roman slave whose handbook on how to live a good life harbors such uplifting gems as:

Never say of anything, “I have lost it”; but, “I have returned it.” Is your child dead? It is returned. Is your wife dead? She is returned. Is your estate taken away? Well, and is not that likewise returned? “But he who took it away is a bad man.” What difference is it to you who the giver assigns to take it back? While he gives it to you to possess, take care of it; but don’t view it as your own, just as travelers view a hotel.

I thought at the time that Epictetus’ Handbook was a better reflection of his own psychology than it was a useful aid to achieving a good life. I also, frankly, thought it was rather depressing, and I thought no more about it for a long time.

But revisiting the Stoics recently was an interesting exercise, and I think, after all, that they are an underappreciated bunch.

To get a sense of what they were about, I refer you to this excellent lecture on the subject, by Philip Hansten. (It’s in three parts, but the whole thing is only about 45 minutes in total.)

Any fans or disciples of Stoicism out there? Let us know how you got into it and whether or not Stoicism is a good set of guides for living your life…