Type Book
Date 1984
Pages 543
Tags nonfiction, philosophy, ethics

Reasons and Persons

Table of Contents

Part One: Self-Defeating Theories

Chapter 1: Theories that are Indirectly Self-Defeating

§1: The Self-interest Theory

I shall first discuss the Self-interest Theory, or S. This is a theory about rationality. S gives to each person this aim: the outcomes that would be best for himself, and that would make his life go, for him, as well as possible.

There are three 'plausible' theories, or types of theory, that tell one what is best: the Hedonistic Theory tells me that the greatest (personal) happiness is best; the Desire-Fulfilment Theory (or Success Theory) tells me to fulfil my desires, whatever they may be; and the Objective List Theory prescribes a list of good things to be sought, and bad things to be avoided.

Of note is that these theories assign weights independent of time, so events that happen later do not count for less merely because they happen later than other events.

Parfit distinguishes between ultimate and instrumental aims: the latter are desired only as a means of achieving the former.

§2: How S can be Indirectly Self-defeating

A theory is indirectly individually self-defeating if, when I attempt to achieve its aims, those aims are worse achieved.

A person who is purely self-interested, or never self-denying, never does what he believes will be worse for him.

Parfit gives the example of Kate, a writer, who works so hard on her books that she makes herself miserable. She would be happier if she worked less hard, but she would only work less hard if she cared less about the quality of her books, and in that case, she would find her work boring and be less happy overall.

Kate is a Hedonist, so she believes she should do what will make her happiest. However, aiming to be happier by changing her motivation so that she will work less hard would in fact make her less happy, i.e. it is better for her to be self-denying (accepting misery from overwork) than to be 'never self-denying' (aiming for happiness but ending up bored).

👎 I don't like this example. Parfit presents a false choice to Kate: she cannot actually be happier by working less, since she cannot achieve this with her present desire to write well, and she will not be happier if she changes her desire to be less strong. So, she is not, in my view, being self-denying by continuing to work very hard--she does not have the option of being happy while working less, even if, had she the option, she might do better to take it. One is not self-denying by failing to take an option that is not available.

Then, Parfit gives the following:

Suppose that I am driving at midnight through some desert. My car breaks down. You are a stranger, and the only other driver near. I manage to stop you, and I offer you a great reward if you rescue me. I cannot reward you now, but I promise to do so when we reach my home. Suppose next that I am transparent, unable to deceive others. I cannot lie convincingly. Either a blush, or my tone of voice, always gives me away. Suppose, finally, that I know myself to be never self-denying. If you drive me to my home, it would be worse for me if I gave you the promised reward. Since I know that I never do what will be worse for me, I know that I shall break my promise. Given my inability to lie convincingly, you know this too. You do not believe my promise, and therefore leave me stranded in the desert. This happens to me because I am never self-denying. It would have been better for me if I had been trustworthy, disposed to keep my promises even when doing so would be worse for me. You would then have rescued me.

This is much more convincing. It is the disposition of being never self-denying that is the source of the problem, and within the boundaries of the example, there's no way around it. Notice that the real, practical problem here is the inability to keep a commitment--being never self-denying means never keeping any promises that make one worse off, going forward. I know that I will change my mind later, and I can't convincingly lie, so I am worse off now.

Of course, that assumes that there is no cost to breaking a promise. If I made a promise in order to gain a benefit equal to B, and it would make me lose benefit in an amount greater than B to break that promise, then I could be trusted to keep the promise, and being never self-denying would not present a problem. Naturally, it'd be even better to gain the ability to lie convincingly--or to deceive myself!--if there were no further consequences.

In some sense, this operates as though my future breaking of the promise is retroactively causing me harm. Taken that way, being never self-denying does not prevent me from keeping a promise--but that's going a little far.

§3: Does S Tell Us to be Never Self-denying?

In short, no. If it'd be worse for me to be never self-denying, then I shouldn't be never self-denying.

§4: Why S does not Fail in Its Own Terms

S might tell you to change your beliefs, so as to believe in a theory other than S, or to change your disposition. See also sections 6-8 and 18.

§5: Could it be Rational to Cause Oneself to Act Irrationally?

Parfit gives the example of Schelling's Answer to Armed Robbery: a robber will torture me, and kill my children, in order to induce me to give him my gold. Even if I give him the gold, he'll probably kill us all, anyway. So, I take a drug which will cause me to be "very irrational" for a long enough time that the police will arrive. Now I am not susceptible to any threats (though, being irrational, I may cause harm to myself or my family), so the robber should simply leave to get the best chance of escaping the police.

Now Parfit says:

On any plausible theory about rationality, it would be rational for me, in this case, to cause myself to become for a period irrational. This answers the question that I asked above. S might tell us to cause ourselves to be disposed to act in ways that S claims to be irrational. This is no objection to S. As the case just given shows, an acceptable theory about rationality can tell us to cause ourselves to do what, in its own terms, is irrational. Consider next a general claim that is sometimes made:

(G1) If there is some motive that it would be both (a) rational for someone to cause himself to have, and (b) irrational for him to cause himself to lose, then (c) it cannot be irrational for this person to act upon this motive.

In the case just described, while this man is still in my house, it would be irrational for me to cause myself to cease to be irrational. During this period, I have a set of motives of which both (a) and (b) are true. But (c) is false. During this period, my acts are irrational. We should therefore reject (G1). We can claim instead that, since it was rational for me to cause myself to be like this, this is a case of rational irrationality.

👎 I'm not so sure about this hair-splitting about rational irrationality. I have two objections here.

First, I am not at all sure it's reasonable to analyze my behavior while under the influence of the drug as being rational or irrational.

What does the drug do? There are two reasonable interpretations of the drug causing me to become very irrational. One, it may cause me to do the opposite of what I should, rationally, do. This aligns somewhat with the described behavior, e.g. "I say to the man: 'Go ahead. I love my children. So please kill them.'", but it's not satisfying. A good degree of insight is ascribed to the robber, so he would simply use 'reverse psychology' and beat me that way, in this case.

An alternative interpretation seems better: the drug causes my actions to be totally disconnected from my objectives. Now I really am immune to persuasion. But is it sensible to consider my actions as irrational in that case? My state is caused by the drug, and I have no freedom to choose otherwise. As with Kate in §2, Parfit is claiming irrationality because I am not taking an alternative that does not exist. In fact, for the duration of the drug's effect, I am, effectively, making no choices whatsoever, at least with respect to S, and I am not capable of doing otherwise. This is my first objection.

Second, if we consider my actions while under the influence of the drug from the perspective of S as either working for or against my self-interest, then I do not agree with the characterization of the actions as irrational: they are, overall, acting for my self-interest. Parfit admits as much! It would be irrational for me to cause myself to lose this 'motive' precisely because it is working in my interest. It is, as Parfit says, a rational 'motive' to have. My actions, therefore, are only irrational by definition--but this is not convincing. It's not specific enough to reason about. This is my second objection.

What if the drug instead caused me to fall into an uninterruptible sleep for fifteen minutes, and might cause me to die? This drug is closely analogous to the one presented by Parfit, but it clearly isn't an example of irrationality--at worst, it is a cessation of rational action. The drugs are identical in terms of my voluntary decisions (which, in my view, are the only things properly described as rational or irrational): none are being made.

The robber presents too strong a constraint on my actions, and the drug exerts too great a control over them.

§6: How S Implies That We Cannot Avoid Acting Irrationally

A person can be rational, or at worst only very weakly irrational, even while acting irrationally, and that's okay.

§7: An Argument for Rejecting S when It Conflicts with Morality

S might tell us to acquire a belief that, for example, one should be never self-denying, except when keeping promises. Acquiring this belief might be better for us, but does that mean that this belief is, in fact, rational?

§8: Why This Argument Fails

A threat-ignorer might suffer by ignoring a threat-fulfiller who, for some reason, makes a threat. Then it would be rational not to ignore that threat. So, just because S told us to believe that it is rational to be threat-ignorers, it does not necessarily follow that it is in fact rational to ignore threats.

§9: How S Might Be Self-Effacing

Suppose that S told everyone to cause himself to believe some other theory. S would then be self-effacing. If we all believed S, but could also change our beliefs, S would remove itself from the scene. It would become a theory that no one believed. But to be self-effacing is not to be self-defeating. It is not the aim of a theory to be believed. If we personify theories, and pretend that they have aims, the aim of a theory is not to be believed, but to be true, or to be the best theory. That a theory is self-effacing does not show that it is not the best theory.

§10: How Consequentialism is Indirectly Self-Defeating

The theory of Consequentialism, or C, makes several claims:

  • (C1) There is one ultimate moral aim: that outcomes be as good as possible.
  • (C2) What each of us ought to do is whatever would make the outcome best.
  • (C3) If someone does what he believes will make the outcome worse, he is acting wrongly.
  • (C4) What we ought subjectively to do is the act whose outcome has the greatest expected goodness.
  • (C5) The best possible motives are those of which it is true that, if we have them, the outcome will be the best.

C is an agent-neutral theory, because it gives to all agents common moral aims. A theory which gives different aims to different agents is agent-relative.

A theory T is indirectly collectively self-defeating when it is true that, if several people try to achieve their T-given aims, these aims will be worse achieved.

C is indirectly collectively self-defeating:

There are several other ways in which, if we were all pure do-gooders, this might make the outcome worse. One rests on the fact that, when we want to act in certain ways, we shall be likely to deceive ourselves about the effects of our acts. We shall be likely to believe, falsely, that these acts will produce the best outcome. Consider, for example, killing other people. If we want someone to be dead, it is easy to believe, falsely, that this would make the outcome better. It therefore makes the outcome better that we are strongly disposed not to kill, even when we believe that doing so would make the outcome better. Our disposition not to kill should give way only when we believe that, by killing, we would make the outcome very much better. Similar claims apply to deception, coercion, and several other kinds of act.

§11: Why C Does Not Fail In Its Own Terms

If being pure do-gooders would make things worse, C tells us not to do that. So it does not fail in its own terms.

§12: The Ethics of Fantasy

Even if we all believed C, most of us would probably not become pure do-gooders.

Because he makes a similar assumption, Mackie calls Act Utilitarianism 'the ethics of fantasy'. Like several other writers, he assumes that we should reject a moral theory if it is in this sense unrealistically demanding: if it is true that, even if we all accepted this theory, most of us would in fact seldom do what this theory claims that we ought to do.

§13: Collective Consequentialism

C is an individualistic theory, and it is concerned with actual effects. Therefore, when we wish to act according to C, we should consider what others will actually do--what we expect will actually happen, in the real world.

One consequence of this is that, since we cannot expect most others to follow C faithfully, C demands that we behave in an especially self-sacrificing manner. For example, since most people will not give much money to the poor, C demands that we give nearly all of our money to the poor, to make up for those who won't.

This makes C seem very unfair. An alternative theory is collective consequentialism (CC), which tells us to do what we ideally ought to do if everyone followed CC.

Where C tells us to give nearly all of our money to charity, CC tells us to donate only what money we would donate if everyone gave their fair share. This would doubtless be a larger amount for the very rich than for the very poor, but it would also doubtless be much less than C would demand.

§14: Blameless Wrongdoing

If we act wrongly, but in accordance with motives that are right for us to have, then this is an example of moral immorality or blameless wrongdoing. This is comparable to rational irrationality, discussed in §5. "In such a case, it is the act and not the agent that is immoral."

§15: Could it be impossible to avoid acting wrongly?

It might be impossible to avoid acting wrongly. However, some wrong actions are blameless. According to C, we can always avoid acting in a blameworthy manner.

§16: Could it be right to cause oneself to act wrongly?

It can be right to cause oneself to gain a disposition to act wrongly. This does not make the actions consequent to this disposition right.

§17: How C Might be Self-effacing

C could be partly self-effacing, and partly esoteric, indicating that some special group should continue to believe C, while others should believe some other theory. This might be the case if outcomes would tend to be better if people believed the other theory, but some people continued believing C as a check against unexpected disastrous developments that might make outcomes worse if people continued to believe the alternate theory.

Collective Consequentialism could not be partly self-effacing in this way, because CC is by definition best if everyone believes it.

C could also be wholly self-effacing (though Parfit finds this unlikely). It could tell us that outcomes would be best if everyone believed not-C. But this global belief in not-C could be brought about by everyone first believing C, so even in this case C would not be useless, but instrumental in bringing about the best course of events.

§18: The Objection that Assumes Inflexibility

"Suppose that Satan rules the Universe. [...] Since we are the mere toys of Satan, the truth about reality is extremely depressing."

If Satan ensures that outcomes are perverse, e.g. that believing the best theory has bad results, that trying to be truthful results in deceit, etc., then it would be better to believe and attempt other things. What we ought to cause ourselves to believe is not necessarily the same as what is true.

If it happens that S or C is the best theory, belief in this theory could lead to bad effects. But since these theories would not then tell us to believe in them, these effects are not a result of following these theories. So this fact would not necessarily be a problem for S or C being the best theory.

§19: Can being rational or moral be a mere means?

Yes (though it need not be so).

It may be moral to keep promises, but it's not better to make and keep meaningless promises to do things that you would do anyway. The morality of keeping promises is a mere means of achieving the best course of events.

Similarly, being rational may be a mere means of ensuring that one's life goes as well as possible, but it's still important in that case.

§20: Conclusions

This is a summary of the preceding nineteen sections. Read it again if a review of the chapter is needed.

Chapter 2: Practical Dilemmas

§21: Why C Cannot Be Directly Self-defeating

Parfit gives an example of a two-person scenario in which it is possible to get 'stuck' in a suboptimal position, because neither agent can independently (i.e. by that agent's sole action) improve the outcome. He denies that this is a problem for C: while being stuck in that position is successfully following C, it would also be successfully following C to be in the optimal configuration, so C is not forcing worse outcomes.

He argues that C is agent-neutral, giving common aims to all, and:

If we cause these common aims to be best achieved, we must be successfully following this theory. Since this is so, it cannot be true that we will cause these aims to be best achieved only if we do not follow this theory.

p. 55

§22: How Theories Can Be Directly Self-defeating

S cannot be directly individually self-defeating, but it can be directly collectively self-defeating. An example like the prisoner's dilemma is given.

§23: Prisoner's Dilemmas and Public Goods

So-called "true" prisoner's dilemmas are rare, but there are a variety of realistic situations where "if each rather than none does what will be better for himself, this will be worse for everyone."
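The structure of these dilemmas can be shown with a toy payoff table (the numbers are mine, not Parfit's; E and A borrow the egoistic/altruistic labels from §24):

```python
# Toy prisoner's dilemma: payoffs (benefit to me, benefit to you),
# indexed by whether each of us does E (egoistic) or A (altruistic).
# All numbers are illustrative, not Parfit's.
PAYOFFS = {
    ("A", "A"): (2, 2),  # both altruistic: best for everyone together
    ("A", "E"): (0, 3),  # I am exploited
    ("E", "A"): (3, 0),  # I exploit you
    ("E", "E"): (1, 1),  # both egoistic: worse for each than (A, A)
}

def best_reply(your_choice):
    """Whichever choice gives me more, holding your choice fixed."""
    return max("AE", key=lambda mine: PAYOFFS[(mine, your_choice)][0])

# E is better for me whatever the other does...
assert best_reply("A") == "E" and best_reply("E") == "E"
# ...yet if each rather than none does E, this is worse for everyone.
assert PAYOFFS[("E", "E")] < PAYOFFS[("A", "A")]
```

This makes the quoted claim concrete: doing what is better for oneself is dominant for each, yet both together end up worse off than if neither had.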

§24: The Practical Problem and Its Solutions

In a choice between the more egoistic choice, E, and the more altruistic choice, A, a person might do A:

  1. because E becomes impossible
  2. because his situation changes to make A better for him
  3. because he personally changes to make A better for him
  4. because he becomes disposed to do A, and this change makes A no worse than E for him
  5. because he becomes disposed to do A, even though it is worse for him

The first two are political solutions: a law might be passed, or some other arrangement made to accomplish them.

The last three are psychological solutions. Some examples of such solutions:

We might become trustworthy. Each might then promise to do A on condition that the others make the same promise.

We might become reluctant to be 'free-riders'. If each believes that many others will do A, he may then prefer to do his share.

We might become Kantians. Each would then do only what he could rationally will everyone to do. None could rationally will that all do E. Each would therefore do A.

We might become more altruistic. Given sufficient altruism, each would do A.

p. 64

Chapter 3: Five Mistakes in Moral Mathematics

§25: The Share-of-the-Total View

The Share-of-the-Total View is that on which, when working collectively for some benefit, you simply count your proportional share of the total benefit produced when making moral calculations. This fails in a number of cases. For a simple one, consider donating to save a puppy that is already going to be saved: your donation does not produce a greater benefit than your failing to donate would have, so for moral calculations it would be better to count the benefit you produced as zero, rather than as whatever your share of the total might have been.

The right way to calculate benefits and harms is to consider the total benefits and harms in either of two choices under consideration.
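The rule above reduces to a one-line calculation, sketched here with the puppy example (the figures are hypothetical):

```python
def marginal_benefit(total_with_act, total_without_act):
    """The benefit my act produces is the difference it makes to the
    total outcome, not my proportional share of that total."""
    return total_with_act - total_without_act

# Hypothetical: the puppy is saved (benefit 100) whether or not I am
# one of the 5 donors.
print(marginal_benefit(100, 100))  # my act adds nothing: prints 0
print(100 / 5)                     # Share-of-the-Total wrongly credits me 20.0
```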

§26: Ignoring the Effects of Sets of Acts

If two people simultaneously fatally shoot me, it could be argued that neither has harmed me, since it would not matter if either acted otherwise (since each shot would independently be fatal). This is wrong, because we should consider that each is participating in a set of actions which is harmful. Parfit suggests that the actions that should be considered together in this way are the smallest group of actions which, if all were changed, would have changed the harm (or benefit).

The case where there is not a unique smallest group will be discussed in §30.

This also relates to coordination problems, like the prisoner's dilemma. In the case of both prisoners betraying each other, each has acted in an (independently) acceptable way, but the two have acted wrongly as a group. Of course, when deciding what to do, it is necessary to judge what is likely to happen in practice.

§27: Ignoring Small Chances

As a heuristic, people often ignore remote possibilities, but in cases where the possible benefits or harms are very large, or are small but might affect a very large number of people, this is a mistake.

For example, in a presidential election, an individual vote has only a small chance of making an impact--perhaps one in a hundred million. However, if a better candidate is elected, then even if the benefit to each person is small, the fact that it will accrue to hundreds of millions of Americans is likely to outweigh the slightness of the probability of my vote being decisive.

From the opposite end of things, a very small chance of catastrophic failure in a nuclear power plant--say one in a million per day for a given component--should not be ignored, because the potential harm is very great, and the number of components and time in service is large, and the large numbers cancel out the small numbers.

In short, one should not ignore small chances.
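Both examples come down to an expected-value product in which large and small factors offset each other. A sketch, with all figures hypothetical:

```python
# Voting: a tiny chance of being decisive, multiplied by a small
# per-person benefit accruing to very many people.
p_decisive   = 1 / 100_000_000   # chance my vote decides the election
benefit_each = 10                # benefit per person, in some arbitrary unit
n_people     = 300_000_000       # affected population
print(round(p_decisive * benefit_each * n_people, 6))   # prints 30.0

# Nuclear component: a tiny daily failure chance, multiplied by many
# components over many days in service, is no longer negligible.
p_fail_day   = 1e-6              # failure chance per component per day
n_components = 1_000
n_days       = 10_000            # roughly 27 years in service
print(round(p_fail_day * n_components * n_days, 6))     # expected failures: 10.0
```

The second figure is an expected count (assuming independent failures); the point in both cases is that the large factors cancel the small probabilities.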

§28: Ignoring Small or Imperceptible Effects

It may seem that if an action has no perceptible effect, it is of no moral consequence. However, when considering actions taken together as a group, even if an individual's contribution is not perceptible, the effect of the group's actions, as a whole, may be. Parfit proposes a rule that reflects this.

This is similar to the coordination problems discussed in §26.

§29: Can there be Imperceptible Harms and Benefits?

If we accept that there are imperceptible effects, and say that an imperceptible effect is no effect (and we accept that relations like at least as bad as for pain are transitive), then we have a heap paradox.

Parfit rejects the idea that imperceptible effects are no effects, but argues that rejecting transitivity leads to the same results.

I agree with Parfit, but don't agree with his argument. He argues, essentially, that people really are feeling differences in effects, but making mistakes in noticing and reporting these differences. This just seems to be equivocating about some distinction between perceiving a difference and noticing that you are perceiving a difference.

I think that this can be rescued, though. Even if you do not perceive the benefit, that does not mean you have not received it. In Parfit's example, drinking one extra drop of water may not have a perceptible impact on relieving your thirst. However, each additional drop of water puts you one drop closer to a state that would be a perceptible improvement, whatever we decide is the limit on that. You are factually in a better position with each additional drop of water. The effects are real, even if not perceptible.
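My rescue of the argument can be put numerically: each step is below the perception threshold, yet the steps are real and accumulate past it (both numbers below are hypothetical units of my own):

```python
DROP_RELIEF = 0.001   # thirst relieved per extra drop (hypothetical unit)
THRESHOLD   = 0.5     # smallest perceptible change (hypothetical)

drops = 1000
total = drops * DROP_RELIEF

assert DROP_RELIEF < THRESHOLD   # no single drop is perceptible...
assert total >= THRESHOLD        # ...but together they clearly are
```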

In summary, Parfit asserts that acts can be right or wrong because of their effects, even if those effects are not perceptible.

§30: Overdetermination

If your joining a group would make the group too large, such that one member's leaving it would not reduce the benefit the group produces, then you have no reason to join the group.

§31: Rational Altruism

For most of human history, there would have been little reason to consider minute effects such as those discussed above. However, we now find ourselves capable of causing very small benefits or harms to very large numbers of people, so we must consider such effects, or risk making things much worse for ourselves and others.

Chapter 4: Theories that are Directly Self-defeating

§32: In Prisoner's Dilemmas, Does S Fail in its own Terms?

S gives to each the aim that his life go as well as possible. Notably, it does not give to each the aim that he personally causes this to be the case. So, if acting altruistically in a prisoner's dilemma, which seems to be irrational, in fact causes one's life to go better because of the acts of the other prisoner, then that satisfies the aims given by S.

This is true for the Hedonistic Theory, but may not be true for other versions of S, which might include acting rationally as a substantive goal.

See endnote 54 for some further discussion.

§33: Another Weak Defence of Morality

A theory is universal if it applies to everyone. It is collective if it succeeds at the collective level--if following it allows us, together, to best achieve the ends of the theory.

Kantian morality is both universal and collective. S is universal, but not collective. As such, although S may yield worse results for all in prisoners' dilemmas, it still yields the best results for each, so it does not fail in its own terms.

§34: Intertemporal Dilemmas

Imagine a (bad) theory, the Present-aim Theory, or P. This theory tells us to do what best achieves our present aims, regardless of its likely effect on our future aims. It is easy to see that always doing what is best for our present aims is likely to cause us to achieve all of our aims worse than we might otherwise do. These intertemporal dilemmas occur when one has conflicting aims at different times.

§35: A Weak Defence of S

One might argue that following S will cause us to achieve our P-given aims better than following P. However, this is only true over time: in each instance, one will do worse by following S than P. Intertemporal dilemmas and interpersonal dilemmas are special cases of Reason-Relativity Dilemmas.

If it is a problem with P that it is not intertemporally successful, might it not be claimed that it is a problem with S that it is not interpersonally successful? Parfit suggests that future versions of me are similar to other people, so S may be attacked in this fashion. I am not yet convinced of this.

§36: How Common-Sense Morality is Directly Self-Defeating

Common-sense morality gives us stronger obligations toward some people than others--our children, our family members, our fellow citizens--which can yield prisoner's-dilemma-like results.

§37: The Five Parts of a Moral Theory

There are four considerations for a moral theory:

(a) We are often uncertain what the effects of our acts will be; (b) some of us may act wrongly; (c) our acts are not the only effects of our motives; and (d) when we feel remorse, or blame each other, this may affect what we later do, and have other effects.

A moral theory should have five parts:

  1. An Ideal Act Theory, saying what we should all do, when we know that we shall all succeed
  2. An Ideal Motive Theory, saying what motives we should all ideally have, given (a) and (c)
  3. A Practical Act Theory, saying what we should do, given (a) and (b)
  4. A Practical Motive Theory, saying what motives we should have, given (a), (b), and (c)
  5. A Reaction Theory, saying which are the acts for which each ought to be blamed, and should feel remorse, given (a), (b), (c), and (d)

§38: How We can Revise Common-Sense Morality so that It would not be Self-Defeating

We can change the theory so that we do what is best in those cases where it is self-defeating--in those cases where following M causes everyone's M-given aims to be worse achieved. Call this revision R.

§39: Why We ought to Revise Common-Sense Morality

Because following M can yield bad results, and even M would tell us to follow a revision in such cases, it is sensible to revise it. M also makes the mistake of ignoring the effects of sets of acts.

Additionally, under most conceptions of morality, M should be a collective theory, so M failing when all follow it is a problem for M.

§40: A Simpler Revision

A simpler revision of M would be an agent-neutral form, N. According to N, we should do what best achieves everyone's M-given aims.

N would not be universally accepted. There may be those for whom, if everyone accepted N, their M-given aims will be worse achieved, even though on the whole, everyone's M-given aims will be better achieved. For those people, it would be worse if everyone accepted N.

R does not suffer from this flaw as a revision of M, because it differs only in those cases where following M causes the M-given aims of each to be worse achieved.

§41: Reducing the Distance between M and C

R revises M to be Consequentialist. Since Consequentialism will tell us to try to have those dispositions that will make outcomes best, it will often tell us to be disposed to do the things that M tells us to do, even if those things, strictly speaking, are not always the best. Outcomes may be better overall if we are often disposed to act according to M.

§42: Towards a Unified Theory

We may be able to develop a theory that combines the revised versions of C and M. Call this the Unified Theory. Sidgwick, Hare, and others attempt to produce such a theory.

Author Derek Parfit
Publisher Clarendon Press


Relation Sources
Discussed in
  • Tibetan monks found chanting text by Oxford philosopher - Tricycle: The Buddhist Review (2011-09-13)