Type | Book |
---|---|
Date | 1984 |
Pages | 543 |
Tags | nonfiction, philosophy, ethics |
There are three 'plausible' theories, or types of theory, that tell one what is best: the Hedonistic Theory tells me that the greatest (personal) happiness is best; the Desire-Fulfilment Theory (or Success Theory) tells me to fulfil my desires, whatever they may be; and the Objective List Theory prescribes a list of good things to be sought, and bad things to be avoided.
Of note is that these theories assign weights independently of time: events that happen later do not count for less merely because they happen later than other events.
Parfit distinguishes between ultimate and instrumental aims: the latter are desired only as a means of achieving the former.
A theory is indirectly individually self-defeating if, when I attempt to achieve its aims, those aims are worse achieved.
A person who is purely self-interested, or never self-denying, never does what he believes will be worse for him.
Parfit gives the example of Kate, a writer, who works so hard on her books that she makes herself miserable. She would be happier if she worked less hard, but she would only work less hard if she cared less about the quality of her books, and in that case, she would find her work boring and be less happy overall.
Kate is a Hedonist, so she believes she should do what will make her happiest. However, aiming to be happier by changing her motivation so that she will work less hard would in fact make her less happy, i.e. it is better for her to be self-denying (accepting misery from overwork) than to be 'never self-denying' (aiming for happiness but ending up bored).
I don't like this example. Parfit presents a false choice to Kate: she cannot actually be happier by working less, since she cannot achieve this with her present desire to write well, and she will not be happier if she changes her desire to be less strong. So, she is not, in my view, being self-denying by continuing to work very hard–she does not have the option of being happy while working less, even if, had she the option, she might do better to take it. One is not self-denying by failing to take an option that is not available.
Then, Parfit gives the following:
This is much more convincing. It is the disposition of being never self-denying that is the source of the problem, and within the boundaries of the example, there's no way around it. Notice that the real, practical problem here is the inability to keep a commitment–being never self-denying means never keeping any promises that make one worse off, going forward. I know that I will change my mind later, and I can't convincingly lie, so I am worse off now.
Of course, that assumes that there is no cost to breaking a promise. If I made a promise in order to gain a benefit equal to B, and it would make me lose benefit in an amount greater than B to break that promise, then I could be trusted to keep the promise, and being never self-denying would not present a problem. Naturally, it'd be even better to gain the ability to lie convincingly–or to deceive myself!–if there were no further consequences.
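A minimal sketch of this incentive structure (the numbers and the helper function are mine, hypothetical, not Parfit's): a never-self-denying agent can be trusted to keep a promise only when breaking it would cost him more than it gains him.

```python
# Hypothetical numbers, not Parfit's: when can a never-self-denying
# agent be trusted to keep a promise he has already made?

def will_keep_promise(gain_from_breaking: float, cost_of_breaking: float) -> bool:
    """A never-self-denying agent keeps the promise only if breaking it
    would leave him worse off than keeping it."""
    return cost_of_breaking > gain_from_breaking

# With a penalty (reputation, sanctions, guilt) larger than the gain,
# the promise is credible in advance:
print(will_keep_promise(gain_from_breaking=10, cost_of_breaking=15))  # True

# With no cost to breaking it, everyone can predict I will break it:
print(will_keep_promise(gain_from_breaking=10, cost_of_breaking=0))   # False
```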
In some sense, this operates as though my future breaking of the promise is retroactively causing me harm. Taken that way, being never self-denying does not prevent me from keeping a promise–but that's going a little far.
In short, no. If it'd be worse for me to be never self-denying, then I shouldn't be never self-denying.
S might tell you to change your beliefs, so as to believe in a theory other than S, or to change your disposition. And check back after sections 6-8 and 18.
Parfit gives the example of Schelling's Answer to Armed Robbery: a robber will torture me, and kill my children, in order to induce me to give him my gold. Even if I give him the gold, he'll probably kill us all, anyway. So, I take a drug which will cause me to be "very irrational" for a long enough time that the police will arrive. Now I am not susceptible to any threats (though, being irrational, I may cause harm to myself or my family), so the robber should simply leave to get the best chance of escaping the police.
Now Parfit says:
I'm not so sure about this hair-splitting about rational irrationality. I have two objections here.
First, I am not at all sure it's reasonable to analyze my behavior while under the influence of the drug as being rational or irrational.
What does the drug do? There are two reasonable interpretations of the drug causing me to become very irrational. One, it may cause me to do the opposite of what I should, rationally, do. This aligns somewhat with the described behavior, e.g. "I say to the man: 'Go ahead. I love my children. So please kill them.'", but it's not satisfying. A good degree of insight is ascribed to the robber, so he would simply use 'reverse psychology' and beat me that way, in this case.
An alternative interpretation seems better: the drug causes my actions to be totally disconnected from my objectives. Now I really am immune to persuasion. But is it sensible to consider my actions as irrational in that case? My state is caused by the drug, and I have no freedom to choose otherwise. As with Kate in §2, Parfit is claiming irrationality because I am not taking an alternative that does not exist. In fact, for the duration of the drug's effect, I am, effectively, making no choices whatsoever, at least with respect to S, and I am not capable of doing otherwise. This is my first objection.
Second, if we consider my actions while under the influence of the drug from the perspective of S as either working for or against my self-interest, then I do not agree with the characterization of the actions as irrational: they are, overall, acting for my self-interest. Parfit admits as much! It would be irrational for me to cause myself to lose this 'motive' precisely because it is working in my interest. It is, as Parfit says, a rational 'motive' to have. My actions, therefore, are only irrational by definition–but this is not convincing. It's not specific enough to reason about. This is my second objection.
What if the drug instead caused me to fall into an uninterruptible sleep for fifteen minutes, and might cause me to die? This drug is pretty well analogous to the one presented by Parfit, but it clearly isn't an example of irrationality–at worst, it is a cessation of rational action. The drugs are identical in terms of my voluntary decisions (which, in my view, are the only things properly described as rational or irrational): none are being made.
The robber presents too strong a constraint on my actions, and the drug exerts too great a control over them.
A person can be rational, or at worst only very weakly irrational, even while acting irrationally, and that's okay.
S might tell us to acquire a belief that, for example, one should be never self-denying, except when keeping promises. Acquiring this belief might be better for us, but does that mean that this belief is, in fact, rational?
A threat-ignorer might suffer by ignoring a threat-fulfiller who, for some reason, makes a threat. Then it would be rational not to ignore that threat. So, just because S told us to believe that it is rational to be threat-ignorers, it does not necessarily follow that it is in fact rational to ignore threats.
The theory of Consequentialism, or C, makes several claims:
C is an agent-neutral theory, because it gives to all agents common moral aims. A theory which gives different aims to different agents is agent-relative.
A theory T is indirectly collectively self-defeating when it is true that, if several people try to achieve their T-given aims, these aims will be worse achieved.
C is indirectly collectively self-defeating:
If being pure do-gooders would make things worse, C tells us not to do that. So it does not fail in its own terms.
Even if we all believed C, most of us would probably not become pure do-gooders.
C is an individualistic theory, and it is concerned with actual effects. Therefore, when we wish to act according to C, we should consider what others will actually do–what we expect will actually happen, in the real world.
One consequence of this is that, since we cannot expect most others to follow C faithfully, C demands especially self-sacrificing behavior from us. For example, since most people will not give much money to the poor, C demands that we give nearly all of our money to the poor, to make up for those who won't.
This makes C seem very unfair. An alternative theory is collective consequentialism (CC), which tells us to do what we ideally ought to do if everyone followed CC.
Where C tells us to give nearly all of our money to charity, CC tells us to donate only what money we would donate if everyone gave their fair share. This would doubtless be a larger amount for the very rich than for the very poor, but it would also doubtless be much less than C would demand.
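A toy calculation of the gap between the two demands (all figures hypothetical, chosen only to show the shape of the difference): if only a small fraction of people actually give, the amount C asks of each complier dwarfs the CC fair share.

```python
# All figures hypothetical, chosen only to illustrate the gap.
need = 1_000_000_000       # total required to relieve the poverty
population = 100_000_000   # people who could contribute
compliers = 1_000_000      # people who will actually contribute

fair_share = need / population   # CC: my share if everyone gave theirs
c_demand = need / compliers      # C: compliers make up for everyone else

print(f"CC asks each person for about {fair_share:,.0f}")  # 10
print(f"C asks each complier for about {c_demand:,.0f}")   # 1,000
```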
If we act wrongly, but in accordance with motives that are right for us to have, then this is an example of moral immorality or blameless wrongdoing. This is comparable to rational irrationality, discussed in §5. "In such a case, it is the act and not the agent that is immoral."
It might be impossible to avoid acting wrongly. However, some wrong actions are blameless. According to C, we can always avoid acting in a blameworthy manner.
It can be right to cause oneself to gain a disposition to act wrongly. This does not make the actions that flow from this disposition right.
C could be partly self-effacing, and partly esoteric, indicating that some special group should continue to believe C, while others should believe some other theory. This might be the case if outcomes would tend to be better if people believed the other theory, but some people continued believing C as a check against unexpected disastrous developments that might make outcomes worse if people continued to believe the alternate theory.
Collective Consequentialism could not be partly self-effacing in this way, because CC is by definition best if everyone believes it.
C could also be wholly self-effacing (though Parfit finds this unlikely). It could tell us that outcomes would be best if everyone believed not-C. But this global belief in not-C might be brought about by everyone first believing C, so even in this case C would not be useless: it would be instrumental in bringing about the best course of events.
"Suppose that Satan rules the Universe. […] Since we are the mere toys of Satan, the truth about reality is extremely depressing."
If Satan ensures that outcomes are perverse, e.g. that believing the best theory has bad results, that trying to be truthful results in deceit, etc., then it would be better to believe and attempt other things. What we ought to cause ourselves to believe is not necessarily the same as what is true.
If it happens that S or C is the best theory, belief in this theory could lead to bad effects. But since these theories would not then tell us to believe in them, these effects are not a result of following these theories. So this fact would not necessarily be a problem for S or C being the best theory.
Yes (though it need not be so).
It may be moral to keep promises, but it's not better to make and keep meaningless promises for things that you would do anyway. The morality of keeping promises is a mere means of achieving the best course of events.
Similarly, being rational may be a mere means of ensuring that one's life goes as well as possible, but it's still important in that case.
This is a summary of the preceding nineteen sections. Read it again if a review of the chapter is needed.
Parfit gives an example of a two-person scenario where it is possible to get 'stuck' in a suboptimal position, because neither agent can independently (i.e. by that agent's sole action) improve the outcome. He denies that this is a problem for C, because while being stuck in that position is successfully following C, it would also be successfully following C to be in the optimal configuration, and so C is not forcing worse outcomes.
He argues that C is agent-neutral, giving common aims to all, and:
S cannot be directly individually self-defeating, but it can be directly collectively self-defeating. An example like the prisoner's dilemma is given.
So-called "true" prisoner's dilemmas are rare, but there are a variety of realistic situations where "if each rather than none does what will be better for himself, this will be worse for everyone."
In a choice between the more egoistic choice, E, and the more altruistic choice, A, a person might do A:
The first two are political solutions: a law might be passed, or some other arrangement made to accomplish them.
The last three are psychological solutions. Some examples of such solutions:
The Share-of-the-Total View is that on which, when working collectively for some benefit, you simply count your proportional share of the benefit produced when making moral calculations. This fails in a number of cases, but for a simple one, consider donating to save a puppy which is already going to be saved: your donation does not produce a greater benefit than had you failed to donate, so for moral calculations it would be better to consider the benefit you produced to be zero, rather than whatever your share of the total might have been.
The right way to calculate benefits and harms is to consider the total benefits and harms in either of two choices under consideration.
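A worked version of the puppy case (figures hypothetical): the Share-of-the-Total View credits my donation with a fifth of the benefit, while comparing the totals under my two choices correctly credits it with nothing.

```python
# Hypothetical figures for the puppy case. The rescue succeeds whether
# or not I donate, so my choice changes the total benefit by zero.
total_benefit_if_i_donate = 100  # puppy saved (assumed units of benefit)
total_benefit_if_i_dont = 100    # puppy saved anyway
donors_including_me = 5

# Share-of-the-Total: misleading credit for my donation.
share_of_total = total_benefit_if_i_donate / donors_including_me  # 20.0

# Comparing totals under my two choices: the correct calculation.
difference_in_totals = total_benefit_if_i_donate - total_benefit_if_i_dont  # 0

print(share_of_total, difference_in_totals)
```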
If two people simultaneously fatally shoot me, it could be argued that neither has harmed me, since it would not matter if either acted otherwise (since each shot would independently be fatal). This is wrong, because we should consider that each is participating in a set of actions which is harmful. Parfit suggests that the acts which should be considered together in this way are the smallest group of acts which, if all were changed, would have changed the harm (or benefit).
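A sketch of that rule for the two-shooter case, in my own formalization (the function names and search procedure are mine, not Parfit's): search upward by size for the smallest sets of acts which, had all of them been different, would have changed the harm.

```python
from itertools import combinations

def dies(shots: dict[str, bool]) -> bool:
    """I die if at least one shot is fired (each alone would be fatal)."""
    return any(shots.values())

def smallest_harmful_groups(shots: dict[str, bool]) -> list[tuple[str, ...]]:
    """The smallest sets of acts which, if all were changed, would undo the harm."""
    for size in range(1, len(shots) + 1):
        groups = [g for g in combinations(shots, size)
                  if not dies({act: (not fired if act in g else fired)
                               for act, fired in shots.items()})]
        if groups:
            return groups
    return []

# Both shooters fire. Changing either act alone changes nothing (the other
# shot is still fatal), but changing both together would have saved me:
print(smallest_harmful_groups({"first shot": True, "second shot": True}))
# [('first shot', 'second shot')]
```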
The case where there is not a unique smallest group will be discussed in §30.
This also relates to coordination problems, like the prisoner's dilemma. In the case of both prisoners betraying each other, each has acted in an (independently) acceptable way, but the two have acted wrongly as a group. Of course, when deciding what to do, it is necessary to judge what is likely to happen in practice.
As a heuristic, people often ignore remote possibilities, but in cases where the possible benefits or harms are very large, or are small but might affect a very large number of people, this is a mistake.
For example, in a presidential election, an individual vote has only a small chance of being decisive–perhaps one in a hundred million. However, if a better candidate is elected, then even if the benefit per person is small, the fact that it will accrue to hundreds of millions of Americans is likely to outweigh the slightness of the probability of my vote being decisive.
From the opposite end of things, a very small chance of catastrophic failure in a nuclear power plant–say one in a million per day for a given component–should not be ignored, because the potential harm is very great, and the number of components and time in service is large, and the large numbers cancel out the small numbers.
In short, one should not ignore small chances.
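The arithmetic behind both examples, using the rough orders of magnitude given above (the per-person benefit and the component counts are assumed for illustration): multiply the tiny probability by the enormous number of people, or component-days, it touches.

```python
# Rough orders of magnitude from the two examples above.

# Voting: a one-in-a-hundred-million chance of being decisive, but the
# benefit reaches roughly 300 million people.
p_decisive = 1e-8
people_affected = 3e8
b = 100  # hypothetical per-person benefit of the better candidate
print(p_decisive * people_affected * b)  # 300.0 -- far from negligible

# Reactor: a one-in-a-million daily failure chance per component, across
# many components and many days in service (both figures assumed).
p_fail_per_component_day = 1e-6
components, days_in_service = 1_000, 365 * 40
print(p_fail_per_component_day * components * days_in_service)
# 14.6 expected failures -- the large numbers cancel out the small one
```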
It may seem that if an action has no perceptible effect, it is of no moral consequence. However, when considering actions taken together as a group, even if an individual's contribution is not perceptible, the effect of the group's actions, as a whole, may be. Parfit proposes a rule that reflects this.
This is similar to the coordination problems discussed in §26.
If we accept that there are imperceptible effects, and say that an imperceptible effect is no effect (and we accept that relations like at least as bad as for pain are transitive), then we have a heap paradox.
Parfit rejects the idea that imperceptible effects are no effects, but argues that rejecting transitivity leads to the same results.
I agree with Parfit, but don't agree with his argument. He argues, essentially, that people really are feeling differences in effects, but making mistakes in noticing and reporting these differences. This just seems to be equivocating about some distinction between perceiving a difference and noticing that you are perceiving a difference.
I think that this can be rescued, though. Even if you do not perceive the benefit, that does not mean you have not received it. In Parfit's example, drinking one extra drop of water may not have a perceptible impact on relieving your thirst. However, each additional drop of water puts you one drop closer to a state that would be a perceptible improvement, whatever we decide is the limit on that. You are factually in a better position with each additional drop of water. The effects are real, even if not perceptible.
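A toy model of this rescue (the threshold and the units are hypothetical): each drop adds a real unit of relief that is individually below the threshold of perception, yet the accumulated units eventually cross it.

```python
# Hypothetical threshold: a change in relief is perceptible only once
# it exceeds 1,000 units, but each drop really adds one unit.
PERCEPTIBLE = 1_000

relief = 0
for _ in range(10_000):
    before, relief = relief, relief + 1
    assert relief - before < PERCEPTIBLE  # no single drop is perceptible

print(relief >= PERCEPTIBLE)  # True: the accumulated effect certainly is
```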
In summary, Parfit asserts that acts can be right or wrong because of their effects, even if those effects are not perceptible.
If your joining a group would make the group too large, such that one member's leaving would not reduce the benefit the group produces, then you have no reason to join it.
For most of human history, there would have been little reason to consider minute effects such as those discussed above. However, we now find ourselves capable of causing very small benefits or harms to very large numbers of people, so we must consider such effects, or risk making things much worse for ourselves and others.
S gives to each the aim that his life go as well as possible. Notably, it does not give to each the aim that he personally causes this to be the case. So, if acting altruistically in a prisoner's dilemma, which seems to be irrational, in fact causes one's life to go better because of the acts of the other prisoner, then that satisfies the aims given by S.
This is true for the Hedonistic Theory, but may not be true for other versions of S, which might include acting rationally as a substantive goal.
Note
See endnote 54 for some further discussion.
A theory is universal if it applies to everyone. It is collective if it succeeds at the collective level–if following it allows us, together, to best achieve the ends of the theory.
Kantian morality is both universal and collective. S is universal, but not collective. As such, although S may yield worse results for all in prisoners' dilemmas, it still yields the best results for each, so it does not fail in its own terms.
Imagine a (bad) theory, the Present-aim Theory, or P. This theory tells us to do what best achieves our present aims, regardless of its likely effect on our future aims. It is easy to see that always doing what is best for our present aims is likely to cause us to achieve all of our aims worse than we might otherwise do. These intertemporal dilemmas occur when one has conflicting aims at different times.
One might argue that following S will cause us to achieve our P-given aims better than following P. However, this is only true over time: in each instance, one will do worse by following S than P. Intertemporal dilemmas and interpersonal dilemmas are special cases of Reason-Relativity Dilemmas.
If it is a problem for P that it is not intertemporally successful, might it not equally be claimed that it is a problem for S that it is not interpersonally successful? Parfit suggests that my future selves are relevantly like other people, so S may be attacked in this fashion. I am not yet convinced of this.
Common-sense morality gives us stronger obligations toward some people than others–our children, our family members, our fellow citizens–which can yield prisoner's-dilemma-like results.
There are four considerations for a moral theory:
A moral theory should have five parts:
We can change the theory so that we do what is best in those cases where it is self-defeating–in those cases where following M causes everyone's M-given aims to be worse achieved. Call this revision R.
Because following M can yield bad results, and even M would tell us to follow a revision in such cases, it is sensible to revise it. M also makes the mistake of ignoring the effects of sets of acts.
Additionally, under most conceptions of morality, M should be a collective theory, so M failing when all follow it is a problem for M.
A simpler revision of M would be an agent-neutral form, N. According to N, we should do what best achieves everyone's M-given aims.
N would not be universally accepted. There may be those for whom, if everyone accepted N, their M-given aims would be worse achieved, even though on the whole everyone's M-given aims would be better achieved. For those people, it would be worse if everyone accepted N.
R does not suffer from this flaw as a revision of M, because it differs only in those cases where following M causes the M-given aims of each to be worse achieved.
R revises M to be Consequentialist. Since Consequentialism will tell us to try to have those dispositions that will make outcomes best, it will often tell us to be disposed to do the things that M tells us to do, even if those things, strictly speaking, are not always the best. Outcomes may be better overall if we are often disposed to act according to M.
We may be able to develop a theory that combines the revised versions of C and M. Call this the Unified Theory. Sidgwick, Hare, and others attempt to produce such a theory.
A unified theory would be consequentialist. In §12, we saw the argument that C is too demanding, that even if everyone believed in C, most people would not follow it. A unified theory might be more realistic if it asserted blame and demanded remorse only in the way that makes outcomes best. So, while the unified theory might, like C, demand that wealthy people give nearly all of their income to charity, it might not hold that it is blameworthy if they give a much smaller portion of their income.
Moral scepticism is the belief that no moral theory can be true, or be the best theory. If a unified theory could be produced, we may find little disagreement between adherents of C and M. This might undermine moral scepticism.
Parfit describes three versions of the Present-aim Theory, P.
The Instrumental Theory (IP) says that:
The Deliberative Theory (DP) says that:
The Critical Present-aim Theory (CP) says that:
Parfit will discuss cases in which IP and DP agree, as a way to show that we should reject the Self-interest Theory.
Desires that do not provide reasons (which is to say good reasons) to act may be called irrational (that is, open to rational criticism).
Morality and P are in conflict with S, from opposite sides. An argument that S uses to defend against M may make it more vulnerable to P, and vice-versa.
Psychological Egoism is the assumption that what a person would want, if he knew the facts and were thinking clearly, would be what would be in his own long-term self-interest. In other words, it is the claim that DP agrees with S. Parfit here asserts that "Psychological Egoism cannot survive a careful discussion."
S and morality may coincide, such as trivially in the case where a person's actions affect only himself. Parfit will ignore such cases, considering the cases where S conflicts with morality.
In the case that one's present desires conflict with one's self-interest, CP conflicts with S.
Parfit provides examples of desires for achievement, such as an artist wishing to create a masterpiece, which he asserts are no less rational than bias in one's own favor. It might be that the desire to behave morally is supremely rational.
Note
I am having some difficulty here. It's not clear to me how to judge whether a desire is more or less rational than another (that is to say, more or less open to rational criticism). Parfit claims that these desires for achievement are not less rational than the bias in one's own favor, but on the strength of what argument? And even if there is no rational criticism of the desires–if it is not irrational to have those desires–does it follow that there cannot be rational criticism of the priority assigned to those desires, relative to others? Did Parfit already address this?
CP can allow that we should care about our overall self-interest, even if this is not the unique supremely rational desire.
From the conclusion of this section it is clear that the answer to my question from the previous section (How do we know what is less rational?) is that Parfit is asserting something he thinks likely to be true, and if we agree, then the argument goes thus. If not, he will present other arguments.
The S-Theorist might claim that since we will have desires in the future that we will have reasons to fulfil, we have those reasons now as well.
Sidgwick suggests two axioms: the axiom of Rational Benevolence (RB) states that "Reason requires each person to aim for the greatest possible sum of happiness, impartially considered" (139) and the axiom of Prudence (HS) states that "Reason requires each person to aim for his own greatest happiness" (139).
Parfit offers similar objections to each: for the first, why should one's own happiness be sacrificed for the (greater) happiness of another? For the second, why should one's own (present) happiness be sacrificed for the (greater) future happiness of oneself? In other words, why should one not privilege the happiness of one's own, present self?
S is agent-relative, but not temporally relative. The Present-aim Theory is like a temporally relative version of S, and on its behalf one might object that any claim of S which directly conflicts with its temporally relative analogue should be rejected.
Parfit considers the Appeal to Full Relativity to be a strong objection to S.
Sidgwick does not address the Appeal to Full Relativity. Parfit asserts that some of the claims of S remain plausible in the face of objections from P, so we need not reject S wholesale.
§§ 59-70, pp. 149-186
§§ 71-74, pp. 187-198
§§ 75-79, pp. 199-218
§§ 80-86, pp. 219-244
§§ 87-94, pp. 245-280
§§ 95-101, pp. 281-306
§§ 102-106, pp. 307-320
§§ 107-118, pp. 321-350
§§ 119-127, pp. 351-380
§§ 128-131, pp. 381-390
§§ 132-141, pp. 391-418
§§ 142-149, pp. 419-442
§§ 150-154, pp. 443-456
pp. 457-504
Name | Role |
---|---|
Clarendon Press | Publisher |
Derek Parfit | Author |
Part One: Self-Defeating Theories | 1 |
Chapter 1: Theories that are Indirectly Self-Defeating | 3 |
Chapter 2: Practical Dilemmas | 53 |
Chapter 3: Five Mistakes in Moral Mathematics | 67 |
Chapter 4: Theories that are Directly Self-Defeating | 87
Chapter 5: Conclusions | 111 |
Part Two: Rationality and Time | 115 |
Chapter 6: The Best Objection to the Self-interest Theory | 117 |
Chapter 7: The Appeal to Full Relativity | 137 |
Chapter 8: Different Attitudes to Time | 149 |
Chapter 9: Why We Should Reject S | 187 |
Part Three: Personal Identity | 197 |
Chapter 10: What We Believe Ourselves to Be | 199 |
Chapter 11: How We are not What We Believe | 219 |
Chapter 12: Why Our Identity is not What Matters | 245 |
Chapter 13: What Does Matter | 281 |
Chapter 14: Personal Identity and Rationality | 307 |
Chapter 15: Personal Identity and Morality | 321 |
Part Four: Future Generations | 349 |
Chapter 16: The Non-Identity Problem | 351 |
Chapter 17: The Repugnant Conclusion | 381 |
Chapter 18: The Absurd Conclusion | 391 |
Chapter 19: The Mere Addition Paradox | 419 |
Concluding Chapter | 443 |
Appendices | 457 |