There are three 'plausible' theories, or types of theory, that tell me what is best: the Hedonistic Theory tells me that the greatest (personal) happiness is best; the Desire-Fulfilment Theory (or Success Theory) tells me to fulfil my desires, whatever they may be; and the Objective List Theory prescribes a list of good things to be sought, and bad things to be avoided.
Of note is that these theories assign weights independent of time: events that happen later do not count for less merely because they happen later than other events.
Parfit distinguishes between ultimate and instrumental aims: the latter are only desired as a means of achieving the former.
A theory is indirectly individually self-defeating if, when I attempt to achieve its aims, those aims are worse achieved.
A person who is purely self-interested, or never self-denying, never does what he believes will be worse for him.
Parfit gives the example of Kate, a writer, who works so hard on her books that she makes herself miserable. She would be happier if she worked less hard, but she would only work less hard if she cared less about the quality of her books, and in that case, she would find her work boring and be less happy overall.
Kate is a Hedonist, so she believes she should do what will make her happiest. However, aiming to be happier by changing her motivation so that she will work less hard would in fact make her less happy, i.e. it is better for her to be self-denying (accepting misery from overwork) than to be 'never self-denying' (aiming for happiness but ending up bored).
👎 I don't like this example. Parfit presents a false choice to Kate: she cannot actually be happier by working less, since she cannot achieve this with her present desire to write well, and she will not be happier if she changes her desire to be less strong. So, she is not, in my view, being self-denying by continuing to work very hard--she does not have the option of being happy while working less, even if, had she the option, she might do better to take it. One is not self-denying by failing to take an option that is not available.
Then, Parfit gives the following:
This is much more convincing. It is the disposition of being never self-denying that is the source of the problem, and within the boundaries of the example, there's no way around it. Notice that the real, practical problem here is the inability to keep a commitment--being never self-denying means never keeping any promises that make one worse off, going forward. I know that I will change my mind later, and I can't convincingly lie, so I am worse off now.
Of course, that assumes that there is no cost to breaking a promise. If I made a promise in order to gain a benefit equal to B, and it would make me lose benefit in an amount greater than B to break that promise, then I could be trusted to keep the promise, and being never self-denying would not present a problem. Naturally, it'd be even better to gain the ability to lie convincingly--or to deceive myself!--if there were no further consequences.
In some sense, this operates as though my future breaking of the promise is retroactively causing me harm. Taken that way, being never self-denying does not prevent me from keeping a promise--but that's going a little far.
In short, no. If it'd be worse for me to be never self-denying, then I shouldn't be never self-denying.
S might tell you to change your beliefs, so as to believe in a theory other than S, or to change your disposition. And check back after sections 6-8 and 18.
Parfit gives the example of Schelling's Answer to Armed Robbery: a robber will torture me, and kill my children, in order to induce me to give him my gold. Even if I give him the gold, he'll probably kill us all, anyway. So, I take a drug which will cause me to be "very irrational" for a long enough time that the police will arrive. Now I am not susceptible to any threats (though, being irrational, I may cause harm to myself or my family), so the robber should simply leave to get the best chance of escaping the police.
Now Parfit says:
👎 I'm not so sure about this hair-splitting about rational irrationality. I have two objections here.
First, I am not at all sure it's reasonable to analyze my behavior while under the influence of the drug as being rational or irrational.
What does the drug do? There are two reasonable interpretations of the drug causing me to become very irrational. One, it may cause me to do the opposite of what I should, rationally, do. This aligns somewhat with the described behavior, e.g. "I say to the man: 'Go ahead. I love my children. So please kill them.'", but it's not satisfying. A good degree of insight is ascribed to the robber, so he would simply use 'reverse psychology' and beat me that way, in this case.
An alternative interpretation seems better: the drug causes my actions to be totally disconnected from my objectives. Now I really am immune to persuasion. But is it sensible to consider my actions as irrational in that case? My state is caused by the drug, and I have no freedom to choose otherwise. As with Kate in §2, Parfit is claiming irrationality because I am not taking an alternative that does not exist. In fact, for the duration of the drug's effect, I am, effectively, making no choices whatsoever, at least with respect to S, and I am not capable of doing otherwise. This is my first objection.
Second, if we consider my actions while under the influence of the drug from the perspective of S as either working for or against my self-interest, then I do not agree with the characterization of the actions as irrational: they are, overall, acting for my self-interest. Parfit admits as much! It would be irrational for me to cause myself to lose this 'motive' precisely because it is working in my interest. It is, as Parfit says, a rational 'motive' to have. My actions, therefore, are only irrational by definition--but this is not convincing. It's not specific enough to reason about. This is my second objection.
What if the drug instead caused me to fall into an uninterruptible sleep for fifteen minutes, and might even cause me to die? This drug is pretty well analogous to the one presented by Parfit, but it clearly isn't an example of irrationality--at worst, it is a cessation of rational action. The drugs are identical in terms of my voluntary decisions (which, in my view, are the only things properly described as rational or irrational): none are being made.
The robber presents too strong a constraint on my actions, and the drug exerts too great a control over them.
A person can be rational, or at worst only very weakly irrational, even while acting irrationally, and that's okay.
S might tell us to acquire a belief that, for example, one should be never self-denying, except when keeping promises. Acquiring this belief might be better for us, but does that mean that this belief is, in fact, rational?
A threat-ignorer might suffer by ignoring a threat-fulfiller who, for some reason, makes a threat. Then it would be rational not to ignore that threat. So, just because S told us to believe that it is rational to be threat-ignorers, it does not necessarily follow that it is in fact rational to ignore threats.
The theory of Consequentialism, or C, makes several claims:
C is an agent-neutral theory, because it gives to all agents common moral aims. A theory which gives different aims to different agents is agent-relative.
A theory T is indirectly collectively self-defeating when it is true that, if several people try to achieve their T-given aims, these aims will be worse achieved.
C is indirectly collectively self-defeating:
If being pure do-gooders would make things worse, C tells us not to do that. So it does not fail in its own terms.
Even if we all believed C, most of us would probably not become pure do-gooders.
C is an individualistic theory, and it is concerned with actual effects. Therefore, when we wish to act according to C, we should consider what others will actually do--what we expect will actually happen, in the real world.
One consequence of this is that, since we cannot expect most others to follow C faithfully, C demands that we behave in an especially self-sacrificing manner. For example, since most people will not give much money to the poor, C demands that we give nearly all of our money to the poor, to make up for those who won't.
This makes C seem very unfair. An alternative theory is collective consequentialism (CC), which tells us to do what we ideally ought to do if everyone followed CC.
Where C tells us to give nearly all of our money to charity, CC tells us to donate only what money we would donate if everyone gave their fair share. This would doubtless be a larger amount for the very rich than for the very poor, but it would also doubtless be much less than C would demand.
If we act wrongly, but in accordance with motives that are right for us to have, then this is an example of moral immorality or blameless wrongdoing. This is comparable to rational irrationality, discussed in §5. "In such a case, it is the act and not the agent that is immoral."
It might be impossible to avoid acting wrongly. However, some wrong actions are blameless. According to C, we can always avoid acting in a blameworthy manner.
It can be right to cause oneself to gain a disposition to act wrongly. This does not make the actions consequent on that disposition right.
C could be partly self-effacing, and partly esoteric, indicating that some special group should continue to believe C, while others should believe some other theory. This might be the case if outcomes would tend to be better if people believed the other theory, but some people continued believing C as a check against unexpected disastrous developments that might make outcomes worse if people continued to believe the alternate theory.
Collective Consequentialism could not be partly self-effacing in this way, because CC is by definition best if everyone believes it.
C could also be wholly self-effacing (though Parfit finds this unlikely). It could tell us that outcomes would be best if everyone believed not-C. But this global belief in not-C could be brought about by everyone first believing C, so even in this case C would not be useless, but instrumental in bringing about the best course of events.
"Suppose that Satan rules the Universe. [...] Since we are the mere toys of Satan, the truth about reality is extremely depressing."
If Satan ensures that outcomes are perverse, e.g. that believing the best theory has bad results, that trying to be truthful results in deceit, etc., then it would be better to believe and attempt other things. What we ought to cause ourselves to believe is not necessarily the same as what is true.
If it happens that S or C is the best theory, belief in this theory could lead to bad effects. But since these theories would not then tell us to believe in them, these effects are not a result of following these theories. So this fact would not necessarily be a problem for S or C being the best theory.
Yes (though it need not be so).
It may be moral to keep promises, but it's not better to make and keep meaningless promises for things that you would do anyway. The morality of keeping promises is a mere means of achieving the best course of events.
Similarly, being rational may be a mere means of ensuring that one's life goes as well as possible, but it's still important in that case.
This is a summary of the preceding nineteen sections. Read it again if a review of the chapter is needed.
Parfit gives an example of a two-person scenario where it is possible to get 'stuck' in a suboptimal position, because neither agent can independently (i.e. by that agent's sole action) improve the outcome. He denies that this is a problem for C, because while being stuck in that position is successfully following C, it would also be successfully following C to be in the optimal configuration, and so C is not forcing worse outcomes.
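The 'stuck' structure can be sketched as a small payoff table (the numbers here are hypothetical, not Parfit's): two outcomes are stable under unilateral deviation, but one is better for both agents.

```python
# Hypothetical two-person payoffs illustrating being 'stuck':
# payoffs[(choice1, choice2)] = (payoff to agent 1, payoff to agent 2)
payoffs = {
    ("A", "A"): (3, 3),  # the optimal configuration
    ("A", "B"): (0, 0),
    ("B", "A"): (0, 0),
    ("B", "B"): (1, 1),  # the suboptimal position in which we can get 'stuck'
}

def can_improve_alone(x, y):
    """True if either agent can do better for themselves by switching alone."""
    base1, base2 = payoffs[(x, y)]
    for alt in ("A", "B"):
        if payoffs[(alt, y)][0] > base1:  # agent 1 switches alone
            return True
        if payoffs[(x, alt)][1] > base2:  # agent 2 switches alone
            return True
    return False

print(can_improve_alone("B", "B"))  # False: neither agent alone can escape
print(can_improve_alone("A", "A"))  # False: the good outcome is also stable
```

Both (B, B) and (A, A) are consistent with each agent successfully pursuing the common aims; nothing in the theory itself forces the worse of the two.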
He argues that C is agent-neutral, giving common aims to all, and:
If we cause these common aims to be best achieved, we must be successfully following this theory. Since this is so, it cannot be true that we will cause these aims to be best achieved only if we do not follow this theory.
S cannot be directly individually self-defeating, but it can be directly collectively self-defeating. An example like the prisoner's dilemma is given.
So-called "true" prisoner's dilemmas are rare, but there are a variety of realistic situations where "if each rather than none does what will be better for himself, this will be worse for everyone."
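The structure of such dilemmas can be made concrete with standard (hypothetical) prisoner's-dilemma numbers: whatever the other person does, E is better for me than A, and yet if each rather than none chooses E, the outcome is worse for everyone.

```python
# Standard prisoner's-dilemma payoffs (numbers are illustrative):
# payoff[(my_choice, other_choice)] = benefit to me.
# E is the more egoistic choice, A the more altruistic one.
payoff = {
    ("E", "E"): 1,  # each does what is better for himself...
    ("E", "A"): 3,
    ("A", "E"): 0,
    ("A", "A"): 2,  # ...yet both would do better if both chose A
}

# Whatever the other does, E is better *for me* than A:
for other in ("E", "A"):
    assert payoff[("E", other)] > payoff[("A", other)]

# Yet if each rather than none chooses E, this is worse for everyone:
assert payoff[("E", "E")] < payoff[("A", "A")]
```

Individual self-interest thus leads each to E, even though all would prefer the outcome where all choose A.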
In a choice between the more egoistic choice, E, and the more altruistic choice, A, a person might do A:
The first two are political solutions: a law might be passed, or some other arrangement made to accomplish them.
The last three are psychological solutions. Some examples of such solutions:
We might become trustworthy. Each might then promise to do A on condition that the others make the same promise.
We might become reluctant to be 'free-riders'. If each believes that many others will do A, he may then prefer to do his share.
We might become Kantians. Each would then do only what he could rationally will everyone to do. None could rationally will that all do E. Each would therefore do A.
We might become more altruistic. Given sufficient altruism, each would do A.
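The last solution can be sketched numerically (again with hypothetical prisoner's-dilemma payoffs): if each agent gives enough weight w to the other's benefit, A becomes the better choice for each, whatever the other does, and the dilemma dissolves.

```python
# Illustrative prisoner's-dilemma payoffs; payoff[(mine, other)] = benefit to me.
payoff = {("E", "E"): 1, ("E", "A"): 3, ("A", "E"): 0, ("A", "A"): 2}

def utility(mine, other, w):
    """Altruism-adjusted utility: my payoff plus weight w times the other's."""
    return payoff[(mine, other)] + w * payoff[(other, mine)]

def best_choice(other, w):
    """The choice that maximizes my altruism-adjusted utility."""
    return max(("E", "A"), key=lambda c: utility(c, other, w))

# With no altruism (w = 0), E is better whatever the other does:
assert best_choice("E", 0.0) == "E" and best_choice("A", 0.0) == "E"
# With sufficient altruism (here, any w > 0.5), each would do A:
assert best_choice("E", 1.0) == "A" and best_choice("A", 1.0) == "A"
```

With these particular numbers the threshold is w > 0.5; the general point is only that sufficient altruism reverses which choice is individually best.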