Newcomb's problem

Newcomb's problem was presented by Robert Nozick in his 1969 paper, "Newcomb's Problem and Two Principles of Choice". The problem itself was devised by Dr. William Newcomb of the Livermore Radiation Laboratories in California; Nozick first heard it in 1963.

The problem, as stated by Nozick, runs thus:

Suppose a being in whose power to predict your choices you have enormous confidence. (One might tell a science-fiction story about a being from another planet, with an advanced technology and science, who you know to be friendly, etc.) You know that this being has often correctly predicted your choices in the past (and has never, so far as you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices of other people, many of whom are similar to you, in the particular situation to be described below. One might tell a longer story, but all this leads you to believe that almost certainly this being's prediction about your choice in the situation to be discussed will be correct.

There are two boxes, (B1) and (B2). (B1) contains $1000. (B2) contains either $1000000 ($M), or nothing. What the content of (B2) depends upon will be described in a moment.

You have a choice between two actions:

  1. taking what is in both boxes
  2. taking only what is in the second box.

Furthermore, and you know this, the being knows that you know this, and so on:

  1. If the being predicts you will take what is in both boxes, he does not put the $M in the second box.
  2. If the being predicts you will take only what is in the second box, he does put the $M in the second box.

The situation is as follows. First the being makes its prediction. Then it puts the $M in the second box, or does not, depending upon what it has predicted. Then you make your choice. What do you do?

Nozick notes:

I should add that I have put this problem to a large number of people, both friends and students in class. To almost everyone it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly.

In Nozick's view, the problem is in the nature of the predictor. If you believe that there is backwards causality (i.e. that your choosing one or two boxes causes the predictor to place or not place money in box B2), or that the predictor works by seeing the future and is therefore really infallible (which amounts, I think, to backwards causality), then you should take only one box.

If, on the other hand, you believe that by the time you make your choice, the predictor is already done and there is no backwards causality, then you should take both boxes.

Why is the problem so controversial? Nozick opines that it is because the explanation of the final states (i.e. there is money in box B2, or not) refers to (the being's belief about) the actions you take. Even though your actions do not actually alter the probability that one state or the other obtains, it seems as though they should.
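
To see why both camps find their answer obvious, it helps to put rough numbers on the two lines of reasoning. The sketch below is my own illustration, not something from Nozick's paper; it assumes a predictor that is right 99% of the time and uses Nozick's payoffs.

    # Assumed numbers: a 99%-accurate predictor and Nozick's payoffs.
    ACCURACY = 0.99
    SMALL, MILLION = 1_000, 1_000_000

    # Reasoning that conditions on your own choice (the one-boxer's instinct):
    eu_one_box = ACCURACY * MILLION + (1 - ACCURACY) * 0
    eu_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (MILLION + SMALL)
    print(f"one-box: ${eu_one_box:,.0f}")   # about $990,000
    print(f"two-box: ${eu_two_box:,.0f}")   # about $11,000

    # Dominance reasoning (the two-boxer's instinct): whatever is already
    # in the second box, taking both boxes pays $1,000 more.
    for b2_contents in (0, MILLION):
        assert b2_contents + SMALL > b2_contents

Both calculations are internally consistent; the dispute is over which of them describes the situation the chooser is actually in.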

Nozick suggests a variety of similar problems that might (or might not) similarly confuse, as he works to isolate the features of Newcomb's problem that make it so challenging. The paper is well worth reading.

One-box or two-box?

Different authors have come down on different sides of the argument, often (as Nozick noted) clarifying for us that the answer is actually obvious. Some of these are examined in more detail below.

A survey of professional philosophers conducted in 2009 found that 31.4% two-box, 21.3% one-box, 23.5% were insufficiently familiar with the problem to answer, and 13.3% were undecided (Bourget & Chalmers, 2014).

One-boxers

  • Bar-Hillel and Margalit argue that one-boxing correlates strongly with becoming a millionaire, and "though we do not assume a causal relationship, there is no better alternative strategy than to behave as if the relationship was, in fact, causal" (Bar-Hillel & Margalit, 1972).
  • Łukowski essentially argues that the problem has backward causality, so you should clearly one-box (Łukowski, 2011).

Two-boxers

  • Sorensen concludes that "given that there is a best choice, the two-boxer is right" (Sorensen, 1983).
  • Swain argues that those who one-box are simply making mistakes in their reasoning, and so Newcomb's problem doesn't really reveal any interesting conflict in rationality (Swain, 1988).

Other

  • Wolpert suggests that the problem is not well-posed, and proposes two interpretations, one that recommends one-boxing and one that recommends two-boxing (Wolpert & Benford, 2011).

Some responses

Swain on mistaken arguments for one-boxing (1988)

Swain argues that those who one-box are simply making mistakes in their reasoning, and so Newcomb's problem doesn't really reveal any interesting conflict in rationality. To reach this conclusion, Swain also asserts: "We should add that you do not believe in backwards causation, nor do you believe that you are infallible or that the predictor is infallible" (Swain, 1988).

Swain characterizes three lines of faulty reasoning:

  1. If the predictor is reliable, I can make it more likely that there is a million dollars in the opaque box by taking one box. Therefore, I should take one box.
  2. If the predictor is reliable, then I make it less likely that the million dollars is in the opaque box by deciding to take two boxes. Therefore, I should not decide to take two boxes.
  3. People who take one box get rich. I want to be just like them. Therefore, I should do what they did, namely, take one box.

I don't think this really exhausts the space of arguments for one-boxing, but I'll leave that aside.

The first two arguments are countered by pointing out that the prediction has already been made, and there is by hypothesis no backwards causation. The most interesting observation here is that the "make it more likely" in each of these arguments properly refers to the chooser's confidence about what prediction the predictor made. In other words, you "make it more likely" in the sense that it would be rational for you to bet on the issue at greater odds, but not in the sense that you in any way affect the real probability of there being a million dollars in the box (which is already settled at that point).
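
The distinction can be made concrete with a small Bayesian sketch. This is my own illustration, not Swain's; it assumes a 99%-accurate predictor and a 50/50 prior over which choice it predicted.

    # Assumed numbers: a 99%-accurate predictor, 50/50 prior over its prediction.
    ACCURACY = 0.99
    PRIOR_PREDICTED_ONE_BOX = 0.5

    def credence_million_given(choice):
        """P(the $M is in the opaque box | my choice), by Bayes' rule."""
        # The $M is in the box exactly when the predictor predicted one-boxing.
        like_if_predicted_one = ACCURACY if choice == "one-box" else 1 - ACCURACY
        like_if_predicted_two = 1 - ACCURACY if choice == "one-box" else ACCURACY
        joint_one = PRIOR_PREDICTED_ONE_BOX * like_if_predicted_one
        joint_two = (1 - PRIOR_PREDICTED_ONE_BOX) * like_if_predicted_two
        return joint_one / (joint_one + joint_two)

    print(credence_million_given("one-box"))   # roughly 0.99
    print(credence_million_given("two-box"))   # roughly 0.01

Your choice moves this credence (and hence the odds at which you should bet) from about 0.01 to about 0.99, but nothing in the calculation touches the box itself, whose contents were fixed before you chose.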

Swain dismisses the third argument because it does not attribute becoming a Newcomb millionaire to the correct cause (and, by the time one is using this reasoning to decide how many boxes to open, it is already too late).

In my view, Swain fails to adequately refute the third argument. One assumes that all of us reading the paper have yet to be presented with a Newcomb choice, and (perhaps) the prediction for our personal Newcomb choice has yet to be made. So by accepting the reasoning of the third argument, even to the point of acting on it and one-boxing, we would in fact causally accomplish gaining a million dollars, and by rejecting the argument to the point of two-boxing we would in fact causally accomplish losing the million.

Swain's contention is that a person who accepts this line of reasoning after the prediction has been made and therefore one-boxes is leaving a thousand dollars on the table, and so behaving irrationally. That's true, but it really doesn't address the force of the argument.

What if Newcomb's problem were set up in the usual way, except that we do not know precisely when the prediction is made? It might have been made a week ago, or before we were born, or it might only be made so soon before our choice is made that there is nearly no chance of its being altered. Crucially, no matter which of these is the case, people who accept and follow the third line of reasoning will almost certainly do better than those who reject it. Since Swain doesn't deal with this, the refutation is not convincing.

In short, Swain contributes very little to the discussion beyond what was already in Nozick's original essay.

Hurley on cooperative reasoning (1994)

Hurley argues that rather than reading one-boxers as applying evidential reasoning ("if you're so smart, why aren't you rich?"), we might understand them as appealing to cooperative reasoning: they view Newcomb's problem as akin to a prisoners' dilemma, in which the (imagined) preference orderings of the predictor and the chooser make one-boxing (and getting the million dollars) the second-best outcome for each (Hurley, 1994).

Wolpert and Benford on probabilistic structure (2011)

Wolpert and Benford argue that the apparent difficulty in Newcomb's problem is that the game is not well defined. They show how the two conflicting solutions of the problem stem from conflicting conceptions of the probabilistic structure of the game. Once the structure is specified, the solution is well-defined.

Call the predictor's guess g, and your choice y.

The one-boxer, called Fearful by W&B, conceives of the probability distribution as P(y, g) = P(g | y)P(y). Here, you control P(y) (this represents your free will) and the predictor controls P(g | y) (effectively, the accuracy of its prediction). In this case, one-boxing will maximize the expected utility, given a sufficiently accurate predictor.

The two-boxer, called Realist by W&B, instead conceives of the relevant probability as P(y, g) = P(y | g)P(g). Here, you control P(y | g), and expected utility is maximized by two-boxing, regardless of P(g).
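
The difference between the two camps can be spelled out in a rough sketch of the two factorizations. The particular numbers below (99% accuracy for Fearful, a 50/50 P(g) for Realist) are assumptions of mine, not W&B's.

    # Assumed numbers; the factorizations follow W&B's description above.
    MILLION, SMALL = 1_000_000, 1_000
    ACCURACY = 0.99

    def payoff(y, g):
        """Money received given your choice y and the predictor's guess g."""
        b2 = MILLION if g == "one" else 0          # $M is there iff it predicted one-boxing
        return b2 + (SMALL if y == "two" else 0)   # two-boxing always adds the $1000

    # Fearful: P(y, g) = P(g | y) P(y).  You fix y; the guess then tracks it.
    def eu_fearful(y):
        other = "one" if y == "two" else "two"
        return ACCURACY * payoff(y, y) + (1 - ACCURACY) * payoff(y, other)

    # Realist: P(y, g) = P(y | g) P(g).  The guess is fixed first; then you choose y.
    def eu_realist(y, p_g_one=0.5):
        return p_g_one * payoff(y, "one") + (1 - p_g_one) * payoff(y, "two")

    print(eu_fearful("one"), eu_fearful("two"))   # roughly 990000 vs 11000: one-box
    print(eu_realist("one"), eu_realist("two"))   # 500000 vs 501000: two-box, for any P(g)

Under the Fearful factorization the guess tracks the choice, so one-boxing wins by a wide margin; under the Realist factorization the extra $1000 wins for any P(g), which is exactly the split between the two solutions described above.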

W&B characterize the predictor (in the article's abstract) as "an antagonist" who "guarantees that you made the wrong choice" (Wolpert & Benford, 2011).

Łukowski's 'solution' (2011)

Łukowski presents this problem in a slightly (but importantly) altered form. In place of Nozick's "being" who is "almost certainly" correct in its judgment, Łukowski substitutes an "infallible predictor" who "knows […] in advance" what choice we will make. Our choice, Łukowski asserts, directly causes the money to be in the box (or not), no different from pressing a button to activate a machine that places money in the box. So one-boxing is the obviously correct choice (Łukowski, 2011).

Having thus rendered the problem toothless, Łukowski concludes that "there is nothing interesting in Newcomb's paradox from the point of view of rational decision making". And:

As we can see, introducing a fairy tale assumption leads to fairy tale effects. There is nothing surprising, and so nothing paradoxical, that strange assumptions generate strange conclusions.

This is disappointing. Nozick already considered this reading forty years earlier, and the wealth of literature produced in the intervening decades suggests that the problem really is not so simple as Łukowski makes it out to be.

The Predictor is impossible

Bar-Hillel and Margalit argue that such an extremely reliable predictor is the source of our difficulty: "Can any situation lead a rational man to simultaneously believe that the Being plays his move, irrevocably, prior to your move and yet that the probability of there being a million dollars in the covered box is different if you play one strategy than if you play another?" This would be difficult to believe: "You would suspect foul play. You would suspect that you are being taken." (Bar-Hillel & Margalit, 1972, p. 300)

Schlesinger agrees: "Our proof can be extended to show that not only is a perfect predictor impossible but that no predictor with the slightest competence can exist either." (Schlesinger, 1976, p. 224)

Alternative versions

A common way to attempt to shed light on the problem is to propose alternative puzzles with certain similarities or dissimilarities to Newcomb's problem.

Many authors will specify that there is no backward causation (or at least that you don't believe in it), or alter the amount of money or the accuracy of the predictor to show how this affects our reasoning. I won't list such examples here.

  • Nozick listed a variety of alternative formulations in his original essay. (Nozick, 1969)
  • Bostrom proposes a meta-Newcomb problem with a Meta-predictor that predicts the actions of the Predictor. (Bostrom, 2001)
  • Sorensen proposes a (reliable) predictor that places money in a blue box if it predicts you'll take the brown one, and vice versa. (Sorensen, 1983)

Bibliography

Bar-Hillel, M., & Margalit, A. (1972). Newcomb’s Paradox Revisited. The British Journal for the Philosophy of Science, 23(4), 295–304. https://doi.org/10.1093/bjps/23.4.295
Bostrom, N. (2001). The Meta-Newcomb Problem. Analysis, 61(4), 309–310. https://doi.org/10.1111/1467-8284.00310
Bourget, D., & Chalmers, D. J. (2014). What do philosophers believe? Philosophical Studies, 170(3), 465–500. https://doi.org/10.1007/s11098-013-0259-7
Hurley, S. L. (1994). A new take from Nozick on Newcomb’s problem and prisoners’ dilemma. Analysis, 54(2), 65–72. http://www.jstor.org/stable/3328823
Łukowski, P. (2011). Paradoxes. Springer Netherlands. https://doi.org/10.1007/978-94-007-1476-2
Nozick, R. (1969). Newcomb’s problem and two principles of choice. In Essays in honor of Carl G. Hempel (pp. 114–146). Springer. https://doi.org/10.1007/978-94-017-1466-2_7
Schlesinger, G. (1976). Perfect diagnosticians and incompetent predictors. Australasian Journal of Philosophy, 54(3), 221–230. https://doi.org/10.1080/00048407612341231
Sorensen, R. A. (1983). Newcomb’s Problem: Recalculations for the One-Boxer. Theory and Decision, 15(4), 399–404. https://doi.org/10.1007/BF00162115
Swain, C. G. (1988). Cutting a Gordian knot: The solution to Newcomb’s problem. Philosophical Studies, 53(3), 391–409. https://doi.org/10.1007/BF00353513
Wolpert, D. H., & Benford, G. (2011). The lesson of Newcomb’s paradox. Synthese, 190(9), 1637–1646. https://doi.org/10.1007/s11229-011-9899-3