Sunday, August 30, 2009

Further Reflections on "Let's Make a Deal"

I recently (August 25) alluded to the "Let's Make a Deal" problem (also known as the Monty Hall problem). It goes like this: There are three doors. Behind two of the doors are worthless prizes and behind one a valuable prize. The Wikipedia article linked above refers to these as goats and a car, respectively. The contestant chooses one door, and then Monty Hall, who knows what is behind each door, opens one of the other doors and shows that there's a goat behind it. He always shows a goat, never the car. He then offers the contestant the choice of sticking with the door previously chosen or switching to the other remaining door. The question is, should you switch? Everyone's intuition is that there's no reason to switch: we already knew that one of the other doors had a goat, so we haven't learned anything. This is wrong. You have a 2/3 chance of winning the car by switching.
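If the logic doesn't persuade you, simulation might. Here's a minimal sketch (the function and variable names are my own) that plays the game many times under each strategy:

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall games; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)       # door hiding the car
        pick = random.randrange(3)      # contestant's first choice
        # Monty opens a door that is neither the pick nor the car
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running `play(True)` comes out near 2/3 and `play(False)` near 1/3: switching wins whenever your first pick was a goat, which happens two times in three.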

When Marilyn vos Savant published this in Parade magazine, it created a huge uproar, with tenured professors of mathematics writing to tell her she was wrong, but she stuck to her guns. (There are a number of different ways of explaining why she's right, no one of which works for everyone. I'll post some if there's demand.)

But what recently struck me is how unusual this is among cases where our intuitions lead us astray in statistics. Typically, the mistake we make is in seeing things as non-random that are actually random. Public-policy examples include cancer clusters and the general problem of whether some policy or treatment is more effective than another. Our natural tendency is to try to find patterns in things, whether they exist or not.

What makes this case unusual is that it's one where our intuitions would be correct if something that is not random actually were random. If Monty Hall chose the second door randomly, and it had a goat, then the probability of the other doors having a car would be 1/2 for each, and there would be no reason to switch. (Of course, a third of the time he'd expose the car, which would tend to kill the suspense.) But somehow our intuitions don't take account of the fact that Monty knows what's behind the doors, even though it's no secret. I don't know of any psychological research on errors like this.
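The random-host variant can be checked the same way. This sketch (again, names are my own) has Monty open an unchosen door at random and discards the rounds where he accidentally exposes the car, since those don't occur on the show:

```python
import random

def random_host(trials=200_000):
    """Monty opens a random unchosen door; condition on a goat showing."""
    switch_wins = stay_wins = valid = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue  # car exposed; discard this round
        valid += 1
        other = next(d for d in range(3) if d not in (pick, opened))
        stay_wins += (pick == car)
        switch_wins += (other == car)
    return stay_wins / valid, switch_wins / valid
```

Both fractions come out near 1/2: once you condition on a lucky goat reveal from an ignorant host, switching confers no advantage, exactly as intuition says.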

Contest: Public-policy examples of the second type of mistake. Prize: kudos only.


  1. I read the Wikipedia entry. It's an interesting illustration of the disconnect between intuition and logic. I just can't convince myself, no matter how many ways it's proven logically, that it makes any difference to switch.

  2. That's the issue, all right. Suggestions:

    1. Take 20 cards from a deck, including the ace of spades. Shuffle them well and lay them out face down. Guess which one is the ace of spades. Clearly your chances of guessing right are very small: 5%. Now ask a friend to look under all the cards and turn over all except the one you chose and one other, so that the ace of spades remains face down. Has your chance of guessing right somehow increased to 50%? If so, how?

    2. Paraphrase of Mike Hecht's suggestion: Use three cards, one of which is the ace of spades. Have your friend choose one blind. Now ask, would you rather turn over that card or both the other two? Surely it's better to choose to turn over two rather than one, since a priori you had no idea which was right. Turn over one of the two cards that isn't the ace, and ask, "Do you still want to choose two rather than one?"

    3. Using Bayes's Theorem, the key insight is that if you choose, say, Door A, then the probability of Monty opening, say, Door B is 1/2 if the car is behind Door A, 0 if it's behind Door B, and 1 if it's behind Door C. Then use Bayes's Theorem to flip the conditional probabilities.
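    Suggestion 3 can be worked out numerically in a few lines (a sketch with my own variable names):

```python
# Bayes's Theorem for Monty Hall: you chose Door A, Monty opened Door B.
prior = {"A": 1/3, "B": 1/3, "C": 1/3}
# P(Monty opens B | car behind each door):
likelihood = {"A": 1/2, "B": 0, "C": 1}
# P(Monty opens B) = 1/3 * 1/2 + 1/3 * 0 + 1/3 * 1 = 1/2
evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
# posterior: A -> 1/3, B -> 0, C -> 2/3, so switching to C doubles your odds
```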

    Tell me if this helps. Then please try the contest, because I have no idea how to answer it.