The Monty Hall Problem

Yeah, the tricky part about probability is that it depends on counterfactuals: things that could have happened but didn’t can still affect the probability, depending on your knowledge of the other person’s behavior or the world’s behavior.

If that seems crazy in the abstract, it’s because human brains are bad at abstract math. And they’re bad at reasoning about cases that don’t happen in real life. But there are a lot of specific real-life cases where it’s pretty intuitive, actually. Suppose a particular friend doesn’t call you today…

  • If you weren’t expecting a call, then that’s business as usual, nothing surprising.
  • If they normally call and you chat daily and they basically never skip, then them skipping might lead you to think that they’re unusually busy or that something is wrong.

So the same literal observation (no call) gives you a very different conclusion based on your knowledge of the other person’s behavior.

So in the case of Monty Hall, it turns out:
A In the standard version, if Monty always opens one door that doesn’t have the car (taking advantage of his knowledge of where the car is), then switching is better, 2/3 to 1/3.
B If Monty opens doors randomly (say, he forgets where the car is and takes a guess himself) and just happens not to reveal the car in a particular case, then switching doesn’t help: it’s 1/2 to 1/2.
C If Monty is out to fool the people who have memorized the standard answer, and only ever offers the option to switch when you initially pick the correct door (if you initially pick the wrong door, he just makes you open it and that’s the end of the game), then in the situation where you’re offered the switch, switching is bad: it’s 0 to 1.

In all three cases A, B, and C, a particular run might leave you facing the same thing: being offered a switch after seeing a door with a goat opened. But the probabilities are all different because, as in the simple phone-call case, what you infer from a given observation changes based on what you know about the other person’s behavior. In case C, obviously, if you’re facing the choice to switch and you know with certainty that Monty behaves this way, then you definitely shouldn’t switch.
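If it helps, here is a quick Monte Carlo sketch of the three behaviors (the function names are just mine, nothing from the thread). It only counts the runs where the familiar situation actually arises, i.e. a goat is shown and a switch is offered:

```python
import random

def monty_trial(behavior, rng):
    """Play one game under behavior "A", "B", or "C".

    Returns True/False for whether switching would win, or None when
    the familiar situation (goat shown, switch offered) never arises:
    in B when Monty accidentally reveals the car, in C when you
    initially picked a wrong door.
    """
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    unpicked = [d for d in doors if d != pick]
    if behavior == "A":    # knowingly opens a door without the car
        opened = rng.choice([d for d in unpicked if d != car])
    elif behavior == "B":  # opens an unpicked door completely at random
        opened = rng.choice(unpicked)
        if opened == car:
            return None
    else:                  # "C": offers the switch only if you picked the car
        if pick != car:
            return None
        opened = rng.choice(unpicked)
    switch_to = next(d for d in doors if d not in (pick, opened))
    return switch_to == car

def estimate_switch_odds(behavior, n=200_000, seed=0):
    rng = random.Random(seed)
    outcomes = [r for r in (monty_trial(behavior, rng) for _ in range(n))
                if r is not None]
    return sum(outcomes) / len(outcomes)

for b in "ABC":
    # A should come out near 2/3, B near 1/2, C exactly 0
    print(b, round(estimate_switch_odds(b), 3))
```

Same observation on every counted run, three different answers, exactly as described above.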


If I’m sure that switching doesn’t lower the chances, then switching is the only logical option. Nothing whimsical about it.

But you don’t know whether it increases your chances, so you do it pretty much in the hopes that it increases your chances, and not in the knowledge that it increases your chances.

Hence, you can explain that your strategy is at least as good, but not that it is better than not switching.

To have a non-whimsical case for switching, you have to be convinced that switching offers greater chance of success than sticking, no?

Am I at fault for considering sticking as the default?

Basically it translates to P(A) <= P(B). Given that, anyone would choose B, right?

Yes, but if it turns out that P(A) = P(B), then switching was whimsical, right? In the sense that someone who genuinely understands the problem would’ve just as easily not switched.

Not given the available information at that time.

You mean your epistemic evaluation of the information, since there is enough information to actually be sure about what the probabilities are :stuck_out_tongue:

But why are we making the assumption that A, B, and C have equal probability?

Wouldn’t it be just as reasonable to assume, for instance, that B is less probable than A and C because Monty is a professional and isn’t going to forget where the car is?

Deducing P(A) = P(B) requires complete information on how the door to be opened was chosen, which I did not assume to have.

We aren’t assuming cases A, B, C are equally likely. In real life, if you don’t know how the other person behaves with certainty, all you can do is take your best guess. There’s no purely logical or mathematical solution, because the real world is messy that way, when you don’t have perfect information.

If the problem specifies that a person always behaves in a certain way though, then that determines it. So, if the problem specifies that Monty for sure behaves like A, you definitely should switch. If it specifies that Monty for sure behaves like C, then you definitely shouldn’t. If it specifies he behaves like B, then it doesn’t matter.

And if the problem doesn’t take careful pains to specify one of A,B,C or another precise behavior, then the problem is underspecified and doesn’t have a purely mathematical solution - it’s back to being like most of real life, where there’s not for sure a right (or at least, an exactly-numerically-calculable) answer.


So basically there is no clear solution, unless the problem clearly defines the behaviour pattern (which corresponds to a probability space).

I have yet to see a formulation of the problem where this is clearly given. Perhaps the reason is that the whole problem is designed to be astonishing.


My problem is still that the solution seems to rely on data that is not actually implied by the setup.

I don’t believe that the assumption that A, B, and C are equally likely is derived from the data given.

Compare this to a tsumego problem. There are several tsumego conventions, like:

  • escape from the puzzle area is considered life
  • ladders are good for the defender
  • the attacker has more ko threats

These are conventions and aren’t actually implied by the tsumego. I see this logic puzzle the same way: the Monty Hall problem relies on a “Monty Hall convention” of treating A, B, and C as equally likely, and that is not expressed in the problem.

It’s an interesting puzzle, yes, but it appears to me to be flawed for that reason.


It isn’t. A, B, and C are just cases hexahedron made up to talk about real life and such. They’re not from the puzzle.

Or what are we even talking about?

If you read the OP, it featured stylized dialogue in which Monty claims to behave like case A, so switching is correct if you trust him. But yes, if you want to be technical, it’s missing a few things to be 100% precise.

  • The OP is missing a statement like “Monty is always truthful and honest”. Just because someone claims they behave in some way doesn’t mean they will, so Monty’s stylized dialogue doesn’t tell you for sure behavior A is implied, except by convention.
  • The OP is missing a statement like “the car was randomly placed behind one of these 3 doors, each with equal probability”, which is also assumed by convention, and indeed the forum post format itself didn’t allow for such randomization. This doesn’t matter if you also uniformly randomize your own initial choice, but it is a technical point (and it could truly matter in a given problem setup if, for example, you don’t rule out that the car was placed by mind-reading omniscient aliens who could predict which door you’d choose).

A In the standard version, if Monty always opens one door that doesn’t have the car (taking advantage of his knowledge of where the car is)

Monty: (…) behind one of [the doors], I’ve hidden a brand new car!

Wait, is the crux of the problem that we have to take into account that Monty might be lying by omission and that there could be a second car?


I did not consider this possibility. Does that mean that, by a worst-case analysis, it is best to not switch, as this should in the worst-case yield a 1/3 probability of winning?

I’m still not sure what you guys are talking about. In the original problem everything is fair; you’re told honest information about how the game is played, because it’s a math problem. And switching gives better odds of winning, even though it might not feel that way. That’s the trick.

@martin3141 not you too. Of course it doesn’t, because we could just as well make up a scenario where Monty is generous and offers you the switch only when you initially pick the incorrect door. If we assume the set-up of the problem contains lies, we can’t make any analysis whatsoever.

Anyway, back to the original problem. Of course, we might argue about what a “genuinely understanding” solution means. Rationally, once you’ve written out all the possible variations like Vso did, it becomes obvious. To make it intuitive, I think it’s useful to realize that all that monkeying with choices is a ruse. Underneath it you have two bets:

  1. Betting that the prize is behind the one door you chose.
  2. Betting that the prize is behind any of the other N-1 doors (where N is the total number of doors).

And then it becomes intuitive that the second is preferable, which corresponds to switching.
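To make the two-bets framing concrete, here is a tiny sketch (names are mine) for the standard case-A game with N doors, where Monty opens N-2 goat doors among the ones you didn’t pick:

```python
from fractions import Fraction

def switch_win_probability(n_doors):
    """Bet 1 (keep): the car is behind your door, probability 1/N.
    Bet 2 (switch): the car is behind one of the other N-1 doors,
    probability (N-1)/N. A case-A Monty opens N-2 goat doors among
    them, funneling all of bet 2's probability onto the single
    remaining closed door."""
    return Fraction(n_doors - 1, n_doors)

print(switch_win_probability(3))    # 2/3
print(switch_win_probability(100))  # 99/100
```

With 100 doors the intuition is much harder to resist: keeping is a 1-in-100 bet, switching is 99-in-100.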

But the confusion the problem causes to our intuition, I feel, is similar to this classic:


Just for fun, cases analogous to A, B, and C presented in a much more extreme way, and worded and stylized to deliberately try to evoke the correct intuitions for each case.

“Here’s a big heavy bucket full of sand. Exactly one grain of sand somewhere in the entire bucket is colored bright ruby red instead of brown like all the others, but is identical in all other ways. You’re going to be blindfolded and we’re going to play a game, your goal is to end up with the ruby sand grain.”


Blindfolded, you’ll take one grain of sand out of the bucket. Then, still blindfolded, you’ll get the choice between:

  • Keep that one grain you took.
  • Instead take the entire remaining bucket, with all the sand other than that one grain you separated out.

“Oh, by the way: after you take the one grain out, if the bucket does still contain the ruby sand grain at that point, I’ll help you out by taking out all the sand in the bucket other than the ruby grain for you! That way, if you do decide to pick the whole bucket, you don’t have to dig around in it yourself for hours trying to find that one ruby grain. Also, if you really want, I guess I could let you keep all the regular sand in the bucket too. I’m just separating it out for you, but it’s still yours if you choose the bucket. Unlike goats, a bucket of brown sand isn’t worth so much to me. :stuck_out_tongue: ”

“And it’s probably not gonna happen, but: in the one-in-a-gazillion chance that you do take out the ruby grain from the bucket, I guess I’ll still go through the motions of taking out all the sand from the bucket except one ordinary grain, in case you can hear the difference between me doing that or not while blindfolded.”

You know Monty’s honest. When offered the choice, do you keep the sand grain you took at first, or do you instead choose the whole remaining bucket of sand?


Blindfolded, you’ll take one grain of sand out of the bucket. Then, Monty is going to stick a vacuum hose into the bucket and randomly vacuum up all the sand in the rest of the bucket (almost surely vacuuming up the ruby grain, leaving you no chance to win), but stopping when the bucket only has one grain left. Still blindfolded, you get to choose:

  • Keep that one grain you took.
  • Instead take the one remaining grain of sand in the bucket that wasn’t vacuumed.

“Haha, you’re never gonna win this! I’m gonna vacuum up that ruby grain and you’ll never have a shot at it!”

(after vacuuming, as Monty watches the sand flow out of the vacuum hose over on his end)
“HOLY MOLY! There’s no ruby grain in here!”

It seems luck’s on your side. Well, either that, or luck is just teasing you with hope, only to crush it in a moment. There are only two grains that aren’t vacuumed, the one you took out at first, and the one that the vacuum missed in the bucket, and you still have to guess right. Are you feeling lucky?


Blindfolded, you’ll take one grain of sand out of the bucket. In the absurdly unlikely event that it’s the ruby grain, Monty will go through the whole rigamarole like in situation A above to try to convince you to switch to the whole remaining bucket and therefore lose. If you take out an ordinary grain, right after you take it out, he’ll laugh and say you lose, giving you no further choices, and that’s the end of it.
You know for an absolute ironclad fact that this is how things are planned to go.

You take the grain, and somehow, amazingly, Monty starts rattling off his spiel about how now he’s helpfully separating out the grains in the other bucket for you, and would you rather have the whole remaining bucket rather than that one grain you picked out blindfolded, and so on.

Do you keep the sand grain you took, or do you switch to take the bucket?


A tricky aspect of applying probability theory is that in order to capture many situations for proper decision making, we must use the theory to model not only the inherent physical randomness of a situation, but also the uncertainty of our perspective. The counter-intuitive thing is that this perceptual uncertainty can shift as more information is revealed. To formally model such shifts, auxiliary conditioning variables representing these views should often be used.

There are interesting philosophical questions of the boundaries between aleatoric and epistemic uncertainty. There are even physical questions about whether inherent physical randomness even exists. We may treat the eventual outcome of a thrown die flying through the air as random, even though how it will eventually come to rest is predetermined by the initial conditions of its flight and environment.

If we play a game where a die has already been fairly rolled but remains concealed (say, under a cup) such that no one could see it, then it is still sufficient for decision making to view that already-rolled die as “random”, even though it is now in a deterministic (but unknown) state. Even if someone else (not involved in the game) peeked at the die but kept that secret, we’d still keep the same distributional model for that die. However, if that person revealed something about the die (like saying whether the result was even or odd), then we’d have to revise our uncertainty model, even though nothing has changed about the state of the die or the random process that originally generated it.
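The even/odd reveal can be written out as a one-step Bayes update. A minimal sketch (the names here are mine, not anyone’s actual code):

```python
from fractions import Fraction

# Prior: a fair die, already rolled but concealed: uniform over 1..6.
prior = {face: Fraction(1, 6) for face in range(1, 7)}

def condition(dist, predicate):
    """Zero out the outcomes ruled out by the revealed fact, then
    renormalize. Nothing about the die itself changes, only our model."""
    total = sum(p for x, p in dist.items() if predicate(x))
    return {x: p / total for x, p in dist.items() if predicate(x)}

# Someone reveals only that the result was even:
posterior = condition(prior, lambda face: face % 2 == 0)
print(posterior)  # 2, 4 and 6 each now carry probability 1/3
```

Monty opening a goat door is the same kind of move: a `condition` step applied to our model of where the car is, with the predicate determined by his behavior.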

In the Monty Hall problem, the crux is that Monty opening a door to reveal a goat is providing information that should force us to revise our uncertainty model.

Various people above (and the wikipedia article) have already given valid explanations from different perspectives, and I don’t want to just rehash those, but offer yet another perspective instead.

I think it’s straightforward to see that an “always keep” strategy yields a 1/3 chance of finding the car: revealing the goat behind another door does not change the odds that the initial pick was right.

Another strategy that we have not yet considered is flipping a fair coin to decide between switching and keeping. With this strategy, since there are only two doors left, the odds of finding the car improve to 1/2.

Even though direct analysis (as explained by many above) shows that we have a 2/3 chance of winning by “always switching”, we could instead deduce these odds from the odds of the “always keep” and “coin flip” strategies.

The odds for the coin-flip strategy equal (1/2 times the odds of the “always keep” strategy) plus (1/2 times the odds of the “always switch” strategy). Writing p for the “always switch” odds: 1/2 = (1/2)(1/3) + (1/2)p, so p = 2/3. The odds of winning under the “always switch” strategy must then be 2/3.
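Spelled out with exact fractions, just as a verification sketch of that arithmetic:

```python
from fractions import Fraction

p_keep = Fraction(1, 3)  # “always keep”: the initial pick is right 1/3 of the time
p_coin = Fraction(1, 2)  # “coin flip”: two doors left, each chosen half the time
# p_coin = 1/2 * p_keep + 1/2 * p_switch, so solving for p_switch:
p_switch = 2 * (p_coin - p_keep / 2)
print(p_switch)  # 2/3
```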