Daniel Ellsberg* proposed this thought experiment in his 1961 article "Risk, Ambiguity, and the Savage Axioms". It is intended to clarify a problem in decision theory. The majority of theories about how people make decisions when confronted with uncertainty assume that the decision-maker has at least a subjective probability distribution over possible outcomes. But, as Frank Knight pointed out, there's "measurable uncertainty" (also called "risk") which can be represented by numerical probability distributions, and "unmeasurable uncertainty" which can't.

Those who wish to apply some form of decision theory which relies on probabilities (such as expected utility theory) argue that even if people are faced with unmeasurable uncertainty, they will generate a probability distribution with which to make their decisions (even if that probability distribution is pulled out of their ass).

Ellsberg proposed (essentially) the following experiment:
There are two urns.
Urn I contains 50 red balls and 50 black balls.
Urn II contains 100 total red and black balls, in unknown numbers. (So there could be 0 black and 100 red, or vice-versa, or 80-20, etc.)

Now, first suppose you're asked to draw a ball from Urn I (without looking) and to bet $100 on what color you'll draw. Generally people don't care much which color they bet on. Next, suppose you're asked to make a similar bet on Urn II. Most people again won't have a strong preference for one color over the other.

In a second series of bets, you're asked to bet $100 on drawing a black ball, but you get to choose whether you draw it from Urn I or Urn II. Afterward, you're similarly asked to bet $100 on drawing a red ball, again from whichever urn you choose.

In this second series, people tend to draw from Urn I regardless of whether the bet is on drawing a black or a red ball. But that doesn't fit with the decision-maker having a subjective probability distribution over the odds of drawing a red or black ball from Urn II. If they think there are more red than black balls in Urn II, they should prefer Urn I when betting on black but Urn II when betting on red; if they think black outnumbers red, the preferences reverse; and if they think the urn is evenly split, they should be indifferent between the urns. No subjective distribution is consistent with strictly preferring Urn I for both bets, as the sketch below illustrates.
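To make this concrete, here is a minimal Python sketch (the function name and the payoff convention, $100 if the guess is right and $0 otherwise, are my own illustration). For every subjective probability p that a ball drawn from Urn II is red, Urn I cannot be strictly better for both colors:

    # Expected payoff of a bet that pays 100 if we guess the color right
    # and 0 otherwise. p is the bettor's subjective probability that a
    # ball drawn from Urn II is red.
    def ev(urn, color, p):
        if urn == 1:
            return 100 * 0.5                        # Urn I is a known 50/50
        return 100 * (p if color == "red" else 1 - p)

    # No value of p makes Urn I strictly better for BOTH colors:
    # p < 0.5 favors Urn I only for red, p > 0.5 only for black,
    # and p == 0.5 leaves the bettor indifferent.
    for p in [i / 100 for i in range(101)]:
        assert not (ev(1, "red", p) > ev(2, "red", p) and
                    ev(1, "black", p) > ev(2, "black", p))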

This kind of avoidance of truly ambiguous uncertainty is called "uncertainty aversion" in contrast to "risk aversion", which is avoidance of probabilistically defined risk.

* kto9 reminds me that this Daniel Ellsberg is the same fellow who famously leaked the Pentagon Papers. Shows the interesting kind of people associated with the RAND Corporation.

This discussion of the paradox is based on a slightly different formulation: an urn contains 30 yellow balls and 60 other balls, any or all of which may be red (the remainder of the 60 being blue). Given no further information about the distribution of red and blue balls, we are asked in one case (A) to bet on whether a random ball drawn from the urn is (a) yellow or (b) red, and in another case (B) to bet on whether the ball is (c) either red or blue or (d) either yellow or blue. Any bet pays 100 if correct and 0 otherwise. Now, intuition might well inform us that bets a and c are in our best interests, but a simple calculation demonstrates that, regardless of our respective utilities, to maximize expected utility we must choose d if we would choose a, and b if we would choose c.
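That calculation is easy to sketch in Python (treating the payoffs as raw dollar amounts, which suffices here because the fixed utilities u(100) and u(0) enter every comparison only through the common factor u(100) - u(0)):

    # Expected payoff of each bet as a function of r, the unknown number
    # of red balls (0 to 60). There are 30 yellow and 60 - r blue balls,
    # 90 balls in total.
    def expected(bet, r):
        winners = {"a": 30,        # yellow
                   "b": r,         # red
                   "c": 60,        # red or blue
                   "d": 90 - r}    # yellow or blue
        return 100 * winners[bet] / 90

    # Whatever r is believed to be, preferring a to b entails preferring
    # d to c (both reduce to r < 30), so the popular pair a,c cannot
    # maximize expected utility under ANY belief about r.
    for r in range(61):
        assert (expected("a", r) > expected("b", r)) == \
               (expected("d", r) > expected("c", r))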

By thus demonstrating that in this particular case our presumably rational intuition and formal utility theory seem to be at odds with one another, Ellsberg's paradox tempts us to believe that utility theory is flawed. However, upon closer examination it seems the flaw lies either in our formulation of the problem or in our intuition itself. In thinking about our choice of bets, we may project a degree of pessimism onto the problem which leads us to believe that a and c are the optimal bets. Perhaps more convincing, though, is that even in complete ignorance we simply prefer a known-probability bet (as offered by a or c) to one of uncertain probability which we have no reason to believe yields a higher expected utility (as with b or d), and that this need not be irrational.

In exploring the reasoning behind our supposedly incorrect intuitions in this case, we might consider several recastings of the problem which yield different expected utilities. We might suppose that, knowing our strategy beforehand, a malicious adversary fixes the distribution of red and blue balls in the urn so as to lower our expected utility. We might even imagine that if we made multiple bets, such an adversary could use different distributions of balls for cases A and B to further decrease our chances.

Let us consider each possible strategy in the case of an adversarial distribution of red and blue balls. If we choose a and d, the adversary may put 60 red and no blue balls in the urn, giving us an expected value of 100/3 for either bet. If we choose b and c, the adversary may put no red balls in the urn, leaving us with expected values of 0 and 200/3 for the two bets. The "irrational" strategies a,c and b,d, however, leave the adversary no way to reduce our expected value for both bets simultaneously. Additionally, if he is allowed to decide the makeup of the urn after learning whether our bet is for case A or case B, the strategy b,d is also eliminated: given this strategy, the adversary may harm us by choosing no red balls in case A and no blue balls in case B, for expected values of 0 and 100/3, respectively. Against a malicious adversary, then, a,c is the optimal strategy, giving us expected values of 100/3 and 200/3 for the two cases regardless of any adversarial action.
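These case-by-case numbers are easy to check mechanically. Here is a minimal maximin sketch in Python (reusing the expected() function from the earlier snippet, and assuming the adversary fixes the count r of red balls after seeing which pair of bets we have committed to):

    from itertools import product

    def expected(bet, r):
        winners = {"a": 30, "b": r, "c": 60, "d": 90 - r}
        return 100 * winners[bet] / 90

    # For each pair (one bet for case A, one for case B), find the single
    # distribution r that minimizes our combined expected payoff.
    for case_a, case_b in product("ab", "cd"):
        worst = min(expected(case_a, r) + expected(case_b, r)
                    for r in range(61))
        print(case_a + "," + case_b, round(worst, 1))
    # prints: a,c 100.0  (100/3 + 200/3, untouchable by the adversary)
    #         a,d 66.7   (adversary sets r = 60)
    #         b,c 66.7   (adversary sets r = 0)
    #         b,d 100.0  (sum fixed at 100, though a sequential adversary
    #                     can still split it into 0 and 100/3)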

While such a discussion may shed light on the origins of our apparently flawed intuition, in the original formulation of the problem we certainly have no reason to believe that the distribution of red and blue balls in the urn is being chosen, based on our betting strategy, by a malicious adversary. (Since we are in complete ignorance, it is even possible that an ally is choosing the distribution so as to increase our chances.) It may thus be worthwhile to delve a little deeper into the implicit reasoning behind our intuition. In preferring bet a, we seem to be guarding against the possibility that there are very few red balls in the urn (so that choosing b would give us a very low chance of winning the bet). However, given our ignorance of the distribution, it is just as possible that there are in fact many more red than blue balls in the urn, so we balance bet a by choosing c, ensuring that we do not suffer a severely lowered expected value in case B should there be very few blue balls. If, on the other hand, we were to choose a,d, we would seem to be banking on there being more blue than red balls in the urn (or, conversely, more red than blue if we choose b,c). This does not seem intuitively justified given our ignorance.

This brings us to the crux of the matter: the "irrationality" of choosing both a and c was derived from the fact that, given any fixed probability p that the ball drawn is blue given that it is not yellow, and any fixed utilities for the possible payoffs, we have the equality E(a) - E(b) = -(E(c) - E(d)), where E(i) is the expected utility of bet i (Resnik 106). Note, however, that if p = ½, we have E(a) - E(b) = 0 = E(c) - E(d). That is, when a blue or red ball is equally likely to be drawn, we have no reason to prefer any pair of bets over any other. Being indifferent between a,d and a,c in terms of expected utility, we may choose the latter without subjecting ourselves to claims of irrationality. Now, since we admit to utter ignorance about the distribution of red and blue balls, it can certainly be argued via the principle of insufficient reason that we are unjustified in assigning any probability other than ½ to the event that a blue (or red) ball is drawn. And if we refuse to admit the principle of insufficient reason into the discussion, the point is moot: the principle of maximizing expected utility cannot even enter our reasoning unless we admit some probability distribution on the unknown variables. However, we might still like to believe that the intuition is rational which suggests that the certain risks of bets a and c are in fact more desirable than the uncertain risks of a and d. This conclusion only seems reasonable if we admit that uncertainty aversion is in itself rational.
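The identity is easy to verify symbolically. Here is a sketch using sympy (assuming it is available), writing U for u(100) - u(0), the only combination of the fixed utilities the comparisons depend on:

    from sympy import symbols, simplify

    p, U = symbols("p U")     # p = P(blue | not yellow), U = u(100) - u(0)

    # Expected utilities, dropping the common u(0) baseline:
    Ea = U / 3                # P(yellow)         = 1/3
    Eb = 2 * (1 - p) * U / 3  # P(red)            = (2/3)(1 - p)
    Ec = 2 * U / 3            # P(red or blue)    = 2/3
    Ed = (1 + 2 * p) * U / 3  # P(yellow or blue) = 1/3 + (2/3)p

    # E(a) - E(b) = -(E(c) - E(d)) for every p, and both sides
    # vanish exactly when p = 1/2.
    assert simplify((Ea - Eb) + (Ec - Ed)) == 0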

Let us consider first an uncertainty-averse (but otherwise rational) utility-maximizing agent C, who incurs some positive disutility ε whenever she takes on an unnecessary uncertainty, along with a rational agent D who is not uncertainty-averse. We may then construct a betting opportunity as follows: bet e always pays 50 utiles, and bet f pays 100 + ε utiles with probability 0.5 and 0 utiles otherwise. C will always take bet e (to her, f is worth 0.5(100 + ε) - ε = 50 - ½ε utiles, less than e's certain 50), and D will always take bet f (with 50 + ½ε expected utility). That is, this opportunity to choose one of these bets is more valuable to D than to C. Even if the disutility of risk for C is proportional to the risk taken, we may still construct such opportunities in which D will always obtain a higher expected utility. Even in the case of Ellsberg's paradox, it would be possible (given information about the agent's subjective probabilities and uncertainty aversion) to adjust the payoffs so as to make such aversion clearly suboptimal. If we then regard uncertainty aversion as a free choice (which in this abstract context seems reasonable), it certainly seems irrational to be uncertainty-averse in this way.
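A minimal simulation of the construction, with ε set to 5 utiles purely for illustration:

    import random

    EPS = 5   # C's disutility for taking on an unnecessary uncertainty
              # (the value is arbitrary, chosen for illustration)

    def bet_e():
        return 50                                   # a certain 50 utiles

    def bet_f():
        return 100 + EPS if random.random() < 0.5 else 0

    # C values f at 0.5*(100 + EPS) - EPS = 50 - EPS/2 = 47.5 utiles and
    # so takes e; D values f at 50 + EPS/2 = 52.5 utiles and takes it.
    n = 100_000
    print("C (always e):", sum(bet_e() for _ in range(n)) / n)  # exactly 50.0
    print("D (always f):", sum(bet_f() for _ in range(n)) / n)  # about 52.5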

If, however, we do not allow our aversion to affect the payoffs in any way, we may still be able to avoid such irrational behavior. In particular, it is in no way irrational, when faced with a forced decision between two options between which we are otherwise indifferent, to choose the one that minimizes the uncertainty involved. In the specific case of Ellsberg's paradox, if we surmise by the principle of insufficient reason that, given our ignorance, there is an equal chance of a red or blue ball being drawn from the urn, it is perfectly rational to choose the strategy for which we know the risks and associated expected utilities exactly, that is, a,c. Of course, if we have reason to suspect that the probability of drawing, say, a blue ball is greater than (or less than) ½ by any positive amount, our optimal strategy will be a,d (or b,c).

Recall that perhaps our most immediately powerful intuition for choosing a,c is pessimistic in nature: an implicit fear of a malicious adversary. This may be irrelevant to the problem as posed, but it is difficult to separate our intuition from this possibility. Secondly, we might see this "irrational" strategy as a way of hedging our bets against the possibility that there are very few balls of either uncertain color. This is only rational, however, if we think there is an equal probability of drawing a red or a blue ball, in which case utility theory informs us that we should be indifferent between all four possible strategies. Since this probability distribution is the only one that makes sense to impose in total ignorance, our intuition does in fact lead us to a rational strategy, namely that of choosing bets a and c. This decision only becomes irrational (and presumably we will decide otherwise) when we have information suggesting that one color (either red or blue) is more likely to be drawn than the other.

References taken from Michael D. Resnik's Choices: An Introduction to Decision Theory, 1st ed.
