# How rational are you?


Suppose you play the following game:

There is a container with 50 purple balls and 50 white balls. You must pick a color. If the ball drawn [at random by an official] is the same color as the color you chose, you win \$50,000. If it isn’t, you receive nothing.

Now you play the following game:

There is a container with an unknown proportion of purple and white balls, with a total of 100 balls. The proportion is chosen randomly beforehand by an official. You must pick a color. If the ball drawn [at random by an official] is the same color as the color you chose, you win \$50,000. If it isn’t, you receive nothing.

How much would you pay to play the first game? How much would you pay to play the second game?
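How the two games compare on a single draw can be checked with a quick Monte Carlo sketch in Python (assuming, as an illustration, that the official picks the purple count uniformly from 0 to 100; the problem leaves the exact mechanism unspecified):

```python
import random

N = 200_000  # simulated single plays of each game

def play_game1():
    """Game 1: 50 purple and 50 white balls; bet purple; one draw."""
    urn = ["purple"] * 50 + ["white"] * 50
    return random.choice(urn) == "purple"

def play_game2():
    """Game 2: the official first picks the purple count (here,
    uniformly from 0 to 100 -- an assumption), then one ball is drawn."""
    purple = random.randint(0, 100)
    urn = ["purple"] * purple + ["white"] * (100 - purple)
    return random.choice(urn) == "purple"

wins1 = sum(play_game1() for _ in range(N))
wins2 = sum(play_game2() for _ in range(N))
print(f"Game 1 win rate: {wins1 / N:.3f}")  # ~0.500
print(f"Game 2 win rate: {wins2 / N:.3f}")  # ~0.500
```

Both win rates come out around 50%, so on expected value alone the games are identical; the thread's disagreement below is about whether that is the whole story.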

I remember this homework problem from when I was in school!

“I lost my wife to a margin call. Wives get mad when you come home and say, ‘Sweetheart, I lost the house today.’” - Dennis Gartman on trading mistakes

I’m not sure what this has to do with rationality. One could rationally choose either game depending on their aversion to ambiguity.

But then again, I don’t think any of the alleged decision theory paradoxes are all that paradoxical. They are only paradoxes (or irrational) if you take a very narrow view of rationality/decision-making/behavior.

Isn’t the expected outcome of both games the same?

Thereby, given equal expected outcomes, the chooser should be indifferent?

FrankArabia wrote:

Thereby, given equal expected outcomes, the chooser should be indifferent?

Depends on how many times you get to play the game imo.

Well, expected outcomes imply it’s infinity or a very high number of times, right?

It seems like the first draw is random 50/50 in either case. In the second case, if the proportion is not 50/50, there would have to be an argument for why picking one color is better than the other. I cannot think of such an argument.

With multiple draws, the expected outcome changes. With the first container, assuming no replacement of balls, each failed draw results in a known and increasing probability of success in the next round. With the second container, each failed draw changes the conditional expected original distribution. Each failure in the second container increases your expectation for the original number of balls of the opposite color (given that I pulled a white ball, there is a high probability that most of the balls were white to begin with).

So with multiple draws, the first game is more valuable than the second game. Unfortunately, I can’t think of how to derive the value of the second game off the top of my head. With multiple draws, the win probability in both games is obviously more than 50%.

“I’m a CPA! I got money b***h!”
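The contrast between the two containers under repeated failed draws can be computed exactly. A sketch (assuming a uniform prior over the purple count, and betting purple every round): in game 1 the purple probability climbs after each white draw, while in game 2 each white draw drags the posterior toward a white-heavy urn.

```python
from fractions import Fraction

# Game 1: 50 purple / 50 white, betting purple, no replacement.
# Each losing (white) draw raises the purple probability.
print("Game 1: P(purple) after successive white draws:")
for losses in range(4):
    print(f"  after {losses} white draws: 50/{100 - losses}")

# Game 2: assume a uniform prior over the purple count k = 0..100.
# Condition on each losing (white) draw and recompute P(next is purple).
prior = {k: Fraction(1, 101) for k in range(101)}
print("Game 2: P(next draw is purple) after successive white draws:")
for losses in range(4):
    remaining = 100 - losses
    p_purple = sum(pr * Fraction(k, remaining) for k, pr in prior.items())
    print(f"  after {losses} white draws: {float(p_purple):.3f}")
    # Bayes update: weight each k by P(white | k), then renormalize.
    posterior = {k: pr * Fraction(remaining - k, remaining)
                 for k, pr in prior.items() if k < remaining}
    norm = sum(posterior.values())
    prior = {k: pr / norm for k, pr in posterior.items()}
```

Game 1 gives 50/100, 50/99, 50/98, …, while game 2 falls as 0.500, 0.333, 0.250, 0.200 (Laplace’s rule of succession), which is the point above: each failure in the second container implies a lower probability of future success.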

FrankArabia wrote:

Well, expected outcomes imply it’s infinity or a very high number of times, right?

Agreed, I was just trying to make the point that if you only get to play once, the first option has more certainty, which imo should be more valuable - and to your point, if you play a billion times, there’s basically no difference (kind of; I’d probably still pay more for the first option). JMH made a good point about aversion to ambiguity.

What is “aversion to ambiguity”? Can’t ambiguity just be expressed in terms of probability?

“I’m a CPA! I got money b***h!”

The first one is less than 50/50. It does not state that the ball is replaced. Therefore, the EV is a little less than \$25k, so you should pay \$24,747.47 to play (which is what the actual question is).

ohai - Isn’t that the point?  You don’t know, with certainty, the probability of each turn using the second option - that’s not the case for the first option.

…if you have no aversion, you pay the same, if you have some aversion, you pay less for the second (to what degree depends on your level of aversion).

king_kong wrote:

The first one is less than 50/50. It does not state that the ball is replaced. Therefore, the EV is a little less than \$25k, so you should pay \$24,747.47 to play (which is what the actual question is).

Actually the \$24,747.47 is the indifference point, so you should pay anything less than that to play.
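For what it’s worth, \$24,747.47 is exactly what you get if you read the game as setting your chosen ball aside first, so that only 49 of the remaining 99 balls match your color (an interpretation, not something the problem states):

```python
from fractions import Fraction

# If your chosen ball is removed first, 49 of the remaining 99 match:
p_match = Fraction(49, 99)
indifference = 50_000 * p_match
print(float(indifference))  # ≈ 24747.47
```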

LPoulin133 wrote:

ohai - Isn’t that the point?  You don’t know, with certainty, the probability of each turn using the second option - that’s not the case for the first option.

…if you have no aversion, you pay the same, if you have some aversion, you pay less for the second (to what degree depends on your level of aversion).

But everything can ultimately be expressed in terms of expected value. With no information about the ratio, the game is equivalent to 50/50. With a little bit of information (the color of your first draw), the implied ratio changes to say, 48/52. It is definitely possible to calculate the implied probability of winning the second game. The second game is worth less in subsequent rounds not because of lack of information, but because the information revealed by each failed draw implies a lower probability of future success. Thus, “aversion to ambiguity” is meaningless - it’s not a rational concept.

“I’m a CPA! I got money b***h!”

I’m not sure why you’re bringing multiple draws into this - the wording (imo) implies one draw which determines a win or a loss.

“You must pick a color. If the ball drawn [at random by an official] is the same color as the color you chose, you win \$50,000. If it isn’t, you receive nothing.” (seems the outcome is decided after one draw win or lose)

Well, the discussion of multiple draws was an extension of the problem. However, even with a single draw, “aversion to ambiguity” is still irrational. With a single draw, the equivalent probability is still 50/50.

“I’m a CPA! I got money b***h!”

The expected return on the 1st one is \$25,000. Depending on how much of a premium you’d demand to play, the max anyone should put up is \$24,999.99.

Even though the second one’s distribution of colored balls is unknown, there’s still a 50% chance it’s higher than option 1 and a 50% chance it’s lower.

I think you should be completely indifferent between the 2 options. Most people just “feel” like option one is safer.

Disclaimer: I could be completely wrong about all of this.

ohai,

The first is a case of risk (known probability distributions) and the other is a case of uncertainty (unknown probability distributions). Risk aversion is how one trades off the expected return vs. the risk (both of which can be calculated from the distribution). Ambiguity aversion is about trading off some known (or risky) thing vs. some other thing where the potential distribution is unknown.

When resolving these paradoxes, it helps to think like a Bayesian. If the distribution of the balls is uncertain, then I try to take a non-informative prior. For this case, you could just assume a prior that they are evenly split. That’s not quite sufficient to resolve the Ellsberg paradox (come to think of it, I’m not sure it really would resolve this one either). For that paradox (using the ball colors from Wikipedia), I assume that the split between the black and yellow balls is a uniformly distributed random variable. I can then describe each of the gambles in terms of an actual probability distribution. Basically, I try to convert the ambiguity into a risk. I then evaluate the gambles in terms of the expected payoff vs. the risk.
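That conversion can be sketched concretely. Assuming the standard Ellsberg urn (30 red balls plus 60 black-or-yellow balls in an unknown split) and a uniform prior over the black count, each gamble turns into an ordinary distribution of win probabilities:

```python
import statistics

PAYOFF = 100
splits = range(61)  # possible black counts b, each with prior weight 1/61

def win_prob(bet, b):
    """P(win) when the urn holds b black balls; bet is a set of colors."""
    counts = {"red": 30, "black": b, "yellow": 60 - b}
    return sum(counts[c] for c in bet) / 90

for name, bet in [("A (red)", {"red"}),
                  ("B (black)", {"black"}),
                  ("C (red or yellow)", {"red", "yellow"}),
                  ("D (black or yellow)", {"black", "yellow"})]:
    probs = [win_prob(bet, b) for b in splits]
    ev = PAYOFF * statistics.mean(probs)
    spread = PAYOFF * statistics.pstdev(probs)  # risk from the unknown split
    print(f"Gamble {name}: EV = {ev:.2f}, split-risk = {spread:.2f}")
```

The EVs match pairwise (A = B ≈ 33.33, C = D ≈ 66.67), but B and C inherit extra spread from the unknown split, which is exactly where the typical Ellsberg preferences (A over B, D over C) show up.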

I thought it was implied in the question, but let me clarify: you play each game only once.

jmh530 wrote:

ohai,


Thank you for the explanation. I still don’t see how this relates to the question, though. Re-frame the question as “you have equal probabilities of choosing Color A and Color B”. Which is the superior color? If neither choice is superior, then they must be equal. If both choices are equal, it is equivalent to a 50/50 win probability.

“I’m a CPA! I got money b***h!”

BrownianBridge wrote:

Suppose you play the following game:

There is a container with 50 purple balls and 50 white balls. You must pick a color. If the ball drawn [at random by an official] is the same color as the color you chose, you win \$50,000. If it isn’t, you receive nothing.

The first time I read this question, I figured you pick a ball and then a “ball is drawn at random”.  My immediate reaction was that the odds are LESS than 50/50, not in your favor, because you picked one already.

But re-reading the question, it said “PICK A COLOR”, not “pick a ball”.  So with a total 50/50 split of balls inside the container, it becomes a question on risk tolerance.

First, you don’t LOSE \$, you just get nothing.   So naturally, it’s a positive-NPV game, because the expected value is \$25,000.  But since it’s a binary event, the real question is: “What are you willing to gamble to play a 50/50 win/lose game where you get a chance to win \$50,000?”

I’d probably gamble \$5k

Hope. It is the quintessential human delusion, simultaneously the source of your greatest strength, and greatest weakness.

Well, the expected values of the two games are the same, and the expected payoff is 50% × \$50,000, or \$25,000.

Now, how much should one pay to play this game?  Clearly, not more than \$25,000.  That’s a losing proposition, unless you are risk-loving and are willing to play just for the thrill of the risk.

If you’re risk-neutral, then \$25,000 is your price.  Most of us are not risk neutral, we are risk averse so we’ll pay something less depending on how much it will hurt us to pay and not win.  So we will pay \$25,000 / (1 + RP) where RP is the premium for risk.

Risk aversion is a natural consequence of a declining marginal utility curve:  that’s because the expected utility of uncertain outcomes is less than the utility of the expected value of an uncertain outcome (yes, that’s a mouthful, but go over it slowly and it should be comprehensible).  So the value of RP depends on the shape of our utility curve, which is unique to every player.  In general, if playing and losing eats up a smaller portion of your net worth, your risk premium is lower.
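The wealth dependence can be sketched with log utility (one concave curve among many; the wealth levels below are illustrative, not from the thread):

```python
import math

PRIZE = 50_000

def certainty_equivalent(wealth):
    """Largest entry fee f a log-utility player would pay, found by
    bisection on 0.5*log(W - f + PRIZE) + 0.5*log(W - f) = log(W)."""
    lo, hi = 0.0, min(wealth - 1.0, 25_000.0)
    for _ in range(100):
        f = (lo + hi) / 2
        eu = 0.5 * math.log(wealth - f + PRIZE) + 0.5 * math.log(wealth - f)
        if eu > math.log(wealth):
            lo = f  # still better than not playing: can afford a higher fee
        else:
            hi = f
    return (lo + hi) / 2

for wealth in [60_000, 250_000, 1_000_000, 10_000_000]:
    print(f"wealth ${wealth:>10,}: pay up to ${certainty_equivalent(wealth):,.0f}")
```

At \$60,000 of wealth the answer works out to exactly \$20,000, and it climbs toward the \$25,000 risk-neutral price as the prize becomes a small fraction of net worth, matching the point that RP shrinks as the stake matters less.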

So the question really is whether the risk premium should be different for a game with a known risk and no uncertainty (game 1, where you have a 50-50 chance with known probability) or risk plus uncertainty (game 2, where you have a p-to-(1-p) chance of winning but you have no idea what p is).

If you play only once, and you are sure that the judge hasn’t chosen a value of p just to make you upset, then really the games are effectively identical.

If you play more than once, then there is the possibility of learning what the ratio is and changing your strategy.  However, that will take time.  In that time, the zero-uncertainty game stays constant, so if you are playing the game repeatedly, you will pay less to play game 2 because there are more sources of risk, and those risks will demand higher risk premiums in the beginning.  Ironically, 50-50 is the riskiest distribution, so over time, you may discover that game 2 is less risky than game 1, but until you have enough data to establish that, game 1 is less risky.

That’s my take on it, although you can also bring portfolio size into it via the Kelly Criterion.  How much you are willing to pay can be linked to how likely you are to lose too much to recover from later.
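The Kelly link can be sketched as well, under one illustrative reading: treat the entry fee as the stake, with net odds b = (50,000 - fee) / fee.

```python
PRIZE = 50_000
P_WIN = 0.5

def kelly_fraction(fee):
    """Classic Kelly f* = p - q/b, with b the net odds received on the fee."""
    b = (PRIZE - fee) / fee
    return P_WIN - (1 - P_WIN) / b

for bankroll in [50_000, 200_000, 1_000_000]:
    # largest fee (in $100 steps) that is no more than f* of the bankroll
    max_fee = max((fee for fee in range(100, 25_000, 100)
                   if fee <= kelly_fraction(fee) * bankroll), default=0)
    print(f"bankroll ${bankroll:>9,}: Kelly caps the fee near ${max_fee:,}")
```

At a fair \$25,000 fee the net odds are even and f* = 0, which is another way of seeing that nobody should pay full expected value; smaller bankrolls cap the fee well below that.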

You want a quote?  Haven’t I written enough already???

ohai,

For the second game, if the probabilities are p for purple and (1-p) for white, then the expected value of betting on purple is 50,000p. The expected values are only the same under the assumption that p = 50% (contra bchad). If you assume p is a random variable (say, uniformly distributed between 0 and 1), then the expected value is \$25,000 and the same for both games.

However, the tricky part is the standard deviations. The standard deviation of the first game can be easily calculated from the known 50/50 distribution. In the second, under the assumptions I mention, it’s like a double uniform distribution (as best as I can explain it).

This has the effect of increasing the standard deviation in the second game, which is why a risk-averse person would reasonably be less inclined to accept it.
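The standard-deviation comparison can be written down exactly for repeated draws with replacement, using a continuous uniform p as a stand-in for the unknown urn: by the law of total variance, Var(total winnings) is proportional to n·E[p(1-p)] + n²·Var(p).

```python
import math

PRIZE = 50_000

def sd_game1(n):
    """n independent draws at p = 0.5 exactly: Var = n * 0.25."""
    return PRIZE * math.sqrt(n * 0.25)

def sd_game2(n):
    """p drawn once, uniform on [0, 1]: E[p(1-p)] = 1/6, Var(p) = 1/12."""
    return PRIZE * math.sqrt(n / 6 + n ** 2 / 12)

for n in [1, 2, 10, 100]:
    print(f"n = {n:>3}: game 1 sd = ${sd_game1(n):>11,.0f}, "
          f"game 2 sd = ${sd_game2(n):>11,.0f}")
```

For a single draw the two standard deviations coincide at \$25,000 (since 1/6 + 1/12 = 1/4), so the extra spread described above only materializes across repeated draws, where game 2’s outcomes are correlated through the shared unknown p.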

BrownianBridge wrote:

Suppose you play the following game:

There is a container with 50 purple balls and 50 white balls. You must pick a color. If the ball drawn [at random by an official] is the same color as the color you chose, you win \$50,000. If it isn’t, you receive nothing.

Now you play the following game:

There is a container with an unknown proportion of purple and white balls, with a total of 100 balls. The proportion is chosen randomly beforehand by an official. You must pick a color. If the ball drawn [at random by an official] is the same color as the color you chose, you win \$50,000. If it isn’t, you receivenothing.

How much would you pay to play the first game? How much would you pay to play the second game?

There are a lot of ways to answer this.  Both have identical payoff distributions if you play a million times.  But assuming you can only play once, it comes down to what your tolerance for risk is.

My answer is I wouldn’t play, because I do not gamble on chance; I gamble where I feel I have skill or superior information.  This is a chance game.  I might throw down a fraction of the mathematical expectation, but that is it.

~~~~~Live. Laugh. Love.~~~~~

Jmh, I don’t think std deviation has anything to do with this… the distribution isn’t normal, it’s uniform and binary.

For the second problem, assume the worst; let’s say 1/100 for you vs. 99/100 against you.

So, if it were me, 1/100 of \$50,000 is \$500.  So, the max I would pay to play is \$500.

Hope. It is the quintessential human delusion, simultaneously the source of your greatest strength, and greatest weakness.

Everyone who responded by saying that the odds are the same (50/50) for both games is correct. Proving this is a trivial task. Thus, the expected values of earnings are also equal.

It’s true that when faced with clearly defined risk vs uncertainty a rational person would prefer clearly defined risk, however this is only true in the presence of irreducible uncertainty. In this case, the risk in the second game is parameterizable and the uncertainty is reducible. A rational, utility maximizing individual should therefore not prefer one game over the other.

BrownianBridge wrote:

Everyone who responded by saying that the odds are the same (50/50) for both games is correct. Proving this is a trivial task. Thus, the expected values of earnings are also equal.

It’s true that when faced with clearly defined risk vs uncertainty a rational person would prefer clearly defined risk, however this is only true in the presence of irreducible uncertainty. In this case, the risk in the second game is parameterizable and the uncertainty is reducible.

Yes.

BrownianBridge wrote:
A rational, utility maximizing individual should therefore not prefer one game over the other

No.

It’s a stupid game of chance.  Would you load a revolver with one bullet, spin it, and put it to your head for a million dollars?  I wouldn’t.  But there is a 5/6 chance you win.

~~~~~Live. Laugh. Love.~~~~~

I didn’t say that a rational, utility maximizing person should play one game or the other. I said that if they choose to play one game, they should have no preference of which one to play.

This is bringing back flashbacks of L3 behavioral finance.

Rational Man using mean-variance optimization vs. the normal, behaviorally biased man.

Hope. It is the quintessential human delusion, simultaneously the source of your greatest strength, and greatest weakness.

@Systematic

The uniform distribution still has a standard deviation. Check Wikipedia for the formula.
