How rational are you?

The first time I read this question, I figured you pick a ball and then a “ball drawn at random”. My immediate reaction was that the odds are LESS than 50/50 in your favor, because you already picked one out.

But re-reading the question, it says “PICK A COLOR”, not “pick a ball”. So with a 50/50 split of balls inside the container, it becomes a question of risk tolerance.

First, you don’t LOSE $, you just get nothing. So naturally it’s a positive-NPV game, because the expected value is $25,000. But since it’s a binary event, the real question is: “What are you willing to gamble to play a 50/50 win/lose game where you get a chance to win $50,000?”

I’d probably gamble $5k

Well, the expected values of the two games are the same, and the expected payoff is 50% * $50,000, or $25,000.
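Just to spell out that arithmetic (the 50/50 odds and the $50,000 prize are from the question itself):

```python
# Expected value of game 1, using the known 50/50 split and the $50,000 prize.
prize = 50_000
p_win = 0.5
print(p_win * prize)  # 25000.0
```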

Now, how much should one pay to play this game? Clearly, not more than $25,000. That’s a losing proposition, unless you are risk-loving and are willing to play just for the thrill of the risk.

If you’re risk-neutral, then $25,000 is your price. Most of us are not risk-neutral; we are risk-averse, so we’ll pay something less, depending on how much it will hurt us to pay and not win. So we will pay $25,000 / (1 + RP), where RP is the premium for risk.
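To give a feel for how that formula moves the price, here’s a rough sketch; the divide-by-(1 + RP) form is from the paragraph above, and the RP values are purely illustrative:

```python
# Willingness to pay = expected value / (1 + RP), per the formula above.
# The RP values are illustrative assumptions, not estimates of anyone's aversion.
expected_value = 25_000
for rp in (0.0, 0.25, 0.5, 1.0, 4.0):
    print(f"RP = {rp:<4}  pay up to ${expected_value / (1 + rp):,.0f}")
```

For what it’s worth, an RP of 4 gets you down to the $5k figure mentioned earlier in the thread.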

Risk aversion is a natural consequence of a declining marginal utility curve: the expected utility of uncertain outcomes is less than the utility of the expected value of an uncertain outcome (yes, that’s a mouthful, but go over it slowly and it should be comprehensible). So the value of RP depends on the shape of our utility curve, which is unique to every player. In general, if playing and losing eats up a smaller portion of your net worth, your risk premium is lower.
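To make that mouthful concrete, here’s a small sketch with a made-up concave utility curve (square-root utility over final wealth, and an arbitrary $100,000 starting wealth); the point is only that E[u(outcome)] comes out below u(E[outcome]):

```python
# Jensen's inequality with a concave (declining marginal) utility curve.
# sqrt utility and the $100,000 starting wealth are arbitrary assumptions.
from math import sqrt

wealth = 100_000   # hypothetical starting net worth
prize = 50_000

expected_utility = 0.5 * sqrt(wealth + prize) + 0.5 * sqrt(wealth)  # E[u(X)]
utility_of_expected = sqrt(wealth + 0.5 * prize)                    # u(E[X])

print(expected_utility, utility_of_expected)   # ~351.8 vs ~353.6
print(expected_utility < utility_of_expected)  # True, hence risk aversion
```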

So the question really is whether the risk premium should be different for a game with a known risk and no uncertainty (game 1, where you have a 50/50 chance with known probability) or risk plus uncertainty (game 2, where you have a p-to-(1-p) chance of winning but you have no idea what p is).

If you play only once, and you are sure that the judge hasn’t chosen a value of p just to make you upset, then really the games are effectively identical.

If you play more than once, then there is the possibility of learning what the ratio is and changing your strategy. However, that will take time. In that time, the zero-uncertainty game stays constant, so if you are playing the game repeatedly, you will pay less to play game 2 because there are more sources of risk, and those risks will demand higher risk premiums in the beginning. Ironically, 50/50 is the riskiest distribution, so over time you may discover that game 2 is less risky than game 1, but until you have enough data to establish that, game 1 is less risky.

That’s my take on it, although you can also bring portfolio size into it via the Kelly Criterion. How much you are willing to pay can be linked to how likely you are to lose too much to recover from later.
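As a rough sketch of that Kelly angle (the $10,000 entry price and the bankroll interpretation below are my own assumptions; this treats the entry fee as the stake):

```python
# Kelly criterion sketch: the fraction of bankroll to stake on a bet with win
# probability p and net odds b (profit per dollar staked) is f* = p - (1 - p) / b.
# The $10,000 entry price is a made-up number for illustration.
prize = 50_000
p_win = 0.5

def kelly_fraction(price: float) -> float:
    b = (prize - price) / price       # net odds: profit per dollar paid to play
    return p_win - (1 - p_win) / b

price = 10_000
f_star = kelly_fraction(price)        # 0.375
print(f_star, price / f_star)         # paying $10k is full Kelly on a ~$26.7k bankroll
```

The smaller the entry price is relative to your bankroll, the further inside Kelly it sits, which is the “can you recover from losing” point in another form.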

ohai,

For the second game, if the probabilities are p for purple and (1 - p) for white, then the expected value of betting on purple is 50,000p. The expected values are only the same under the assumption that p = 50% (contra bchad). If you assume p is a random variable (say, uniformly distributed between 0 and 1), then the expected value is 25,000 and the same for both games.
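A quick Monte Carlo check of that, under the same “p is uniform on (0, 1)” assumption:

```python
# Betting on purple when the purple share p is itself drawn uniformly at random.
import random

random.seed(0)
trials = 1_000_000
total = 0
for _ in range(trials):
    p = random.random()           # unknown fraction of purple balls
    if random.random() < p:       # the drawn ball turns out to be purple
        total += 50_000
print(total / trials)             # ~25,000, matching game 1
```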

However, the tricky part is the standard deviations. The standard deviation of the first game can be easily calculated from the continuous uniform distribution. In the second, under the assumptions I mention, it’s like a double uniform distribution (as best as I can explain it).

This has the effect of increasing the standard deviation in the second game, which is why a risk-averse person would reasonably be less inclined to accept it.

There are a lot of ways to answer this. Both have identical uniform distributions if you play a million times. But assuming you can only play once, it comes down to what your tolerance for risk is.

My answer is I wouldn’t play, because I do not gamble on chance; I gamble where I feel that I have skill or superior information. This is a game of chance. I might throw down a fraction of the mathematical expectation, but that is it.

Jmh, I don’t think standard deviation has anything to do with this… the distribution isn’t normal, it’s uniform and binary.

For the second problem, assume the worst; let’s say 1/100 for you vs. 99/100 against you.

So, if it were me: 1/100 of 50,000 is $500. So the max I would pay to play is $500.

Everyone who responded by saying that the odds are the same (50/50) for both games is correct. Proving this is a trivial task. Thus, the expected value of earnings is also equal.

It’s true that when faced with clearly defined risk vs. uncertainty, a rational person would prefer the clearly defined risk; however, this is only true in the presence of irreducible uncertainty. In this case, the risk in the second game is parameterizable and the uncertainty is reducible. A rational, utility-maximizing individual should therefore not prefer one game over the other.

Yes.

No.

It’s a stupid game of chance. Would you load a revolver with one bullet, spin it, and put it to your head for a million dollars? I wouldn’t. But there is a 5/6 chance you win.

I didn’t say that a rational, utility-maximizing person should play one game or the other. I said that if they choose to play one game, they should have no preference as to which one to play.

This is bringing back flashbacks of L3 behavioral finance.

Rational Man using mean-variance optimization vs. the normal, behaviorally biased man.

@Systematic

The uniform distribution still has a standard deviation. Check Wikipedia for the formula.

With a risk-neutral decision maker, standard deviation usually (always?) doesn’t determine preferences, though.
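For reference, the formula in question: for a continuous uniform on [a, b], the standard deviation is (b − a)/√12. A one-liner to check it for the uniform on [0, 1]:

```python
# Standard deviation of a continuous uniform distribution on [a, b]: (b - a) / sqrt(12).
from math import sqrt
a, b = 0.0, 1.0
print((b - a) / sqrt(12))   # ~0.2887 for a uniform on [0, 1]
```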

Since when is it irrational to be risk averse?

No one said it is irrational to be risk averse. Most people who answered the original question assumed risk neutrality, which I guess should have been stated. My statement, that dispersion of outcomes is irrelevant to a risk neutral decision maker, does not say anything about the rationality or irrationality of risk aversion. Also, as others have stated, risk aversion does not affect preference of one game in the original question over the other, although the absolute value of each game is affected.

I did, however, state that “aversion to ambiguity” is irrational. This is not the same as risk aversion. Ambiguity can always be expressed in probabilities. Thus, it is irrational to think that ambiguity is anything but a different set of probabilities.

I think Systematic’s argument is that the standard deviations aren’t relevant, not that they don’t exist. And for a single play with binomial outcomes, there is an argument for that.

It’s irrational in a mathematical sense. But it is not irrational in a strategic sense. What I mean is that if you don’t know how something is randomized (i.e. the probability distribution), you are effectively not all that sure about the rules of the game. If the judge in game 2 is allowed to change the probability, p, as he/she pleases, it is possible that they can use that information to defraud you. So if the ambiguity is not so much about what the actual probability distribution is, but whether there are parts of the game that can be used to put you at a disadvantage, it is not irrational to have a higher discount rate for those possibilities.

One can argue that that is just the same as having a different overall probability distribution, but if you have no idea how that distribution is actually distributed, then you are going to need to puff up your risk premium substantially to account for that.

The question states that the distribution in game 2 is chosen randomly. However, even if the judge gets to select whichever distribution he desires, this does not change the fact that the odds for game 2 are still 50/50 and that the expected value of the earnings is equal to that of game 1.

Say that this game has been played previously and that people have a tendency to select the color purple 57% of the time (and that both the judge and the players are aware of this). The judge knows this, and can use this information when selecting the distribution. So he may select 100 white balls and zero purple balls, for example. But you know that people have a tendency to select the color purple 57% of the time and that the judge knows this, so you can therefore select your color based on the fact that he may try to fool you. But the judge knows that you know this, and you know that the judge knows you know this, etc. After this infinite recursion, you are still left with 50/50 odds.
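One way to see where that recursion ends: if you simply flip a fair coin to pick your color, the judge’s mix is irrelevant. A tiny sketch (the q values are arbitrary mixes, including the 57% example above):

```python
# If you randomize your own color choice 50/50, your win probability is 1/2
# whatever fraction q of purple balls the judge chooses.
for q in (0.0, 0.25, 0.43, 0.57, 1.0):   # arbitrary mixes the judge might pick
    p_win = 0.5 * q + 0.5 * (1 - q)      # half the time you need purple, half white
    print(f"q = {q:.2f} -> P(win) = {p_win:.2f}")   # always 0.50
```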

Complete information - I know that you know that you know that I know that I know that you know that you know that I know that you know that I know that you know that I know that I know that you know that you know that I know that I know that you know that you know that I know that you know that I know that you know that I know that I know that you know that you know that I know that I know that you know that you know that I know that you know that I know that you know that I know that I know that you know that you know that I know that I know that you know that you know that I know that you know that I know that you know that I know that I know that you know that you know that I know that I know that you know that you know that I know that you know that I know that you know that I know that you have syphilis. I choose to take my chances.

[quote]
so you can therefore select your color based on the fact that he may try to fool you. But the judge knows that you know this, and you know that the judge knows you know this, etc. After this infinite recursion, you are still left with 50/50 odds.
[/quote]

…so I can clearly not choose the wine in front of me…