Reading 8 Practice Question #3B - Multiple Regression

The p-value is the lowest level at which we can reject H0. If H0 is X=0, the p-value is 0.00236, and the t-statistic of X is -3.0565, then why do we not reject H0? n=500, and it’s a two-tailed test. So do we reject H0 because -3.0565 is less than -0.00236 and it is a two-tailed test?

That’s better.

You might reject H0, and you might not. You have to compare p to α. What’s α here?

You’re trying to compare a t-statistic to a level of significance. They’re completely different things.

You either compare p to α, or else you compare the calculated t-statistic to the critical t-value.
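For anyone who wants to check the numbers, here’s a minimal Python sketch of both routes. The two-tailed test and the figures (p = 0.00236, t = -3.0565, n = 500) come from the question above; α = 0.05 and df = n - k - 1 = 496 are my assumptions, since the question gives neither a significance level nor the number of regressors:

```python
# Minimal sketch of the two equivalent decision routes.
from scipy import stats

t_stat = -3.0565
n = 500
k = 3                        # assumed number of slope coefficients (not given)
df = n - k - 1               # = 496
alpha = 0.05                 # assumed significance level (not given)

# Route 1: compare the p-value to alpha.
p_value = 2 * stats.t.sf(abs(t_stat), df)    # two-tailed p-value, ~0.0024
print(p_value, p_value <= alpha)             # reject H0, since p <= alpha

# Route 2: compare the calculated t-statistic to the critical t-value.
t_crit = stats.t.ppf(1 - alpha / 2, df)      # ~1.965 for df = 496
print(t_crit, abs(t_stat) > t_crit)          # reject H0, same conclusion
```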

Thank you! The question did not give α, though. So in that case, should I use the p-value and the df to find the critical t-value? The t-table has columns for p=0.005 and p=0.01, so I guess the critical t-value would fall somewhere between?

No, you cannot use p to get the critical values. If you did, you’d essentially be comparing p to itself.

The question doesn’t ask whether you will reject H0 or not; it simply asks you to interpret the p-value of 0.00236. The answer is that it is the lowest level of significance (i.e., the lowest α) at which you would reject H0.
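A tiny sketch of what that means in practice (the grid of α values is just illustrative, not from the question):

```python
# For p = 0.00236, reject H0 at any alpha >= 0.00236; fail to reject below it.
p_value = 0.00236
for alpha in (0.10, 0.05, 0.01, 0.005, 0.00236, 0.001):
    decision = "reject H0" if p_value <= alpha else "fail to reject H0"
    print(alpha, decision)
```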

Classic case of overthinking. Thank you; you have been a great help.

My pleasure.

It’s sad that the CFAI considers a functional definition to be an interpretation. Then again, it’s better that they’re correct with a definition (a correct answer to a different question) than interpreting the p-value as the probability of a Type I error (or the probability that the null is true), which they had in the books in prior years (a grossly wrong answer to the correct question).

Agreed, and agreed.

Alpha is an arbitrary threshold. What if I change my threshold to 0.00236 this time instead of 0.05 or 0.005 or whatever the common threshold is for the specified study? Wouldn’t I still be able to reject H0? I think yes.

So, under the most common threshold of 0.05 or whatever, having a p-value of 0.00236 indeed gives me the right to say that the lowest threshold I could possibly choose and still be able to reject H0 would be α = 0.00236.

Honestly, I didn’t understand the mistake about mixing up “functional definition” vs. “interpretation”. Perhaps you can shed more light on it.

Even without an alpha threshold, you can still make that claim for any p-value.

Functional definition: it’s explaining that you compare alpha and the p-value, and if alpha is no smaller than the p-value, you reject the null; if you wanted, you could choose an alpha equal to the p-value and still reject the null. This is just explaining how the p-value and alpha are used (and is a disgraceful way of doing so, because it implies you can change alpha willy-nilly after seeing the p-value, which is improper and doesn’t preserve the Type I error rate of alpha).

The interpretation of a p-value is NOT that it’s the lowest alpha you could choose and still reject H0. A p-value is a probability: what does it mean? It’s the probability of obtaining a summary measure at least as extreme as the observed one if we assume the null hypothesis is true. This tells you it’s a probability, what it’s the probability of, and what assumptions are entailed in the p-value.
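If it helps, here’s a small simulation sketch of that definition, using a made-up one-sample z-test rather than the regression from the question: simulate the test statistic many times under the null, and the p-value is (approximately) the fraction of simulated statistics at least as extreme as the one observed.

```python
# Illustration only: a p-value as a tail probability under the null.
import numpy as np

rng = np.random.default_rng(42)
n, sigma, mu_null = 30, 1.0, 0.0      # hypothetical H0: mu = 0, known sigma
observed_z = 2.1                      # pretend this came from the actual sample

# Simulate the z-statistic repeatedly, assuming H0 is true.
sample_means = rng.normal(mu_null, sigma, size=(100_000, n)).mean(axis=1)
z_sims = (sample_means - mu_null) / (sigma / np.sqrt(n))

# p-value: proportion of simulated statistics at least as extreme as observed.
p_sim = np.mean(np.abs(z_sims) >= abs(observed_z))
print(p_sim)                          # ~0.036, close to the exact 2*(1 - Phi(2.1)) = 0.0357
```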

The functional definition isn’t an interpretation; it’s like saying “I hear noise” when someone asks for an interpretation of a piece of jazz.

Not jazz.

Brahms:

[video:https://www.youtube.com/watch?v=jfg-9_k-flo]

OK, so the problem is not the claim that the p-value is the lowest possible alpha at which you could still reject H0. The mistake is that this is not the correct interpretation of a p-value.

I think the problem of mistaking the interpretation of a p-value arises (unluckily) from the definition of alpha and its use alongside the p-value (we compare the two to make decisions). Since the alpha threshold is taken to be the probability of a Type I error, and we compare p-values with alpha, we mistakenly treat p-values as probabilities of committing a Type I error as well. However, they are not.

Right. It’s not an interpretation; it’s telling you an (incorrect) way to use a p-value.

If you look at the definition of each, there’s no way to mix them up. Poor teaching (CFAI, non-statisticians) leads to confusion about the two ideas, but even just looking at the definitions makes it pretty clear.

I think this misunderstanding just arises from insufficient or outright incorrect education about the topic (or from silly “explanations” of topics, such as their “interpretation” of a p-value). This is literally covered in undergraduate statistics texts, which is why my issue with the CFAI is that they claim to provide world-class education but are heavily goofing up the accuracy of the material.

Correct.

Do the statistics books used in undergrad cover p-values adequately? I have the impression that p-values are just mentioned when we practice regressions and are not properly explained or defined until the student gets to advanced statistics books. Perhaps we need to bring p-values down to a pedestrian level before we keep making the mistake of confusing the p-value’s conceptual definition.

Quoting Wikipedia’s p-value definition:

“…the p-value or probability value or asymptotic significance is the probability for a given statistical model that, when the null hypothesis is true, the statistical summary (such as the sample mean difference between two compared groups) would be greater than or equal to the actual observed results.”

This definition is hard to understand when the student has not yet achieved a reasonable level of knowledge :confused:

I agree that CFAI, at least, shouldn’t use the incorrect interpretation we talked about.

Statistics books (written by people with actual degrees in statistics) usually do cover this adequately. It is an incredibly fundamental idea. When you have books written by non-statisticians (people with PhDs in finance, economics, or econometrics from a poor program), then you get this p-value avoidance or these incorrect definitions. It may depend on the text, too. If you pick up a regression text, it’s going to assume you have adequate knowledge of significance testing, p-values and alpha, confidence intervals, and the correct interpretations, so the regression text may not cover this.

Wikipedia’s is a correct definition, and it’s much easier to understand with a picture. It’s also easy to understand once you know how to calculate a tail probability (say, the probability that some random variable is at least as large in magnitude as a z-score of 1.2). The idea transfers easily when you show that, in order to calculate the z-score, you need to assume a true mean value (the null hypothesis), and that the probability comes from the assumption that this chosen true mean value is correct and that the distribution (and all the other assumptions) is correct. Once you do this for a single observation, you transfer the idea to sampling distributions (making the move to a p-value), where the observed sample statistic is a single observation from that distribution. The previous example is literally something covered in intro stats classes. A good teacher will make these connections, but the teacher’s education matters too (i.e., learning stats from a non-statistician is more likely to go wrong than learning from a statistician).
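In code, that progression might look like the rough sketch below. The z = 1.2 figure comes from the paragraph above; the sample-mean numbers (mu = 100, sigma = 15, n = 36, x-bar = 103) are made up purely so the z-statistic works out to 1.2.

```python
# Tail probability for a single z-score, then the same idea for a sample mean.
from math import sqrt
from scipy import stats

# Step 1: probability of a value at least as large in magnitude as z = 1.2.
z = 1.2
print(2 * stats.norm.sf(z))                  # ~0.2301

# Step 2: same calculation on a sampling distribution (hypothetical numbers):
# H0: mu = 100, known sigma = 15, n = 36, observed sample mean = 103.
mu0, sigma, n, xbar = 100, 15, 36, 103
z_stat = (xbar - mu0) / (sigma / sqrt(n))    # = 1.2
p_value = 2 * stats.norm.sf(abs(z_stat))     # the p-value is that same tail probability
print(z_stat, p_value)
```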

Long story short, the deficit is in the fundamental knowledge; the basic teaching is inadequate, so the subsequent topics are much harder for the student. CFAI could rectify this in the QM book for Level I, but I doubt that will happen. People aren’t avoiding a “hard definition”; it’s that they don’t know it or don’t understand it themselves.