The null hypothesis spins my head round. When do we determine that the null can’t be rejected? Is it when the t-test values lie between the two critical values, or outside that range?

So first, think about the actual theory you’re trying to test. If we’re testing something, we **WANT** to be surprised, right? Why would we conduct an experiment and write a null hypothesis for something we already know?

So, a good experiment is designed so that your aim is to reject the null and accept the alternative. Start thinking of it this way: every time you read an item set, the analyst is conducting a hypothesis test to find an anomaly, and the null **DOES NOT REPRESENT** the anomaly. But if the results of testing our coefficients fall within a certain range of standard deviations around the mean (the expected outcome), that tells us our coefficients are **NOT** different from the null, and therefore whatever we’re testing **does not** substantiate an underlying theory.

The way this looks in practice: if our t-stats are inside the acceptance range (e.g. within 1 or 2 standard deviations of the mean), then we fail to reject the null, which means we haven’t found jack shit, and it’s back to the lab again.

Hope that helps.

edit: t-stats measure how many standard errors your estimate lies from the hypothesized mean. And the critical t-stat is the number of standard errors your t-stat **MUST** exceed before you HAVE found something statistically significantly different from the mean/null.
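To make that concrete, here’s a minimal sketch of the decision rule with made-up numbers (the sample data and the significance level are hypothetical, not from any real dataset):

```python
import math
import statistics

# Hypothetical daily excess returns (%); H0: true mean return = 0
sample = [0.12, -0.34, 0.05, 0.21, -0.18, 0.09, -0.02, 0.15, -0.11, 0.03]

n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean
t_stat = (mean - 0) / se                       # distance from H0, in standard errors

t_crit = 2.262  # two-tailed 5% critical value for df = 9, read off a t-table

# Inside the acceptance range -> fail to reject; outside -> reject
decision = "fail to reject H0" if abs(t_stat) < t_crit else "reject H0"
```

With this sample the t-stat sits well inside the critical values, so you fail to reject: back to the lab.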

It helped a lot, thanks

This may appear to be the case from the CFA curriculum, but a good experiment is not one where the “aim is to reject the null”. There are many cases where you do not want to reject H_{0} because of the implications it has for your research, but the null is designed to be a falsifiable hypothesis (or should be). The null is the assumed state of nature. For example, in a standard physics lab problem for high school students, it’s assumed the true mean acceleration towards Earth is -9.8 m/s^{2}, which most people would say we “know”. Failure to reject H_{0} doesn’t prove the null, but it says our observations are reasonably compatible with that null.

If you are thinking of an anomaly, then by definition you are assuming the null is true: you’re implying that what you’re observing is such an outlier that it isn’t representative of the null, but it is only deemed an outlier under the assumption that the null is true. If the null isn’t true, then this wouldn’t be an anomaly, but rather an observation consistent with the non-null truth.

If your test statistic is within the failure-to-reject region, it doesn’t tell you your coefficients are the same as the null; it just says they aren’t different *enough*, at some predetermined threshold, for you to call the null into greater question.

Please elaborate on how the true mean acceleration towards Earth has anything at all to do with an analyst researching correlations to generate alpha? I didn’t realize we were all astrophysicists on exam day! Since you want to play straw man, I’m going to have to double down on the fact that you completely missed the mark on anything I said, and remind you that if you’re a research analyst who has designed an econometric hypothesis test, testing your null just to see if it ‘holds’ would be pointless for virtually anything you’d test. Why? Because everyone else and their mom has that same information/model, and you’re not going to beat anyone. The point of actually doing these tests is to find statistically significant correlations that we can act on to generate a positive return.

It’s a very tangible example of how you’re misunderstanding the statistical theory and the application. “Econometrics” involves, at least partially, the application of statistical methods to economic and financial data. The fact that 8th grade physics isn’t on the CFA exam doesn’t validate a generally incorrect explanation of a statistical concept.

This isn’t a straw man since it is a perfectly legitimate example of the application of *statistical* methods and theory in another field. I can assure you, though, that on these kinds of topics, I’m *probably* not missing the mark.

Let’s use a *crystal clear* application *directly* in econometrics, since you felt another real-world application was “straw man”. When you test the regression model’s assumptions, it is your hope that you fail to reject H_{0} (your hope is that the assumption is correct, and you run the test hoping you FTR H_{0}… unless your theory is that an assumption doesn’t hold in a particular scenario, but that’s not the majority of cases). If this weren’t the case, then you would have implemented a different model in the first place. Remember, the assumptions behind statistical tests aren’t just test fodder; you’re actually supposed to verify them and remedy them when they are not reasonably valid. You *should* be testing these in every analysis you run before you make any conclusion, and again after you’ve implemented a remedy for a prior violation.
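A quick sketch of one such assumption check, with made-up residuals: the Durbin-Watson statistic for first-order autocorrelation, where a value near 2 is consistent with the no-autocorrelation assumption and you’d *hope* to fail to reject. (The exact critical bounds depend on the sample size and number of regressors; this only computes the statistic.)

```python
# Hypothetical regression residuals (made-up numbers, not real data)
residuals = [0.5, -0.3, 0.2, -0.4, 0.1, 0.3, -0.2, 0.4, -0.1, -0.5]

# Durbin-Watson: sum of squared successive differences over sum of squares.
# DW near 2 -> little evidence of first-order autocorrelation (FTR H0);
# DW near 0 or 4 -> strong positive or negative autocorrelation (reject H0).
num = sum((residuals[t] - residuals[t - 1]) ** 2 for t in range(1, len(residuals)))
den = sum(e ** 2 for e in residuals)
dw = num / den
```

Here the statistic comes out close to 2, which is the outcome you were hoping for: no sign the no-autocorrelation assumption is violated.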

Also, just for fun: correlations (or any parameter) are not “significant” or “nonsignificant”. They either take on some value or they do not. Statistical significance is a dichotomization (based on some threshold) of a continuous summary of evidence (the p-value) for a particular test of hypothesis; “significance” is a test outcome, not a property of a phenomenon. So correlations are not “significant” or “nonsignificant”, but your hypothesis test might be; the correlation is nonzero or it isn’t. That’s all there is to it.
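A minimal pure-Python sketch of that point, using made-up z-statistics: the p-value is a continuous quantity, and “significant” only appears once you impose a cutoff.

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z-statistic under a standard normal."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p = two_sided_p(1.5)      # a continuous summary of evidence (~0.13)
significant = p < 0.05    # "significance" exists only relative to this threshold
```

Change the threshold and the label flips, while the underlying correlation (and evidence) is exactly what it was.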

Let’s try to play nicely, children.

I’m trying

This you brah?

That was a fantastic triggering video … I’m waiting for your serious reply to my prior post. You were just about to explain how it’s irrelevant to say you don’t want to reject H_{0} when checking model assumptions. I’m keeping an eager watch for your thoughts.