Example: A researcher gathered data on a portfolio of call options over a recent 250-day period. The mean daily return is 0.1%, and the sample standard deviation of daily portfolio returns is 0.25%. The researcher believes the mean daily portfolio return is not equal to zero.
So this should be a two-tailed test: (H0: μ = 0) vs. (HA: μ ≠ 0).
At the 5% level of significance, the critical z-values for a two-tailed test are ±1.96, so reject H0 if the test statistic is < -1.96 or > +1.96.
The standard error of the sample mean for a sample size of 250 is 0.0025 / sqrt(250) = 0.000158.
Our test statistic: 0.1% / 0.0158% = 6.33 > 1.96, so reject the null hypothesis.
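For anyone who wants to check the arithmetic, here is the whole calculation as a small Python sketch (the variable names are mine, not from the original example):

```python
import math

# Inputs from the example
mean_return = 0.001    # 0.1% mean daily return
sample_sd = 0.0025     # 0.25% sample standard deviation of daily returns
n = 250                # number of daily observations

# Standard error of the sample mean: s / sqrt(n)
std_error = sample_sd / math.sqrt(n)   # approximately 0.000158

# Test statistic: (sample mean - hypothesized mean of 0) / standard error
test_stat = mean_return / std_error    # approximately 6.32

# Two-tailed decision at the 5% level: critical values are +/- 1.96
reject_h0 = abs(test_stat) > 1.96      # True, so reject H0
```

(The text rounds the standard error to 0.0158% before dividing, which is why it reports 6.33 rather than 6.32; the conclusion is the same either way.)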
OK, there are two things I don't get in this example; please help me out:
How did they come up with a significance level of 5%? Why not 10% or 1%?
Why divide the mean daily return by the standard error, i.e. 0.1% / 0.0158% = 6.33? Shouldn't the comparison be between 0.000158 and 1.96?
Hypothesis testing is driving me crazy. Also, what is the real-life implication of this kind of test?
Thanks a lot in advance to those who comment.