Help! Hypothesis testing

Schweser Notes, Book 1, page 303, the example. If the researcher believes that the mean daily return is not equal to zero, I can understand that H0 is: daily return = 0. However, I don't understand the role of the formula: sample mean / (sample standard deviation / square root of sample size). Who can help me? Many thanks.

What does the number 6.33 in this example mean? Does it mean that the sample mean is 6.33 standard errors away from a hypothesised population mean of 0?
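As a sketch of where a number like 6.33 comes from, here is the formula computed with made-up inputs (these are NOT the actual Schweser figures, just illustrative values chosen so the result lands near 6.33):

```python
import math

# Hypothetical figures (not the actual Schweser numbers): a sample of
# n = 250 daily returns with sample mean 0.1% and sample SD 0.25%.
n = 250
sample_mean = 0.001   # 0.1% mean daily return
sample_sd = 0.0025    # 0.25% sample standard deviation

# Standard error of the sample mean: s / sqrt(n)
standard_error = sample_sd / math.sqrt(n)

# Test statistic under H0: population mean = 0
t_stat = (sample_mean - 0.0) / standard_error
print(round(t_stat, 2))  # ~6.32 with these made-up inputs
```

The point is only the shape of the calculation: the observed mean is divided by the standard error, so the answer is expressed in "number of standard errors away from zero".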

The formula is there to let you use the standardized normal distribution curve. I don't want to pull out my Schweser book, but I believe the 6.33 should be compared against the Z table to see how unlikely such a value would be if the daily return really were 0.
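The "compare to the Z table" step can be sketched in a few lines using the complementary error function from the standard library (a common way to get normal tail probabilities without a stats package):

```python
import math

def two_tailed_p(z):
    """P(|Z| >= z) for a standard normal Z, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

# For a test statistic of 6.33 the two-tailed probability is vanishingly
# small -- far below any conventional significance level.
p = two_tailed_p(6.33)
print(p)
```

A table lookup at z = 1.96 gives the familiar 5% two-tailed cutoff; 6.33 is so far into the tail that the probability is effectively zero.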

Thanks. Can you explain intuitively what a test statistic is? I know the definition: a test statistic is a quantity, calculated from a sample, whose value is the basis for deciding whether or not to reject the null hypothesis. But I can't understand that definition. I know the formula is the sample mean minus the hypothesised population mean, divided by the standard error of the sample mean, but I can't see the logic behind it. Thanks.

Can I say this: the standard error is the standard deviation of the sample mean around the population mean, so a test statistic of 6.33 means the sample mean would have to move 6.33 standard errors to get back to the hypothesised population mean?
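One concrete way to see the standard error at work (a sketch with an assumed, illustrative sample SD) is to watch it shrink as the sample size grows:

```python
import math

sample_sd = 0.0025  # assumed sample SD of daily returns (illustrative)

# SE = s / sqrt(n): quadrupling the sample size halves the standard error.
for n in (25, 100, 400, 1600):
    se = sample_sd / math.sqrt(n)
    print(f"n={n:5d}  SE={se:.6f}")
```

This is the "bigger sample ==> smaller SE" point: with more data, the sample mean wobbles less around the true mean, so a fixed gap between observed and hypothesised means becomes more and more standard errors wide.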

njblain Wrote:
-------------------------------------------------------
> Try this for logic.
>
> Firstly, understand the standard error, the SD of the sampling distribution. This reflects the level of variation in the sample mean ("observed value"). Bigger sample ==> smaller SE.
>
> Now think of the test statistic as "(observed minus hypothesised) / SE".
>
> "Observed minus hypothesised" is the absolute error (in a sense), i.e. how far out our sample was from what we guessed. This could be measured in % points, centimeters, minutes, dollars, or in fact whatever the underlying distribution's units are.
>
> Suppose our absolute error is 2.5% or 1cm or 3 minutes or $200. Are these significant? Simple answer: haven't the foggiest, because they are not scaled in a meaningful way. How do we know if 1cm is significant unless we know exactly what "significant" means?
>
> Hence we take the ABSOLUTE error and divide it by the STANDARD error to scale it. This gives us a number of standard deviations. So, for example, an absolute error of 1cm and a standard error of 0.2cm gives us a test statistic of 5 (note: no units). This is a number of SDs.
>
> With, say, a 95% two-tailed test, we can say that 5 is highly significant. Likewise the 6.33 is significant.
>
> Does that logic make any sense?

Really good up to here, but I think we missed the punchline. Now comes the magic of statistics: if our hypothesis above is true, this number of standard deviations is distributed as a standard normal r.v. The exact reasons for that depend on the circumstances of the test and can even be pretty deep. But the cool thing is that if you observe a really large value of the test statistic, you have three choices:

a) Your hypothesis isn't true.
b) An unusual event has happened.
c) Your sampling is messed up.

For the CFA exam it's never c), though it often is in the real world. So for CFAI you decide between a) and b). The rather arbitrary rule is that if such an extreme event would happen less than 5% of the time, we conclude a): your hypothesis isn't true.