Standardizing and hypothesis testing

I’m getting a bit confused here. Going over my formulas, I see that standardizing is Z = (x − mean) / std deviation. Then, in my hypothesis testing formulas, I find that a population-mean test statistic is computed as Z = (mean − observation) / std error. Am I learning this wrong?

I’ll give this a shot…

Standardization gives the normal distribution of a POPULATION a mean of zero and a standard deviation of 1, i.e. N(0, 1). It converts an observation of a random variable to its z-value: z = (x − mean) / stdev.

For a SAMPLE, the test statistic is calculated by comparing the point estimate of the population parameter with the hypothesized value of that parameter (the value specified in the null hypothesis). We are concerned with the difference between the sample mean and the hypothesized mean. (A mnemonic: if the null hypothesis is true, your belief is null and void; if it is rejected, your hypothesis is not null and void.)

To your question: the test statistic is the difference between the sample statistic and the hypothesized value, scaled by the standard error:

Test stat = (sample mean − hypothesized value) / standard error

and the standard error is the sample standard deviation divided by the square root of the sample size, s / √n.

For a two-tailed test we have H0: μ = μ0 vs. Ha: μ ≠ μ0, where μ0 is the hypothesized value. The general decision rule for a two-tailed test is: reject H0 if the test stat is greater than the upper critical value OR less than the lower critical value (e.g. reject H0 if test stat < −1.96 or test stat > +1.96 at a significance level of 5%).

For two-tailed tests:
- z(α/2) = 1.645 for 90% confidence (significance level 10%, 5% in each tail)
- z(α/2) = 1.96 for 95% confidence (significance level 5%, 2.5% in each tail)
- z(α/2) = 2.575 for 99% confidence (significance level 1%, 0.5% in each tail)
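The steps above can be sketched in a few lines of Python. The numbers here (n = 36, sample mean 102.5, sample std dev 9, hypothesized mean 100) are made-up illustrative values, not anything from the thread:

```python
import math

# Hypothetical sample: n = 36 observations, testing H0: mu = 100
# vs. Ha: mu != 100 at the 5% significance level (two-tailed).
n = 36
sample_mean = 102.5
sample_std = 9.0
mu0 = 100.0

# Standard error of the sample mean: s / sqrt(n)
std_error = sample_std / math.sqrt(n)        # 9 / 6 = 1.5

# Test statistic: (sample mean - hypothesized value) / standard error
z = (sample_mean - mu0) / std_error          # 2.5 / 1.5 ~ 1.667

# Two-tailed decision rule at 5%: reject H0 if z < -1.96 or z > +1.96
critical = 1.96
reject = z < -critical or z > critical

print(f"z = {z:.3f}, reject H0: {reject}")   # z = 1.667, reject H0: False
```

Here |z| = 1.667 falls inside the ±1.96 critical values, so we fail to reject H0 at the 5% level, even though the sample mean differs from the hypothesized value.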

While the above is somewhat correct, it is vastly overcomplicating the matter. To standardize, we divide by the standard deviation. However, in hypothesis testing we don’t know the true variance (otherwise we wouldn’t be conducting the test), and therefore don’t know the true standard deviation. So instead of that unknown population value, we use the standard error computed from the sample, i.e. the sample standard deviation divided by √n, which also accounts for the fact that a sample mean varies less than a single observation.