Test Statistics vs. Z Score

How do we know when to use the standard deviation vs. the standard error in the denominator of a test statistic? I’m asking because the z-score uses the standard deviation in the denominator, while other test statistics use the standard error…

If you’re doing a hypothesis test for a single observation (e.g., a hypothesis test for next month’s return on a bond portfolio), then you use the standard deviation.

If you’re doing a hypothesis test for the mean of a set of observations (e.g., a hypothesis test for the average return on an equity portfolio), then you use the standard error.
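
In symbols (a sketch, writing μ₀ for the hypothesized value and σ for the population standard deviation; replace σ with the sample standard deviation s and you get the corresponding t-statistics):

$$
z_{\text{single}} = \frac{x - \mu_0}{\sigma},
\qquad
z_{\text{mean}} = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}
$$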

S2000 is correct, but in the case of a test of a single observation, the standard deviation and the standard error are the same: the standard error is σ/√n, so for a sample of n = 1, the standard error is σ/√1 = σ.
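
Plugging n = 1 into the mean formula above makes the equivalence explicit, since with one observation x̄ = x:

$$
\frac{\bar{x} - \mu_0}{\sigma/\sqrt{1}} = \frac{x - \mu_0}{\sigma}
$$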

Perhaps I wasn’t clear enough in my answer.

Suppose that you have a sample of 5 years of monthly returns on a bond fund: 60 data points.

If you’re asked to test the hypothesis that next month’s return on the bond fund is a particular value, you will use the standard deviation of the (60) monthly returns in computing the t-statistic.

If you’re asked to test the hypothesis that the average monthly return on the bond fund is a particular value, you will use the standard error of the (60) monthly returns in computing the t-statistic: the standard deviation of the 60-month sample divided by √60.
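
If it helps to see this numerically, here’s a minimal Python sketch of the 60-observation example (the return figures and hypothesized values are made up for illustration, and numpy/scipy are assumed available):

```python
import numpy as np
from scipy import stats

# Hypothetical sample: 60 monthly returns on a bond fund (5 years).
rng = np.random.default_rng(seed=42)
returns = rng.normal(loc=0.004, scale=0.02, size=60)  # ~0.4% mean, 2% stdev

mu_0 = 0.005               # hypothesized value under H0 (made up)
x_bar = returns.mean()
s = returns.std(ddof=1)    # sample standard deviation
n = len(returns)

# Test 1: "next month's return equals mu_0" -> denominator is s
x_next = 0.012             # a hypothetical single observed return
t_single = (x_next - mu_0) / s

# Test 2: "the average monthly return equals mu_0" -> denominator is s/sqrt(n)
se = s / np.sqrt(n)        # standard error of the mean
t_mean = (x_bar - mu_0) / se

print(f"single-observation test stat: {t_single:.3f}")
print(f"mean test stat:               {t_mean:.3f}")
print(f"p-value (mean test, two-sided): {2 * stats.t.sf(abs(t_mean), df=n - 1):.3f}")
```

Note how dividing by √60 shrinks the denominator in the second test, so the same deviation from μ₀ produces a much larger test statistic for the mean than for a single observation.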