# test statistic for null hypothesis

Regression output:

| Variable | Coefficient | t-stat |
|----------|-------------|--------|
| Spread   | 1.0264      | 4.280  |

We are testing the null that the coefficient on Spread equals 1; the alternative is that it does not equal 1. The answer says the calculated t-stat = (1.0264 - 1) / standard error, where the standard error is given as 1.0264 / 4.28 = 0.24.

When calculating the standard error here, why did they not subtract the hypothesised slope coefficient? That is:

t-stat = (estimated slope coefficient - hypothesised slope coefficient) / standard error

4.28 = (1.0264 - 1) / standard error

standard error = 0.006

Well, maybe you are testing for a unit root? When we test for a unit root using the Dickey-Fuller test, H0 is that a unit root exists and Ha is that it does not; by definition, a unit root means the coefficient equals 1.

That would explain subtracting the 1 while solving for the standard error. I may be wrong!

Any other explanations?

The t-stat given in the vignette tests whether the coefficient is significantly different from 0 (this is what software reports by default). To test whether it is different from 1, you first have to back out the standard error from that reported t-stat, then use the standard error to compute the new test statistic.

This is correct. Stat packages usually report t-tests on individual coefficients for statistical significance, i.e. that the coefficient is statistically different from zero, so:

(coefficient - 0) / standard error = coefficient / standard error

Backing out the standard error from the reported t-stat is the only way to calculate a t-statistic for the hypothesis that the coefficient equals 1.

This.

Calculate the standard error from the original t-stat: (1.0264 - 0) / 4.28 = 0.24. Then, to find the t-stat for the new null hypothesis: (1.0264 - 1) / 0.24 = 0.11.
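The two steps above can be sketched in a few lines of Python (using the numbers from this thread; variable names are my own):

```python
# The software's reported t-stat tests H0: coefficient = 0,
# so t_reported = coefficient / standard_error.
coef = 1.0264        # estimated slope on Spread
t_reported = 4.280   # t-stat reported by the stat package

# Step 1: back out the standard error from the reported t-stat.
se = coef / t_reported            # about 0.24

# Step 2: compute the t-stat for the new null H0: coefficient = 1.
t_new = (coef - 1) / se           # about 0.11

print(round(se, 2), round(t_new, 2))  # 0.24 0.11
```

With a t-stat of only 0.11, you clearly fail to reject the null that the slope equals 1 at any conventional significance level.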

What question is this from?

Great explanations, thanks! This is from the 2014 CFAI mock, AM session, item set 9, question 3.