Regression Coefficients

When we are testing the regression coefficients, typically our hypotheses would be H0: b0 = 0 or H0: b1 = 1.

Is b0 considered the risk-free rate, and are we trying to test that b0 is not equal to 0 because otherwise we get no return? And for b1 = 1, are we hoping to reject this hypothesis and get a market beta different from 1, because otherwise the stock will be exactly as sensitive as the market?

What are you regressing against what else?

I am not sure I have understood your question.

You say, " when we are testing for the regression coefficients".

What regression coefficients?

Are you regressing equity returns of one stock vs. the returns of the overall market, or are you regressing the prices of bonds vs. their date of issuance, or are you regressing the implied volatility of call options vs. their strike prices, or . . . ?

I can’t tell you what the coefficients mean if I don’t know what your _X_s and _Y_s are.

The book says that when testing regression estimates, we set up the following: H0: b0 = 0 or H0: b1 = 1. Why?

I don’t know, because you’re leaving out a lot of important details.

The book certainly _doesn’t_ say that those are the null hypothesis values _in general_.

What’s a specific example in which you see these hypotheses?

@S2000magician I think I may have got it. The null values for both b0 and b1 depend on what we are trying to test.

For example, if we think that a stock is more sensitive than the market, then we would set H0: b1 = 1 and Ha: b1 > 1, hoping that we can reject our null.
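Something like this is what I have in mind (a minimal sketch with simulated data; the return series, sample size, and the statsmodels/scipy usage are my own illustrative assumptions, not something from the book):

```python
# Minimal sketch: one-sided test of H0: b1 = 1 vs Ha: b1 > 1 on a market-model
# regression. The data below are simulated purely for illustration; in practice
# you would use actual stock and market return series.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(42)
n = 60                                    # e.g., 60 monthly observations
r_market = rng.normal(0.01, 0.04, n)      # simulated market returns
# Simulate a stock that is more sensitive than the market (true beta = 1.3)
r_stock = 0.002 + 1.3 * r_market + rng.normal(0, 0.03, n)

X = sm.add_constant(r_market)             # adds the intercept (b0) column
model = sm.OLS(r_stock, X).fit()
b0_hat, b1_hat = model.params
se_b1 = model.bse[1]

# t-statistic for H0: b1 = 1 (the t-stat in the default summary tests b1 = 0)
t_stat = (b1_hat - 1) / se_b1
p_value = stats.t.sf(t_stat, df=n - 2)    # one-sided p-value for Ha: b1 > 1

print(f"b1 estimate: {b1_hat:.3f}, t-stat vs 1: {t_stat:.2f}, one-sided p: {p_value:.4f}")
```

The point of recomputing the t-statistic by hand is that standard regression output tests each coefficient against 0 by default, so testing against 1 means plugging 1 in as the hypothesized value.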

I think I got confused reading that when we set up our null for either b0 or b1, it should always look like this: H0: b0 = 0 or H0: b1 = 1, which wasn’t making much sense to me.

I think I got it.

Sorry to be a bother with so many questions.

Absolutely correct!