For the first and fourth questions, I probably need a refresher of Level 1, but here are the questions:

1) Bk1, pg. 178 - We are testing the following hypothesis: H0: Intercept >= -10% versus Ha: Intercept < -10% (using a 1% significance level). The 1% one-tailed critical t-value with 43 degrees of freedom is ~2.42. We should reject the null hypothesis if the t-stat is less than -2.42. Question - why do we reject the null hypothesis if the t-stat is LESS than -2.42 (other than b/c it's a one-tailed test)? Why not reject the null if the t-stat is greater than +2.42?

2) Bk1, pg. 185, 2nd example - "The analyst would prefer the first model b/c the adjusted R square is higher and the model has five independent variables as opposed to nine." Question - regarding the 2nd part of the statement: does having fewer independent variables in the model increase its accuracy? Why do we want fewer independent variables?

3) Bk1, pg. 215, answer to 8 - "The Durbin-Watson is not significant because the DW statistic is greater than the lower limit, so serial correlation does not appear to be a problem." Given: DW test-statistic of 1.8, and lower and upper limits for the DW test of 0.40 and 1.90, respectively. Question - since the DW test-stat is between the lower and upper limits, shouldn't the test be INCONCLUSIVE, given the DW decision rule (pg. 199) that if d_lower < DW < d_upper, the test is inconclusive?

4) Bk1, pg. 227, answer to example - "The critical two-tail t-value at the 5% significance level is 1.98. The t-stats indicate that none of the autocorrelations of the residuals is statistically different from zero because their absolute values are less than 1.98." Question - why are we taking the ABSOLUTE VALUE of the t-stats? Why not test to see if the t-stats are between -1.98 and +1.98?

THANK YOU!!

I don't have my books here at work but can take a look when I get home.

4 - Maybe it's just an easier way of writing the answer instead of saying "less than negative 1.98 or greater than positive 1.98." If the t-stat came out at -2, its absolute value would be +2, which is greater than 1.98, so you'd reject; at -1.5, the absolute value is 1.5, so you wouldn't reject b/c it's less than 1.98. Sounds like you understand it fine - my guess is it's just their choice of wording, unless there's something bigger that I'm missing as well.

1 - Don't have the books here, but it sounds like it's b/c, in a one-tailed test, you'd only be looking at one tail. Draw a graph - most of the time that helps visualize it.

2 - Take a look at multicollinearity. Multicollinearity refers to the condition under which a high correlation exists among two or more of the independent variables in a multiple regression. This condition distorts the standard error of estimate, which distorts the coefficient standard errors, leading to problems when conducting t-tests for statistical significance of parameters. Multicollinearity is detected by a significant F-statistic and a high R2 but insignificant t-statistics on individual coefficients. It is corrected by removing one or more of the correlated independent variables, but it is sometimes difficult to identify the source of the collinearity. So you check the adjusted R2 because as you add pretty much any variable to a model, the R2 goes up (even when the added variable doesn't reliably explain anything). Adjusted R2 helps account for that problem, so in that example, the adjusted R2 is higher, and most likely the four extra variables in the larger model aren't statistically significant even though the basic R2 makes the overall model's fit look better.
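To make the absolute-value shortcut in #4 concrete, here's a minimal sketch (the 1.98 critical value is the one from the example; the function name is my own):

```python
def reject_two_tailed(t_stat, critical=1.98):
    """Two-tailed test: reject H0 when the t-stat falls in either tail.

    abs(t_stat) > critical is exactly the same condition as
    (t_stat < -critical) or (t_stat > critical).
    """
    return abs(t_stat) > critical

# -2.0 falls in the left tail -> reject; -1.5 is inside (-1.98, +1.98) -> don't.
print(reject_two_tailed(-2.0))  # True
print(reject_two_tailed(-1.5))  # False
```

So "abs value less than 1.98" and "between -1.98 and +1.98" are the same test, just worded differently.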
3 - I'll have to look at the problem when I get home to see whether I agree that it should be in that inconclusive region or not. Hope that helps a bit - sorry I'm not more specific, but I don't have the texts here!
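For #3, the decision rule quoted in the question (pg. 199) can be written out directly; this is a sketch using the numbers from the problem, and the result labels are my own wording:

```python
def dw_decision(dw, d_lower, d_upper):
    """Durbin-Watson decision rule when testing for positive serial correlation."""
    if dw < d_lower:
        return "reject H0 - positive serial correlation"
    if dw > d_upper:
        return "fail to reject H0 - no evidence of serial correlation"
    return "inconclusive"

# DW = 1.8 with limits (0.40, 1.90) lands between the two limits.
print(dw_decision(1.8, 0.40, 1.90))  # inconclusive
```

By the rule as quoted, 1.8 falls between d_lower and d_upper, which is the inconclusive region.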

Thanks bannisja! Your explanations make sense and def. help with my understanding of these concepts and problems. = )

I'm struggling through this quant stuff myself - statistician I am not. Just started FI; it seems far easier so far. Ask away anything on quant - I need to reinforce this stuff in a big way, as it's still pretty fuzzy in my mind.

I'm not reading the book - but here is my $.02: #2 - There comes a point when adding more independent variables just to increase the R-squared of a model isn't really useful. For example, if you have a model with two independent variables you get an Rsq of, say, .70, but with six independent variables you get .72 - are those four additional variables REALLY worth it in the model? That is why adjusted R-squared is used. #4 - You have it right. It's just an easier way of saying it.
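That trade-off drops right out of the adjusted R-squared formula, adj R2 = 1 - (1 - R2)(n - 1)/(n - k - 1). Here's a sketch using the .70 vs .72 example above; the sample size n = 50 is made up purely for illustration:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R-squared: penalizes each extra independent variable.

    r2: ordinary R-squared, n: number of observations,
    k: number of independent variables.
    """
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With a hypothetical n = 50, the six-variable model's higher raw R-squared
# (.72 vs .70) is not enough to offset the penalty for four extra variables.
print(round(adjusted_r2(0.70, 50, 2), 3))  # 0.687
print(round(adjusted_r2(0.72, 50, 6), 3))  # 0.681
```

So on an adjusted basis the two-variable model actually wins, which is exactly the point of preferring the smaller model.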

On #1, read this, it should help: http://www.analystforum.com/phorums/read.php?12,561274,561561#msg-561561
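The short version for #1: Ha says the intercept is LESS than -10%, so the only evidence that supports Ha is a t-stat far out in the left (negative) tail. A large positive t-stat would mean the estimate is well above -10%, which is consistent with H0, not against it. A minimal sketch using the numbers from the question (function name is mine):

```python
def reject_left_tailed(t_stat, critical=2.42):
    """One-tailed test where Ha points left (Ha: parameter < hypothesized value).

    A very negative t-stat means the estimate sits far BELOW the
    hypothesized value - the only direction that supports Ha.
    """
    return t_stat < -critical

print(reject_left_tailed(-2.5))  # True: deep in the left tail -> reject H0
print(reject_left_tailed(2.5))   # False: a big positive t-stat supports H0
```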