The intuitive answer to that is that the usual s.e. estimate is indifferent to the order of the residuals. That is, a large residual could be in the neighborhood of X-bar, where it exerts little influence on the regression fit. In a heteroskedastic model, the big residuals are located near the edge of the data, where they exert the most leverage on the model.
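That intuition can be checked with a quick Monte Carlo sketch (an editor's illustration, not from the thread; the sample size, coefficients, and error structure are invented for demonstration). When the error variance grows toward the edges of the x range, the usual OLS standard error, which pools all squared residuals regardless of where they sit, understates the slope's true sampling variability:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 50, 2000
x = np.linspace(0.0, 10.0, n)
xc = x - x.mean()
sxx = (xc ** 2).sum()
# Heteroskedastic errors: s.d. grows toward the edges of the data,
# exactly where an observation has the most leverage on the slope.
sig = 0.5 + 1.5 * np.abs(xc)

slopes, naive_ses = [], []
for _ in range(trials):
    y = 1.0 + 2.0 * x + rng.normal(0.0, sig)
    b = (xc * y).sum() / sxx                  # OLS slope
    a = y.mean() - b * x.mean()
    resid = y - (a + b * x)
    s2 = (resid ** 2).sum() / (n - 2)         # pooled residual variance
    slopes.append(b)
    naive_ses.append(np.sqrt(s2 / sxx))       # the usual printed s.e.

emp_sd = np.std(slopes)        # actual sampling variability of the slope
avg_naive = np.mean(naive_ses) # what the OLS output would report
print(avg_naive, emp_sd)       # naive s.e. comes out smaller than the truth
```

The naive s.e. treats every squared residual the same, but the true variance of the slope weights each error variance by its squared distance from X-bar, so putting the big errors at the edges makes the reported s.e. too small, which is Joey's point.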
CFAI?
CFAI what?
That was to ruhi.
Yes, in the CFAI text.
Okay. I will look when I get home.
JoeyDVivre Wrote:
-------------------------------------------------------
> The intuitive answer to that is that the usual s.e. estimate is indifferent to the order of the residuals. That is, a large residual could be in the neighborhood of X-bar, where it exerts little influence on the regression fit. In a heteroskedastic model, the big residuals are located near the edge of the data, where they exert the most leverage on the model.

Joey,

Is this because the OLS line will always pivot around the mean? Thus the outliers would move it more…
I am new to AnalystForum, and a first-time Level II test taker as well. My question for you guys/girls is: are you studying right now? Are you students? How do you find the time to do this during the day? Find the answers, read through the texts, etc.? Indeed, I can join the blog, but I should break out my CFAI books.
I leave for a meeting and come back to more confusion. Great. I understand why the standard errors are smaller if what ruhi said is correct: " the variance of the error terms incr. with the increase in the value of the independent variable, this means that there is a constant variance." I will take a look at the CFAI book when I get home tonight and see if that clears anything up.
wanderingcfa Wrote:
-------------------------------------------------------
> I leave for a meeting and come back to more confusion. Great.
>
> I understand why the standard errors are smaller if what ruhi said is correct: "the variance of the error terms incr. with the increase in the value of the independent variable, this means that there is a constant variance."
>
> I will take a look at the CFAI book when I get home tonight and see if that clears anything up.

Why would you think the standard errors would be smaller if the variance is increasing? I am not being an idiot; I would really like your thoughts.
mwvt9 Wrote:
-------------------------------------------------------
> JoeyDVivre Wrote:
> -------------------------------------------------------
> > The intuitive answer to that is that the usual s.e. estimate is indifferent to the order of the residuals. That is, a large residual could be in the neighborhood of X-bar, where it exerts little influence on the regression fit. In a heteroskedastic model, the big residuals are located near the edge of the data, where they exert the most leverage on the model.
>
> Joey,
>
> Is this because the OLS line will always pivot around the mean? Thus the outliers would move it more…

Yes, although influential observations and outliers aren't exactly the same thing.
ruhi22 Wrote:
-------------------------------------------------------
> On second thoughts, I think what I said was correct. Take a look at p. 293, Book 1
>
> Edit: I'm tossing like a ping pong ball. I need to get off AF.

I took a look at the book. Here is my take on it. Remember that one of the assumptions that needs to hold for regression to work is that the variance of the residuals (errors) is constant. Heteroskedasticity is a violation of this assumption.

The first scatterplot they show has that feature of constant error variance. That is the one that DOES NOT exhibit conditional heteroskedasticity.

The one that DOES exhibit conditional heteroskedasticity does not have constant error variance.

I think I am correct in what I was saying earlier. You seem to be defining constant as a continued increase in the amount of the error terms (this is how I am reading what you are saying, anyway). I am defining constant as a consistent amount of error around the regression line. That is why you can see the heteroskedasticity on the scatterplot.

What do you (or anybody else) think?
mwvt9 Wrote:
-------------------------------------------------------
> ruhi22 Wrote:
> -------------------------------------------------------
> > On second thoughts, I think what I said was correct. Take a look at p. 293, Book 1
> >
> > Edit: I'm tossing like a ping pong ball. I need to get off AF.
>
> I took a look at the book. Here is my take on it. Remember that one of the assumptions that needs to hold for regression to work is that the variance of the residuals (errors) is constant. Heteroskedasticity is a violation of this assumption.
>
> The first scatterplot they show has that feature of constant error variance. That is the one that DOES NOT exhibit conditional heteroskedasticity.
>
> The one that DOES exhibit conditional heteroskedasticity does not have constant error variance.
>
> I think I am correct in what I was saying earlier. You seem to be defining constant as a continued increase in the amount of the error terms (this is how I am reading what you are saying, anyway). I am defining constant as a consistent amount of error around the regression line. That is why you can see the heteroskedasticity on the scatterplot.
>
> What do you (or anybody else) think?

I am not defining constant as a consistent "amount" of error. I'm defining it as having a "constant" value. As long as you understood what you had to, the purpose is served. And I agree with the rest of your points. Probably you are not getting what I'm trying to say.
Ok. We’ll let it go then. These types of threads are the ones that really help me. I feel like I have a much better handle on multiple regression after trying to understand this. Thanks for your effort.
Sorry it took me a while to get back to this. Your question made me think about this harder, and I realized I did not understand it well enough to explain myself. Here is what I understand now: the assumption we are looking at is that the variance of the error terms is constant over all observations. I take that to mean that all of our data points are expected to fall in a band of constant width around the regression line. For any data point you estimate, its value should fall within roughly the same +/- error of what your regression equation predicts, so the error variance should be the same across all data points. Even though I still do not completely understand exactly how heteroskedastic error variance makes the reported standard errors smaller, and I am not sure I ever will, I do understand heteroskedasticity much better.
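The "constant band" reading above can be made concrete with a small simulation (an editor's sketch; the coefficients and error scales are made up for illustration). One dataset has the same error spread everywhere, the other has spread that grows with x, and a crude diagnostic, comparing the residual standard deviation in the lower and upper halves of the x range, tells them apart:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)

# Homoskedastic: error spread is the same everywhere (constant band).
y_hom = 3.0 + 0.5 * x + rng.normal(0.0, 1.0, x.size)
# Conditionally heteroskedastic: error spread grows with x (widening fan).
y_het = 3.0 + 0.5 * x + rng.normal(0.0, 0.2 * (1.0 + x), x.size)

def half_spreads(x, y):
    """Residual s.d. in the lower vs upper half of the x range."""
    coef = np.polyfit(x, y, 1)            # fit a straight line by OLS
    resid = y - np.polyval(coef, x)
    lo = resid[x < np.median(x)].std()
    hi = resid[x >= np.median(x)].std()
    return lo, hi

hom_lo, hom_hi = half_spreads(x, y_hom)
het_lo, het_hi = half_spreads(x, y_het)
print(hom_lo, hom_hi)  # roughly equal: the band has constant width
print(het_lo, het_hi)  # upper half clearly wider: the fan shape
```

This is the same thing the scatterplots in the text show visually: in the homoskedastic case the two spreads match, and in the heteroskedastic case the band visibly widens as x grows.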
Can anyone explain to me why the standard errors would be smaller? After reading this thread I am confused.