Hmm… these answers make sense, but they confuse me a bit.
It’s simple… Here is a formula:
E(R) = Rf + C1(Variable X) + C2(Variable Y) + C3(Variable Z) + e
I think C1, C2, C3, Rf, and e all have standard errors calculated in a regression. The standard errors for the first group (the coefficients and the intercept) are calculated using the estimated variance of the error term (SEE-squared).
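To make that concrete (a standard textbook relationship, not something stated in the thread): SEE² = sum of squared residuals / (n − k − 1), and each coefficient’s standard error is the square root of the corresponding diagonal element of SEE² × (X'X)⁻¹. So anything that changes how much independent variation the x-variables provide shows up directly in those coefficient standard errors.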
Also, your equation should be
1) E(R) = Rf + C1(Variable X) + C2(Variable Y) + C3(Variable Z)
OR
2) R = Rf + C1(Variable X) + C2(Variable Y) + C3(Variable Z) + e.
If you talk about the expected value, E(R), then you carry this expectation through both sides of the equation. Recall that a regression assumption is that the expected error is equal to zero, which is why e disappears when we talk about E(R).
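As a quick worked step (just restating the two equations above, with the expectation taken given the x-variables): E(R) = Rf + C1(Variable X) + C2(Variable Y) + C3(Variable Z) + E(e), and because the assumption is E(e) = 0, the last term drops out, which turns equation 2 into equation 1.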
So in multicollinearity… Variables X, Y, and Z are correlated, correct? Correct, or X and Y, X and Z, … basically any combination of the x-variables can be correlated under multicollinearity.
So if this happens, let’s say that X goes up 100%, and since the other variables are positively correlated with X, they all move up, so this causes E(R) to move up… I agree here, under the condition that Y and Z are likely to move in the same direction as X. Positive correlation does not rule out a movement in opposite directions (X and Z, for example). Overall, though, positively correlated variables tend to move together in the same direction.
But if this happens a lot, then it will continuously move up uniformly, thus the standard error should be small (similar to autocorrelation, but this is in the variables). This is where I am having trouble following you (the first part, through “…be small”). Autocorrelated errors mean the errors are correlated and therefore less random in their distribution. This makes the standard deviation of the errors (SEE) smaller than it should be (less free variation in the errors when correlation exists between them), but that is a separate topic from multicollinearity.
So, why is the standard error greater when they are all moving together (if correlated)? This comes down to the math behind the concept. I guess another way to try to understand it is by analogy. If I want to make a decision accurately (accurately estimate the SE of C1), I need as much unique information as possible. The more unique info I get, the more accurately I can make the decision (more unique sampling variation in X to estimate C1). When I am reviewing all the info I have gathered to make the decision, I must get rid of the redundancies, as they will not help me. If I have more redundant info, I am less certain about my decision (a less accurate estimate of the SE of C1). I really believe trying to understand the equation for Var(Bi-hat) will help clear up this concept.
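Since the reply points at the Var(Bi-hat) equation, here it is in the notation of the formula above (with Ci playing the role of Bi): Var(Ci-hat) = SEE² / [SST_i × (1 − R_i²)], where SST_i is the total variation in x-variable i and R_i² comes from regressing that x-variable on the other x-variables. Multicollinearity pushes R_i² toward 1, which blows up Var(Ci-hat) even though SEE itself is essentially unchanged. Below is a minimal simulation sketch of that effect (mine, not from the thread; it assumes Python with numpy available and uses made-up coefficient values):

```python
# Minimal sketch: the same data-generating process is fit twice, once with nearly
# independent X, Y, Z and once with highly correlated X, Y, Z, to show that
# multicollinearity inflates the standard error of C1 while SEE barely changes.
import numpy as np

rng = np.random.default_rng(0)
n = 500

def ols_standard_errors(X, y):
    """OLS coefficient standard errors: sqrt(diag(SEE^2 * (X'X)^-1))."""
    X = np.column_stack([np.ones(len(y)), X])        # prepend intercept column (the Rf term)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    see2 = resid @ resid / (len(y) - X.shape[1])     # estimated error variance (SEE squared)
    return np.sqrt(np.diag(see2 * np.linalg.inv(X.T @ X)))

def simulate(rho):
    """Draw X, Y, Z with pairwise correlation rho; R = 2 + 1*X + 1*Y + 1*Z + e."""
    cov = np.full((3, 3), rho) + (1.0 - rho) * np.eye(3)
    xyz = rng.multivariate_normal(np.zeros(3), cov, size=n)
    e = rng.normal(scale=1.0, size=n)
    r = 2.0 + xyz @ np.array([1.0, 1.0, 1.0]) + e
    return xyz, r

for rho in (0.0, 0.95):
    xyz, r = simulate(rho)
    se = ols_standard_errors(xyz, r)
    print(f"pairwise correlation {rho:.2f}: SE of C1 = {se[1]:.3f}")
# Typical result: the SE of C1 is several times larger at rho = 0.95,
# even though SEE (the residual spread) is roughly the same in both fits.
```

The standard errors are computed directly from SEE² × (X'X)⁻¹ so the only dependency is numpy; a regression library should report essentially the same numbers.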
That I don’t know. E stays the same in this case… are you referring to the error term when you say E stays the same? I don’t know whether I agree or disagree because I can’t tell what the topic is here.
It would make sense if, say, the variables were negatively correlated, but I don’t get it when the correlation is positive. This is also a statement I am having trouble understanding.
If you can answer this, it would be great. Please use the previous formula as a base so I can visually understand.