# Time Series -> 2 questions

Q1. The data below yield the following AR(1) specification: xt = 0.9 − 0.55xt−1 + εt, with the indicated fitted values and residuals.

| Time | xt | Fitted value | Residual |
|------|-----|--------------|----------|
| 1 | 1 | – | – |
| 2 | −1 | 0.35 | −1.35 |
| 3 | 2 | 1.45 | 0.55 |
| 4 | −1 | −0.2 | −0.8 |
| 5 | 0 | 1.45 | −1.45 |
| 6 | 2 | 0.9 | 1.1 |
| 7 | 0 | −0.2 | 0.2 |
| 8 | 1 | 0.9 | 0.1 |
| 9 | 2 | 0.35 | 1.65 |

The following sets of data are ordered from earliest to latest. To test for ARCH, the researcher should regress:

A) (1, 4, 1, 0, 4, 0, 1, 4) on (1, 1, 4, 1, 0, 4, 0, 1)
B) (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1, 1, 4, 1, 0, 4, 0, 1)
C) (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01)
D) (−1.35, 0.55, −0.8, −1.45, 1.1, 0.2, 0.1, 1.65) on (0.35, 1.45, −0.2, 1.45, 0.9, −0.2, 0.9, 0.35)

Q2. Alexis Popov, CFA, wants to estimate how sales have grown from one quarter to the next on average. The most direct way for Popov to estimate this would be:

A) an AR(1) model with a seasonal lag.
B) a linear trend model.
C) an AR(1) model.
D) an MA(1) model.
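As a sanity check on the numbers in the question: the fitted values in the table are consistent with xt = 0.9 − 0.55xt−1. A quick numpy sketch (numpy is my choice here, not part of the question) reproduces the fitted values and residuals:

```python
import numpy as np

# x values from the table, t = 1..9
x = np.array([1, -1, 2, -1, 0, 2, 0, 1, 2], dtype=float)

# Fitted values from the AR(1) equation: x_t = 0.9 - 0.55 * x_{t-1}
fitted = 0.9 - 0.55 * x[:-1]    # predictions for t = 2..9
resid = x[1:] - fitted          # residuals for t = 2..9

# fitted: 0.35, 1.45, -0.2, 1.45, 0.9, -0.2, 0.9, 0.35  (matches the table)
# resid:  -1.35, 0.55, -0.8, -1.45, 1.1, 0.2, 0.1, 1.65 (matches the table)
print(np.round(fitted, 2))
print(np.round(resid, 2))
```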

#1. The squared residuals should actually be regressed on a constant and one lag of the squared residuals. So each squared residual should be regressed on the square of the previous period's residual. The formatting of your question is a little off, so I can't figure out the equation from the first column of the data. #2. B? Not very sure though.

1. No idea!! 2. C?
1. D 2. D? I see "on average" and immediately think of an MA(1) model

Hmm. I remember reading about this on a beach in Mexico whilst drinking Margaritas. But only vaguely. We're testing for CH - conditional heteroskedasticity. So we need to know if the errors are related to the predicted values. So whatever we regress has to include the error terms. So I'll pick D. For the second one, if we just want the average q-on-q change, we just need to draw a line through it. So the most direct way would be a linear trend. B. This should be a good test for my drunken study skills.

D C ???

I’m going with D, B.

Just installed QBank and was hit by these 2 questions on Time Series [basic - LOS Quiz]. The correct answers are: C and B. Could anybody explain why C could be the answer to Q1? I don't remember CH being covered by Schweser (or I would have missed it in haste). For Q2: my reasoning fell exactly in line with pdxanalytics. Why is a linear trend model better than an MA(1) for this scenario?? Any inputs appreciated!! QBank Score: 41 ques @ 90%

That’s incorrect. Actually, none of the answers are correct. The CFAI book clearly states on page 296 “Regress the squared residuals from the estimated regression equation on the independent variables in the regression.” They have it backwards in answer C.

Answer A is also correct for Q1.

This is what the QBank says. Heteroskedasticity describes one possible pattern in the squared residuals. The ARCH model is the regression of the squared residuals on their lagged values. The squared residuals are (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225). Regressing the last 7 on the first 7 gives a first-order ARCH model. Regressing the squared residuals on the squared values of xt, i.e., (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1, 1, 4, 1, 0, 4, 0, 1), would be a test for another type of conditional heteroskedasticity, but not ARCH. I am still not able to translate this Greek to English.
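For anyone who wants to see the mechanics rather than the Greek, here is a minimal numpy sketch (my own, not from QBank) of the first-order ARCH test using the residuals from the question:

```python
import numpy as np

# Residuals from the fitted AR(1) in the question, t = 2..9
resid = np.array([-1.35, 0.55, -0.8, -1.45, 1.1, 0.2, 0.1, 1.65])
sq = resid ** 2  # 1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225

# ARCH(1) test: regress e_t^2 on a constant and e_{t-1}^2
y = sq[1:]                                       # later squared residuals
X = np.column_stack([np.ones(len(y)), sq[:-1]])  # constant + lagged squared residuals
a0, a1 = np.linalg.lstsq(X, y, rcond=None)[0]

# A statistically significant a1 would indicate ARCH effects.
print(f"a0 = {a0:.4f}, a1 = {a1:.4f}")
```

Note the ordering: the later 7 squared residuals are the dependent variable, the earlier 7 (lagged) squared residuals are the regressor, which is exactly answer C.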

Autoregression: X(t+1) = F(X(t)) – the series is a function of its previous value -> we just shift by one element. Does it make sense to you, Dinesh?

Where’s that bloody deVivre when you need him?

Finally I understood 1% of the complete issue here. Waiting for Joey for the remaining 99% (if at all my 1% is correct). This problem is more like visiting each option and finding whether any of them exhibits a pattern in the residual errors. So when we go to option C, we figure out that the residuals exhibit a pattern there: squaring the 'current residual error' and regressing it on the lagged value of the 'previous squared residual error'. As shown… (0.55^2, -0.8^2, -1.45^2, 1.1^2, 0.2^2, 0.1^2, 1.65^2) on (-1.35^2, 0.55^2, -0.8^2, -1.45^2, 1.1^2, 0.2^2, 0.1^2) = (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01). Which is clearly a pattern in the residuals and hence violates the homoskedasticity assumption of multiple regression theory. … is this BS???

C has the correct numbers, it is just that they are reversed. You should be regressing (1.8225, .3025…) on (.3025, .64…) instead of the way they have it in the question.

wyantjs Wrote: ------------------------------------------------------- > C has the correct numbers, it is just that they > are reversed. You should be regressing (1.8225, > .3025…) on (.3025, .64…) instead of the > way they have it in the question. That's wrong. You should use 1.8225 to predict .3025, .3025 to predict .64 -> you always use past values to predict future values.

Just square each residual and regress it on the independent variable, which will be the squared residual from the previous period (a one-period lag).

Also, the linear trend model is better because they've asked for the most "direct" way. An MA model is more complicated than the linear trend model. Edit: Can't believe I got both of these correct. Yahoo!

maratikus Wrote: ------------------------------------------------------- > wyantjs Wrote: > -------------------------------------------------- > ----- > > C has the correct numbers, it is just that they > > are reversed. You should be regressing (1.8225, > > .3025…) on (.3025, .64…) instead of > the > > way they have it in the question. > > that's wrong. you should use 1.8225 to predict > .3025, .3025 to predict .64 -> you always use past > values to predict future values. I love how you disagree with me, and are typically incorrect. The goal is to test whether the variance of the residuals is conditional upon the observed values. Therefore, you should regress the SQUARED RESIDUALS ON THE INDEPENDENT VARIABLES.

Wow. What happened here? Anyway, there's a bunch of different things going on here.

For #1, C is the best answer. For ARCH you are regressing squared residuals on the lagged squared residuals (and here they are suggesting only the simplest ARCH structure). There's a bunch of stuff up there about regressing squared residuals on everything else, but ARCH says that the variance of the residuals depends on the prior residuals. This means that the error terms depend on previous errors, which is pretty much the essence of ARCH. As dinesh pointed out above, you can imagine other kinds of more plain-vanilla heteroscedasticity where the variance of the error depends on one or more independent variables or on the predicted values, and then you're back in Breusch-Pagan land. That's what wyantjs's quote addresses: "Regress the squared residuals from the estimated regression equation on the independent variables in the regression." That's a test for a different kind of heteroscedasticity (the word just means "different variances", and the differences can come from anything, including stuff not modelled). Also, the order given in C seems to be right.

For #2, ruhi is right. An AR(1) kind of looks like a linear trend regression (i.e., if Y(t) = b0 + b1*t + e(t), then Y(t) - Y(t-1) = b1 + e(t) - e(t-1) => Y(t) = b1 + Y(t-1) + error), but that's the unit root problem and you're supposed to go running away from that. Anyway, what do you believe? Do you believe that sales are related to time, so they are growing through time, or do you believe that they are related to previous sales? It's pretty clear that the first-order effect here is time. Now that doesn't mean that there isn't some autoregressive effect as well.
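To make the #2 point concrete: in a linear trend model the slope coefficient *is* the average change per period, which is why it's the most direct answer. A small numpy sketch (the quarterly sales numbers are made up, since the question gives no data):

```python
import numpy as np

# Hypothetical quarterly sales; the question provides no actual data
sales = np.array([102.0, 105.5, 108.0, 112.2, 114.9, 118.3, 121.0, 125.4])
t = np.arange(1, len(sales) + 1)

# Linear trend: sales_t = b0 + b1 * t + e_t
# b1 directly estimates the average quarter-on-quarter change
b1, b0 = np.polyfit(t, sales, 1)
print(f"average quarterly growth (slope) = {b1:.3f}")
```

One regression, one coefficient, done: no lag structure or error-term modelling needed, which is what "most direct" is getting at.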