Just did the Schweser questions at the back, and although I realize these are quite easy and straightforward, I killed them and I'm feeling A LOT more confident in this section. I really took my time on this yesterday. Just one question for anybody who has the book handy. Schweser, Book 1, Pg. 250, #14: how did they determine the critical t-value to test the t-stat against? I know we have 250 observations, so I would use T − 2 df in the table, but what is the significance level? OR, can I just test the p-value of the Lag 12 coefficient (0.5612) against the p-value of the Lag 1 coefficient (<0.0001)? If this is the case, obviously the p-value is NOT less and therefore NOT statistically significant, meaning NO evidence of seasonality. If someone can clarify, I would appreciate it. Otherwise, I'm feeling decent about this stuff. Now I just need to get my hands on more difficult stuff. CFAI book, perhaps? ROCK >W< MUSIC!

I think the p-value is just so large you don’t need to look up any table.

The p-value is 0.5612, which makes it insignificant at any reasonable level of significance. And no, you wouldn't test it against another p-value.

Zim, I am not sure the significance level matters because the t-stat is so small. 10% is the most lenient level we ever use, and the t-stat is still smaller than the t-crit even there.

Okay, cool, so it’s just the fact that the p-value is SOOO F@CKING RIDICULOUSLY out of the stratosphere, that it’s not even in the same time zone of being considered legit?! No further testing necessary. Thanks boys!

That p-value means 56% is the SMALLEST significance level at which you could reject the null! So 10%, 5%, or 1% won't matter at all.
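To make the decision rule in these replies concrete, here's a quick sketch (the p-value is the one from the Schweser problem; the significance levels are the usual conventional ones):

```python
# The p-value is the smallest significance level at which the lag-12
# coefficient would be judged significant, so it fails at every
# conventional alpha -- no evidence of seasonality.
p_value_lag12 = 0.5612  # from the Schweser problem above

for alpha in (0.10, 0.05, 0.01):
    significant = p_value_lag12 < alpha
    print(f"alpha = {alpha:.2f}: reject H0 (coefficient = 0)? {significant}")
# prints False at every level
```

Same conclusion as above: you never compare one p-value to another p-value, only to your chosen alpha.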

Thanks dude, got it. Gonna head home now. Feeling good, even though dead tired from my average of five hours/night of sleep this weekend. Have you guys just been hammering questions for time series from the Q-bank or has anybody done the CFAI questions? What kinda difficulty are we looking at on the samples so far? You guys been blown outta the water or not?

zim just wanted to remind you to get a solid eight hours of sleep wednesday and thursday as we are going to hit the books with no mercy again this weekend.

Yessir, dude! Records will fall once again. I will show Memorial Day NO f@cking mercy.

Also, I make a personal practice of testing models at the 60% level of significance all the time; I don't see a problem with it. That's how I found out there is a significant relationship between the market closing at an increase and Zim's wearing a lime green banana hammock.

The approved term is budgie smuggler.

Just to add another question on the Quant post… The data below yield the following AR(1) specification: xt = 0.9 − 0.55xt−1 + εt, with the indicated fitted values and residuals:

Time   xt   Fitted value   Residual
 1      1       -              -
 2     -1       0.35          -1.35
 3      2       1.45           0.55
 4     -1      -0.20          -0.80
 5      0       1.45          -1.45
 6      2       0.90           1.10
 7      0      -0.20           0.20
 8      1       0.90           0.10
 9      2       0.35           1.65

The following sets of data are ordered from earliest to latest. To test for ARCH, the researcher should regress:

A) (1, 4, 1, 0, 4, 0, 1, 4) on (1, 1, 4, 1, 0, 4, 0, 1)
B) (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1, 1, 4, 1, 0, 4, 0, 1)
C) (-1.35, 0.55, -0.8, -1.45, 1.1, 0.2, 0.1, 1.65) on (0.35, 1.45, -0.2, 1.45, 0.9, -0.2, 0.9, 0.35)
D) (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01)
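As a sanity check, the fitted values and residuals in the question follow mechanically from the AR(1) equation; a minimal sketch in plain Python (observations taken from the problem above):

```python
# Reproduce the fitted values and residuals from x_t = 0.9 - 0.55 * x_{t-1}.
# Rounding to 4 decimals just sidesteps floating-point noise.
x = [1, -1, 2, -1, 0, 2, 0, 1, 2]  # observations x_1 .. x_9

fitted = [round(0.9 - 0.55 * x[t - 1], 4) for t in range(1, len(x))]   # t = 2..9
residuals = [round(x[t] - f, 4) for t, f in zip(range(1, len(x)), fitted)]

print(fitted)     # [0.35, 1.45, -0.2, 1.45, 0.9, -0.2, 0.9, 0.35]
print(residuals)  # [-1.35, 0.55, -0.8, -1.45, 1.1, 0.2, 0.1, 1.65]
```

No fitted value or residual exists for t = 1 since there is no x_0 to lag.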

Ahh… never mind, that came out SO badly. I thought it was a tough question. If anybody cares, the answer is D. Heteroskedasticity describes one possible pattern in the squared residuals. The ARCH model is the regression of the squared residuals on their corresponding lagged values. The squared residuals are (1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225). Regressing the last 7 on the first 7 would be a first-order ARCH model. Regressing the squared residuals on the squared observations xt², i.e., (0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225) on (1, 1, 4, 1, 0, 4, 0, 1), would be a test for another type of conditional heteroskedasticity, but not ARCH.
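For what it's worth, the regression in choice D can be run directly. A minimal sketch with the squared residuals from the problem, using a hand-rolled OLS slope (no libraries):

```python
# ARCH(1) test per choice D: regress the squared residuals on their own
# first lag. A statistically significant slope would indicate ARCH effects.
sq_resid = [1.8225, 0.3025, 0.64, 2.1025, 1.21, 0.04, 0.01, 2.7225]

y = sq_resid[1:]    # dependent variable: last 7 squared residuals
x = sq_resid[:-1]   # regressor: first 7 squared residuals (the lag)

n = len(y)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx
print(slope, intercept)
```

With only seven observations this is just mechanics, of course; the exam point is which series goes on which side of the regression.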