Random Walk

@Harrogath - I got this from one of the past posts. I did not understand: how is the past value the best predictor of the value today when you cannot forecast a random walk?

[quote=“Harrogath”]

A random walk represents a random variable. Random means erratic, something with no predictable forecast. For example, exchange rates are best defined or modeled as random walks. So why is E(e) = 0? Because this means that the best forecast of X(t) is X(t-1). It sounds weird, but the explanation in simple words is that because you have no way to forecast a random walk, its past value is the best predictor of the value today (at t). So on average, E(e) should be zero.

[/quote]

[quote=“gargijain”]

@Harrogath - I got this from one of the past posts. I did not understand: how is the past value the best predictor of the value today when you cannot forecast a random walk?

[/quote]

In general, Harrogath’s post is inaccurate regarding the terms “random,” “erratic,” “no predictable forecast,” and “random variable.”

True, a variable said to follow a random walk is a random variable, but not all random variables are described by random walks (which is more of a data-generating process/function). This was not entirely clear. Random does not mean erratic, and erratic is also poorly defined in this usage: erratic is usually thought of as highly variable rather than “not predictable.” Random variables are not necessarily unpredictable and, loosely speaking, are often predictable, because a random variable has an underlying data-generating process that can in many cases be expressed as a model. In that sense, it is predictable.

I believe the intention of that post was to say only that random walks are variables for which the expected value at time t is the previous value, rather than the previous value plus some other growth or shrinkage. If I recall, this is related to martingale processes (or something like that).
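A quick simulation may make this concrete. This is only an illustrative sketch (Python with numpy assumed; nothing in it comes from the original posts): for a simulated random walk, the lag-1 "naive" forecast has much lower mean squared error than, say, forecasting with a constant such as the sample mean.

```python
# Sketch: x_t = x_{t-1} + e_t with E(e_t) = 0, so the conditional expectation of
# x_t given the past is just x_{t-1}. Compare the naive (lag-1) forecast with a
# constant "use the mean" forecast.
import numpy as np

rng = np.random.default_rng(42)
e = rng.normal(loc=0.0, scale=1.0, size=10_000)      # white-noise shocks, E(e) = 0
x = np.cumsum(e)                                     # random walk: x_t = x_{t-1} + e_t

naive_forecast = x[:-1]                              # forecast of x_t is x_{t-1}
mean_forecast = np.full(len(x) - 1, x[:-1].mean())   # competing constant forecast

mse_naive = np.mean((x[1:] - naive_forecast) ** 2)
mse_mean = np.mean((x[1:] - mean_forecast) ** 2)

print(f"MSE of lag-1 forecast: {mse_naive:.3f}")     # roughly the shock variance (about 1)
print(f"MSE of mean forecast:  {mse_mean:.3f}")      # far larger, because the walk wanders
```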

Tickersu always comes in strong with the mathematical statistics knowledge.

Tickersu, random and erratic are almost interchangeable in practice, unless the cases are clearly differentiated. Erratic is indeed not predictable; “erratic” means a variable that is unsteady even in the long term. You can label many variables as erratic or not, depending on how demanding you are. If you think you can predict erratic variables most of the time, then you are not using the usual definition of “erratic.”

You are not being accurate regarding long-term and short-term forecasts. With enough historical data you can predict long-term tendencies, but not necessarily short-term outcomes (which are the most important goal most of the time). Indeed, trying to correctly forecast short-term outcomes of random variables is a pain in the a.ss.

I don’t remember the question of the OP in that thread, but it was probably something specific.

People can use terms incorrectly, but that doesn’t make the usage correct. Random variables are not necessarily ‘erratic’ in the sense that they need not have high variability. Erratic in the sense of high variability might make accurate predictions harder, sure. But your earlier post definitely conflates the meanings of erratic and random, of which the latter has a more technical definition in probability and statistics. I’m not making any claim about how often someone can predict an ‘erratic’ process (although I’m still not quite sure which definition you’re using, because equating it with ‘random’ is incorrect, full stop).

The point I made was that random does not equal erratic, and therefore random does not mean unpredictable (despite common, non-technical usage of the word). Erratic may be hard to predict, but this is not necessarily so of something that is “random” (think random variable). If a variable weren’t “random,” it would be deterministic and predictable without error; this is fundamental to many ideas in statistics. The other thing to note is that predictions shouldn’t necessarily be evaluated on a one-by-one basis (that is sometimes helpful when looking for issues in the model); the “goodness” is typically assessed “on the whole” with some summary measure of performance.

I never made any statement about long-term versus short-term forecasting. I agree that you can predict longer-term trends and that short-term predictions might be harder. There are plenty of people who make reasonable short-term predictions for random variables. Again, random does not mean erratic; if a variable is not a random variable, then there is some deterministic model and predictions are without error. This is why we quantify the variance of the error term in linear regression, for example: the estimated equation is our estimate of the deterministic component of the variable, and theoretically a random error drawn from a specified probability distribution adds to that deterministic component to equal the true value.
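That "deterministic component plus random error" idea can be shown in a small simulation (a sketch assuming Python with numpy; the true coefficients 1.5 and 0.8 and the error standard deviation of 2 are invented for illustration):

```python
# Sketch: a "random" variable y is a deterministic component (the regression line)
# plus a random error from a specified distribution. OLS estimates the deterministic
# part, and the residual variance estimates the error variance.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=500)
eps = rng.normal(0, 2.0, size=500)         # random error, standard deviation 2
y = 1.5 + 0.8 * x + eps                    # deterministic part + random part

X = np.column_stack([np.ones_like(x), x])  # design matrix with an intercept column
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

print("estimated intercept and slope:", beta_hat)       # close to [1.5, 0.8]
print("estimated error variance:", resid.var(ddof=2))   # close to 4 (= 2 squared)
```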

I was summarizing what I believed to be the intention of your post because the OP seemed to be unclear on some ideas.

Suppose we have a time series with a unit root problem, and we first-difference it and then run a regression. If R2 is, say, 22%, then what is the interpretation?

Does the model have an intercept?

Yes it does. It's 0.9958, and after first differencing it's -0.3128.

What is the R-squared from each model?

R2 always measures the same thing: what % of the variation of the dependent variable is explained by the variations of the set of independent variables.

First or second differencing mostly changes the interpretation of the slope coefficients. That is the challenge of differencing variables: how you interpret them afterward.

No, this is not the case, which is why I asked whether there is an intercept in both models. The “usual” R2 has the interpretation that most people are familiar with when the model has an intercept. R2 also has many interpretations depending on how it’s calculated. I understand there is context for the exam, but I also take the effort to make sure things are accurate because people carry this knowledge to the real world. It also isn’t clear if the OP needs this for test purposes. Also, differencing the dependent variable changes more than the meaning of the slope coefficient, because the model is now for a different variable than the original, undifferenced one.
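To illustrate the point that R2 depends on how it is calculated, here is a rough sketch (Python with numpy assumed; the simulated data are invented): with an intercept you get the familiar "centered" R2, while a no-intercept fit is typically reported with an "uncentered" R2 that is not comparable to it.

```python
# Sketch: the familiar (centered) R2 uses deviations from the mean of y; a no-intercept
# fit is usually reported with an "uncentered" R2 built from raw sums of squares,
# which can look very different even for similar data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 5.0 + 0.3 * x + rng.normal(0, 1.0, size=200)

# With an intercept: centered R2
X = np.column_stack([np.ones_like(x), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
r2_centered = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Without an intercept: uncentered R2 (raw sum of squares in the denominator)
Xn = x.reshape(-1, 1)
resid_n = y - Xn @ np.linalg.lstsq(Xn, y, rcond=None)[0]
r2_uncentered = 1 - resid_n @ resid_n / (y @ y)

print(f"centered R2 (with intercept): {r2_centered:.3f}")
print(f"uncentered R2 (no intercept): {r2_uncentered:.3f}")
```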

99% of models in real life have an intercept. The intercept explains a lot, so do not bother too much trying to be exhaustive with theory at the expense of practical exercise. I do respect your point of view, btw.

I respect yours too, which is good to clarify before I say that I’m not sure your purview of modeling is wide enough to make a statement like “99% of models in real life have an intercept.” The idea of models without intercepts isn’t unusual at all, and many theories are best represented by models without an intercept (set to zero). Further, the intercept often doesn’t explain anything or have a practical meaning; it is frequently nonsensical or outside the relevant range to set all independent variables to a value of 0, which makes interpreting an intercept meaningless. Interpreting an intercept requires both that the sample values for the independent variables include zero and that it is theoretically possible for each variable to be zero (data-entry errors or “forcing” zeros are illegitimate; if zeros are possible but you have not observed them, you’re opening yourself to extrapolation risks).

First it's around 99%, and after first differencing it's 0.26%.


So to clarify:

R2 before differencing is 99%

R2 after differencing is 0.26%

Correct?

And because both models have an intercept, you get the following:

About 99% of the sampling variation in the original dependent variable is accounted for by the model (with whichever independent variables were included). This is a good example of how a time series regression can look really good in terms of R2 if you aren’t paying attention.

About 0.26% (basically none) of the variation in the first difference of the original DV is accounted for by the model (with whichever independent variables were included). This continues the good example, showing how first differencing can alleviate some issues; but your dependent variable is now different, and the models are different.

Where I have used generic terms, you should substitute the specific variable names to make this directly applicable to each problem (DV, independent variables)…
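For anyone who wants to see this pattern reproduced, here is a rough simulation sketch (Python with numpy assumed; the series are simulated, not the OP's data): two unrelated trending, non-stationary series give a very high R2 in levels and a near-zero R2 after first differencing, much like the ~99% vs ~0.26% reported above.

```python
# Sketch: regressing one trending/unit-root series on another unrelated one yields a
# misleadingly high R2 in levels; after first differencing both series, the R2
# collapses because the differenced series are essentially unrelated noise.
import numpy as np

def r_squared(y, x):
    """Centered R2 from an OLS regression of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(7)
t = np.arange(2_000, dtype=float)
y = 0.5 * t + np.cumsum(rng.normal(size=2_000))   # trending, unit-root series
x = 0.3 * t + np.cumsum(rng.normal(size=2_000))   # unrelated trending series

print(f"R2 in levels:            {r_squared(y, x):.3f}")                    # close to 1
print(f"R2 in first differences: {r_squared(np.diff(y), np.diff(x)):.3f}")  # close to 0
```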

Thanks

We have an autoregressive model with a seasonal lag. The equation is ln(Sales_t) - ln(Sales_{t-1}) = b0 + b1[ln(Sales_{t-1}) - ln(Sales_{t-2})] + b2[ln(Sales_{t-4}) - ln(Sales_{t-5})] + ε_t, where b0 = 0.0121, b1 = -0.0839, b2 = 0.6292. If sales grew by 1 percent last quarter and by 2 percent four quarters ago, use the model to predict the sales growth for this quarter. (CFAI EOC question)

In the answer, the following equation is given. How does the above equation (with the coefficients filled in) transform into the following?

ln(Sales_t) - ln(Sales_{t-1}) = 0.0121 - 0.0839 ln(1.01) + 0.6292 ln(1.02) ≈ 0.02372, and then predicted growth = e^0.02372 - 1 ≈ 2.40%. How do we get 1.01 and 1.02? Is it because e^0.01 ≈ 1.01 and e^0.02 ≈ 1.02? But why do we do that?

Pls help someone…

ln(a) − ln(b) = ln(a / b)

So, if sales grew by 1% last quarter, then,

ln(Sales_{t-1}) − ln(Sales_{t-2}) = ln(Sales_{t-1} / Sales_{t-2}) = ln(1.01)

Similarly for the 2% growth four quarters ago.

But why 1.01 and why not 1%?

Because sales grew by 1%, so new sales divided by old sales = 1.01.
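For anyone who wants to verify the arithmetic, here is the same calculation as a short sketch in Python (only the coefficients from the EOC question above are used):

```python
# Sketch of the EOC calculation: the model is in log differences, so 1% and 2% growth
# enter as ln(1.01) and ln(1.02); the fitted value is a log growth rate, and
# exponentiating converts it back to a simple percentage growth rate.
import math

b0, b1, b2 = 0.0121, -0.0839, 0.6292

log_growth = b0 + b1 * math.log(1.01) + b2 * math.log(1.02)   # ln(Sales_t / Sales_{t-1})
growth = math.exp(log_growth) - 1                             # back to a simple growth rate

print(f"predicted log growth: {log_growth:.5f}")   # about 0.02372
print(f"predicted growth:     {growth:.2%}")       # about 2.40%
```

The only conversion to remember is that the model's inputs and output are log growth rates: percentages go in as ln(1 + g) and come back out via e^x - 1.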

You need to think about this stuff.