SEE=RMSE?

The formula for SEE and RMSE is the square root of MSE. What’s the difference? Is RMSE just another term for the same thing when determining the accuracy of an AR model?

I assumed they are the same:

SEE = MSE^(1/2)
RMSE = square root of the average squared error (same thing)

A lower value for either SEE or RMSE means the model provides a better fit.

SEE is basically used to judge the effectiveness of the regression model (in sample).

The RMSE is basically the same (the divisors differ, n vs. n-2) but is derived from the errors of the out-of-sample forecast. Same formula, different beasts.

It's like me telling you the last 48 months of Dow returns and asking you to come up with a regression model, and you tell me that using the OLS method you derived an SEE of x. Now we use the same regression model to forecast the next 12 months, compare with the actual realized returns, and calculate the RMSE. If you are a betting man you would expect the RMSE to be higher than the SEE in absolute terms most of the time. That last part is made up by me; I don't know if there is a statistical proof of the probability that RMSE > SEE. :slight_smile: HTH.
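Here is a rough numerical sketch of that 48-month / 12-month example. The data are simulated (not real Dow returns), the AR(1) parameters are made up purely for illustration, and only numpy is assumed:

```python
# Sketch of the 48-month estimation / 12-month forecast example above.
# All numbers are made up; the series is simulated, not real Dow data.
import numpy as np

rng = np.random.default_rng(42)

# Simulate 61 months of returns from an AR(1): r_t = 0.005 + 0.3*r_{t-1} + eps_t
n_total = 61
r = np.zeros(n_total)
for t in range(1, n_total):
    r[t] = 0.005 + 0.3 * r[t - 1] + rng.normal(0, 0.04)

x, y = r[:-1], r[1:]            # lagged return -> next return (60 pairs)
x_in, y_in = x[:48], y[:48]     # "last 48 months": estimation sample
x_out, y_out = x[48:], y[48:]   # next 12 months: forecast sample

# OLS fit on the in-sample data (slope and intercept of y on x)
b1, b0 = np.polyfit(x_in, y_in, 1)

# SEE: in-sample residuals, divisor n - 2 (two estimated parameters)
resid_in = y_in - (b0 + b1 * x_in)
see = np.sqrt(np.sum(resid_in ** 2) / (len(y_in) - 2))

# RMSE: out-of-sample forecast errors, divisor n
fcst_err = y_out - (b0 + b1 * x_out)
rmse = np.sqrt(np.mean(fcst_err ** 2))

print(f"SEE  (in-sample, 48 obs)     = {see:.4f}")
print(f"RMSE (out-of-sample, 12 obs) = {rmse:.4f}")
```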

As far as I know, the RMSE has to do with time series.

It’s something Quant-like.

lol! mcpass you are hilarious…

Thank you mcpass… :slight_smile:

sarthak Wrote:
> [quoted in full above]

If your model is correct, the data are stationary, you calculate them both correctly, etc., then they are both unbiased estimators of the same thing. If any of those assumptions fails, then E(RMSE) > E(SEE).
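A quick simulation along the lines Joey describes (a sketch with made-up parameters, not a proof): when the model is correctly specified and the data are stationary, the average SEE and the average out-of-sample RMSE both land close to the true residual standard deviation.

```python
# Simulation sketch: with a correctly specified, stationary model,
# SEE and out-of-sample RMSE estimate the same residual standard deviation.
# Parameters are made up for illustration; only numpy is assumed.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.04          # true residual standard deviation
n_in, n_out = 48, 12  # same in-sample / out-of-sample split as above
sees, rmses = [], []

for _ in range(5000):
    x = rng.normal(size=n_in + n_out)
    y = 0.005 + 0.3 * x + rng.normal(0, sigma, size=n_in + n_out)

    # Fit OLS on the in-sample portion only
    b1, b0 = np.polyfit(x[:n_in], y[:n_in], 1)

    # SEE from in-sample residuals (divisor n - 2)
    resid_in = y[:n_in] - (b0 + b1 * x[:n_in])
    sees.append(np.sqrt(np.sum(resid_in ** 2) / (n_in - 2)))

    # RMSE from out-of-sample forecast errors (divisor n)
    err_out = y[n_in:] - (b0 + b1 * x[n_in:])
    rmses.append(np.sqrt(np.mean(err_out ** 2)))

print(f"true sigma   = {sigma:.4f}")
print(f"average SEE  = {np.mean(sees):.4f}")
print(f"average RMSE = {np.mean(rmses):.4f}")
```

If the data-generating process shifts after the estimation window (non-stationarity), the out-of-sample RMSE tends to drift above the SEE, which matches the intuition in the earlier post.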

Joey, E(SEE)? There is nothing ‘expected’ about SEE. We can calculate SEE from a data set of historical “in sample” values. Once we derive the intercept and coefficients from the OLS regression, we then forecast for the “out of sample” period and use the RMSE of the forecast w.r.t. the ‘out of sample’ actual results to gauge the relative effectiveness of the forecast. Am I missing something?

SEE is a statistic, so you can take its expectation. It's an estimator of the standard deviation of the residuals (its square is an unbiased estimator of the residual variance).
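For reference, the standard textbook definitions side by side (these formulas are not quoted from the thread; m denotes the number of out-of-sample forecasts):

```latex
% In-sample: simple regression with two estimated parameters
\mathrm{SEE} = \sqrt{\frac{\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i\bigr)^{2}}{n-2}}

% Out-of-sample: m forecast errors
\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{t=1}^{m}\bigl(y_t - \hat{y}_t\bigr)^{2}}
```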