Risk model vs. expected returns model

OK, I’m supposedly quantitatively savvy, so I’m a little embarrassed to be asking this (and maybe I’ll go to Wilmott to ask later, where I might seem even more uninformed). I’m trying to figure out what the difference is between a risk model and an expected returns model. Here’s what I understand (or think I understand):

* An expected returns model looks at a number of factors (market beta, economic factors, fundamental factors like P/E, capitalization, etc.) and produces a prediction of the expected return for the next period, generally with some kind of regression.
* The factors that go into an expected returns model are generally called risk factors (market risk, growth/value risk, capitalization risk, etc.).
* A risk model tries to predict the risk of the investment, generally by trying to predict the distribution of returns around the expected return.

So why are these models constructed differently? Why wouldn’t you just use the same factors in both the risk model and the expected returns model, and then take the standard error of the prediction as your measure of risk (or sum the distribution functions and calculate whatever risk measure you think is appropriate for your purposes from that)? There’s a rough sketch of what I mean below. Why would a risk model include things that aren’t in your returns model? I understand that there may be factors that affect risk without much affecting expected returns, but I would think you’d still want to control for those variables in the expected returns regression. Can someone help clear this up for me?
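To make the question concrete, here’s a minimal numpy sketch of the setup I have in mind: one set of factors used for both the return forecast and the risk estimate, with risk coming from the factor covariance plus the residual variance of the same regression. The data, factor count, and forecast rule are all made up for illustration, not any particular vendor’s model.

```python
# One regression, two outputs: the fitted coefficients give the return
# forecast, and the factor covariance plus residual variance give the risk.
import numpy as np

rng = np.random.default_rng(0)

T, K = 250, 3                                # 250 periods, 3 hypothetical factors
F = rng.normal(0.0, 0.01, size=(T, K))       # factor returns
true_beta = np.array([1.0, 0.3, -0.2])       # the asset's "true" exposures
eps = rng.normal(0.0, 0.02, size=T)          # idiosyncratic noise
r = F @ true_beta + eps                      # asset returns

# Expected-returns side: regress returns on the factors, forecast next period.
X = np.column_stack([np.ones(T), F])         # add an intercept
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
alpha, beta = coef[0], coef[1:]

f_forecast = F.mean(axis=0)                  # naive forecast of next-period factor returns
expected_return = alpha + beta @ f_forecast

# Risk side, built from the *same* factors and the *same* regression:
# variance = beta' (factor covariance) beta + residual variance
factor_cov = np.cov(F, rowvar=False)
resid_var = np.var(r - X @ coef, ddof=K + 1)
risk = np.sqrt(beta @ factor_cov @ beta + resid_var)

print(f"expected return: {expected_return:.4%}, risk (stdev): {risk:.4%}")
```

In this sketch the risk estimate is just a by-product of the return regression, which is exactly why I don’t see where a separately constructed risk model, with its own factor set, comes in.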