Put simply, the basic point of any regression is to
[a] Understand and quantify the relationship between Y (dependent variable) and one or more Xs (explanatory variables)
[b] Make predictions regarding values of Y given this relationship.
Regarding [a], the relationship between Y and X is governed by the intercept and coefficients (alpha, betas). But these population coefficients are not known with certainty and must be estimated from samples. Since data are likely to change from sample to sample, the estimates of alpha and the betas will change too. What we need is some measure of precision for these coefficient estimates…this measure of reliability is captured by the standard errors of the OLS estimates.
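One way to make this concrete is to compute the OLS coefficients and their standard errors by hand. This is a minimal sketch, assuming numpy is available; the data are simulated (hypothetical true alpha = 2, beta = 0.5), and the formulas are the textbook ones: beta-hat = (X'X)^-1 X'y, with standard errors taken from the diagonal of sigma^2 (X'X)^-1.

```python
import numpy as np

# Hypothetical sample: y = 2 + 0.5*x + noise (illustration only)
rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), x])

# OLS estimates: (X'X)^-1 X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

# Estimated residual variance: SSE / (n - k), k = number of coefficients
residuals = y - X @ beta_hat
k = X.shape[1]
sigma2_hat = residuals @ residuals / (n - k)

# Standard errors of the OLS estimates:
# square roots of the diagonal of sigma2_hat * (X'X)^-1
se = np.sqrt(np.diag(sigma2_hat * XtX_inv))

print(beta_hat)  # sample estimates of (alpha, beta)
print(se)        # their standard errors
```

Rerunning this with a different random seed gives different estimates of alpha and beta, which is exactly the sampling variation the standard errors are quantifying.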
Regarding [b], how good are our model’s estimates of Y relative to the actual values of Y? The standard error of the estimate is a measure of the accuracy of these predictions.
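For a simple one-X regression, the standard error of the estimate is the square root of the sum of squared prediction errors divided by n - 2. A minimal sketch with hypothetical actual and model-predicted Ys (the numbers are made up for illustration):

```python
import numpy as np

# Hypothetical actual Ys and model-predicted Ys (illustrative numbers)
y_actual = np.array([3.1, 4.9, 7.2, 8.8, 11.1])
y_model = np.array([3.0, 5.0, 7.0, 9.0, 11.0])

# Standard error of the estimate for a simple (one-X) regression:
# sqrt( sum of squared prediction errors / (n - 2) )
n = len(y_actual)
sse = np.sum((y_actual - y_model) ** 2)
see = np.sqrt(sse / (n - 2))
print(round(see, 4))  # → 0.1915
```

The smaller the gaps between model Ys and actual Ys, the smaller this number, which is why it serves as the accuracy measure for [b].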
The smaller the standard errors of the OLS estimates -> The closer our sample coefficient estimates are to the population coefficients -> The better the relation between Y and X is captured by our coefficients -> The closer the model Ys and actual Ys are -> The lower the standard error of the estimate.
Standard errors of OLS estimates: govern the precision of our coefficient estimates.
Standard error of the estimate: governs the accuracy of our model-Ys relative to actual-Ys.