Serial correlation means that the residuals of the model (in this case a simple trend model, "a + b(t) + e") are systematically related to one another instead of being random. For example, if you try to model a parabolic-behaving data set with a straight-line trend, the line will produce large errors of the same sign at the extremes of the data, and that pattern in the residuals is serial correlation.
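A quick sketch of that idea (simulated parabolic data, not from the exercise itself): fit a straight-line trend to y = t² and compute the Durbin-Watson statistic on the residuals. DW is near 2 when residuals are uncorrelated; a value near 0 signals the strong positive serial correlation described above.

```python
import numpy as np

def durbin_watson(resid):
    # DW = sum of squared successive differences / sum of squared residuals.
    # ~2 for uncorrelated residuals, near 0 for strong positive autocorrelation.
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

t = np.arange(100, dtype=float)
y = t ** 2                      # parabolic data

# Fit the straight-line trend a + b*t
b, a = np.polyfit(t, y, 1)
resid = y - (a + b * t)

dw = durbin_watson(resid)
print(round(dw, 3))  # far below 2: residuals are positively autocorrelated
```

The residuals are positive at both ends of the sample and negative in the middle, so each residual is very close to its neighbor, which drives DW toward 0.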
In the case of heteroskedasticity, "bubbles" of bigger errors along the line are offset by stretches of smaller errors elsewhere, so a trend model can still be useful.
Was that a “yes” or a “no” to my question?
Heteroskedasticity is a problem for any model. However, some data sets simply behave that way (exaggerated bubbles of data points in some stretches, depressed valleys in others). That doesn't make the data set unusable; you just have to watch for possible problems in the model fit by monitoring error tests. To answer your question: you CAN allow some level of heteroskedasticity as long as the errors don't show systematic deviations.
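One simple error test you can run (a Goldfeld-Quandt-style check, shown here on simulated data where the noise grows with t): fit the trend, then compare the residual variance in the two halves of the sample. A ratio far from 1 flags heteroskedasticity.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
# Simulated trend data whose noise scale grows with t (heteroskedastic by design)
y = 2.0 + 0.5 * t + rng.normal(0.0, 0.05 + 0.05 * t)

# Fit the straight-line trend and take the residuals
b, a = np.polyfit(t, y, 1)
resid = y - (a + b * t)

# Compare residual variance in the first half vs the second half of the sample
n = len(resid) // 2
ratio = np.var(resid[n:]) / np.var(resid[:n])
print(round(ratio, 2))  # far above 1 here, since the noise widens with t
```

In practice you would use a formal test (Breusch-Pagan, White), but the half-sample variance ratio captures the same idea in a few lines.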
In the case of that exercise, serial correlation is a problem you can't allow in your model, because it generates systematic accumulation of errors. You can often solve it by adding more explanatory variables.
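To see how adding an explanatory variable fixes it (again on simulated parabolic data, as an illustration only): a straight-line fit to noisy y = t² leaves strongly autocorrelated residuals, while adding t² as a regressor leaves residuals that are just the noise, with a Durbin-Watson statistic back near 2.

```python
import numpy as np

def durbin_watson(resid):
    # ~2 for uncorrelated residuals, near 0 for positive autocorrelation
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(1)
t = np.arange(200, dtype=float)
y = t ** 2 + rng.normal(0.0, 5.0, size=t.size)  # parabolic data plus noise

# Straight-line trend: residuals still track the parabola
resid_lin = y - np.polyval(np.polyfit(t, y, 1), t)
dw_lin = durbin_watson(resid_lin)

# Add t**2 as an explanatory variable: residuals are just the noise
resid_quad = y - np.polyval(np.polyfit(t, y, 2), t)
dw_quad = durbin_watson(resid_quad)

print(round(dw_lin, 2), round(dw_quad, 2))  # first is near 0, second near 2
```

The extra variable absorbs the curvature the straight line could not, so the systematic pattern in the errors disappears.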