Can someone explain why you multiply the z-score by the standard error to calculate a range when rejecting/failing to reject the null? I understand the answer but not the concept. For example, a question from Schweser: a test is performed at the 5% level of significance on a random sample of 64 portfolio managers, where the mean time spent on research is found to be 2.5 hrs and the population standard deviation is 1.5 hrs. The 95% confidence interval for the population mean is 2.5 ± (1.96 × 0.1875), where the standard error is 1.5 / √64 = 1.5 / 8 = 0.1875. Thanks, John
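For anyone who wants to see the arithmetic laid out step by step, here is a minimal Python sketch of that Schweser calculation (the variable names are just my own, not from the question):

```python
import math

# Given values from the question
sample_mean = 2.5      # mean research time, hours
pop_std = 1.5          # population standard deviation, hours
n = 64                 # sample size
z = 1.96               # z-score for a 95% confidence level

# Standard error of the mean: sigma / sqrt(n) = 1.5 / 8 = 0.1875
std_error = pop_std / math.sqrt(n)

# Confidence interval: sample mean +/- z * standard error
lower = sample_mean - z * std_error
upper = sample_mean + z * std_error

print(f"standard error = {std_error}")          # 0.1875
print(f"95% CI = ({lower:.4f}, {upper:.4f})")   # (2.1325, 2.8675)
```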
Multiplying by the standard error ensures that you capture the uncertainty in how well your sample represents reality. After all, the mean was calculated from a random sample; if you were to take another sample, your mean might deviate. This variation between different samples drawn from the same population is what the standard error accounts for. At least that's how I understand it…
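To make that idea concrete, here is a rough simulation sketch (the population parameters are assumed to match the question's numbers, which is not something the question states): draw many random samples of 64, compute each sample's mean, and check that the spread of those means is close to σ/√n.

```python
import numpy as np

rng = np.random.default_rng(42)

pop_mean, pop_std, n = 2.5, 1.5, 64   # assumed population, reusing the question's numbers
num_samples = 10_000

# Draw many independent samples and record each sample's mean
sample_means = rng.normal(pop_mean, pop_std, size=(num_samples, n)).mean(axis=1)

# The spread of the sample means should be close to the standard error sigma / sqrt(n)
print("observed std of sample means:", sample_means.std())
print("theoretical standard error  :", pop_std / np.sqrt(n))   # 0.1875
```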