Hypothesis Testing and the p-Value

Southside Hospital in Bay Shore, New York, commonly conducts stress tests to study the heart muscle after a person has a heart attack. Members of the diagnostic imaging department conducted a quality improvement project with the objective of reducing the turnaround time for stress tests. Turnaround time is defined as the time from when a test is ordered to when the radiologist signs off on the test results. Initially, the mean turnaround time for a stress test was 68 hours. After incorporating changes into the stress-test process, the quality improvement team collected a sample of 50 turnaround times. In this sample, the mean turnaround time was 32 hours, with a standard deviation of 9 hours. (Data extracted from E. Godin, D. Raven, C. Sweetapple, and F. R. Del Guidice, “Faster Test Results,” Quality Progress, January 2004, 37(1), pp. 33–39.)

a. If you test the null hypothesis at the 0.01 level of significance, is there evidence that the new process has reduced turnaround time?

b. Interpret the meaning of the p-value in this problem.

For you to learn this stuff, it’s probably better for you to work out the answers rather than merely to read what others here write about it. So, with that in mind:

  1. Is this a 1-tail test, or a 2-tail test?
  2. Is this a small sample or a large sample?
  3. Will you be using a Z-table or a t-table?
  4. If a t-table, what is the number of degrees of freedom?
  5. What are the critical values from the table?

Answer those and we’ll continue.

  1. Is this a 1-tail test, or a 2-tail test? 1-tail, since we're testing whether turnaround time was reduced.
  2. Is this a small sample or a large sample? Large, greater than 30.
  3. Will you be using a Z-table or a t-table? Greater than 30 -> Z (Plus we know the mean, std dev).
  4. If a t-table, what is the number of degrees of freedom? DF = N-1. Not a t though.
  5. What are the critical values from the table? 0.01, given a 1-tail test.

Again, give me the critical values from the Z-table for #5. Everything else is fine. #4 is 49.

Isn’t it 2.33?

Yup.

So, what’s μ – 2.33σ?

32 - 2.33(9)?

Not 32: 68; 32’s the value we’re testing.

We can compute the test statistic (32 – 68) / 9 and compare that to -2.33, or:

(32 – 68) / 9 ? -2.33

32 – 68 ? -2.33(9)

32 ? 68 – 2.33(9)

So 68 – 2.33(9) = ?
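A quick sketch of that arithmetic in Python (note: this follows the setup as written at this point in the thread, with 9 in the denominator; the correction to the standard error, 9/√50, comes up later):

```python
# Rearranged comparison: is 32 less than 68 - 2.33 * 9?
mu_0 = 68        # hypothesized mean turnaround time (hours)
x_bar = 32       # sample mean (hours)
z_crit = 2.33    # one-tail critical z at the 0.01 level

threshold = mu_0 - z_crit * 9   # 68 - 2.33 * 9, about 47.03
print(threshold)
print(x_bar < threshold)        # 32 is below the threshold, so reject H0
```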

S2000magician,

1. Why will we use a z-score?

We are not provided with the population variance; 9 hours is the sample S.D.

S2000magician is right

Once the sample is big enough, z-values approximate the t-values well, due to the central limit theorem (or, more technically in this particular case, because of the way the t-distribution is derived), but that's outside the scope of the CFA.

A sample of 50 is pretty big; you can check a t-table and compare the values with the z-table values.
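For instance, a sketch of that table comparison using SciPy (SciPy is not part of the thread; it just does the lookup for us):

```python
from scipy.stats import norm, t

alpha = 0.01
df = 49  # n - 1 = 50 - 1

z_crit = norm.ppf(alpha)   # one-tail z critical value, about -2.33
t_crit = t.ppf(alpha, df)  # one-tail t critical value for df = 49, about -2.40

print(f"z: {z_crit:.4f}, t: {t_crit:.4f}")
# The t critical value sits slightly farther out in the tail,
# which is why the t-test is described as (slightly) more conservative.
```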

Latheorie

Yes, I know that, but we can use the t-distribution as well in this case, right?

And S2000magician, why did we divide the test statistic by the sample SD of 9 hours? Why didn't we use the standard error in the denominator: 9/(50^0.5)?

Need your help

Because we have a large sample we can use a Z-statistic; n = 50 > 30.

My mistake – gotta stop answering these on the fly when I’m working on other stuff – you’re correct: we’re testing for the _ mean _, so we should be using 9 / √50.
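With that correction, the test statistic works out as follows (a minimal sketch of the arithmetic):

```python
import math

x_bar, mu_0 = 32, 68   # sample mean, hypothesized mean (hours)
s, n = 9, 50           # sample standard deviation, sample size

se = s / math.sqrt(n)      # standard error of the mean, about 1.273
z = (x_bar - mu_0) / se    # about -28.28
print(z)  # far below the critical value of -2.33, so reject H0
```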

Thank you, S2000, for clarifying.

Mates, if you look in the CFA curriculum, volume 1, page 564:

though the sample size is 50, the test statistic used is that of the t-distribution, the reason being that the population variance is unknown.

I will be really thankful if you clarify this too.

When the population variance is unknown, but the sample size is large, you can use either a t-distribution, or a Z-distribution. The former is (slightly) more conservative.
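Putting the whole thread together, here is a hedged worked solution for parts (a) and (b). SciPy is used to compute the p-value, which the thread itself never works out numerically:

```python
import math
from scipy.stats import norm

x_bar, mu_0, s, n = 32, 68, 9, 50
alpha = 0.01

se = s / math.sqrt(n)        # standard error, about 1.273
z = (x_bar - mu_0) / se      # test statistic, about -28.28
p_value = norm.cdf(z)        # one-tail (lower-tail) p-value

print(f"z = {z:.2f}, p = {p_value:.3g}")
if p_value < alpha:
    print("Reject H0: evidence the new process reduced turnaround time.")

# Interpretation of the p-value (part b): if the true mean turnaround
# time were still 68 hours, the probability of drawing a sample of 50
# with a mean of 32 hours or less would be essentially zero.
```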