p-value

I want the p-value concept simplified for me. I want the simple concept, not the equations and formulas; I want to be able to explain it and what it helps us with in hypothesis testing.

Definition: the p-value is the probability of obtaining a test statistic at least as extreme as the observed one, assuming the null hypothesis (and all of our other assumptions) are true.
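
To make that definition concrete, here is a minimal simulation sketch with a hypothetical setup of my own (not from this thread): H0 says a coin is fair, we observed 60 heads in 100 flips, and the p-value is just the share of null-simulated results at least as extreme as what we saw.

```python
# Minimal sketch: estimating a p-value by simulation.
# Hypothetical setup: H0 = "the coin is fair"; we observed 60 heads in 100 flips.
import numpy as np

rng = np.random.default_rng(0)
n_flips, n_sims = 100, 100_000
observed_heads = 60  # our observed test statistic

# Simulate the null distribution: head counts from a fair coin.
null_heads = rng.binomial(n=n_flips, p=0.5, size=n_sims)

# Two-sided p-value: the proportion of simulated results at least as far
# from the null mean (50) as the observed result.
p_value = np.mean(np.abs(null_heads - 50) >= abs(observed_heads - 50))
print(f"estimated p-value: {p_value:.4f}")  # roughly 0.057 for this setup
```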

Practical meaning: the smaller the p-value, the more strongly our observed data “disagree” with the null hypothesis.

Use: when the p-value exceeds our threshold, alpha, we fail to reject the null hypothesis. The data do not disagree strongly enough with our null hypothesis in favor of the alternative. If the p-value is less than (or equal to, technically) alpha, we reject the null hypothesis because our observed data are “too inconsistent” with the null hypothesis.

Hope this helps!

Edited to be a little more accurate.

In a nutshell (a quick sketch follows the list):

  • If p ≤ α, reject the null hypothesis
  • If p > α, don’t reject the null hypothesis
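
Here is that decision rule as a couple of lines of Python; alpha and the p-values below are just made-up numbers for illustration.

```python
# Sketch of the decision rule; alpha and the p-values are hypothetical.
def decide(p_value: float, alpha: float = 0.05) -> str:
    # Reject when p <= alpha; otherwise fail to reject (we never "accept" H0).
    return "reject H0" if p_value <= alpha else "fail to reject H0"

print(decide(0.03))  # reject H0
print(decide(0.06))  # fail to reject H0
```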

The p-value is where your actual sample statistic lies if you compute the confidence % from the tail rather than from the center/mean of a normal distribution…

This is how I think of it to simplify.

This is quite far from the definition of a p-value. I would avoid this approach as it appears to complicate things while losing accuracy.

Dear smagician, have I complified the simplicated side or simplified the complicated side? As per Mr. tickersu, I have complified the complicated side!

The p-value is the LOWEST level of significance for which you can reject the null hypothesis. That’s the way I always thought of it. For example, if the level of significance is .05 but your p-value is .06, you can’t reject the null.
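
To illustrate that reading with made-up numbers: if p = .06, then .06 is exactly the smallest significance level at which you could reject.

```python
# Hypothetical illustration of the "lowest level of significance" reading:
# with p = 0.06, you reject exactly for alphas at or above 0.06.
p_value = 0.06
for alpha in (0.01, 0.05, 0.06, 0.10):
    verdict = "reject" if p_value <= alpha else "fail to reject"
    print(f"alpha = {alpha:.2f}: {verdict} H0")
```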

@S2000Magician

Sorry, but are you sure you don’t have this backwards?

He does have it backwards! I admittedly didn’t think to check him since it was a short post. Good catch.

This isn’t factually wrong, but it’s more of an operational definition that can lead to bad practice, and it doesn’t actually tell you what the p-value represents.

The reason it leads to bad practice is that people often take the statement to mean that alpha can or should be changed after they see the data, which is bad statistical practice and unethical in research (e.g., based on subject knowledge/expertise you were comfortable with an alpha of .01, but when the data were analyzed you changed alpha to .05 to get a significant result).

Further, the operational definition blurs the line between alpha (the chosen significance level, the probability of a Type I error) and the p-value (the observed significance, which is not an error probability). People often misinterpret the p-value as some type of error probability, which may or may not be due to limited definitions like the one above (lowest level of significance…).

So, you’re not wrong, but it doesn’t do much in defining the p-value, which is a probability.

Indeed, you have.

Stupid me: I did have it backwards.

Good catch.

I fixed it.

And shame on you for believing me.

wink

I wouldn’t say it was believing as much as it was scrolling down without reading after noticing your avatar (and then your track record popped into my mind), so I kept on scrolling!