The Power of a Statistical Test: What Does Insignificance Mean?
Authors: Mark D. Markel, DVM, PhD, Diplomate ACVS
Affiliation: Department of Surgical Sciences, School of Veterinary Medicine, University of Wisconsin-Madison, Madison, WI 53706.
Abstract: In the statistical testing of data, the P value is a standard measure for reporting quantitative results. When a significant difference is reported (e.g., P < .05), most readers understand that there is less than a 5% chance that the authors have made a type I error (false positive, or alpha error) in their conclusion. In contrast, when nonsignificant differences between treatments, groups, or parameters of interest are reported (e.g., P > .05), many investigators and readers incorrectly assume that there is a 95% chance that the conclusion of no difference is correct. In fact, the alpha level of significance (in this example, .05) is only one of the parameters that determine the probability of committing a type II error (false negative, or beta error) when concluding statistical insignificance. Statistical power is the probability that a test will detect a true difference of a given magnitude, that is, the probability of correctly rejecting the null hypothesis when it is false. The higher the power, the greater the confidence that a nonsignificant result (P > .05) reflects a true absence of difference rather than a type II error. Power depends on the alpha level of significance, the sample size, the standard deviation of the population or sample, and the magnitude of the difference the investigators are trying to demonstrate.
Keywords:
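
The abstract above names the four quantities that jointly determine power: the alpha level, the sample size, the standard deviation, and the magnitude of the difference sought. A minimal sketch of how they interact for a two-group comparison is given below, using the power routines in Python's statsmodels package; the clinically important difference (5 units), standard deviation (10 units), and group sizes are hypothetical values chosen only for illustration and do not come from the article.

# Illustrative only: power of a two-sample t test as a function of group size,
# for a hypothetical difference of 5 units and standard deviation of 10 units
# (standardized effect size, Cohen's d = 0.5), at alpha = .05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

alpha = 0.05                    # type I error rate
difference = 5.0                # hypothetical difference worth detecting
sd = 10.0                       # hypothetical standard deviation
effect_size = difference / sd   # Cohen's d

for n_per_group in (10, 30, 64, 100):
    power = analysis.power(effect_size=effect_size, nobs1=n_per_group,
                           alpha=alpha, ratio=1.0, alternative='two-sided')
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")

# Sample size per group needed to reach 80% power under the same assumptions.
n_needed = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                power=0.80, ratio=1.0, alternative='two-sided')
print(f"about {n_needed:.0f} per group for 80% power")

As the sketch shows, power rises with larger samples, a larger alpha, or a larger difference relative to the standard deviation; a nonsignificant result from an underpowered study therefore says little about whether a real difference exists.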
|
|