Type I and Type II Errors
The fundamental goal of hypothesis testing in research is to assess the truth of the null hypothesis. However, the testing process can produce Type I and Type II errors. These errors undermine the results of a study, meaning that the conclusions do not reflect the true condition. A Type I error occurs when a true null hypothesis is rejected, while a Type II error occurs when a false null hypothesis is not rejected. For example, suppose the research hypothesis is that drug A improves brain capacity in schizophrenia patients over 60 years of age, and a randomized, double-blind, placebo-controlled trial is conducted to establish whether drug A is beneficial. Most patients report improved brain capacity, while others develop severe psychosomatic symptoms. The null hypothesis in this case would be that the apparent benefit of drug A to brain capacity is coincidental. If the researcher rejects a null hypothesis that is in fact true and adopts the research hypothesis, a Type I error is committed; marketing the drug on the basis of these findings would cause serious harm to patients. On the other hand, if the researcher fails to reject a null hypothesis that is in fact false, a Type II error is committed; drug A never reaches the market despite its possible benefit to patients. Researchers must therefore be concerned about both types of error, although a Type I error poses the greater danger to the consequences of the research findings (Spiegel & Stephens, 2008).
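As a rough numerical illustration of these two errors, the following Python sketch simulates repeated two-arm trials and estimates how often the null hypothesis of "no benefit" is rejected. The sample size, effect size, and significance level are illustrative assumptions and are not taken from the example above.

```python
# Monte Carlo sketch of Type I and Type II error rates for a two-sample t-test.
# All numbers (n per arm, effect size, alpha) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, alpha, n_sims = 30, 0.05, 10_000

def rejection_rate(true_effect):
    """Fraction of simulated trials in which the null hypothesis (no benefit) is rejected."""
    rejections = 0
    for _ in range(n_sims):
        placebo = rng.normal(0.0, 1.0, n)         # placebo arm: no change in brain-capacity score
        drug_a = rng.normal(true_effect, 1.0, n)  # drug A arm: shifted by the assumed true effect
        _, p_value = stats.ttest_ind(drug_a, placebo)
        rejections += p_value < alpha
    return rejections / n_sims

# Type I error: the null hypothesis is true (no effect) but is rejected anyway.
print("Estimated Type I error rate :", rejection_rate(true_effect=0.0))   # close to alpha = 0.05
# Type II error: the null hypothesis is false (real effect) but is not rejected.
print("Estimated Type II error rate:", 1 - rejection_rate(true_effect=0.4))
```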
The value of the test statistic is used to decide whether to reject the null hypothesis, and this value depends on the sample size. The null hypothesis is rejected when the test statistic falls outside the acceptance region, which is determined by the level of significance. However, the two types of error cannot both be decreased simultaneously: setting a very low significance level increases the chance of making a Type II error. Researchers therefore commonly use a significance level of 0.05, which accepts a 5% risk of rejecting a true null hypothesis. The researcher can keep both errors small while retaining a low chance of a Type I error by using a large sample, since a larger sample increases the power of the test (Spiegel & Stephens, 2008).
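This trade-off can be made concrete with a short power calculation. The sketch below, assuming a two-sample t-test with an illustrative effect size of 0.4 and a fixed significance level of 0.05, shows how the Type II error rate shrinks as the sample size grows.

```python
# At a fixed significance level, a larger sample raises the power of the test
# and therefore lowers the Type II error rate (1 - power).
# The effect size of 0.4 is an illustrative assumption, not a value from the text.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (20, 50, 100, 200):
    power = analysis.power(effect_size=0.4, nobs1=n, alpha=0.05)
    print(f"n per group = {n:>3}: power = {power:.2f}, Type II error = {1 - power:.2f}")
```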
Statistical Significance versus Practical Significance
A statistically significant study does not guarantee practical significance. In their study, Cottrell and McKenzie (2010) found differences between males' and females' average diabetes scores, although the differences were extremely small. The study was statistically significant but not practically significant. Statistical significance means that the test statistic falls within the rejection region, so the null hypothesis can be rejected with some degree of certainty and the researcher adopts the alternative hypothesis. However, it is also essential to determine the practical significance of the size of the relationship, that is, whether the relationship is large enough to have practical value. Practical significance can be estimated using an effect size statistic, which measures the magnitude of the relationship in the study and is not influenced by the sample size (Cottrell & McKenzie, 2010).
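One common effect size statistic is Cohen's d, the standardized mean difference. The sketch below computes it for two hypothetical groups of diabetes scores; the data and group means are invented for illustration and are not the values reported by Cottrell and McKenzie.

```python
# Minimal Cohen's d calculation as one common effect size statistic.
# The data below are synthetic and only illustrate the arithmetic.
import numpy as np

def cohens_d(group_a, group_b):
    """Standardized mean difference between two groups, using the pooled standard deviation."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical diabetes scores: a half-point mean difference on a scale with SD ~10.
rng = np.random.default_rng(1)
males = rng.normal(50.0, 10.0, 500)
females = rng.normal(50.5, 10.0, 500)
print("Cohen's d:", round(cohens_d(males, females), 3))  # around 0.05, a trivially small effect
```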
Statistical significance depends on the value of the test statistic, which is influenced by the sample size; even a very small difference between means can be statistically significant when the sample is large. For example, a difference between the average IQ of men and women may be declared significant simply because the samples are large. Practical significance, by contrast, is not affected by the sample size; it is concerned with whether the difference in average IQ between men and women is large enough to make practical sense (Cottrell & McKenzie, 2010).
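This contrast can be seen numerically. In the sketch below, a fixed, tiny IQ-style difference of one point (with a standard deviation of 15) is simulated at several sample sizes; the p-value eventually falls below 0.05 while the effect size stays trivially small. All numbers are illustrative assumptions, not published IQ results.

```python
# A fixed, tiny mean difference becomes "statistically significant" once the
# sample is large, while the effect size (Cohen's d) stays small throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (100, 1_000, 10_000):
    men = rng.normal(100.0, 15.0, n)    # illustrative IQ-style scores
    women = rng.normal(101.0, 15.0, n)  # fixed one-point mean difference
    _, p_value = stats.ttest_ind(men, women)
    # Effect size does not grow with n; only the p-value shrinks.
    d = (women.mean() - men.mean()) / np.sqrt((men.var(ddof=1) + women.var(ddof=1)) / 2)
    print(f"n per group = {n:>6}: p = {p_value:.4f}, Cohen's d = {d:.3f}")
```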
In conclusion, researchers should report not only test statistics but also effect size measures, so that readers can estimate the practical significance of a study.
References
Cottrell, R., & McKenzie, J. F. (2010). Health promotion and education research methods: Using the five-chapter thesis/dissertation model. London: Jones & Bartlett Learning.
Spiegel, M. R., & Stephens, L. J. (2008). Schaum's outline of theory and problems of statistics. New York: McGraw-Hill Professional.