The levels of research evidence required to enhance practice vary across subjects and issues of discussion. Randomization is one of the most widely applied approaches for generalizing outcomes from a sample to the population it represents. However, inferences drawn from a sample to cover the represented group leave gaps and rest on assumptions that exclude other outcomes (Dattalo 2). In this perspective, the statistical significance of a quantitative outcome dictates the relevance of a tested assumption. However, other equally valid individual outcomes may still arise under normal conditions, and ethically they should not be treated as invalid entities within the general population. Another relevant issue is the validity of the random sample itself. Although a study may be described procedurally as random, conducting a fully random test is practically impossible. First, random tests are constrained by ethical requirements such as obtaining consent from the sample population: a researcher must seek participants' consent before commencing data collection, and the data collected must be kept confidential and used only for the research purposes stated in the consent information.
A fully random test is also not possible because, in most research cases, only the tests themselves are randomized. This implies that random sampling does not completely remove error from the research data. From an ethical perspective, these errors, together with those linked to sample size, count toward the measure of outcome validity in each individual case (Druckman and Kam 2). In most public health research studies, such errors must be ruled out beyond any reasonable doubt before inferences are made.
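The point that sampling error shrinks with sample size but never disappears can be illustrated with a minimal simulation sketch. The population parameters, sample sizes, and the helper function below are illustrative assumptions for demonstration only, not values or methods taken from the cited studies.

```python
import random
import statistics

# Hypothetical population: the mean and spread here are illustrative
# assumptions, not figures drawn from the cited sources.
random.seed(42)
population = [random.gauss(50, 10) for _ in range(100_000)]
true_mean = statistics.mean(population)

def sampling_error(sample_size, trials=1_000):
    """Average absolute gap between a random sample's mean and the population mean."""
    gaps = []
    for _ in range(trials):
        sample = random.sample(population, sample_size)
        gaps.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(gaps)

for n in (10, 100, 1_000):
    # The gap shrinks roughly with the square root of the sample size,
    # but it never reaches zero: randomization reduces, not removes, error.
    print(f"n={n:>5}: average sampling error = {sampling_error(n):.3f}")
```

Running this sketch shows the average gap between sample and population means falling as the sample grows yet remaining nonzero, which is the sense in which random sampling reduces rather than eliminates error in the data.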
Works Cited
Dattalo, Patrick. "Ethical Dilemmas in Sampling." Journal of Social Work Values and Ethics 7.1 (2010): 1-12. Web.
Druckman, James, and Cindy Kam. "Students as Experimental Participants: A Defense of the 'Narrow Data Base'." SSRN Electronic Journal 4.1 (2009): 1-41. Web.