The studies supporting the efficacy of a procedure can be categorized as strong evidence, which meets evidence standards; weaker evidence, which meets the standards with reservations; or insufficient evidence, which does not meet the evidence standards (What Works Clearinghouse, 2010). These standards are based on several factors that determine whether the evidence is sufficient:
1. Eligibility screening
This is the first step in determining whether a study supporting the efficacy of a procedure or technique constitutes sufficient evidence. The screening is based on the relevance of the procedure/intervention/technique to the condition or topical area, the relevance and size of the sample relative to the population of interest, the relevance and validity of the findings, and the timeliness of the research. For a study to be considered sufficient evidence, the research must have been published within the relevant time frame (within 20 years of the beginning of the topical area), i.e., it must be sufficiently current. The study must primarily analyze the efficacy of the procedure; studies addressing implementation, meta-analyses, and literature reviews are not acceptable sources of sufficient evidence. The study must also use a scientifically acceptable design. The study design is crucial in determining the quality of the research, and the acceptable designs (in descending order of quality) are the randomized controlled trial (RCT, which provides the highest quality of data), the quasi-experimental design (QED), and single-subject or regression discontinuity designs (What Works Clearinghouse, 2010). A minimal sketch of this screening logic follows.
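These screening questions can be read as a simple boolean filter. The following Python sketch encodes them under loose assumptions: the field names, the set of acceptable designs, and the reading of the 20-year window as a recency check are illustrative, not the official What Works Clearinghouse rubric.

```python
from datetime import date

# Hypothetical encoding of the eligibility screen. Field names, the design
# set, and the 20-year recency window are illustrative assumptions only.
ACCEPTABLE_DESIGNS = {"RCT", "QED", "single-subject", "regression discontinuity"}
EXCLUDED_PURPOSES = {"implementation", "meta-analysis", "literature review"}

def passes_eligibility_screen(study: dict) -> bool:
    """Return True only if a study clears all four screening criteria."""
    timely = date.today().year - study["year"] <= 20          # current enough
    relevant = study["topic_relevant"] and study["sample_relevant"]
    efficacy_focused = study["purpose"] not in EXCLUDED_PURPOSES
    acceptable_design = study["design"] in ACCEPTABLE_DESIGNS
    return timely and relevant and efficacy_focused and acceptable_design

# Example: a recent efficacy RCT whose topic and sample match the area.
study = {"year": date.today().year - 3, "topic_relevant": True,
         "sample_relevant": True, "purpose": "efficacy", "design": "RCT"}
print(passes_eligibility_screen(study))  # True
```

Note that a single failed criterion is enough to exclude a study, which mirrors how the screen is applied: every question must be answered affirmatively before the design itself is rated.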
2. Study design
As mentioned earlier, the design of the studies supporting the effectiveness of a technique is a major factor in determining whether the evidence is sufficient. To an extent, the number of studies supporting a technique can also be used to determine whether the evidence is sufficient. The study design, as well as the sample properties, affects the validity and quality of the data. Most behavioral interventions are supported by purely descriptive studies with limited sample sizes, which compromises the quality of the research. The research supporting the efficacy of an intervention must be empirical, not descriptive. The recognized gold-standard research design for assessing the efficacy of interventions is the randomized controlled trial (RCT). In the absence of RCTs, quasi-experimental designs (QEDs) can be considered sufficient evidence of efficacy, though with reservations (Preston & Carter, 2009; What Works Clearinghouse, 2010). Well-designed single-case designs (SCDs), on the other hand, must be evaluated further to determine the strength of the causal evidence they provide (What Works Clearinghouse, 2010).
In an RCT, the two groups (typically the intervention group and a control group) are assigned randomly so that the outcomes are not biased by the assignment process, hence the high rating of RCT studies. In a QED, the intervention and control groups are not assigned randomly but are recruited through the same method, e.g., both consist of volunteers. Since the two groups may differ, an acceptable QED must include a mechanism that makes the groups equivalent in terms of observable properties. The reservations about QEDs stem from unobservable properties, which may interact with the intervention positively or negatively (What Works Clearinghouse, 2010). To be considered as evidence, SCDs are evaluated against a standard rating criterion with three levels: "meets evidence standards," "meets standards with reservations," and "does not meet standards" (What Works Clearinghouse, 2010). The SCDs that meet the standards are then evaluated to determine whether they provide no evidence, moderate evidence, or strong evidence of a causal relation for each outcome. The contrast between the two assignment methods is sketched below.
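The practical difference between the two assignment methods can be made concrete with a short simulation. Everything here is a hypothetical illustration: the simulated participants, the single observable covariate (age), and the simple mean-gap check stand in for the formal baseline-equivalence statistics a real QED would report.

```python
import random
from statistics import mean

random.seed(1)  # reproducible illustration

# Simulated participants with one observable property (age); invented data.
participants = [{"id": i, "age": random.gauss(6.0, 1.5)} for i in range(200)]

# RCT: random assignment balances ALL properties, observable or not, in
# expectation, which is the reason RCTs are rated highest.
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

def mean_gap(group_a, group_b, key="age"):
    """Absolute difference in group means on one observable property."""
    return abs(mean(p[key] for p in group_a) - mean(p[key] for p in group_b))

# A QED must demonstrate equivalence like this for each observable covariate;
# unobservable properties can never be checked, hence the reservations.
print(f"Baseline age gap after random assignment: {mean_gap(treatment, control):.2f}")
```

The point the sketch makes is that randomization buys equivalence on unmeasured properties for free, whereas a QED can only ever demonstrate equivalence on what it measures.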
Suffice it to say that, although RCTs are considered to provide the best clinical evidence, there is an ongoing debate (from a social-ethical perspective) about whether this standard denies patients a possible solution. In other words, since RCTs are considered to provide the best clinical evidence for practice recommendations, resource allocation, and policy decisions, the lack of RCTs has a negative impact, particularly on patient management. For instance, in Ireland the government has refused to support ABA, citing a lack of "robust scientific evidence" to justify the investment. This has resulted in numerous court battles and tribunals, with parents increasingly requesting ABA-based schools, and the situation has been replicated in the U.S.A. The debate surrounding the application of ABA in the treatment of ASD concerns whether we should evaluate the efficacy of ABA in meeting the needs of an individual using empirical studies based on the RCT, which is a group design. The popular view is that many empirical questions related to ABA have not been settled (appropriate RCTs are yet to be conducted) and thus that ABA should not be unreservedly recommended for treatment and government support. However, there are those who argue that it is not appropriate to rely exclusively on group designs, such as RCTs, in designing and evaluating individualized treatment protocols. The latter school of thought has raised two issues. First, while the RCT is an excellent medical model for evaluating a single entity, such as a drug or a medical procedure, it is inappropriate for ratifying a complex set of procedures (such as ABA). Second, RCTs provide information about populations, not individuals, and thus should not be used to guide the tailoring of a treatment for an individual. The government's refusal to support ABA for lack of RCTs has been termed by some to be ethically irresponsible (Keenan & Dillenburger, 2011).
3. Confounding factors
The researchers must be able to identify and, where possible, eliminate confounding factors that are likely to have the same effect as the intervention or to interfere with it. For instance, the effects of EIBI are likely to be affected by age, sex, the setting, the environment, and other factors. For a study to be considered sufficient evidence, the design must address the confounding factors by eliminating them, equalizing them across groups, or applying an acceptable correction (What Works Clearinghouse, 2010). A toy illustration of equalizing a confounder is given below.
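As an illustration of one of these strategies (equalizing a confounder by stratification), the sketch below compares a naive treated-versus-control difference with the same comparison made within age strata. The records, gain scores, and age bands are invented purely for demonstration.

```python
from statistics import mean

# Invented outcome records: younger children improve more regardless of
# treatment, so age confounds the naive comparison.
records = [
    {"treated": True,  "age": 3, "gain": 14}, {"treated": True,  "age": 3, "gain": 12},
    {"treated": True,  "age": 6, "gain": 7},  {"treated": False, "age": 3, "gain": 9},
    {"treated": False, "age": 6, "gain": 4},  {"treated": False, "age": 6, "gain": 5},
]

def effect(rows):
    """Difference between treated and untreated mean gains."""
    treated = [r["gain"] for r in rows if r["treated"]]
    untreated = [r["gain"] for r in rows if not r["treated"]]
    return mean(treated) - mean(untreated)

# The naive comparison mixes age groups and overstates the effect...
print(f"Unadjusted effect: {effect(records):.2f}")             # 5.00
# ...while comparing within each age stratum holds the confounder constant.
for band in (3, 6):
    stratum = [r for r in records if r["age"] == band]
    print(f"Effect within age {band}: {effect(stratum):.2f}")  # 4.00, 2.50
```

Randomization (as in an RCT) achieves the same equalization automatically; stratification and statistical correction are the fallbacks when assignment cannot be randomized.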
In general, an intervention should adhere to the components of an evidence-based practice, which are: step-by-step directions for implementation, an implementation checklist, and support from RCT studies or a well-designed QED (The National Professional Development Center on Autism Spectrum Disorders; Gould, Dixon, Najdowski, Smith, & Tarbox, 2011). However, because ABA is a set of highly individualized, complex procedures (rather than a single procedure or drug), the appropriateness of group-based designs such as the RCT and QED for evaluating ABA's efficacy has been questioned. As such, the single-case design remains the most popular design for research on ABA, because it is highly individualized and is designed to study the specific participant in question.
References
Keenan, M., & Dillenburger, K. (2011). When all you have is a hammer: RCTs and hegemony in science. Research in Autism Spectrum Disorders, 5, 1–13.