Conducting Evaluations Using Multiple Trials
Date
2020-10-14
Citation of Original Publication
Burt S. Barnow and David H. Greenberg, "Conducting Evaluations Using Multiple Trials," 41(4), 564-580. DOI: https://doi.org/10.1177/1098214020938441
Rights
This item is likely protected under Title 17 of the U.S. Copyright Law. Unless on a Creative Commons license, for uses protected by Copyright Law, contact the copyright holder or the author.
Attribution-NonCommercial-NoDerivatives 4.0 International
Abstract
This paper reviews the use of multiple trials, defined as multiple sites or multiple arms in a single evaluation as well as replications, in evaluating social programs. After defining key terms, the paper discusses the rationales for conducting multiple trials: increasing sample size to raise statistical power, identifying the most effective program design, increasing external validity, and learning how various factors affect program impact. It then examines why program design varies across sites, including adaptations to the local environment and participant characteristics as well as a lack of fidelity to the program design, and considers when maintaining consistency across sites is desirable. Distinctions are drawn between evaluations of pilots and demonstrations versus ongoing programs, and between programs where variation is permitted or encouraged versus those where a fixed design is desired. The paper includes illustrations drawn from evaluations of both demonstrations and ongoing programs.