Principal Investigator: Nianbo Dong
Funding Agency: Department of Education, Institute of Education Sciences
Project Website: https://ies.ed.gov/funding/grantsearch/details.asp?ID=3343

Abstract

Social and behavioral measures are commonly used in educational and social science research as primary and secondary outcomes of interest, and they are closely associated with student academic achievement. To examine intervention effects on social and behavioral outcomes in education and prevention science, multilevel randomized trials, including cluster randomized trials (CRTs) and multisite randomized trials (MRTs), are now widely used. To design CRTs and MRTs with sufficient statistical power to detect a meaningful intervention effect with precision, researchers need to make reasonable assumptions about the design parameters in order to estimate the required sample sizes. In addition, empirical benchmarks are helpful for interpreting effect sizes in prevention science. To date, however, there is very limited information on either empirical benchmarks that researchers and policy makers can use to interpret the magnitude of prevention effects, or design parameters that researchers can use to plan CRTs and MRTs, for social and behavioral outcomes.
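To make the role of these design parameters concrete, one standard formulation (following the PowerUp! framework of Dong & Maynard, 2013) expresses the minimum detectable effect size (MDES) of a two-level CRT with treatment assigned at the cluster level as

\[
\mathrm{MDES} \;=\; M_{J-g^{*}-2}\,
\sqrt{\frac{\rho\,(1-R_{2}^{2})}{P(1-P)\,J} \;+\; \frac{(1-\rho)\,(1-R_{1}^{2})}{P(1-P)\,J\,n}},
\qquad
M_{J-g^{*}-2} \;\approx\; t_{\alpha/2} + t_{1-\beta},
\]

where \(\rho\) is the intraclass correlation coefficient (ICC), \(R_{1}^{2}\) and \(R_{2}^{2}\) are the proportions of variance explained by covariates at the individual and cluster levels, \(J\) is the number of clusters, \(n\) is the average cluster size, \(P\) is the proportion of clusters assigned to treatment, and \(g^{*}\) is the number of cluster-level covariates. Whereas \(J\), \(n\), and \(P\) are set by the study design, \(\rho\), \(R_{1}^{2}\), and \(R_{2}^{2}\) must be assumed in advance, which is precisely why empirically grounded reference values of these parameters are needed.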

The purpose of the current study is threefold: (1) to provide empirical benchmarks regarding (a) normative expectations for change, (b) policy-relevant performance gaps, and (c) effect size results from similar studies, so that researchers and policy makers can interpret the magnitude of intervention effects on social and behavioral outcomes; (2) to provide reference values of the design parameters (effect sizes, ICCs, R², and their variability) for social and behavioral outcomes that researchers can use in power analyses of CRTs and MRTs; and (3) to incorporate these reference values into the PowerUp! software for power analysis. We will apply two primary approaches to produce the empirical benchmarks and reference values, each sketched below: (1) we will use two- and three-level hierarchical linear models to analyze data from IES-funded projects to estimate empirical benchmarks for meaningful effect sizes and design parameters, and (2) we will use meta-analysis to synthesize reference values from prior studies.
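As a minimal sketch of the first approach (the dataset, column names, and use of statsmodels are illustrative assumptions, not the project's actual analysis pipeline), the ICC and level-specific R² can be estimated by comparing an unconditional and a conditional two-level model:

```python
# Minimal sketch: estimating the ICC and level-specific R^2 from a two-level
# HLM, assuming a hypothetical dataset with columns `outcome`, `pretest`,
# and `school_id` (students nested within schools).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_outcomes.csv")  # hypothetical data file

# Unconditional (empty) model: partitions the outcome variance into
# between-school and within-school components.
m0 = smf.mixedlm("outcome ~ 1", data=df, groups=df["school_id"]).fit()
tau2_0 = m0.cov_re.iloc[0, 0]   # between-school variance
sigma2_0 = m0.scale             # within-school (residual) variance
icc = tau2_0 / (tau2_0 + sigma2_0)

# Conditional model with a pretest covariate: the proportional reduction in
# each variance component gives the R^2 at that level.
m1 = smf.mixedlm("outcome ~ pretest", data=df, groups=df["school_id"]).fit()
r2_l1 = 1 - m1.scale / sigma2_0            # R^2 at the student level
r2_l2 = 1 - m1.cov_re.iloc[0, 0] / tau2_0  # R^2 at the school level

print(f"ICC = {icc:.3f}, R2(L1) = {r2_l1:.3f}, R2(L2) = {r2_l2:.3f}")
```

For the second approach, a simple random-effects synthesis (here the DerSimonian-Laird estimator, with made-up effect sizes and sampling variances purely for illustration) could look like:

```python
# Minimal sketch of a DerSimonian-Laird random-effects meta-analysis.
import numpy as np

y = np.array([0.12, 0.25, 0.08, 0.31])      # hypothetical study effect sizes
v = np.array([0.010, 0.015, 0.008, 0.020])  # hypothetical sampling variances

w = 1.0 / v                                 # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)          # Cochran's heterogeneity statistic
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)     # between-study variance estimate

w_re = 1.0 / (v + tau2)                     # random-effects weights
y_pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled ES = {y_pooled:.3f} (SE = {se_pooled:.3f}), tau2 = {tau2:.4f}")
```

The between-study variance tau² estimated in such a synthesis is one natural way to quantify the "variability" of the design parameters noted above.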

The findings will contribute to the field by providing empirical benchmarks for interpreting the magnitude of intervention effects on social and behavioral outcomes, and reference values of design parameters to inform power analyses for two- and three-level CRTs and MRTs. In particular, the project explicitly addresses the IES (2018) priorities: “The Institute is interested in the development of practical statistical and methodological products (e.g., new or improved methods, guidelines or other methodological resources, software) that can be used by most education researchers (rather than only by statisticians and researchers with highly sophisticated statistical skills) to improve the designs of their studies, analyses of their data, and interpretations of their findings,” as well as the exploration of “Variability in Effects” and “Interpreting Impacts.” The results will be disseminated through conference presentations, journal publications, workshops, and the project website (http://www.causalevaluation.org/).