Policy research focuses on finding out whether programs or policies have effects on the conditions they are intended to address, with the end goal of guiding the effective allocation of public dollars. Providing that guidance requires more than an estimate of the net effects of a program or policy; it is also necessary to understand variation in those effects, the circumstances under which a program or policy has effects, and how and why it works (or does not). Decades of experimental and non-experimental research have produced many techniques for addressing such questions, and especially promising innovations have emerged recently as attention has turned to these issues.
This meeting will include presentations and discussions of innovative applications of methods and analytic techniques that can be used to inform practice and policy by addressing such questions.
We will focus on how experimental design and statistical analysis can be used to address these questions in research that involves a single site or multiple sites, or that synthesizes findings across multiple studies.
The meeting will convene federal staff, researchers, and practitioners with an interest in ways to expand the evidence base about the drivers of program effects. Its ultimate goals are to 1) better understand the state of knowledge and how it is best communicated and applied in the field, 2) identify important gaps in that knowledge, and 3) help build a research agenda to fill those gaps.
8:45 – 10:00
Welcome
Anna Solmeyer, Office of Planning, Research and Evaluation
Why unpacking the “black box” is important for policy
Naomi Goldstein, Director of the Office of Planning, Research and Evaluation
Slide Deck: Learning about and from variation in program effects
Howard Bloom, MDRC
10:15 – 12:15
Moderator
Lauren Supplee, Office of Planning, Research and Evaluation
Discussion
Virginia Knox, MDRC
Slide Deck: Conceptual overview: Natural and systematic variation in treatment
Mark Lipsey, Vanderbilt University
Slide Deck: Identifying effective components of parenting programs: Two meta-analyses
Jennifer Kaminski, Centers for Disease Control and Prevention
Slide Deck: Distillation and matching: Identifying components of evidence-based practice
Kimberly Becker, University of Maryland
Slide Deck: Learning more from a multisite intervention: Combining natural and planned variation in program experience
Eleanor Harvill, Abt Associates
1:15 – 3:00
Moderator
Nicole Constance, Office of Planning, Research and Evaluation
Discussion
Carolyn Hill, Georgetown University
Slide Deck: Testing program components using the Multiphase Optimization Strategy (MOST)
Linda Collins, Pennsylvania State University
Slide Deck: Adaptive interventions and SMART design: What, Why, and How?
Kelley Kidwell, University of Michigan
Slide Deck: Rapid cycle evaluation: What works better, and what works for whom?
Scott Cody, Mathematica Policy Research
3:15 – 5:00
Moderator
Meryl Barofsky, Office of Planning, Research and Evaluation
Discussion
Peter Schochet, Mathematica Policy Research
Slide Deck: Conceptual overview: Moderation: How program participation, site characteristics, and neighborhood context can inform our understanding of what works
Pamela Morris, New York University (Handout)
Slide Deck: Using Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES) to understand variation in program impacts
Laura Peck, Abt Associates
Slide Deck: Compared to what? Variation in the impacts of Head Start by alternative child-care setting
Lindsay Page, University of Pittsburgh
Slide Deck: Unpacking the black box in Moving to Opportunity
Jeffrey Kling, Congressional Budget Office
8:45 – 10:45
Moderator
Melinda Petre, Office of Planning, Research and Evaluation
Discussion
Rebecca Maynard, University of Pennsylvania
Slide Deck: Conceptual overview: Techniques for establishing causal pathways in programs and policies
Antonio Morgan-Lopez, RTI International
Slide Deck: Using instrumental variables analysis to investigate mediation processes in multisite randomized trials
Sean Reardon, Stanford University
Slide Deck: Causal mediation analysis
Luke Keele, Pennsylvania State University
Slide Deck: How do contextual factors influence causal processes? Conditional process models
Amanda Fairchild, University of South Carolina
11:00 – 12:30
Moderator
Naomi Goldstein, Director of the Office of Planning, Research and Evaluation
Panelists:
Bob Granger, Past President of the William T. Grant Foundation
Ruth Neild, Institute of Education Sciences
Larry Orr, Johns Hopkins University
Belinda Sims, National Institute on Drug Abuse
American Journal of Evaluation
Forum: Unpacking the “Black Box” of Social Programs and Policies
Unpacking the “Black Box” of Social Programs and Policies: Introduction
Anna R. Solmeyer, Nicole Constance
Learning About and From a Distribution of Program Impacts Using Multisite Trials
Stephen W. Raudenbush, Howard S. Bloom
Causal Mediation Analysis: Warning! Assumptions Ahead
Luke Keele
Principal Stratification: A Tool for Understanding Variation in Program Effects Across Endogenous Subgroups
Lindsay C. Page, Avi Feller, Todd Grindal, Luke Miratrix, Marie-Andree Somers
Conditions for Effective Application of Analysis of Symmetrically-Predicted Endogenous Subgroups
Laura R. Peck
Administrative Experiments: Unlocking What Works Better and What Works for Whom
Scott Cody, Irma Perez-Johnson, Kristen Joyce
Unlocking the Potential of the “What Works” Approach to Policymaking and Practice: Improving Impact Evaluations
Robert C. Granger, Rebecca Maynard