What Works, Under What Circumstances, and How?

Methods for Unpacking the "Black Box" of Programs and Policies

Meeting Topic

Policy research focuses on finding out whether programs or policies have effects on the conditions they are intended to address, with the end goal of guiding the effective allocation of public dollars. Providing that guidance requires more than an estimate of the net effects of a program or policy; it is also necessary to understand variation in those effects, the circumstances under which a program or policy has effects, and how and why it works (or does not work). Decades of experimental and non-experimental research have produced many techniques for addressing such questions, and especially promising innovations have emerged more recently as attention has turned to these issues.

This meeting will include presentations and discussions of innovative applications of methods and analytic techniques that can be used to inform practice and policy by addressing questions such as:

  • How much variation is there in program effects and study findings; what are the sources of that variation; and how can we find out which of these sources drive variation in program impacts?
  • What methods and techniques do we have to answer questions about what treatment features or components drive program impacts?
    • How can we use natural variation in treatment across sites and studies to identify common program elements that consistently produce positive impacts?
    • How can we design studies to experimentally test program components and thereby help to maximize impacts?
    • How can programs use evaluation data to quickly answer questions about what is working well and what can be done to improve existing program operations?
  • What are the characteristics of individuals, sites, and contexts that explain why interventions work better, worse, or differently for particular subgroups?
    • How can we identify these subgroups, based on observed or latent characteristics of their members?
    • How do features of an individual’s participation in a program, such as compliance, dosage, and path through program services, predict outcomes?
    • How do contextual features, such as site-level characteristics or neighborhood setting, affect program impacts?
  • What methods and techniques do we have to answer questions about steps in the causal pathway to participant outcomes?
    • How can we design future studies to learn about causal processes, and what are the challenges in establishing causality?
    • What analytic techniques can be used to explore questions about causal pathways in studies that are already complete?
  • How can these methods inform the work of policymakers, researchers, and practitioners?
    • What can we do to balance questions about what works, under what circumstances, and how, without compromising our ability to demonstrate overall average program impacts?

We will focus on how experimental design and statistical analysis can be used to address the preceding questions in research that involves one site or multiple sites, or that synthesizes findings across multiple studies.
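
As one illustration of the kind of cross-site or cross-study analysis the meeting will discuss, the sketch below pools impact estimates from several sites with a standard random-effects (DerSimonian-Laird) model, producing both an average effect and an estimate of how much effects vary across sites. The site-level effects and standard errors shown are hypothetical, and this is only one of several ways such variation can be modeled.

    import numpy as np

    # Hypothetical per-site impact estimates (effect sizes) and their standard errors.
    effects = np.array([0.12, 0.25, 0.05, 0.30, -0.02])
    ses     = np.array([0.08, 0.10, 0.07, 0.12, 0.09])

    variances = ses ** 2

    # Fixed-effect weights and pooled estimate (used to measure heterogeneity).
    w_fe = 1.0 / variances
    pooled_fe = np.sum(w_fe * effects) / np.sum(w_fe)

    # Cochran's Q and the DerSimonian-Laird estimate of between-site variance (tau^2).
    q = np.sum(w_fe * (effects - pooled_fe) ** 2)
    c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)

    # Random-effects weights fold the between-site variance into each site's uncertainty.
    w_re = 1.0 / (variances + tau2)
    pooled_re = np.sum(w_re * effects) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    print(f"Average effect (random effects): {pooled_re:.3f} (SE {se_re:.3f})")
    print(f"Estimated between-site standard deviation of effects: {np.sqrt(tau2):.3f}")

In this sketch, the estimated between-site standard deviation speaks to the first question above, how much variation there is in program effects, while explaining that variation requires the moderation and mediation methods on the agenda.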

The meeting will convene federal staff, researchers, and practitioners with an interest in ways to expand the evidence base about drivers of program effects. The ultimate goals of the meeting are to 1) better understand the state of knowledge and how it is best communicated and applied in the field, 2) identify important gaps in knowledge, and 3) help to build a research agenda that will fill those gaps.

Agenda and Presentations

Wednesday, September 3

Setting the Stage

8:45 – 10:00

Welcome
Anna Solmeyer, Office of Planning, Research and Evaluation

Why unpacking the “black box” is important for policy
Naomi Goldstein, Director of the Office of Planning, Research and Evaluation

Slide Deck: Learning about and from variation in program effects 
Howard Bloom, MDRC

What Works? Part 1: Analyzing Natural Variation in Program Components

10:15 – 12:15
Moderator
Lauren Supplee, Office of Planning, Research and Evaluation
Discussant
Virginia Knox, MDRC

Slide Deck: Conceptual overview: Natural and systematic variation in treatment 
Mark Lipsey, Vanderbilt University

Slide Deck: Identifying effective components of parenting programs: Two meta-analyses 
Jennifer Kaminski, Centers for Disease Control and Prevention

Slide Deck: Distillation and matching: Identifying components of evidence-based practice 
Kimberly Becker, University of Maryland

Slide Deck: Learning more from a multisite intervention: Combining natural and planned variation in program experience 
Eleanor Harvill, Abt Associates

What Works? Part 2: Designing Systematic Variation in Program Components

1:15 – 3:00
Moderator
Nicole Constance, Office of Planning, Research and Evaluation
Discussant
Carolyn Hill, Georgetown University

Slide Deck: Testing program components using the Multiphase Optimization Strategy (MOST) 
Linda Collins, Pennsylvania State University

Slide Deck: Adaptive interventions and SMART design: What, Why, and How? 
Kelley Kidwell, University of Michigan

Slide Deck: Rapid cycle evaluation: What works better, and what works for whom? 
Scott Cody, Mathematica Policy Research

Under What Circumstances? Variation in People and Contexts

3:15 – 5:00
Moderator
Meryl Barofsky, Office of Planning, Research and Evaluation
Discussant
Peter Schochet, Mathematica Policy Research

Slide Deck: Conceptual overview: Moderation: How program participation, site characteristics, and neighborhood context can inform our understanding of what works 
Pamela Morris, New York University (Handout)

Slide Deck: Using Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES) to understand variation in program impacts
Laura Peck, Abt Associates

Slide Deck: Compared to what? Variation in the impacts of Head Start by alternative child-care setting
Lindsay Page, University of Pittsburgh

Slide Deck: Unpacking the black box in Moving to Opportunity 
Jeffrey Kling, Congressional Budget Office

Thursday, September 4

How? Uncovering Steps along the Causal Chain

8:45 – 10:45
Moderator
Melinda Petre, Office of Planning, Research and Evaluation
Discussant
Rebecca Maynard, University of Pennsylvania

Slide Deck: Conceptual overview: Techniques for establishing causal pathways in programs and policies
Antonio Morgan-Lopez, RTI International

Slide Deck: Using instrumental variables analysis to investigate mediation processes in multisite randomized trials
Sean Reardon, Stanford University

Slide Deck: Causal mediation analysis
Luke Keele, Pennsylvania State University

Slide Deck: How do contextual factors influence causal processes? Conditional process models 
Amanda Fairchild, University of South Carolina

Implications for Policy and Research

11:00 – 12:30
Moderator
Naomi Goldstein, Director of the Office of Planning, Research and Evaluation

Panelists:
Bob Granger, Past President of the William T. Grant Foundation
Ruth Neild, Institute of Education Sciences
Larry Orr, Johns Hopkins University
Belinda Sims, National Institute on Drug Abuse