Innovative Directions in Estimating Impact

Meeting Topic

A key focus of impact evaluation is identifying the outcomes directly attributable to an intervention. The question of causality is important both in designing studies and in evaluating the available evidence on a particular topic through systematic reviews. This meeting will highlight advances in experimental and quasi-experimental design and analysis for assessing sources of bias and enhancing the detection of causal effects, with a focus on identification strategies, inference strategies, real-world issues, and the integration of innovations. At the heart of impact analysis is constructing the treatment contrast and understanding what the comparison group represents. When random assignment is not feasible, alternative approaches to constructing treatment and comparison groups include matching methods and discontinuity designs. Speakers will discuss what we gain and lose by using these kinds of designs and how to maximize opportunities to produce robust impact estimates. In addition, presenters will provide guidance on how to cope with “real world” issues, such as crossovers, differential attrition across groups, and variation in program experiences, and what can be done to reduce the potential bias due to these factors.
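To make the two comparison-group strategies named above concrete, the brief sketch below shows a 1:1 propensity-score match and a sharp regression discontinuity estimate on simulated data. It is a minimal illustration only; the simulated data, the 0.5 effect size, the bandwidth, and the use of scikit-learn's logistic regression for the propensity score are assumptions for demonstration, not material from any meeting presentation.

    # Minimal sketch: two non-experimental identification strategies on simulated data.
    # Assumptions: simulated outcomes, a single observed confounder, and a hypothetical
    # eligibility cutoff; not code from any meeting presentation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)                             # observed confounder
    treat = rng.binomial(1, 1 / (1 + np.exp(-x)))      # selection into treatment depends on x
    y = 0.5 * treat + x + rng.normal(size=n)           # assumed true effect = 0.5

    # --- Matching methods: 1:1 nearest-neighbor match on the propensity score ---
    ps = LogisticRegression().fit(x.reshape(-1, 1), treat).predict_proba(
        x.reshape(-1, 1))[:, 1]
    treated = np.where(treat == 1)[0]
    controls = np.where(treat == 0)[0]
    matches = controls[np.abs(ps[treated][:, None] - ps[controls][None, :]).argmin(axis=1)]
    att_matched = (y[treated] - y[matches]).mean()
    print(f"Naive difference in means: {y[treat == 1].mean() - y[treat == 0].mean():.2f}")
    print(f"Matched estimate of the effect on the treated: {att_matched:.2f}")

    # --- Discontinuity design: treatment assigned by a cutoff on a score ---
    score = rng.uniform(-1, 1, n)                      # assignment variable
    d = (score >= 0).astype(int)                       # hypothetical eligibility cutoff at 0
    y_rd = 0.5 * d + score + rng.normal(scale=0.5, size=n)
    h = 0.25                                           # bandwidth around the cutoff
    win = np.abs(score) <= h
    X = np.column_stack([np.ones(win.sum()), d[win], score[win], d[win] * score[win]])
    beta = np.linalg.lstsq(X, y_rd[win], rcond=None)[0]
    print(f"Local-linear discontinuity estimate at the cutoff: {beta[1]:.2f}")

In both cases the estimate of interest is the difference in outcomes between groups made comparable by design (matched on the propensity score, or observed just on either side of the cutoff) rather than by random assignment.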

The meeting will convene federal staff and researchers to share knowledge of and experience with innovative approaches to estimating impact in experimental and non-experimental research designs. The ultimate goals of the meeting are to 1) better understand new developments in impact estimation in both experimental and non-experimental evaluations; 2) identify important gaps in knowledge; and 3) help build a research agenda that will fill those gaps.

Meeting Summary

In September 2012, the Office of Planning, Research and Evaluation (OPRE) convened the Innovative Directions in Estimating Impact Meeting.

Optimal design for impact evaluation depends on several factors (e.g., the purpose of the evaluation, the research questions, the environmental and operational context, and financial limitations). Although randomized controlled trials (RCTs) are often the best choice for drawing causal inferences and maximizing internal validity, there are contexts in which a non-experimental design may be more feasible or appropriate. In addition, even when an RCT can be designed, its execution may introduce bias into the impact estimates.

A body of work has explored approaches to eliminating bias by attempting to replicate experimental findings with non-experimental designs, primarily in the areas of workforce development and training. Most such efforts concluded that non-experimental designs were inferior to their RCT counterparts. In recent years, a promising literature has developed across multiple disciplines, offering techniques for assessing sources of bias in, and advancing, both non-experimental and experimental designs.
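The replication logic behind that body of work (often called a within-study comparison) can be sketched briefly: estimate the effect with an experimental benchmark, re-estimate it with a non-experimental contrast drawn from the same population, and treat the difference as an estimate of bias. The sketch below is a minimal illustration on simulated data; the effect size, the selection-on-an-unobservable setup, and all variable names are assumptions for demonstration, not results from the literature discussed at the meeting.

    # Minimal sketch of a within-study comparison: an experimental benchmark vs.
    # a non-experimental estimate built from a self-selected comparison group.
    # Simulated data and effect sizes are assumptions for illustration only.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    ability = rng.normal(size=n)                       # confounder, unobserved in the non-experimental data

    # Experimental benchmark: random assignment breaks the link to ability.
    z = rng.binomial(1, 0.5, n)
    y_exp = 0.4 * z + ability + rng.normal(size=n)     # assumed true effect = 0.4
    benchmark = y_exp[z == 1].mean() - y_exp[z == 0].mean()

    # Non-experimental estimate: participation depends on the unobserved confounder,
    # so a simple comparison-group contrast is biased.
    d = rng.binomial(1, 1 / (1 + np.exp(-ability)))
    y_obs = 0.4 * d + ability + rng.normal(size=n)
    nonexperimental = y_obs[d == 1].mean() - y_obs[d == 0].mean()

    print(f"Experimental benchmark:    {benchmark:.2f}")
    print(f"Non-experimental contrast: {nonexperimental:.2f}")
    print(f"Estimated bias:            {nonexperimental - benchmark:.2f}")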

This meeting brought together stakeholders to examine what is known about innovative methods in experimental and non-experimental designs, particularly as they are applied in behavioral and social science research. In addition to federal staff, stakeholders included researchers, with the goals of 1) understanding advances in impact estimation in experimental and non-experimental evaluations, and 2) identifying gaps in knowledge and determining how to build a research agenda to fill those gaps.

Agenda and Presentations

Thursday, September 6th

Meeting Overview

8:30 – 9:00

Innovations for Impact: Identification, Inference, Issues and Integration
Naomi Goldstein, Director of the Office of Planning, Research and Evaluation

Core Analytics and the Context of Impact Estimation

9:00 – 10:30
Moderator
Lauren Supplee, Office of Planning, Research and Evaluation

Campbell and Rubin on Causal Inference
Steve West, Arizona State University

History and Context 
Jeff Smith, University of Michigan

Evaluating Evidence 
Sarah Avellar, Mathematica

Innovations in Identification Strategies

10:45 – 12:15
Moderator
Jennifer Brooks, Office of Planning, Research and Evaluation
Discussant
Robert Lerman, Urban Institute

Strategies to Create Two Groups 
Tom Cook, Northwestern University

Matching Methods 
Liz Stuart, Johns Hopkins Bloomberg School of Public Health

Discontinuity Designs 
Phil Gleason, Mathematica

Innovations in Identification Strategies – Integrated Answers

1:15 – 2:45
Moderator
Jason Despain, Office of Planning, Research and Evaluation
Discussant
Steve Bell, Abt Associates

Best of Both Worlds: Hybrid Designs 
Steve Glazerman, Mathematica

Pair Matching in Cluster-Randomized Experiments 
Kosuke Imai, Princeton University

Modeling and Multiple Assignment Variables 
Vivian Wong, University of Virginia

Innovations in Inference and Analysis

3:00 – 4:30
Moderator
Molly Irwin, Office of Planning, Research and Evaluation
Discussant
Anupa Bir, RTI International

Choices and Issues in the Analysis of Matched Data 
Felix Thoemmes, Cornell University

Innovations in Estimating with Precision
Michele Funk, University of North Carolina – Chapel Hill

Covariate Selection 
Peter Steiner, University of Wisconsin – Madison

Friday, September 7th

Addressing Real World Issues with Identification and Inference

8:30 – 10:00
Moderator
Kim Goodman, Substance Abuse and Mental Health Services Administration
Discussant
Larry Orr, Johns Hopkins Bloomberg School of Public Health

Problem of the Late Pretest 
Peter Schochet, Mathematica

Variation in Program Experience 
Laura Peck, Abt Associates

Complier Average Causal Effect 
Booil Jo, Stanford University

Integrating Approaches and Lessons Learned

10:15 – 12:00
Moderator
Brendan Kelly, Office of Planning, Research and Evaluation

Panelists:
Josh Angrist, MIT
Howard Bloom, MDRC
Rebecca Maynard, University of Pennsylvania Graduate School of Education
Maureen Pirog, Indiana University Institute for Family and Social Responsibility
Dan Rosenbaum, OMB