Abstract
Evidence on the effectiveness of interventions operating at a large scale is mixed. Although the lack of consistent results is generally attributed to problems in the implementation or governance of a program, the failure to find a statistically significant effect (or the success of finding one) may instead be due to choices made in the evaluation. To demonstrate the potential limitations and pitfalls of the usual analytic methods for estimating causal effects, we apply the first half of a roadmap for causal inference to a pre-post evaluation of a community-level, national nutrition program. Selection into the program was non-random and strongly associated with the pre-treatment (lagged) outcome. Using structural causal models (SCMs), directed acyclic graphs (DAGs), and simulated data, we demonstrate that a post-treatment estimand controls for confounding by the lagged outcome but not by possible unmeasured confounders. Two separate difference-in-differences estimands have the potential to adjust for a certain type of unmeasured confounding, but they introduce bias if the additional assumptions they require are not met. Our results reveal an important issue of identifiability when estimating the causal effect of a program with pre-post observational data. A careful appraisal of the assumptions underlying the causal model is imperative before committing to a statistical model and progressing to estimation.
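As a rough illustration of the setting summarized above, the sketch below simulates pre-post data in which selection into a program depends on both the lagged outcome and an unmeasured confounder, and then computes two naive estimates: a regression of the post-treatment outcome on treatment adjusting for the lagged outcome only, and a simple difference-in-differences contrast. The data-generating process, effect size, and selection mechanism are hypothetical assumptions chosen for illustration; they are not the paper's SCM or simulation design.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0

# Hypothetical data-generating process (illustration only, not the paper's SCM):
U = rng.normal(size=n)                              # unmeasured, time-invariant confounder
Y0 = U + rng.normal(size=n)                         # pre-treatment (lagged) outcome
p = 1.0 / (1.0 + np.exp(-(1.5 * Y0 + 1.0 * U)))     # non-random selection into the program
A = rng.binomial(1, p)                              # program indicator
Y1 = true_effect * A + U + rng.normal(size=n)       # post-treatment outcome

# (1) "Post-treatment" estimate: regress Y1 on A, adjusting for the lagged outcome Y0 only.
X = sm.add_constant(np.column_stack([A, Y0]))
post_est = sm.OLS(Y1, X).fit().params[1]

# (2) Simple difference-in-differences estimate: contrast of mean pre-post changes.
did_est = (Y1 - Y0)[A == 1].mean() - (Y1 - Y0)[A == 0].mean()

print(f"true effect:               {true_effect:.2f}")
print(f"Y0-adjusted estimate:      {post_est:.2f}")
print(f"difference-in-differences: {did_est:.2f}")

In this particular hypothetical process both estimates are biased: adjusting for the lagged outcome alone leaves the path through the unmeasured confounder open, while the difference-in-differences contrast is distorted because selection also depends on the transitory part of the lagged outcome. The sketch is meant only to motivate the identification analysis in the paper, not to reproduce its results.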
Disciplines
Biostatistics
Suggested Citation
Weber, Ann M.; van der Laan, Mark J.; and Petersen, Maya L., "Challenges in Estimating the Causal Effect of an Intervention with Pre-Post Data (Part 1): Definition & Identification of the Causal Parameter" (October 2013). U.C. Berkeley Division of Biostatistics Working Paper Series. Working Paper 319.
https://biostats.bepress.com/ucbbiostat/paper319