There is mixed evidence on the effectiveness of interventions operating on a large scale. Although the lack of consistent results is generally attributed to problems of implementation or governance of the program, the failure to find a statistically significant effect (or the success of finding one) may be due to choices made in the evaluation. To demonstrate the potential limitations and pitfalls of the usual analytic methods used for estimating causal effects, we apply the first half of a roadmap for causal inference to a pre-post evaluation of a community-level, national nutrition program. Selection into the program was non-random and strongly associated with the pre-treatment (lagged) outcome. Using structural causal models (SCMs), directed acyclic graphs (DAGs), and simulated data, we demonstrate that a post-treatment estimand controls for confounding by the lagged outcome but not for possible unmeasured confounders. Two separate difference-in-differences estimands have the potential to adjust for a certain type of unmeasured confounding, but introduce bias if the additional assumptions they require are not met. Our results reveal an important issue of identifiability when estimating the causal effect of a program with pre-post observational data. A careful appraisal of the assumptions underlying the causal model is imperative before committing to a statistical model and progressing to estimation.
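The contrast between the two estimands can be illustrated with a minimal simulation, in the spirit of the simulated data the abstract describes. This sketch is not from the paper: the variable names, effect size, and selection mechanism are illustrative assumptions. It shows how a time-invariant unmeasured confounder biases a naive post-treatment comparison, while a difference-in-differences contrast recovers the true effect when the parallel-trends assumption holds by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
tau = 2.0  # true causal effect (an illustrative assumption)

# Time-invariant unmeasured confounder U drives both selection and outcomes
U = rng.normal(size=n)
A = (U + rng.normal(size=n) > 0).astype(float)  # non-random program selection
Y0 = U + rng.normal(size=n)                      # pre-treatment (lagged) outcome
Y1 = U + tau * A + rng.normal(size=n)            # post-treatment outcome

# Naive post-treatment contrast: biased upward because U differs across groups
post_only = Y1[A == 1].mean() - Y1[A == 0].mean()

# Difference-in-differences: the time-invariant U cancels in Y1 - Y0,
# so the contrast recovers tau under parallel trends
did = (Y1 - Y0)[A == 1].mean() - (Y1 - Y0)[A == 0].mean()

print(f"true effect: {tau}, post-only: {post_only:.2f}, DiD: {did:.2f}")
```

If the unmeasured confounding were time-varying (violating parallel trends), the difference-in-differences estimand would itself be biased, which is the risk the abstract notes when the additional assumptions are not met.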


