Suppose one wishes to estimate a causal parameter given a sample of observations. This requires making untestable assumptions about the underlying causal mechanism. Sensitivity analyses help investigators understand what impact violations of these assumptions could have on the causal conclusions drawn from a study, though they themselves rely on untestable (but hopefully more interpretable) assumptions. Díaz and van der Laan (2013) advocate the use of a sequence (or continuum) of interpretable untestable assumptions of increasing plausibility for the sensitivity analysis, so that experts can form informed opinions about which are true. In this work, we argue that using appropriate statistical procedures when conducting a sensitivity analysis is crucial to drawing valid conclusions about a causal question and to understanding what assumptions one would need to make to do so. Conducting a sensitivity analysis typically relies on estimating features of the unknown observed data distribution, and thus naturally leads to statistical problems about which optimality results are already known. We present a general template for efficiently estimating the bounds on the causal parameter implied by a given untestable assumption. The sequence of assumptions yields a sequence of confidence intervals which, given a suitable statistical procedure, attain proper coverage for the causal parameter whenever the corresponding assumption is true. We illustrate the pitfalls of an inappropriate statistical procedure with a toy example, and apply our approach to data from the Western Collaborative Group Study to show its utility in practice.
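To make the central idea concrete, the following is a minimal toy sketch (not the paper's estimator) of how a sensitivity parameter translates into bounds, and how a sequence of assumptions yields a sequence of confidence intervals. The target is E[Y(1)], the mean outcome under treatment; the hypothetical assumption is that the counterfactual mean among the untreated differs from the observed treated mean by at most delta. All variable names, the data-generating process, and the bound formula are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated toy data: binary treatment A, continuous outcome Y.
# Hypothetical setup for illustration only.
n = 2000
A = rng.binomial(1, 0.5, n)
Y = rng.normal(1.0 + 0.5 * A, 1.0, n)

def sensitivity_interval(Y, A, delta, z=1.96):
    """Confidence interval for the bounds on E[Y(1)] under the untestable
    assumption |E[Y(1) | A=0] - E[Y | A=1]| <= delta.

    Writing E[Y(1)] = p1 * E[Y | A=1] + (1 - p1) * E[Y(1) | A=0], the
    assumption implies E[Y(1)] lies in mu1 +/- (1 - p1) * delta, where
    mu1 = E[Y | A=1] and p1 = P(A=1). We extend those identification
    bounds by a normal-approximation margin for sampling uncertainty.
    """
    treated = Y[A == 1]
    p1 = A.mean()
    mu1 = treated.mean()
    se = treated.std(ddof=1) / np.sqrt(treated.size)
    half = (1 - p1) * delta  # half-width of the identification bound
    return mu1 - half - z * se, mu1 + half + z * se

# A sequence of increasingly strong assumptions (smaller delta) yields a
# sequence of nested intervals; delta = 0 (full identification) recovers
# the ordinary confidence interval for the treated-arm mean.
for delta in [1.0, 0.5, 0.0]:
    lo, hi = sensitivity_interval(Y, A, delta)
    print(f"delta = {delta:3.1f}: [{lo:.3f}, {hi:.3f}]")
```

The interval for a weaker assumption (larger delta) always contains the interval for a stronger one, which mirrors the paper's point: coverage for the causal parameter holds provided the corresponding assumption in the sequence is true.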


