Causal inference with observational data critically relies on untestable, extra-statistical assumptions that have (sometimes) testable implications. Well-known sets of assumptions that suffice to justify the causal interpretation of certain estimators are called identification strategies. These templates for causal analysis, however, do not map perfectly onto empirical research practice. Researchers often face a dilemma: either abstract away from their particular setting to fit the templates, risking erroneous inferences, or avoid situations in which the templates cannot be applied, missing valuable opportunities for empirical analysis. In this article, I show how directed acyclic graphs (DAGs) can help researchers conduct empirical research and assess the quality of evidence without relying excessively on research templates. First, I offer a concise introduction to causal inference frameworks. Second, I survey the arguments in the methodological literature in favor of using research templates while either avoiding or limiting the use of causal graphical models. Third, I discuss the problems with the template model, arguing for a more flexible approach to DAGs that helps illuminate common problems in empirical settings and improve the credibility of causal claims. I demonstrate this approach in a series of worked examples, showing the gap between identification strategies as invoked by researchers and their actual applications. Finally, I conclude by highlighting the benefits that routinely incorporating causal graphical models into our scientific discussions would have in terms of transparency, testability, and generativity.