The principle of causation is fundamental to science and society and has remained an active topic of discourse in philosophy for over two millennia. Modern philosophers often rely on “neuron diagrams”, a domain-specific visual language for discussing and reasoning about causal relationships and the concept of causation itself. In this paper we formalize the syntax and semantics of neuron diagrams. We discuss existing algorithms for identifying causes in neuron diagrams, show how these approaches are flawed, and propose solutions to these problems. We separate the standard representation of a dynamic execution of a neuron diagram from its static definition and define two separate but related semantics, one for the causal effects of neuron diagrams and one for the identification of causes themselves. Most significantly, we propose a simple language extension that supports a clear, consistent, and comprehensive algorithm for automatic causal inference.