Computing causes, counterfactuals, and responsibility: theoretical analyses, probabilistic models and psychophysical studies


Understanding how people make sense of the causal structure of the world and use this knowledge to manipulate the world in order to achieve their goals is one of the most fundamental questions in cognitive science. Perhaps for no other question does the interdisciplinary nature of the cognitive sciences become more apparent. Philosophers, legal scholars, linguists, computer scientists and psychologists have joined efforts in making causation – the cement of the universe (Mackie, 1980) – tangible. Truly intelligent behavior is marked by the ability to choose, among multiple possible courses of action, the one most likely to bring about the desired outcome.

Recent advances in computational modeling (Pearl, 1988, 2000) have helped to bring the aforementioned fields even closer together. Causal Bayes nets (CBNs) provide a coherent framework for how agents should update their beliefs about different hypotheses, both from merely observational data and from data generated through actively intervening in the world. A CBN serves multiple purposes: predicting future states, choosing actions and evaluating counterfactuals. Perhaps most importantly for everyday common-sense reasoning, a CBN can be employed to explain how a particular state of affairs came about, that is, to answer questions about actual causation: the extent to which specific causal events or combinations of events are responsible for a particular observed effect (Chockler & Halpern, 2004; Hall, 2004; Halpern & Pearl, 2005; Hitchcock, 2001).
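To make the observation/intervention distinction concrete, the sketch below encodes a toy causal Bayes net as structural equations and contrasts conditioning on an observed variable with Pearl's do-operator. The network (Rain → Sprinkler, Rain → Wet, Sprinkler → Wet) and all probabilities are purely illustrative assumptions, not part of the project.

```python
# A minimal sketch of a causal Bayes net written as structural equations.
# The network and the probabilities below are illustrative only.
import random

def sample(do_sprinkler=None):
    """Draw one world; passing do_sprinkler implements the do-operator
    by cutting the arrow from Rain into Sprinkler (graph surgery)."""
    rain = random.random() < 0.3
    sprinkler = random.random() < (0.1 if rain else 0.6)
    if do_sprinkler is not None:
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

n = 100_000

# Observation: seeing the sprinkler on is evidence about rain.
obs = [sample() for _ in range(n)]
p_rain_obs = sum(r for r, s, w in obs if s) / sum(1 for r, s, w in obs if s)

# Intervention: forcing the sprinkler on says nothing about rain.
itv = [sample(do_sprinkler=True) for _ in range(n)]
p_rain_do = sum(r for r, s, w in itv) / n

print(p_rain_obs, p_rain_do)  # ~0.07 under observation vs ~0.30 under intervention
```

In this toy model, observing the sprinkler running lowers the probability of rain, whereas switching it on by intervention leaves the belief about rain at its prior; counterfactual queries additionally hold fixed the exogenous randomness of the episode that actually occurred.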

The goal of our project is to connect computational models for analyzing actual causation in the AI and philosophical literatures to the intuitive judgments that humans make. We seek connections that run in both directions: to build more rigorous, quantitative, explanatory models of human causal reasoning based on CBN models from AI, but also to advance those AI models beyond CBNs, bringing them closer to the expressive reasoning capacities of human common sense.

Despite extensive research on causal reasoning in cognitive psychology, AI and philosophy, there has been relatively little work connecting the state of the art in these three fields. Prior work has focused on a debate between two ways of thinking about the nature of causal claims. In philosophy, there has been a longstanding debate as to whether causality is best analyzed in terms of dependency or in terms of causal processes. According to dependency accounts, an event A qualifies as a cause of another event B if B depends on A in some way. Dependence has been specified in terms of regularity of succession (Hume, 1988/1748), counterfactual dependence (Lewis, 1973; Woodward, 2003) or probabilistic dependence (Suppes, 1970). CBNs are typically characterized as casting causation in terms of counterfactual dependence. According to process accounts, an event A qualifies as a cause of another event B if there is some causal process from A to B, that is, A transmitted some quantity to B, such as energy or force. Recently, process accounts of causation have regained strength as psychological accounts of how people attribute causality (e.g. Walsh & Sloman, 2011; Wolff, 2007). Walsh & Sloman (2011) have shown that people prefer to attribute causality to events which lead to an effect via a continuous causal process, compared to events for which no such process exists. Simple counterfactual theories cannot account for these differences in attributions because they are insensitive to the exact way in which events depend upon each other. A further advantage of process accounts is their ability to capture some of the richness of our conception of causality. Following a linguistic analysis by Talmy (1988), Wolff (2007) has proposed a process account that predicts people's usage of different causal terms such as cause, enable, prevent and despite based on configurations of force vectors. While process accounts capture people's judgments quite well in physical domains, it is difficult to see how they can be generalized to domains in which more abstract "forces" are at play, such as interactions in financial systems or beliefs and intentions in folk psychology (Holton, forthcoming).
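As a bare illustration of the dependency view, and of the insensitivity just noted, the sketch below runs a "but-for" counterfactual test over two hypothetical deterministic mechanisms: one routes A's influence through an intermediate process variable, the other does not. The test returns the same verdict for both; the mechanisms and names are invented for the example.

```python
# A bare-bones "but-for" counterfactual test over deterministic structural
# equations (hypothetical toy mechanisms, for illustration only).

def b_via_process(a: bool) -> bool:
    momentum_transferred = a      # intermediate "process" variable carrying the influence
    return momentum_transferred

def b_without_process(a: bool) -> bool:
    return a                      # same dependence of B on A, but no intervening mechanism

def but_for_cause(mechanism, a_actual: bool = True) -> bool:
    """A is a but-for cause of B iff flipping A flips B."""
    return mechanism(a_actual) != mechanism(not a_actual)

# The counterfactual test cannot tell the two mechanisms apart:
print(but_for_cause(b_via_process), but_for_cause(b_without_process))  # True True
```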

Rather than fostering the divide between dependency accounts and process accounts of causation, our aim is to unify both frameworks. Similar to the analysis of actual causation in CBNs, we argue that causal attributions are best understood in terms of counterfactuals defined over probabilistic generative models. While many causal judgments have been shown to be in line with the predictions derived from CBNs (see e.g. Griffiths & Tenenbaum, 2005), people also have strong intuitions about causality in situations that go beyond the expressive power of CBNs. People's intuitive understanding of physics is one such example. Battaglia, Hamrick and Tenenbaum (2011) have shown that people's judgments about the stability of towers of blocks are closely in line with a noisy model of Newtonian physics. By being more precise about what people's intuitive models of particular domains, such as physics or psychology, look like, we incorporate generative processes into a broader counterfactual framework that retains the flexibility and generality of CBN-based dependency accounts while adding the richness of process accounts.

In an initial series of between-subject experiments, we have already demonstrated that our account predicts people’s causal judgments better than the current process accounts even on their home turf, that is, interacting physical entities. We showed participants video clips of two balls colliding with each other (see Figure 1)1. In two experiments, the video clips stopped shortly after the collision event. We demonstrated that people’s judgments of whether ball B will go through the gate on the left of the screen (Experiment 1; solid arrow in Figure 1) or whether ball B would go through the gate if ball A had not been present (Experiment 2; dashed arrow in Figure 1) were closely in line with a noisy model of Newtonian physics. In order to model people’s confidence about whether ball B will go through or not, we generated noisy samples of the actual clip by minimally perturbing the resulting velocity vector of the collision (Experiment 1) or the velocity vector without the collision (Experiment 2). People’s confidence ratings in both experiments were closely in line with our Gaussian perturbation model (r > .9). Thus, people can use their intuitive understanding of physics to accurately predict what will or what might have happened.
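The following sketch illustrates the Gaussian perturbation model in this setting, under simplifying assumptions made only for the example: frictionless straight-line motion after the collision, a gate spanning a segment of the left wall, and illustrative values for the noise level and ball state (none of these numbers are the experiment's actual parameters).

```python
# Monte Carlo estimate of P(ball B goes through the gate) under Gaussian
# perturbation of B's direction of motion (illustrative sketch).
import math, random

def goes_through_gate(pos, vel, gate=(-1.0, 0.4, 0.6)):
    """Does ball B, moving in a straight line from pos with velocity vel,
    cross the gate? gate = (x of the left wall, lower y, upper y)."""
    wall_x, y_lo, y_hi = gate
    if vel[0] >= 0:                          # not moving toward the left wall
        return False
    t = (wall_x - pos[0]) / vel[0]           # time until B reaches the wall
    y_at_wall = pos[1] + vel[1] * t
    return y_lo <= y_at_wall <= y_hi

def p_gate(pos, vel, angle_sd=0.08, n=10_000):
    """Estimate P(B goes through the gate) by perturbing B's direction of
    motion with Gaussian angular noise and counting successful samples."""
    speed = math.hypot(vel[0], vel[1])
    angle = math.atan2(vel[1], vel[0])
    hits = 0
    for _ in range(n):
        a = random.gauss(angle, angle_sd)
        hits += goes_through_gate(pos, (speed * math.cos(a), speed * math.sin(a)))
    return hits / n

# B's velocity right after the collision (Experiment 1), or the velocity it
# would have had without the collision (Experiment 2), both plug into p_gate:
print(p_gate(pos=(0.0, 0.5), vel=(-1.0, 0.02)))
```

Feeding the routine B's post-collision velocity corresponds to the Experiment 1 prediction; feeding it the velocity B would have had without the collision corresponds to the counterfactual estimate of Experiment 2.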

In Experiment 3, participants judged the extent to which ball A caused ball B to go through the gate or prevented ball B from going through the gate. The results of this experiment showed that people arrive at their cause and prevention judgments by comparing what actually happened with what they think might have happened, had the causal event of interest not taken place. Using participants' confidence ratings from Experiment 2, we were able to predict participants' cause/prevention judgments in Experiment 3 with very high accuracy (r = .99). Our model posits that participants compare the counterfactual probability that B would have gone in had A not been present, P(B|¬A), with the probability that B went in given that A was present, P(B|A). Since participants in Experiment 3 watch the clips until the end, the value of P(B|A) is certain: it is either 1 when B goes through the gate or 0 when B misses the gate. In general, if P(B|A) - P(B|¬A) is negative, participants should say that A prevented B from going through the gate. Intuitively, if it was obvious that B would have gone in had A not been there (i.e. P(B|¬A) is high) but B misses the gate as a result of colliding with A (i.e. P(B|A) = 0), A should be judged to have prevented B from going through the gate. Similarly, if the difference is positive, participants should say that A caused B to go through the gate. If the chance that B would have gone through the gate without A was low but, as a result of colliding with A, B goes through the gate, A should be judged to have caused B to go through the gate. The clip depicted in Figure 1 shows an example where our model predicts that participants will say that A neither caused nor prevented B. P(B|A) is 0 since ball B does not go through the gate. However, P(B|¬A) is also very low since it is clear that B would have missed the gate even if A had not been present in the scene. Figure 2 shows a scatterplot of the model predictions and empirical ratings for the 18 clips that were used in the experiment. Across the whole range of possible situations, our model predicts participants' ratings very accurately.
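A minimal sketch of the contrast just described: the signed difference P(B|A) - P(B|¬A) plays the role of a graded cause (positive) versus prevention (negative) rating. The exact mapping onto participants' rating scale is left open here, and the input probabilities below are illustrative.

```python
def cause_prevention_rating(p_b_given_a, p_b_given_not_a):
    """Signed rating: positive values read as 'A caused B to go through the gate',
    negative values as 'A prevented B', values near zero as 'neither'."""
    return p_b_given_a - p_b_given_not_a

# Figure 1 clip: B misses the gate (P(B|A) = 0) and would very likely have
# missed anyway (P(B|not-A) low), so the predicted rating is close to zero.
print(cause_prevention_rating(0.0, 0.05))    # -> -0.05, "neither caused nor prevented"

# B goes in only because of the collision: a strong positive (cause) rating.
print(cause_prevention_rating(1.0, 0.10))    # ->  0.90
```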

There are several avenues along which we hope to pursue this research agenda with I2 funding. First, we would like to show that our framework is not only capable of making quantitative predictions about participants' degree-of-causation or prevention judgments, but is also able to capture people's use of different causal terms such as cause versus help, as well as intrinsically counterfactual notions such as almost caused or almost prevented. In pilot studies, participants' task was to select, from seven candidate sentences, the one that described the clip best; the predictions of our framework again closely mirrored participants' selections. These results provide strong support for the generality of our approach over mere process accounts of causation, which are not capable of making predictions about when people will say that some event almost took place. Having established the close fit of our account with people's judgments for relatively simple interactions between two physical entities, we would like to scale up the complexity and look at chains of events, cases of causal overdetermination and preemption. Findings from scenario-based implementations of these more complex causal structures have often been used to argue against counterfactual theories of causation. We are interested in seeing how our richer counterfactual framework captures people's intuitions when those structures are implemented in a physical domain that elucidates the underlying generative process and hence facilitates mental simulation. Furthermore, we aim to extend our modeling approach to interactions between social agents with beliefs and intentions, and to the role that norms of behavior play in this context. More generally, by developing richer generative models of people's intuitive physical and psychological theories, we will gain a better understanding of how people attribute causality and responsibility.


1 The clips can be accessed here: http://www.ucl.ac.uk/lagnado-lab/experiments/demos/physicsdemo.html
