“Causal reasoning is an indispensable part of human thought that should be formalized and algorithmized to achieve artificial intelligence at the human level.”
— Judea Pearl
Integrating knowledge from psychology research into algorithms is tricky because such knowledge is rarely a quantifiable metric. But it can be very useful as algorithms venture into a world full of real-life “trolley problems” in the form of self-driving cars and medical diagnostics.
Tobias Gerstenberg, assistant professor of psychology at Stanford, believes that by providing a more quantitative characterization of a theory of human behavior, and instantiating it in a computer program, we can make it easier for computer scientists to integrate such insights into an AI system. Gerstenberg and his colleagues at Stanford developed a computer model of how humans judge causation in dynamic physical situations.
About the model
Billiard simulation experiment (Image credit: Gerstenberg et al.)
In their article on the Counterfactual Simulation Model (CSM) of Causal Judgment, the researchers begin by formulating three key hypotheses:
- Causal judgments are about difference-making.
- Difference-making for particular events is best expressed in terms of counterfactual contrasts over causal models.
- There are multiple aspects of causation, corresponding to different ways of making a difference to the outcome, which jointly determine people’s causal judgments.
As a case study, the researchers first applied the CSM to explain people’s causal judgments about dynamic collision events. They considered a simulated billiard ball B, as shown above, entering from the right and heading straight for an open door in the opposite wall, with a brick blocking its path. Ball A would then enter through the upper right corner and collide with ball B, which would bounce off the bottom wall and back up through the door.
So the question is: did ball A cause ball B to go through the door? Clearly, without ball A, ball B would have run into the brick rather than through the door.
Had the brick not been in ball B’s way, it would have gone through the door anyway, without any help from ball A. The causal relationship between ball A and ball B is thus probed in both the presence and absence of an external factor. Gerstenberg and his colleagues ran such scenarios through a computer model designed to predict how a human evaluates causation. The idea is that people judge causality by comparing what actually happened with what would have happened in relevant counterfactual situations. As the billiard example shows, human judgments of causation differ when the counterfactuals differ, even when the actual events are the same.
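The counterfactual comparison at the heart of this idea can be sketched as a tiny program: simulate the scene twice, once as it actually unfolded and once with the candidate cause removed, and call the cause a difference-maker if the outcomes diverge. The function names and the toy “physics” below are illustrative assumptions, not the researchers’ actual model, which uses a full physics simulator.

```python
def simulate(ball_a_present: bool, brick_present: bool) -> bool:
    """Toy stand-in for a physics simulation of the billiard scene.

    Returns True if ball B ends up going through the door.
    """
    if ball_a_present:
        # A deflects B off the bottom wall and up through the door.
        return True
    # Without A, B travels straight ahead...
    if brick_present:
        return False  # ...and is stopped by the brick.
    return True       # ...and sails through the door anyway.


def made_a_difference(brick_present: bool) -> bool:
    """Did ball A make a difference to the outcome in this scene?"""
    actual = simulate(ball_a_present=True, brick_present=brick_present)
    counterfactual = simulate(ball_a_present=False, brick_present=brick_present)
    return actual != counterfactual


# With the brick, removing A changes the outcome: A caused B to go through.
print(made_a_difference(brick_present=True))   # True
# Without the brick, B goes through either way: A made no difference.
print(made_a_difference(brick_present=False))  # False
```

Note that the actual events are identical in both scenes; only the counterfactual differs, which is exactly why the two causal judgments come apart.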
Extending the CSM to AI
Researchers are now working to extend the counterfactual simulation model of causation to AI systems. The goal is to develop AI systems that understand causal explanations the way humans do. One envisioned scenario is an AI system that analyzes a football match and selects the key events causally related to the final result: did the goals cause the victory, or did counterfactuals such as goalkeeper saves contribute more? That task would require the AI system to emulate the smartest of team leaders. However, Gerstenberg admits the research is still at an early stage. “We can’t do it yet, but at least in principle the type of analysis we are proposing should be applicable to these kinds of situations,” he added.
In the Explanatory Science and Engineering (SEE) project, funded by Stanford HAI, researchers are using natural language processing to develop a more refined linguistic understanding of how humans think about causation. Through their study of the CSM, the researchers tried to answer a fundamental question: how do people make causal judgments? The results revealed that people’s judgments are influenced by different aspects of causation, such as whether the candidate cause was necessary and sufficient for the outcome to occur, and whether it affected how the outcome occurred. By modeling these aspects in terms of counterfactual contrasts, the CSM accurately captures participants’ judgments in a wide variety of physical scenes involving single and multiple causes. The researchers believe the CSM could prove important across many subfields of AI, including robotics, where systems need more common sense to collaborate with humans intuitively and appropriately.
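The necessity and sufficiency aspects mentioned above can themselves be framed as counterfactual contrasts: a cause is necessary to the extent that the outcome would have failed without it, and sufficient to the extent that it brings the outcome about on its own. A minimal sketch of estimating both by sampling a noisy toy world follows; the world model, probabilities, and function names are illustrative assumptions, not the paper’s stimuli.

```python
import random


def noisy_outcome(cause_present: bool, backup_present: bool,
                  rng: random.Random) -> bool:
    """Toy stochastic world: the outcome occurs if either cause succeeds."""
    via_cause = cause_present and rng.random() < 0.9    # cause usually works
    via_backup = backup_present and rng.random() < 0.3  # backup rarely works
    return via_cause or via_backup


def necessity(n: int = 10_000, seed: int = 0) -> float:
    """How often would the outcome have failed had the cause been absent?"""
    rng = random.Random(seed)
    return sum(not noisy_outcome(False, True, rng) for _ in range(n)) / n


def sufficiency(n: int = 10_000, seed: int = 0) -> float:
    """How often does the cause alone bring the outcome about?"""
    rng = random.Random(seed)
    return sum(noisy_outcome(True, False, rng) for _ in range(n)) / n


print(f"necessity   ≈ {necessity():.2f}")    # ≈ 0.70
print(f"sufficiency ≈ {sufficiency():.2f}")  # ≈ 0.90
```

Here the backup cause makes the main cause less than fully necessary (the outcome sometimes happens anyway), while the cause’s own reliability sets its sufficiency, which is why the two scores can come apart and jointly shape a causal judgment.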