A major NIH award funds research into how the brain infers structure from sensory signals; the work may have applications for disorders like schizophrenia and offer insights for artificial intelligence.
Imagine you’re sitting on a train. You look out the window and see another train on an adjacent track that appears to be moving. But is your train stopped while the other train moves, or are you moving while the other train sits still?
The same sensory experience, the view of a train, can yield two very different perceptions: a sensation that you yourself are in motion, or a sensation that you are stationary while an object moves around you.
Human brains are constantly faced with such ambiguous sensory inputs. In order to resolve the ambiguity and correctly perceive the world, our brains employ a process known as causal inference.
Causal inference is key to learning, reasoning, and decision-making, but researchers currently know little about the neurons involved in the process.
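To make the idea concrete, here is a minimal sketch of Bayesian causal inference applied to the train scenario. It is purely illustrative: the noise levels, the priors, and the single vestibular cue are assumptions chosen for clarity, not the model or data from the funded study.

```python
from scipy.stats import norm

# Illustrative sketch only (assumed scenario and numbers, not the study's model).
# The observer sees the other train drift backward across the retina at 2 m/s.
# Two causal hypotheses explain that single observation:
#   "self":   my train is moving forward and the other train is parked
#   "object": my train is parked and the other train is moving
# Vision alone cannot tell them apart, so a noisy vestibular cue (which reads
# roughly zero at constant speed) and prior beliefs must break the tie.

retinal_drift = 2.0      # m/s, visual motion of the other train across the retina
vestibular_cue = 0.0     # m/s, sensed self-motion (no felt acceleration)
visual_noise = 0.3       # std dev of the visual measurement
vestibular_noise = 1.5   # std dev of the vestibular measurement (unreliable cue)

prior = {"self": 0.5, "object": 0.5}   # prior belief in each cause

# Likelihood of the sensory data under each hypothesis.
# Under "self", both the retinal drift and the vestibular signal should
# reflect a self-speed of 2 m/s; under "object", self-speed is 0 m/s.
likelihood = {
    "self":   norm.pdf(retinal_drift, loc=2.0, scale=visual_noise)
              * norm.pdf(vestibular_cue, loc=2.0, scale=vestibular_noise),
    "object": norm.pdf(retinal_drift, loc=2.0, scale=visual_noise)
              * norm.pdf(vestibular_cue, loc=0.0, scale=vestibular_noise),
}

# Bayes' rule: posterior probability of each causal hypothesis.
evidence = sum(likelihood[h] * prior[h] for h in prior)
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}

print(f"P(I am moving | senses)           = {posterior['self']:.2f}")
print(f"P(the other train moves | senses) = {posterior['object']:.2f}")
```

With these assumed numbers the posterior favors object motion; changing the priors or the reliability of the cues shifts the answer, which is exactly the kind of ambiguity the brain must resolve.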
Uncovering how the brain uses causal inference to separate self-motion from object motion may also help in the design of artificial intelligence and autopilot devices.
“Understanding how the brain infers self-motion and object motion might provide inspiration for improving existing algorithms for autopilot devices on planes and self-driving cars,” Haefner says.
For example, a plane’s autopilot circuitry must account for the plane’s own motion through the air while also detecting and avoiding other planes moving around it.
The research may additionally have important applications in developing treatments and therapies for neural disorders such as autism and schizophrenia, conditions in which causal inference is thought to be impaired.
The original article can be found here.
In the field of AI and Causal Inference, Professor Judea Pearl is a pioneer who developed a theory of causal and counterfactual inference based on structural models. In 2011, Professor Pearl won the Turing Award, computer science’s highest honor, for “fundamental contributions to artificial intelligence through the development of a calculus of probabilistic and causal reasoning.” In 2020, Professor Pearl was also honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). Professor Pearl also currently contributes to Causal Inference for AI transparency, one of the important AIWS.net topics on AI Ethics.