Elizabeth Spelke, a cognitive psychologist at Harvard, has spent her career testing the world’s most sophisticated learning system: the mind of a baby.
Gurgling infants might seem like no match for artificial intelligence. They are terrible at labeling images, hopeless at mining text, and awful at videogames. Then again, babies can do things beyond the reach of any AI. By just a few months old, they’ve begun to grasp the foundations of language, such as grammar. They’ve started to understand how the physical world works, and how to adapt to unfamiliar situations.
Yet even experts like Spelke don’t understand precisely how babies—or adults, for that matter—learn. That gap points to a puzzle at the heart of modern artificial intelligence: We’re not sure what to aim for.
It isn’t yet clear how humans solve these problems, but Spelke’s work offers a few clues. For one thing, it suggests that humans are born with an innate ability to quickly learn certain things, like what a smile means or what happens when you drop something. It also suggests we learn a lot from each other. One recent experiment showed that 3-month-olds appear puzzled when someone grabs a ball in an inefficient way, suggesting that they already appreciate that people cause changes in their environment. Even the most sophisticated and powerful AI systems on the market can’t grasp such concepts. A self-driving car, for instance, cannot intuit from common sense what will happen if a truck spills its load.
Josh Tenenbaum, a professor in MIT’s Center for Brains, Minds & Machines, works closely with Spelke and uses insights from cognitive science as inspiration for his programs. He says much of modern AI misses the bigger picture, likening it to a Victorian-era satire about a two-dimensional world inhabited by simple geometrical people. “We’re sort of exploring Flatland—only some dimensions of basic intelligence,” he says. Tenenbaum believes that, just as evolution has given the human brain certain capabilities, AI programs will need a basic understanding of physics and psychology in order to acquire and use knowledge as efficiently as a baby. And to apply this knowledge to new situations, he says, they’ll need to learn in new ways—for example, by drawing causal inferences rather than simply finding patterns. “At some point—you know, if you’re intelligent—you realize maybe there’s something else out there,” he says.
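The distinction Tenenbaum draws between finding patterns and drawing causal inferences can be made concrete with a toy simulation (a minimal sketch; the variables and setup here are illustrative, not from the article). A hidden confounder Z drives both X and Y, so X strongly predicts Y in observational data even though X has no causal effect on Y. Intervening on X, in the spirit of Pearl's do-operator, severs the link from Z to X and reveals that the correlation was not causal:

```python
import random

random.seed(0)

# Toy structural causal model: a confounder Z drives both X and Y.
# Z -> X and Z -> Y, but X has no causal effect on Y at all.

def observe(n=10000):
    """Sample (x, y) pairs as a passive pattern-finder would see them."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = z + random.gauss(0, 0.1)
        y = z + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def intervene(n=10000):
    """Sample (x, y) pairs under do(X): X is set by fiat,
    severing the Z -> X arrow."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = random.gauss(0, 1)   # forced, independent of Z
        y = z + random.gauss(0, 0.1)
        data.append((x, y))
    return data

def corr(data):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data) / n
    sx = (sum((x - mx) ** 2 for x, _ in data) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in data) / n) ** 0.5
    return cov / (sx * sy)

print(round(corr(observe()), 2))    # near 1: X "predicts" Y
print(round(corr(intervene()), 2))  # near 0: X does not cause Y
```

A purely pattern-finding system sees only the observational data and concludes X and Y go together; a system that can reason about interventions distinguishes prediction from causation, which is what a self-driving car would need to anticipate the consequences of a truck spilling its load.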
It is useful to note that the connection between AI and causal inference owes much to Professor Judea Pearl, who received the 2011 Turing Award. In 2020, Professor Pearl was also named a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). Professor Pearl also contributes to work on causal inference for AI transparency, one of the important AIWS.net topics in AI Ethics.