Artificial General Intelligence (AGI) – a hypothetical machine capable of performing any intellectual task a human can – is considered by many to be a pipe dream. A long-standing feature of science fiction, AGI has acquired a cultural reputation of both reverence and fear, but above all an appreciation for the possibilities it presents. However, despite what the movies might suggest, there is still considerable debate around what constitutes general intelligence in humans, let alone machines.
Before diving into AGI, it’s worth establishing what has become the accepted meaning of ‘general intelligence’ – an evolving term. When the first electronic computers were created, many leaders in the field took their ability to do complicated sums as evidence of a higher intelligence than was previously known. The bar then moved to besting humans at strategy games like chess, and eventually to speech and image recognition. This evolution seems likely to apply to Artificial General Intelligence as well, particularly as the concept becomes increasingly abstract.
However, as it stands, there are some generally accepted capabilities that a human or machine must demonstrate to qualify as generally intelligent. First, the ability to learn from a limited amount of data or experience – often referred to as few-shot learning. Second, the ability to learn, and to improve its ability to learn, across a wide variety of contexts – known as meta-learning. This feeds directly into the final factor: causal inference, the capacity for scenario generation – being able to plan for future events, or non-events, through an understanding of cause and effect.
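The distinction at the heart of causal inference – between passively observing a variable and actively intervening on it – can be made concrete with a toy simulation. The sketch below is purely illustrative (the rain/sprinkler model and all probabilities are assumptions, not drawn from the article): conditioning on the sprinkler being on makes rain look less likely, because someone probably chose not to run it in the rain, while *forcing* the sprinkler on tells us nothing about the weather.

```python
import random

random.seed(0)

# A toy structural causal model: rain influences the sprinkler,
# and both rain and the sprinkler cause the grass to be wet.
# All names and numbers here are illustrative assumptions.

def sample(intervene_sprinkler=None):
    rain = random.random() < 0.3
    if intervene_sprinkler is None:
        # Observational world: rain discourages running the sprinkler.
        sprinkler = random.random() < (0.1 if rain else 0.6)
    else:
        # Interventional world, do(sprinkler): the causal link
        # from rain to sprinkler is severed.
        sprinkler = intervene_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Observational: P(rain | sprinkler on) is pushed below the base
# rate of 0.3, because seeing the sprinkler on is evidence of no rain.
obs = [sample() for _ in range(100_000)]
p_rain_given_sprinkler = (
    sum(r for r, s, _ in obs if s) / max(1, sum(1 for _, s, _ in obs if s))
)

# Interventional: P(rain | do(sprinkler on)) stays at the base rate,
# since forcing the sprinkler on cannot change the weather.
inter = [sample(intervene_sprinkler=True) for _ in range(100_000)]
p_rain_do_sprinkler = sum(r for r, _, _ in inter) / len(inter)

print(f"P(rain | sprinkler on)     ~ {p_rain_given_sprinkler:.2f}")
print(f"P(rain | do(sprinkler on)) ~ {p_rain_do_sprinkler:.2f}")
```

An agent that can only fit correlations would conflate these two quantities; an agent capable of causal inference, in the sense described above, keeps them apart and can therefore reason about the consequences of its own actions.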
On causal inference and AI, Professor Judea Pearl is a pioneer in this work and was recognized with a Turing Award in 2011. In 2020, Professor Pearl was also honored as a World Leader in AI by the AI World Society (AIWS.net), an initiative of the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). Professor Pearl also contributes to work on causal inference for AI transparency, one of the important AI World Society (AIWS.net) topics on AI ethics from MDI and BGF.