Horizon Search — Converging Minds: Charting the Future of Natural AI

Dec 18, 2023 · Event Updates, News

Published by Horizon Search; December 16, 2023

In a paradigm-shifting endeavor, the Boston Global Forum heralded a new era in AI development on December 12, 2023, by hosting a roundtable that brought together luminaries from academia and industry. This gathering was not just a meeting of minds but a pivotal juncture marking the birth of a revolutionary approach to artificial intelligence: Natural AI. This concept, rooted in computational physics, biology, and neuroscience, was crystallized in a significant letter titled ‘A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance,’ collectively endorsed by many attendees.

The letter and corresponding discussions represent a seismic shift in the AI landscape, advocating for a nature-based AI that promises to redefine our relationship with technology. Facilitated by the Boston Global Forum in collaboration with the Active Inference Institute, this initiative boldly positions Natural AI at the forefront of global discourse, aiming to pivot AI development towards a path that is inherently ethical, sustainable, and in harmony with the principles of nature. Celebrating its 11th anniversary, the Boston Global Forum underscores this roundtable as a cornerstone event, setting a transformative agenda for Natural AI to be the guiding light of its mission in the years to come.

David Silbersweig, the Stanley Cobb Professor of Psychiatry at Harvard Medical School, called for an AI that is in harmony with human cognition and the broader natural environment, referencing Karl Friston’s free energy principle. His assertion that understanding AI in the context of natural systems can lead to a technology that enhances our world provided a thought-provoking start to the discussions.

“The more we place [AI] within and have an understanding of natural systems and the human mind-brain, the more likely it is that we’re going to be able to have an AI that will resonate with, that will work cooperatively with the human mind-brain and larger natural systems and indeed the whole physical world,” — David Silbersweig

Alex Pentland, a distinguished Professor at MIT and a director of MIT Connection Science, highlighted the free energy principle as a foundational concept for AI development. He describes AI as a probability engine, noting the potential of integrating various sensory inputs to mimic human-like perception. Pentland also explores alternative AI mechanisms, such as models based on C. elegans neurons, emphasizing the development of AI that complements human interaction and decision-making while ensuring that humans retain control.

“If you treat the AI as another individual and follow the sort of integration rules that we have with people, then you get something where it’s just part of the environment that it doesn’t have agency. You still have the agency.” — Alex Pentland

Alex Pentland talks on the principles guiding artificial intelligence

John Clippinger, co-founder of BioForm Labs and a research scientist at the MIT Media Lab, calls for a paradigm shift from an Enlightenment to an “entanglement” mentality in AI development. Addressing the technical aspects of AI, he discusses large language models and transformers, acknowledging their complexity as statistical engines. He suggests that these models can support the inference of causal structures, such as Markov blankets, which exhibit intentionality and representational capacity.

A critical concern for Clippinger is the integration of science and policy, especially in the context of AI. He advocates for a global platform, stressing the importance of preventing the monopolization of AI by commercial interests. This viewpoint stems from his experience in digital identity and data policy, where he has observed the challenges in achieving successful policy integration, particularly in the U.S.

Thomas Kehler, co-founder, CEO, and Chief Scientist of CrowdSmart.ai, brought to the table a focus on the potential of collective intelligence and the concept of adaptive learning. He began by referencing the insights of Judea Pearl, a foundational figure in AI, to critique the current narrow view of AI that heavily relies on historical data. Kehler notes that while large language models excel at generating plausible outputs from this data, this approach only scratches the surface of AI’s potential.

Kehler emphasizes that human intelligence surpasses the mere processing of historical data. It encompasses imaginative thinking about the future and the potential for collective intelligence – the idea that humans can achieve together what they cannot alone. He points out that current AI models fall short of facilitating collaboration and amplifying our collective ability to envision and solve complex problems.

A significant part of Kehler’s discussion revolves around the concept of adaptive learning, as proposed in frameworks like the free energy principle and active inference. Unlike traditional models, which require extensive training before deployment, adaptive learning models can learn instantaneously from experiences. Kehler asserts that intelligence is a shared, constantly evolving concept not adequately captured by existing AI models.
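For readers unfamiliar with the framework Kehler invokes, the free energy principle is usually summarized by a single expression; the formulation below is the standard one from the active-inference literature, not something presented at the roundtable itself:

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\text{approximation error}}
  \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Here $q(s)$ is the agent’s internal belief over hidden states $s$, and $o$ denotes observations. An agent that minimizes $F$ simultaneously improves its internal model (shrinking the KL term) and, by acting to change $o$, seeks observations its model predicts; this is what allows learning to proceed continuously from experience rather than in a separate training phase, as Kehler describes.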

“We can create an augmented collective intelligence that is truly more intelligent than either machine or human alone.” — Thomas Kehler

Furthermore, Kehler touches upon the concept of emergent self-learning, which he sees as inherent in the nature of life and reflected in phenomena like the cooperative movements of birds. He believes active inference captures this principle and can lead AI into new realms of possibility and exploration.

Cameron Kerry, a former Acting Secretary of Commerce for the United States and a distinguished visiting fellow at The Brookings Institution, provided a balanced perspective, highlighting the importance of maintaining ethical boundaries in AI development.

“Let’s not anthropomorphize these models, because that I think can lead to false confidence in AI, false assumptions about the capabilities of the models,” — Cameron Kerry

Kerry notes that current AI models, often described as “probability engines,” are limited in their scope, echoing Stuart Russell’s observation that these models require an extensive number of iterations to achieve what a young child can do in far fewer steps. This comparison highlights a fundamental gap in the efficiency and inferential capabilities of current AI systems compared to human cognition.

Reflecting on the history of AI, Kerry draws parallels between today’s efforts and those of pioneers like Marvin Minsky, who sought to replicate human information processing in computers. Kerry sees the current focus on understanding the biological aspects of intelligence and incorporating them into AI as a continuation of this long-standing goal.

Nazli Choucri, Professor of Political Science at MIT, brought attention to the role of temporality in AI, an often-neglected dimension that has pivotal implications for decision-making and policy. She highlights how decision-making time frames have become increasingly compressed, particularly in the realm of AI and technology. This poses significant challenges, as it requires rapid processing and response in scenarios that might traditionally have allowed for more extended contemplation and analysis.

“What we don’t have is a sense of matching temporal representation with either biological phenomenon or geological phenomenon or conflict phenomenon or decision phenomenon.” — Nazli Choucri

Francesco Lapenta, Director of the Institute of Future and Innovation Studies at John Cabot University, reflects on his own experience of working in AI for 15 years, noting the challenges and the long process involved in making the broader public aware of the complex body of work that underlies AI. He emphasizes that the conversation about AI is not new but has been ongoing for over three decades. However, he points out that the general public’s understanding of these discussions, especially regarding the link between technology and biology in the context of AI, remains limited.

“AI should be complementary for the cognitive, biological, physical qualities of humanity and not the other way around.” — Francesco Lapenta

Paul Nemitz, Principal Adviser on the Digital Transition at the European Commission, voiced the need for AI to maintain the primacy of humanity over technology and democracy over business models. He also brings attention to the concept of multiple intelligences, as proposed by Howard Gardner, and shares his view that we are still at the nascent stages of AI development. However, he acknowledges that AI challenges us to reconsider what it means to be human.

“I would like to see a commitment among all of us and also in the letter, that we are in this not just to develop the greatest ever technology which can do all the things which humans can do, but to maintain the primacy of humanity over technology, the primacy of democracy over business models.” — Paul Nemitz

Nemitz notes his unfamiliarity with the methodology of active inference and the specifics of the institute mentioned in the letter, reflecting a need for more clarity about their contributions to the field compared to established academic institutions. Despite this, he appreciates the letter for introducing elements of plurality and a more human-centered approach to AI. This approach, as he sees it, would include safeguards and drivers that emulate how humans function as social beings within a democracy.

Nam Pham, a Program Specialist at Harvard Kennedy School, acknowledges the significant progress AI has made in recent years and notes that many institutions and governments have started to recognize the need for oversight and control over technological advancements. He observes that policy development tends to lag behind technological advancements, posing a challenge in ensuring technology remains a servant to human needs and interests.

“We need to keep in mind that if we could speed up the train of policy with the visions and the work of the Boston Global Forum, maybe policy can catch up with technology to ensure that technology will serve human beings.” — Nam Pham

Martin Nkafu Nkemnkia, from the Pontifical Lateran University, expressed the eagerness of the Association of African Universities to participate in the global AI discourse, as they are already discussing the role of AI in the future of higher education on the continent. He notes that working through the Association as a collective entity, rather than approaching individual universities separately, would streamline collaboration and the representation of African educational institutions in the global AI discourse.

“We want to catch the train while it is still possible. And then, work with you, we just have to formulate how we want to work with you.” — Martin Nkafu Nkemnkia

In his closing remarks, Tuan Nguyen, co-founder and CEO of the Boston Global Forum, provided an update on the progress and future plans for integrating and applying artificial intelligence (AI) across various sectors and regions. He outlined the update in three main parts: science and technology applications, policy development, and public engagement with a spiritual dimension.

Nguyen starts by sharing exciting news about the involvement of universities in Vietnam, particularly the University of Information and Communication Technology. This university is significantly engaged in the field, planning substantial investments in health, cybersecurity, and AI, supported by the Ministry of Defense. He also mentions Amrita University in India as another active participant in these initiatives.

He then highlighted the upcoming AIWS (Artificial Intelligence World Society) roundtable and the importance of refining and expanding the initiatives by incorporating feedback from this discussion. Nguyen also announced a forthcoming conference on April 30th at Harvard University’s Loeb House, which will focus not only on science and technology but also on policy and public engagement.

A notable aspect of Nguyen’s vision is the incorporation of spiritual values into the conversation about AI. He talks about working closely with spiritual leaders and religious figures from various faiths who have shown support and interest in the AIWS initiative. This unique approach aims to integrate spiritual values into the development and application of AI, recognizing the human and ethical dimensions of technological progress. He concludes by highlighting the achievements of the Boston Global Forum and its contribution to the global discourse on AI, expressing optimism for continued collaboration and contribution in the coming years.