Continued troubles in the Red Sea, European summit: Roundup on the Four Pillars

It seems the Red Sea has become an ever-growing problem: two companies, shipping giant Maersk and oil giant BP, have decided to halt shipping through the Red Sea due to recent attacks. While the Pillars have been defending themselves and responding to distress calls prompted by Houthi missile attacks, the threat to civilian shipping remains large. Without wanting to escalate tensions in the Middle East further, the Pillars must come together and find a resolution to the Houthi issue, even a forceful one, as it threatens the global economy, especially trade between Europe and Asia. The US has announced Operation Prosperity Guardian to defend against these attacks.

India, a Pillar, has also been providing naval assistance in other parts of the world. The Indian Navy intervened in a hijacking by Somali pirates earlier this week. In the Pacific, India has sent a warship to Manila to bolster ties after recent clashes between China and the Philippines.

At the European summit, the EU decided to launch membership talks with Ukraine and Moldova, as well as grant Georgia candidate status. While the Ukrainian counter-offensive hasn't been successful, there have been some encouraging reports recently: Russia has reportedly lost 87% of the troops it had before the war began. The UK has been selected as the headquarters for a fighter plane project with Japan and Italy, two other Pillars.

USS Carney in the Mediterranean Sea on Oct. 23, 2018. Credit: US Navy

Horizon Search — Converging Minds: Charting the Future of Natural AI

Published by Horizon Search; December 16, 2023

In a paradigm-shifting endeavor, the Boston Global Forum heralded a new era in AI development on December 12, 2023, by hosting a roundtable that brought together luminaries from academia and industry. This gathering was not just a meeting of minds but a pivotal juncture marking the birth of a revolutionary approach to artificial intelligence: Natural AI. This concept, rooted in computational physics, biology, and neuroscience, was crystallized in a significant letter titled ‘A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance,’ collectively endorsed by many attendees.

The letter and corresponding discussions represent a seismic shift in the AI landscape, advocating for a nature-based AI that promises to redefine our relationship with technology. Facilitated by the Boston Global Forum in collaboration with the Active Inference Institute, this initiative boldly positions Natural AI at the forefront of global discourse, aiming to pivot AI development towards a path that is inherently ethical, sustainable, and in harmony with the principles of nature. Celebrating its 11th anniversary, the Boston Global Forum underscores this roundtable as a cornerstone event, setting a transformative agenda for Natural AI to be the guiding light of its mission in the years to come.

David Silbersweig, the Stanley Cobb Professor of Psychiatry at Harvard Medical School, called for an AI that is in harmony with human cognition and the broader natural environment, referencing Karl Friston’s free energy principle. His assertion that understanding AI in the context of natural systems can lead to a technology that enhances our world provided a thought-provoking start to the discussions.
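As background for readers (our gloss, not part of the discussion itself), Friston's free energy principle holds that self-organizing systems persist by minimizing a variational free energy, an upper bound on surprise:

```latex
% Variational free energy F for observations o and internal beliefs q(s)
% over hidden states s; minimizing F both improves the belief q(s) and
% bounds the "surprise" -\ln p(o) from above.
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\ge 0} \;-\; \ln p(o)
  \;\ge\; -\ln p(o).
```

In this reading, perception and learning amount to adjusting beliefs (and actions) so that free energy stays low, which is the sense in which an AI built on these ideas would "resonate with" natural systems.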

“The more we place [AI] within and have an understanding of natural systems and the human mind-brain, the more likely it is that we’re going to be able to have an AI that will resonate with, that will work cooperatively with the human mind-brain and larger natural systems and indeed the whole physical world,” — David Silbersweig

Alex Pentland, a distinguished Professor at MIT and director of MIT Connection Science, highlighted the free energy principle as a foundational concept for AI development. He describes AI as a probability engine, pointing to the potential of integrating various sensory inputs to mimic human-like cognition. Pentland explores alternative AI mechanisms, such as models based on C. elegans neurons, emphasizing AI that complements human interaction and decision-making while ensuring humans retain control.

“If you treat the AI as another individual and follow the sort of integration rules that we have with people, then you get something where it’s just part of the environment that it doesn’t have agency. You still have the agency.” — Alex Pentland

Alex Pentland talks on the principles guiding artificial intelligence

John Clippinger, a BioForm Labs co-founder and an MIT Media Lab research scientist, calls for a paradigm shift from enlightenment to an “entanglement” mentality in AI development. Addressing the technical aspects of AI, he discusses large language models and transformers, acknowledging their complexity as statistical engines. He suggests that these models can lead to the inference of causal structures, such as Markov blankets, which have intentional capabilities and representation abilities.

A critical concern for Clippinger is the integration of science and policy, especially in the context of AI. He advocates for a global platform, stressing the importance of preventing the monopolization of AI by commercial interests. This viewpoint stems from his experience in digital identity and data policy, where he has observed the challenges in achieving successful policy integration, particularly in the U.S.

Thomas Kehler, co-founder, CEO, and Chief Scientist of CrowdSmart.ai, brought to the table a focus on the potential of collective intelligence and the concept of adaptive learning. He began by referencing the insights of Judea Pearl, a foundational figure in AI, to critique the current narrow view of AI that heavily relies on historical data. Kehler notes that while large language models excel at generating plausible outputs from this data, this approach only scratches the surface of AI’s potential.

Kehler emphasizes that human intelligence surpasses the mere processing of historical data. It encompasses imaginative thinking about the future and the potential for collective intelligence – the idea that humans can achieve together what they cannot alone. He points out that current AI models fall short of facilitating collaboration and amplifying our collective ability to envision and solve complex problems.

A significant part of Kehler’s discussion revolves around the concept of adaptive learning, as proposed in frameworks like the free energy principle and active inference. Unlike traditional models, which require extensive training before deployment, adaptive learning models can learn instantaneously from experiences. Kehler asserts that intelligence is a shared, constantly evolving concept not adequately captured by existing AI models.
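As a toy illustration of this contrast (our sketch, not any speaker's actual system): a simple Bayesian agent updates its beliefs from each individual observation, with no offline training phase at all.

```python
# Toy illustration (our sketch, not any speaker's actual system): online
# Bayesian belief updating -- the core idea behind "adaptive learning"
# models that learn from each experience rather than requiring a long
# offline training phase before deployment.

def update_belief(prior, likelihood, observation):
    """One-step posterior update: p(s | o) is proportional to p(o | s) * p(s)."""
    unnormalized = [p * likelihood[s][observation] for s, p in enumerate(prior)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Observation model p(o | s) for two hidden states and two observations.
likelihood = [
    [0.9, 0.1],  # state 0 mostly emits observation 0
    [0.2, 0.8],  # state 1 mostly emits observation 1
]

belief = [0.5, 0.5]          # uniform prior before any experience
for obs in [1, 1, 0, 1]:     # the belief shifts after every single observation
    belief = update_belief(belief, likelihood, obs)

print(belief)  # strongly favors hidden state 1 after four observations
```

Active inference generalizes this kind of per-observation update, which is what allows such models, in Kehler's phrase, to learn instantaneously from experiences.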

“We can create an augmented collective intelligence that is truly more intelligent than either machine or human alone.” — Thomas Kehler

Furthermore, Kehler touches upon the concept of emergent self-learning, which he sees as inherent in the nature of life and reflected in phenomena like the cooperative movements of birds. He believes active inference captures this principle and can lead AI into new realms of possibility and exploration.

Cameron Kerry, a former Acting Secretary of Commerce for the United States and a distinguished visiting fellow at The Brookings Institution, provided a balanced perspective, highlighting the importance of maintaining ethical boundaries in AI development.

“Let’s not anthropomorphize these models, because that I think can lead to false confidence in AI, false assumptions about the capabilities of the models,” — Cameron Kerry

Kerry notes that current AI models, often described as “probability engines,” are limited in their scope, echoing Stuart Russell’s observation that these models require an extensive number of iterations to achieve what a young child can do in far fewer steps. This comparison highlights a fundamental gap in the efficiency and inferential capabilities of current AI systems compared to human cognition.

Reflecting on the history of AI, Kerry draws parallels between today’s efforts and those of pioneers like Marvin Minsky, who sought to replicate human information processing in computers. Kerry sees the current focus on understanding the biological aspects of intelligence and incorporating them into AI as a continuation of this long-standing goal.

Nazli Choucri, Professor of Political Science at MIT, brought attention to the role of temporality in AI, an often-neglected dimension that has pivotal implications for decision-making and policy. She highlights how decision-making time frames have become increasingly compressed, particularly in the realm of AI and technology. This poses significant challenges, as it requires rapid processing and response in scenarios that might traditionally have allowed for more extended contemplation and analysis.

“What we don’t have is a sense of matching temporal representation with either biological phenomenon or geological phenomenon or conflict phenomenon or decision phenomenon.” — Nazli Choucri

Francesco Lapenta, Director of the Institute of Future and Innovation Studies at John Cabot University, reflects on his own experience of working in AI for 15 years, noting the challenges and the long process involved in making the broader public aware of the complex body of work that underlies AI. He emphasizes that the conversation about AI is not new but has been ongoing for over three decades. However, he points out that the general public’s understanding of these discussions, especially regarding the link between technology and biology in the context of AI, remains limited.

“AI should be complementary to the cognitive, biological, physical qualities of humanity and not the other way around.” — Francesco Lapenta

Paul Nemitz, Principal Adviser on the Digital Transition at the European Commission, voiced the need for AI to maintain the primacy of humanity over technology and democracy over business models. He also brings attention to the concept of multiple intelligences, as proposed by Howard Gardner, and shares his view that we are still at the nascent stages of AI development. However, he acknowledges that AI challenges us to reconsider what it means to be human.

“I would like to see a commitment among all of us and also in the letter, that we are in this not just to develop the greatest ever technology which can do all the things which humans can do, but to maintain the primacy of humanity over technology, the primacy of democracy over business models.” — Paul Nemitz

Nemitz notes his unfamiliarity with the methodology of active inference and the specifics of the institute mentioned in the letter, reflecting a need for more clarity about their contributions to the field compared to established academic institutions. Despite this, he appreciates the letter for introducing elements of plurality and a more human-centered approach to AI. This approach, as he sees it, would include safeguards and drivers that emulate how humans function as social beings within a democracy.

Nam Pham, a Program Specialist at Harvard Kennedy School, acknowledges the significant progress AI has made in recent years and notes that many institutions and governments have started to recognize the need for oversight and control over technological advancements. He observes that policy development tends to lag behind technological advancements, posing a challenge in ensuring technology remains a servant to human needs and interests.

“We need to keep in mind that if we could speed up the train of policy with the visions and the work of the Boston Global Forum, maybe policy can catch up with technology to ensure that technology will serve human beings.” — Nam Pham

Martin Nkafu Nkemnkia, from the Pontifical Lateran University, expressed the eagerness of the Association of African Universities to participate in the global AI discourse, as they are already discussing the role of AI in the future of higher education on the continent. He highlights the expediency of utilizing the Association as a collective entity rather than approaching individual universities separately, which would streamline collaboration and representation of African educational institutions in global AI discourse.

“We want to catch the train while it is still possible. And then, work with you, we just have to formulate how we want to work with you.” — Martin Nkafu Nkemnkia

In his closing remarks, Tuan Nguyen, co-founder and CEO of the Boston Global Forum, provided an update on the progress and future plans involving the integration and application of artificial intelligence (AI) within various sectors and regions. He outlines the update in three main parts: science and technology applications, policy development, and public engagement with a spiritual dimension.

Nguyen starts by sharing exciting news about the involvement of universities in Vietnam, particularly the University of Information and Communication Technology. This university is significantly engaged in the field, planning substantial investments in health, cybersecurity, and AI, supported by the Ministry of Defense. He also mentions Amrita University in India as another active participant in these initiatives.

He then highlighted the upcoming AIWS (Artificial Intelligence World Society) roundtable and the importance of refining and expanding the initiatives by incorporating feedback from this discussion. Nguyen also announced a forthcoming conference on April 30, 2024, at Harvard University’s Loeb House, which will focus not only on science and technology but also on policy and public engagement.

A notable aspect of Nguyen’s vision is the incorporation of spiritual values into the conversation about AI. He talks about working closely with spiritual leaders and religious figures from various faiths who have shown support and interest in the AIWS initiative. This unique approach aims to integrate spiritual values into the development and application of AI, recognizing the human and ethical dimensions of technological progress. He concludes by highlighting the achievements of the Boston Global Forum and its contribution to the global discourse on AI, expressing optimism for continued collaboration and contribution in the coming years.

Pope calls for treaty regulating AI, warning of potential for ‘technological dictatorship’

Pope Francis has called for an international treaty to regulate the use of Artificial Intelligence, warning that the new technology risks causing a “technological dictatorship” which would threaten peace and democracy.

The 86-year-old pontiff says he wants world leaders to agree to a “binding international treaty” on AI developed within an ethical framework. Francis made the appeal in his annual message for the World Day of Peace which is marked by the Catholic Church every January 1.

“The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine, which as ‘intelligent’ as it may be, remains a machine,” Francis wrote.

“Any number of urgent questions need to be asked. What will be the consequences, in the medium and long term, of these new digital technologies? And what impact will they have on individual lives and on societies, on international stability and peace?”

The pope issued a strong warning against AI-controlled weapons systems which he called a “cause for grave ethical concern” while also raising alarm about the misuse of technology including “interference in elections, the rise of a surveillance society” and growing inequalities.

“All these factors risk fuelling conflicts and hindering peace,” the pope said.

Despite his cautious message, Francis did praise the “impressive achievements of science and technology,” insisting that AI also offers “exciting opportunities.”

Read the article in full here: https://edition.cnn.com/2023/12/14/tech/pope-francis-ai-warning-technological-dictatorship/index.html

Looking ahead to 2024, Boston Global Forum will develop a special report and organize a conference that will bring together spiritual leaders and esteemed religious figures. This gathering aims to deepen the exploration and development of spiritual values for AIWS, marking a significant milestone in the journey toward an ethically grounded and enlightened AI World Society.

Webinar of International Association of University Presidents “Human Rights 2023: To Innovate, To Include” – Human Rights in AI World Society

The International Association of University Presidents (IAUP) celebrated 75 years of the Universal Declaration of Human Rights with an online webinar on December 11, 2023 (11:00 am–12:30 pm EST).

Human Rights 2023: To Innovate, To Include

Nguyen Anh Tuan, CEO of the Boston Global Forum, addressed human rights in Artificial Intelligence World Society.

 

Speakers include:

Dr. Fernando Leon Garcia, President, International Association of University Presidents

Zlatko Lagumdzija, Permanent Representative of Bosnia and Herzegovina to the United Nations (and a regular speaker at BGF events), former Prime Minister of Bosnia and Herzegovina

Cora Weiss, Peace Educator and Nobel Peace Prize nominee

Nguyen Anh Tuan, CEO of the Boston Global Forum, Co-founder of AI World Society (AIWS)

Alessandra Nilo, Juliana Cesar, Vice President Civil-20 engagement group of the 2024 G-20 chaired by Brazil, affiliated with Gestos – HIV+, Communication and Gender Issues NGO, Recife, Brazil

Dr. Mihir Kanade, Academic Coordinator of the University for Peace (UPEACE), the Head of its Department of International Law, and the Director of the UPEACE Human Rights Centre; independent expert of the UN Human Rights Council’s Expert Mechanism on the Right to Development

Dr. Ş. İlgü Özler, founder and director of the SUNY Global Engagement programme, New York

Beth Nielsen Chapman, two-time Grammy-nominated, Nashville-based singer and songwriter

Moderator: Ramu Damodaran, first Chief of the United Nations Academic Impact and Co-Chair of the United Nations Centennial Initiative

Link:

https://www.youtube.com/watch?v=V9anldnH_Fo

https://www.iaup.org/event/human-rights-2023-to-innovate-to-include/

Cora Weiss, Peace Educator and Nobel Peace Prize nominee at the event

 

Issues with Freedom of Navigation in the Middle East: Roundup on the Four Pillars

The US and European states, two of the Pillars, have recently faced threats in the Middle East from Iranian proxies. The American embassy in Baghdad was, and continues to be, under fire from militant groups; it was hit with seven mortars. The attacks appear semi-coordinated, but it is unclear whether they come under direct orders from Iran. France intercepted two drones that targeted its ship in the Red Sea, originating from Yemen (read: the Houthis). This comes on the back of the Houthis attacking the American ship USS Mason and making the region’s waterways hostile.

The Pillars have been weighing options to address these recent threats to freedom of navigation. However, the Biden administration is considering a more measured response, in part for fear of igniting a powder keg in the Middle East at a time when it needs to draw back and focus on the Asia-Pacific.

Economically, Apple and other companies continue to move away from China and invest more in India, a Pillar, as well as in countries such as Indonesia and Vietnam. This matters as China becomes a riskier place to invest and decoupling continues in preparation for potential geopolitical confrontations.

French ship FREMM Languedoc

A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance

Boston, December 12, 2023 – In a landmark initiative, prominent scientists, strategists and policy makers are joining forces to redefine the public narrative surrounding artificial intelligence (AI), grounding it in the science of computational physics, biology, and neuroscience. The Active Inference Institute and the Boston Global Forum, through a joint letter signed by leading experts, announce a pivotal effort to reshape the discourse on AI.

 

Titled “A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance,” the letter addresses the remarkable advancements in AI, especially Large Language Models (LLMs) and Transformer models. While acknowledging their achievements, the letter underscores concerns about the lack of scientific principles guiding their development and the absence of independent performance standards.

 

Signatories emphasize that current LLMs and Transformer models are “corpus bound,” relying on parameters that remain enigmatic and inaccessible to scrutiny. Despite their capabilities, a critical understanding of these systems and their implications is missing.

 

Challenging the popular narrative of AI as an existential threat or the emergence of “super intelligence,” the letter argues for a more nuanced view rooted in computational neuroscience, biology, and physics, highlighting the interconnected “intelligences” of all living things.

The signatories stress that a nuanced scientific understanding of AI is crucial for effective policies and regulations. They highlight the relevance of studying the human brain-mind to optimize the integration of human intelligence with artificial intelligence in a pro-social manner.

 

Addressing economic, policy, and social misunderstandings surrounding AI, the letter challenges the notion that only “Big Tech” companies can afford AI commercialization.

 

It anticipates a future where distributed and biologically grounded intelligences can operate on mobile devices with lower energy requirements, promising transparency, privacy, security, and equitable access.

In response to these challenges, the signatories propose interdisciplinary public workshops to convene legislators, regulators, technologists, investors, scientists, journalists, NGOs, faith communities, the public, and business leaders. The goal is to foster an alternative, science-based understanding of the biological foundations of AI and transform the way AI is approached, developed, and integrated into society.

This initiative is a collaborative effort between the Active Inference Institute, grounded in the science of computational physics and biology, the Neuropsychiatry and Society Program, focused on understanding the societal implications of the human brain-mind, and the Boston Global Forum, dedicated to forming global policies and an AI World Society model for the inclusive and beneficial application of AI.

 

Boston Global Forum is set to host the AIWS Roundtable on December 12, 2023, to announce and discuss the letter and initiative. Additionally, a significant conference is planned at Harvard University Loeb House on April 30, 2024, focusing on this pioneering initiative.

For more information, please visit:

 

Contact Information:

Jim McManus,

Principal Partner of Slowey McManus Communications

Email: [email protected]

Phone: 617-413-9232

 

John Clippinger,

Co-Founder, BioForm Labs

Email: [email protected]

 

About the Active Inference Institute:

https://www.activeinference.org/about/strategy

About the Neuropsychiatry and Society Program:

https://www.brighamandwomens.org/psychiatry/brigham-psychiatric-specialties/psychiatry-law-and-society

 

About the Boston Global Forum:

The Boston Global Forum (BGF) offers a venue for leaders, strategists, thinkers, and innovators to contribute to the process of Remaking the World – Toward an Age of Global Enlightenment.

The BGF introduced core concepts that are shaping groundbreaking international initiatives, most notably, the Social Contract for the AI Age, AI International Law and Accord, the Global Alliance for Digital Governance, the AI World Society (AIWS) Ecosystem, and the AIWS City.

Celebrating the Birthday of Boston Global Forum December 12, 2012 with the book honoring Governor Dukakis and new significant initiatives

The Boston Global Forum celebrates its 11th birthday (it was founded on December 12, 2012) by honoring Governor Michael Dukakis, Co-founder and Chair of the BGF, with the book “From the Massachusetts Miracle to the Age of Global Enlightenment”

Harvard Professor David Silbersweig wrote in the book:

“Governor Dukakis has been a beacon of perspective, wisdom, graciousness and integrity for the Boston Global Forum, for our state and for the world.  He has brought deep experience and ability to get right to the heart of the matter, rooted in ethics and knowledge of politics.  He has exemplified leadership and given a platform for others to lead for good.  He bridges the latest advances with age-old human nature to maximize the impact of the BGF and AIWS, in its partnership with other leading international organizations.  He reminds us all of common human decency, of how we as societies are falling short, and how we need to keep our eye on the most important elements to improve local, national and global lives.”

Read or download the book here:

https://bostonglobalforum.org/publications/from-the-massachusetts-miracle-to-the-age-of-global-enlightenment/

On December 12, 2023, BGF and the Active Inference Institute officially launched the letter “A Natural AI Based on The Science of Computational Physics, Biology and Neuroscience: Policy and Societal Significance” and the AIWS Natural AI Initiative.

Signatories include esteemed leaders, strategists, and scholars such as Governor Michael Dukakis, former Prime Minister of Italy Enrico Letta, Nazli Choucri, Beth Noveck, Alex Pentland, John Clippinger, David Silbersweig, Thomas Patterson, Nguyen Anh Tuan, and others.

On December 12, 2023, BGF published BGF CEO Nguyen Anh Tuan’s essay “Building Spiritual Values for AI World Society”. BGF collaborates closely with spiritual leaders and religious figures to gather esteemed values, contributing to the creation of the spiritual values framework for AIWS. Notably, Amma, a revered spiritual leader, actively supports and participates in the development of AIWS Spiritual Values.

Governor Michael Dukakis, Estonian President Toomas Hendrik Ilves and Nguyen Anh Tuan at the launch of AI World Society on December 12, 2017 at Harvard University Loeb House

Letter on: A Natural AI Based on The Science of Computational Physics, Biology and Neuroscience: Policy and Societal Significance

December 12, 2023

 

Introduction: 

The astonishing achievements of Large Language Models (LLMs) and Transformer models have exceeded the expectations of even their most ardent supporters. Foundational advances were made in the discovery of the power of Markov processes, tensor networks, transformers, and context-aware attention mechanisms. These advances were guided, not by specific scientific hypotheses, but by sheer engineering ingenuity in the application of mathematical and novel machine learning techniques. Such approaches have relied upon massive computational capabilities to generate and fit billions of parameters into models and outputs that achieve outcomes tied to the expectations of their respective creators. Notwithstanding the massive advances in system performance, potential use cases, and adoption of such systems, no scientific principles nor independent performance standards were referenced or applied to direct the research and development, nor to evaluate the adequacy of their outputs or contextual appropriateness of their performance. Consequently, all current LLMs and Transformer models are “corpus bound,” and their parameter-setting criteria are confined to an inaccessible and undecipherable stochastic “black box”.

The rapid advance and notable successes of LLM and Transformer models in processing information are historically unprecedented and have led to proclamations, by some reputable individuals, of “existential” threats to human civilization and emergent “super intelligence” or “Artificial General Intelligence”. No doubt the potential for intentional abuse (and negligent application) of such powerful and novel technologies is enormous, and likely to dwarf the harms arising in social media contexts. However, suppositions as to what constitutes “intelligence”, much less a “super” intelligence or “AGI”, are ill-founded and foster highly misleading public narratives about the future of intelligent systems generally. This de facto narrative is rooted in popular tropes characteristic of apocalyptic science fiction, but it is not supported by scientific evidence. Contrary to the popular narrative, a substantial and credible body of scientific research exists today, grounded in computational neuroscience, biology, and physics, that supports a much more nuanced, and ultimately positive and tractable, narrative relating to the phenomenon of intelligences. This perspective integrates AI, human intelligence, and other intelligent forms into an overall description and understanding of the interconnected “intelligences” of all “living things”. The emerging field of Diverse Intelligence, which highlights forms of cognition in unconventional substrates, is an essential part of the AI debate and a necessary counterbalance to misguided comparisons that treat the human mind as the essential rubric for evaluating AI.

This alternative scientific narrative that harmonically couples conceptions of “life” and “intelligence” is the precursor to next generation forms of ultra-high capacity, distributed AI composed of self-explanatory, self-reflective, and self-corrective intelligences.

Without a proper and nuanced scientific understanding of current and future AI technologies, policies and regulations intended to manage AI systems and their impacts are likely to be misdirected and ineffective.

Neural systems arising in nature have evolved to achieve an impressive array of adaptive capabilities. Human societies, with their capacity for symbolic communication, have leveraged naturally evolved fitness to the point where human organisms can convey information across time and space, fostering the accumulation of knowledge at an ever-accelerating pace. The human mind and its extensions have advanced to the point where they can create AI.

Why is the human brain-mind relevant to the future of AI? A deep understanding of the structure and functions of the human brain and its emergent mental functions can not only help shape future technological possibilities (with and beyond neural network models), but will also be essential in optimizing how human intelligence integrates and works with artificial intelligence in a pro-social, rather than anti-social, manner. The human brain-mind has evolved in ways that lead to both advantages and limitations. It will be necessary to work synergistically with AI (including, for example, possible brain-computer interfaces in the treatment of diseases), and to guide its ethical development.

These critical misunderstandings are not just scientific and academic; they arise in broader economic, policy, and social-structural contexts as well. For example, due to their high computational costs and dependency on large volumes of training data, LLMs and Transformer models are broadly presumed to be affordable for commercialization only by “Big Tech” companies. Hence the argument is made that Big Tech should be courted and granted special consideration by regulators and deference by the general public. But this need not be the case, as AI technology does not have to be monolithic or concentrated to be successfully commercialized and appropriately regulated. In the very near future, distributed and biologically grounded intelligences will have the capacity to run on mobile devices with far less energy than current systems, and with the intrinsic ability to self-enforce and self-correct their actions and goals, vastly outperforming current and future centralized AI system architectures. The transparent cognitive architectures and edge infrastructures upon which such future intelligences will run will be critical to preserving privacy and security and to attaining the equitable, sustainable, and democratic use of this promising and necessary technology.

Call for action:

We, the undersigned, believe that it is vital at this juncture in the commercialization and regulation of AI that an alternative, science-based understanding of the biological foundations of AI be given public voice, and that interdisciplinary public workshops be convened among legislators, regulators, technologists, investors, scientists, journalists, NGOs, faith communities, business leaders, and the public.

Through the combined efforts of the Active Inference Institute, whose founding principles are grounded in science, the computational physics and biology of living intelligences, and open technologies; the Neuropsychiatry and Society Program, whose focus is bringing an understanding of the human brain-mind to societal issues and technological developments; and the Boston Global Forum, whose mandate is the formation of global policies and the AI World Society model for the inclusive and beneficial application of AI, there can be real, transformative change in the way we approach, develop, and integrate artificial intelligence into our societies.


Signed,

Signatories list:

Krishnashree Achuthan, Dean, Amrita University

Nazli Choucri, MIT professor, Boston Global Forum Board Member

John H. Clippinger, Ph.D., Active Inference Institute; Bioform Labs

Scott L. David, University of Washington – Applied Physics Laboratory

Ramu Damodaran, The First Chief of the United Nations Academic Impact (UNAI), Co-Chair of the United Nations Centennial – BGF and UNAI Initiative in Honor of the United Nations 2045 Centenary, Representative of Boston Global Forum in New York

Governor Michael Dukakis, Co-founder and Chair of Boston Global Forum

Chris Fields, Ph.D., Tufts University; Private consultant

Daniel Ari Friedman, Ph.D., Active Inference Institute; COGSEC

Karl Friston, MD, PhD, University College London

Thomas Kehler, Ph.D., CrowdSmart.ai; CommonGoodAI

Virginia Bleu Knight, Active Inference Institute

Zlatko Lagumdzija, Former Prime Minister of Bosnia & Herzegovina, Ambassador of Bosnia and Herzegovina to the United Nations

Francesco Lapenta, John Cabot University in Rome, Representative of Boston Global Forum in Rome

Enrico Letta, Former Prime Minister of Italy, president of the Institut Jacques Delors

Michael Levin, Ph.D., Director, Levin Labs, Allen Discovery Center, Tufts University

Yasuhide Nakayama, Former Japanese State Minister of Defense and Foreign Affairs

Paul Nemitz, Principal Adviser on the Digital Transition in DG Justice and Consumers, EU Commission, Representative of Boston Global Forum in Brussels and Berlin

Nguyen Anh Tuan, Co-founder and CEO of Boston Global Forum

Martin Nkafu Nkemnkia, the Pontifical Lateran University, Vatican

Beth Noveck, Northeastern University, the first United States Deputy Chief Technology Officer under President Obama at the White House

Thomas Patterson, Harvard Kennedy School professor, Co-founder of Boston Global Forum

Alex Pentland, MIT professor, Boston Global Forum Board Member

Matthew Pirkowski, Bioform Labs

Joshua Shane, Bioform Labs

David A. Silbersweig, MD, Chairman, Department of Psychiatry, Co-Director, Center for the Neurosciences, Brigham and Women’s Hospital, Stanley Cobb Professor of Psychiatry, Harvard Medical School

Bert de Vries, PhD, Professor, Eindhoven University of Technology

What US-Japan naval cooperation in the Gulf of Aden tells us: Roundup on the Four Pillars

Minh Nguyen, BGF Editor

More information has emerged about a naval confrontation in the Horn of Africa. A civilian vessel, owned by an Israeli group and flying the Liberian flag, sent out an SOS call in response to a raid by Somali pirates. The call was apparently ignored by nearby Chinese military vessels. It was a US vessel, the USS Mason, along with an ally reported to be Japan, a Pillar, that finally intervened, boarding the pirate vessel and arresting those on board. Notably, the USS Mason was the same ship that had come under fire earlier that week from two missiles, both of which missed, launched by the Houthis, an Iran-backed rebel group in Yemen. This incident demonstrates that the Four Pillars are important and necessary players on the global stage. Coordination among the Four Pillars means that they can better defend themselves and those who support the rule of law, enhancing democracy across the world. The incident is also a reminder to be wary of Chinese or Russian hegemony, under which the rule of law is disregarded and calls for assistance against lawless actors go unanswered.

However, this naval incident is just one of many to befall US or allied (e.g., Japanese) vessels in recent weeks, in light of the war in Gaza. It has been reported that three commercial vessels have been targeted by the Houthis, and the Pentagon has threatened to take action.

Japan and Vietnam have elevated their relationship to a Comprehensive Strategic Partnership. This will enhance not just economic cooperation but security cooperation as well, as both states have concerns about, and territorial disputes with, China. The upgrade reflects a convergence of economic and security interests on both sides that has been building for decades. Japan is the third Pillar with which Vietnam has attained this status (the others being India and the US), and the second this year.

Something to watch in the Four Pillars space (none of the Pillars have taken action yet) is that Venezuela, a rival of the US in Latin America, has been eyeing the annexation of the western half of its neighbor Guyana, where oil was recently discovered. Most recently, Venezuela’s population “approved” a referendum to claim sovereignty over Essequibo, the aforementioned oil-rich region. It should be noted, however, that this may be mere posturing to distract the domestic population and may not lead to consequential action.

USS Mason and JS Akebono, credit: JMSDF Twitter