Global Enlightenment Leaders play a significant role in building AIWS in 2023

The Global Enlightenment Leaders of 2023 have been instrumental in shaping the trajectory of the AI World Society (AIWS), leading initiatives that transcend conventional boundaries and exemplify a commitment to ethical, spiritual, inclusive, and forward-thinking practices.

  1. BGF Frameworks for Global Governance of AI: Global Enlightenment Leaders have played a pivotal role in the development and advocacy of the Boston Global Forum (BGF) Frameworks for the global governance of AI. Actively participating in the crafting of guidelines, they prioritize transparency, accountability, and the responsible use of artificial intelligence to ensure positive societal impacts.
  2. Spiritual Values Integration: Going beyond technical considerations, Amma, recipient of the 2023 World Leader for Peace and Security Award, has been a driving force in integrating spiritual values into the discourse around AIWS. Recognizing the significance of ethical and spiritual principles, Amma promotes a harmonious relationship between technology and spirituality in guiding the development and deployment of AI technologies.
  3. Human-Centric AI: Emphasizing a human-centric approach to AI development, Global Enlightenment Leaders advocate for technologies that prioritize human well-being. Their efforts have been pivotal in shaping policies and practices that focus on augmenting, rather than replacing, human capabilities. This approach fosters a symbiotic relationship between humans and AI, ensuring the enhancement of human potential.

The Global Enlightenment Leaders have not only been pivotal in advocating for the BGF Frameworks for the global governance of AI but have also played key roles in integrating spiritual values into AI discourse and promoting human-centric AI development. Their multifaceted contributions reflect a comprehensive commitment to guiding AIWS with ethical, spiritual, and human-centered principles.

Global Enlightenment Mountain as a highlight of AIWS City in 2023


The “Global Enlightenment Mountain” not only stands as a symbolic landmark but also serves as a revolutionary initiative, heralding the reinvention of Silicon Valley. In a departure from the conventional tech-centric landscape, this mountain embodies a paradigm shift by placing a strong emphasis on ethical considerations, global collaboration, and societal well-being.

Global Collaboration Center: Based in Boston, a world intellectual capital home to Harvard University, MIT, and other top universities, the mountain redefines the very essence of innovation, transforming into a hub for global collaboration. It serves as a dynamic space for fostering partnerships, encouraging collaborations, and facilitating the exchange of knowledge among international research centers, experts, researchers, marketers, and policymakers. This collaborative ethos transcends geographical constraints, underlining a collective and inclusive approach to effectively address the multifaceted challenges posed by the AI era.

Societal Impact Focus: Reinventing Silicon Valley, the mountain places a strong emphasis on understanding and mitigating the societal impacts of technology. Its initiatives, events, and educational programs prioritize addressing issues such as job displacement, digital inequality, and the ethical use of AI in diverse cultural contexts.

Tech for Humanity: Redefining the narrative of Silicon Valley, the mountain embraces a “tech for humanity” philosophy. It showcases how technological advancements can be aligned with ethical considerations, cultural sensitivity, and a deep understanding of human values, reinforcing the idea that innovation should serve the betterment of humanity.

Cultural and Spiritual Integration: Departing from the purely technological focus, the mountain integrates cultural and spiritual dimensions. It acknowledges the diverse cultural backgrounds of its global collaborators and incorporates spiritual values into its ethos, fostering an inclusive and holistic approach to technology and innovation.

In essence, the Global Enlightenment Mountain stands as a beacon of change, reinventing Silicon Valley’s trajectory by prioritizing ethical, global, and societal considerations. It embodies a vision where technological innovation is inseparable from a commitment to human values and the well-being of societies worldwide.

Continued troubles in the Red Sea, European summit: Roundup on the Four Pillars


Minh Nguyen is the Editor of the Boston Global Forum and a Shinzo Abe Initiative Fellow. She writes the Four Pillars column in the BGF Weekly newsletter.

 

It seems the Red Sea has become an ever-growing problem: two companies, shipping giant Maersk and oil giant BP, have decided to stop shipping through the Red Sea due to recent attacks. While the Pillars have been defending themselves and responding to distress calls resulting from Houthi missile attacks, there is still a large threat to civilian shipping. While not wanting to escalate tensions in the Middle East further, it is necessary that the Pillars come together and find resolutions to the Houthi issue, even a forceful one, as it threatens aspects of the global economy, especially trade between Europe and Asia. The US has announced Operation Prosperity Guardian to defend against these attacks.

India, a Pillar, has been providing naval assistance in different parts of the world as well. The Indian navy intervened against a hijacking by Somali pirates earlier this week. In the Pacific, it has sent a warship to Manila to bolster ties, after recent clashes between China and the Philippines.

At the European summit, the EU decided to launch membership talks with Ukraine and Moldova, as well as to grant Georgia candidate status. While the Ukrainian counter-offensive hasn’t been successful, there have been some good reports recently: it appears that Russia has lost 87% of the troops it had prior to the war. The UK has been selected as the headquarters for a fighter plane project with Japan and Italy, two other Pillars.

USS Carney in the Mediterranean Sea on Oct. 23, 2018. Credit: US Navy

Horizon Search — Converging Minds: Charting the Future of Natural AI


Published by Horizon Search; December 16, 2023

In a paradigm-shifting endeavor, the Boston Global Forum heralded a new era in AI development on December 12, 2023, by hosting a roundtable that brought together luminaries from academia and industry. This gathering was not just a meeting of minds but a pivotal juncture marking the birth of a revolutionary approach to artificial intelligence: Natural AI. This concept, rooted in computational physics, biology, and neuroscience, was crystallized in a significant letter titled ‘A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance,’ collectively endorsed by many attendees.

The letter and corresponding discussions represent a seismic shift in the AI landscape, advocating for a nature-based AI that promises to redefine our relationship with technology. Facilitated by the Boston Global Forum in collaboration with the Active Inference Institute, this initiative boldly positions Natural AI at the forefront of global discourse, aiming to pivot AI development towards a path that is inherently ethical, sustainable, and in harmony with the principles of nature. Celebrating its 11th anniversary, the Boston Global Forum underscores this roundtable as a cornerstone event, setting a transformative agenda for Natural AI to be the guiding light of its mission in the years to come.

David Silbersweig, the Stanley Cobb Professor of Psychiatry at Harvard Medical School, called for an AI that is in harmony with human cognition and the broader natural environment, referencing Karl Friston’s free energy principle. His assertion that understanding AI in the context of natural systems can lead to a technology that enhances our world provided a thought-provoking start to the discussions.

“The more we place [AI] within and have an understanding of natural systems and the human mind-brain, the more likely it is that we’re going to be able to have an AI that will resonate with, that will work cooperatively with the human mind-brain and larger natural systems and indeed the whole physical world,” — David Silbersweig

Alex Pentland, a distinguished Professor at MIT and a director of MIT Connection Science, highlighted the free energy principle as a foundational concept for AI development. He described AI as a probability engine, pointing to the potential of integrating various sensory inputs to mimic human-like conditions. Pentland also explored alternative AI mechanisms, such as models based on C. elegans neurons, emphasizing the development of AI that complements human interaction and decision-making while ensuring that humans retain control.

“If you treat the AI as another individual and follow the sort of integration rules that we have with people, then you get something where it’s just part of the environment that it doesn’t have agency. You still have the agency.” — Alex Pentland

Alex Pentland talks on the principles guiding artificial intelligence

John Clippinger, a BioForm Labs co-founder and an MIT Media Lab research scientist, calls for a paradigm shift from enlightenment to an “entanglement” mentality in AI development. Addressing the technical aspects of AI, he discusses large language models and transformers, acknowledging their complexity as statistical engines. He suggests that these models can lead to the inference of causal structures, such as Markov blankets, which have intentional capabilities and representation abilities.

A critical concern for Clippinger is the integration of science and policy, especially in the context of AI. He advocates for a global platform, stressing the importance of preventing the monopolization of AI by commercial interests. This viewpoint stems from his experience in digital identity and data policy, where he has observed the challenges in achieving successful policy integration, particularly in the U.S.

Thomas Kehler, co-founder, CEO, and Chief Scientist of CrowdSmart.ai, brought to the table a focus on the potential of collective intelligence and the concept of adaptive learning. He began by referencing the insights of Judea Pearl, a foundational figure in AI, to critique the current narrow view of AI that heavily relies on historical data. Kehler notes that while large language models excel at generating plausible outputs from this data, this approach only scratches the surface of AI’s potential.

Kehler emphasizes that human intelligence surpasses the mere processing of historical data. It encompasses imaginative thinking about the future and the potential for collective intelligence – the idea that humans can achieve together what they cannot alone. He points out that current AI models fall short of facilitating collaboration and amplifying our collective ability to envision and solve complex problems.

A significant part of Kehler’s discussion revolves around the concept of adaptive learning, as proposed in frameworks like the free energy principle and active inference. Unlike traditional models, which require extensive training before deployment, adaptive learning models can learn instantaneously from experiences. Kehler asserts that intelligence is a shared, constantly evolving concept not adequately captured by existing AI models.

“We can create an augmented collective intelligence that is truly more intelligent than either machine or human alone.” — Thomas Kehler

Furthermore, Kehler touches upon the concept of emergent self-learning, which he sees as inherent in the nature of life and reflected in phenomena like the cooperative movements of birds. He believes active inference captures this principle and can lead AI into new realms of possibility and exploration.

Cameron Kerry, a former Acting Secretary of Commerce for the United States and a distinguished visiting fellow at The Brookings Institution, provided a balanced perspective, highlighting the importance of maintaining ethical boundaries in AI development.

“Let’s not anthropomorphize these models, because that I think can lead to false confidence in AI, false assumptions about the capabilities of the models,” — Cameron Kerry

Kerry notes that current AI models, often described as “probability engines,” are limited in their scope, echoing Stuart Russell’s observation that these models require an extensive number of iterations to achieve what a young child can do in far fewer steps. This comparison highlights a fundamental gap in the efficiency and inferential capabilities of current AI systems compared to human cognition.

Reflecting on the history of AI, Kerry draws parallels between today’s efforts and those of pioneers like Marvin Minsky, who sought to replicate human information processing in computers. Kerry sees the current focus on understanding the biological aspects of intelligence and incorporating them into AI as a continuation of this long-standing goal.

Nazli Choucri, Professor of Political Science at MIT, brought attention to the role of temporality in AI, an often-neglected dimension that has pivotal implications for decision-making and policy. She highlights how decision-making time frames have become increasingly compressed, particularly in the realm of AI and technology. This poses significant challenges, as it requires rapid processing and response in scenarios that might traditionally have allowed for more extended contemplation and analysis.

“What we don’t have is a sense of matching temporal representation with either biological phenomenon or geological phenomenon or conflict phenomenon or decision phenomenon.” — Nazli Choucri

Francesco Lapenta, Director of the Institute of Future and Innovation Studies at John Cabot University, reflects on his own experience of working in AI for 15 years, noting the challenges and the long process involved in making the broader public aware of the complex body of work that underlies AI. He emphasizes that the conversation about AI is not new but has been ongoing for over three decades. However, he points out that the general public’s understanding of these discussions, especially regarding the link between technology and biology in the context of AI, remains limited.

“AI should be complementary to the cognitive, biological, physical qualities of humanity and not the other way around.” — Francesco Lapenta

Paul Nemitz, Principal Adviser on the Digital Transition at the European Commission, voiced the need for AI to maintain the primacy of humanity over technology and democracy over business models. He also brings attention to the concept of multiple intelligences, as proposed by Howard Gardner, and shares his view that we are still at the nascent stages of AI development. However, he acknowledges that AI challenges us to reconsider what it means to be human.

“I would like to see a commitment among all of us and also in the letter, that we are in this not just to develop the greatest ever technology which can do all the things which humans can do, but to maintain the primacy of humanity over technology, the primacy of democracy over business models.” — Paul Nemitz

Nemitz notes his unfamiliarity with the methodology of active inference and the specifics of the institute mentioned in the letter, reflecting a need for more clarity about their contributions to the field compared to established academic institutions. Despite this, he appreciates the letter for introducing elements of plurality and a more human-centered approach to AI. This approach, as he sees it, would include safeguards and drivers that emulate how humans function as social beings within a democracy.

Nam Pham, a Program Specialist at Harvard Kennedy School, acknowledges the significant progress AI has made in recent years and notes that many institutions and governments have started to recognize the need for oversight and control over technological advancements. He observes that policy development tends to lag behind technological advancements, posing a challenge in ensuring technology remains a servant to human needs and interests.

“We need to keep in mind that if we could speed up the train of policy with the visions and the work of the Boston Global Forum, maybe policy can catch up with technology to ensure that technology will serve human beings.” — Nam Pham

Martin Nkafu Nkemnkia, from the Pontifical Lateran University, expressed the eagerness of the Association of African Universities to participate in the global AI discourse, as they are already discussing the role of AI in the future of higher education on the continent. He highlights the expediency of utilizing the Association as a collective entity rather than approaching individual universities separately, which would streamline collaboration and representation of African educational institutions in global AI discourse.

“We want to catch the train while it is still possible. And then, work with you, we just have to formulate how we want to work with you.” — Martin Nkafu Nkemnkia

In his closing remarks, Tuan Nguyen, co-founder and CEO of the Boston Global Forum, provided an update on the progress and future plans involving the integration and application of artificial intelligence (AI) within various sectors and regions. He outlines the update in three main parts: science and technology applications, policy development, and public engagement with a spiritual dimension.

Nguyen starts by sharing exciting news about the involvement of universities in Vietnam, particularly the University of Information and Communication Technology. This university is significantly engaged in the field, planning substantial investments in health, cybersecurity, and AI, supported by the Ministry of Defense. He also mentions Amrita University in India as another active participant in these initiatives.

He then highlights the upcoming AIWS (AI World Society) roundtable and the importance of refining and expanding the initiatives by incorporating feedback from this discussion. Nguyen announces a forthcoming conference on April 30, 2024, at Harvard University’s Loeb House, which will focus not only on science and technology but also on policy and public engagement.

A notable aspect of Nguyen’s vision is the incorporation of spiritual values into the conversation about AI. He talks about working closely with spiritual leaders and religious figures from various faiths who have shown support and interest in the AIWS initiative. This unique approach aims to integrate spiritual values into the development and application of AI, recognizing the human and ethical dimensions of technological progress. He concludes by highlighting the achievements of the Boston Global Forum and its contribution to the global discourse on AI, expressing optimism for continued collaboration and contribution in the coming years.

Pope calls for treaty regulating AI, warning of potential for ‘technological dictatorship’


Pope Francis has called for an international treaty to regulate the use of Artificial Intelligence, warning that the new technology risks causing a “technological dictatorship” which would threaten peace and democracy.

The 86-year-old pontiff says he wants world leaders to agree to a “binding international treaty” on AI developed within an ethical framework. Francis made the appeal in his annual message for the World Day of Peace which is marked by the Catholic Church every January 1.

“The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine, which as ‘intelligent’ as it may be, remains a machine,” Francis wrote.

“Any number of urgent questions need to be asked. What will be the consequences, in the medium and long term, of these new digital technologies? And what impact will they have on individual lives and on societies, on international stability and peace?”

The pope issued a strong warning against AI-controlled weapons systems which he called a “cause for grave ethical concern” while also raising alarm about the misuse of technology including “interference in elections, the rise of a surveillance society” and growing inequalities.

“All these factors risk fuelling conflicts and hindering peace,” the pope said.

Despite his cautious message, Francis did praise the “impressive achievements of science and technology,” insisting that AI also offers “exciting opportunities.”

Read the article in full here: https://edition.cnn.com/2023/12/14/tech/pope-francis-ai-warning-technological-dictatorship/index.html

Looking ahead to 2024, Boston Global Forum will develop a special report and organize a conference that will bring together spiritual leaders and esteemed religious figures. This gathering aims to deepen the exploration and development of spiritual values for AIWS, marking a significant milestone in the journey toward an ethically grounded and enlightened AI World Society.

Webinar of International Association of University Presidents “Human Rights 2023: To Innovate, To Include” – Human Rights in AI World Society


The International Association of University Presidents (IAUP) celebrates 75 years of the Universal Declaration of Human Rights with an online webinar on December 11, 2023 (11:00 am – 12:30 pm EST).

Human Rights 2023: To Innovate, To Include

Nguyen Anh Tuan, CEO of the Boston Global Forum, addressed human rights in Artificial Intelligence World Society.

 

Speakers include:

Dr. Fernando Leon Garcia, President, International Association of University Presidents

Zlatko Lagumdzija, Permanent Representative of Bosnia and Herzegovina to the United Nations (and a regular speaker at BGF events), former Prime Minister of Bosnia and Herzegovina

Cora Weiss, Peace Educator and Nobel Peace Prize nominee

Nguyen Anh Tuan, CEO of the Boston Global Forum, Co-founder of AI World Society (AIWS)

Alessandra Nilo, Juliana Cesar, Vice President of the Civil-20 engagement group of the 2024 G-20 chaired by Brazil, affiliated with Gestos – HIV+, Communication and Gender Issues NGO, Recife, Brazil

Dr. Mihir Kanade, Academic Coordinator of the University for Peace (UPEACE), the Head of its Department of International Law, and the Director of the UPEACE Human Rights Centre; independent expert of the UN Human Rights Council’s Expert Mechanism on the Right to Development

Dr Ş. İlgü Özler, founder and director of the SUNY Global Engagement programme, New York

Beth Nielsen Chapman, Twice Grammy-nominated Nashville based singer and songwriter

The moderator: Ramu Damodaran, the First Chief of the United Nations Academic Impact, Co-Chair of the United Nations Centennial Initiative

Link:

https://www.youtube.com/watch?v=V9anldnH_Fo

https://www.iaup.org/event/human-rights-2023-to-innovate-to-include/

Cora Weiss, Peace Educator and Nobel Peace Prize nominee at the event

 

Issues with Freedom of Navigation in the Middle East: Roundup on the Four Pillars


Minh Nguyen is the Editor of the Boston Global Forum and a Shinzo Abe Initiative Fellow. She writes the Four Pillars column in the BGF Weekly newsletter.

 

The US and European states, two of the Pillars, have recently been facing threats in the Middle East from Iranian proxies. The American embassy in Baghdad was, and continues to be, under fire from militant groups; it was hit with seven mortar rounds. These are semi-coordinated attacks, but it is unclear whether they are carried out under direct orders from Iran. France intercepted two drones that targeted its ship in the Red Sea, launched from Yemen (read: the Houthis). This comes on the back of the Houthis attacking the American ship USS Mason and making the region’s waterways hostile.

The Pillars have been weighing options to address the recent threats to freedom of navigation. However, the Biden administration is considering a more measured response, in part for fear of igniting a powder keg in the Middle East at a time when there is a need to draw back and focus on the Asia-Pacific.

Economically, Apple and other companies continue to move away from China and invest more in India, a Pillar, and in other countries such as Indonesia and Vietnam. This matters as China becomes a riskier place to invest and decoupling continues in preparation for potential geopolitical confrontations.

French ship FREMM Languedoc

A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance


Boston, December 12, 2023 – In a landmark initiative, prominent scientists, strategists and policy makers are joining forces to redefine the public narrative surrounding artificial intelligence (AI), grounding it in the science of computational physics, biology, and neuroscience. The Active Inference Institute and the Boston Global Forum, through a joint letter signed by leading experts, announce a pivotal effort to reshape the discourse on AI.

 

Titled “A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance,” the letter addresses the remarkable advancements in AI, especially Large Language Models (LLMs) and Transformer models. While acknowledging their achievements, the letter underscores concerns about the lack of scientific principles guiding their development and the absence of independent performance standards.

 

Signatories emphasize that current LLMs and Transformer models are “corpus bound,” relying on parameters that remain enigmatic and inaccessible to scrutiny. Despite their capabilities, a critical understanding of these systems and their implications is missing.

 

Challenging the popular narrative of AI as an existential threat or the emergence of “super intelligence,” the letter argues for a more nuanced view rooted in computational neuroscience, biology, and physics, highlighting the interconnected “intelligences” of all living things.

The signatories stress that a nuanced scientific understanding of AI is crucial for effective policies and regulations. They highlight the relevance of studying the human brain-mind to optimize the integration of human intelligence with artificial intelligence in a pro-social manner.

 

Addressing economic, policy, and social misunderstandings surrounding AI, the letter challenges the notion that only “Big Tech” companies can afford AI commercialization.

 

It anticipates a future where distributed and biologically grounded intelligences can operate on mobile devices with lower energy requirements, promising transparency, privacy, security, and equitable access.

In response to these challenges, the signatories propose interdisciplinary public workshops to convene legislators, regulators, technologists, investors, scientists, journalists, NGOs, faith communities, the public, and business leaders. The goal is to foster an alternative, science-based understanding of the biological foundations of AI and transform the way AI is approached, developed, and integrated into society.

This initiative is a collaborative effort between the Active Inference Institute, grounded in the science of computational physics and biology, the Neuropsychiatry and Society Program, focused on understanding the societal implications of the human brain-mind, and the Boston Global Forum, dedicated to forming global policies and an AI World Society model for the inclusive and beneficial application of AI.

 

Boston Global Forum is set to host the AIWS Roundtable on December 12, 2023, to announce and discuss the letter and initiative. Additionally, a significant conference is planned at Harvard University Loeb House on April 30, 2024, focusing on this pioneering initiative.

For more information, please visit:

 

Contact Information:

Jim McManus,

Principal Partner of Slowey McManus Communications

Email: [email protected]

Phone: 617-413-9232

 

John Clippinger,

Co-Founder, BioForm Labs

Email: [email protected]

 

About the Active Inference Institute:

https://www.activeinference.org/about/strategy

About the Neuropsychiatry and Society Program:

https://www.brighamandwomens.org/psychiatry/brigham-psychiatric-specialties/psychiatry-law-and-society

 

About the Boston Global Forum:

The Boston Global Forum (BGF) offers a venue for leaders, strategists, thinkers, and innovators to contribute to the process of Remaking the World – Toward an Age of Global Enlightenment.

The BGF introduced core concepts that are shaping groundbreaking international initiatives, most notably, the Social Contract for the AI Age, AI International Law and Accord, the Global Alliance for Digital Governance, the AI World Society (AIWS) Ecosystem, and the AIWS City.

Celebrating the Birthday of Boston Global Forum December 12, 2012 with the book honoring Governor Dukakis and new significant initiatives


The Boston Global Forum celebrates its 11th birthday (it was founded on December 12, 2012) by honoring Governor Michael Dukakis, Co-founder and Chair of the BGF, with the book “From the Massachusetts Miracle to the Age of Global Enlightenment.”

Harvard Professor David Silbersweig wrote in the book:

“Governor Dukakis has been a beacon of perspective, wisdom, graciousness and integrity for the Boston Global Forum, for our state and for the world.  He has brought deep experience and ability to get right to the heart of the matter, rooted in ethics and knowledge of politics.  He has exemplified leadership and given a platform for others to lead for good.  He bridges the latest advances with age-old human nature to maximize the impact of the BGF and AIWS, in its partnership with other leading international organizations.  He reminds us all of common human decency, of how we as societies are falling short, and how we need to keep our eye on the most important elements to improve local, national and global lives.”

Read or download the book here:

https://bostonglobalforum.org/publications/from-the-massachusetts-miracle-to-the-age-of-global-enlightenment/

On December 12, 2023, BGF and the Active Inference Institute will officially launch the letter “A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance” and the AIWS Natural AI Initiative.

Signatories include esteemed leaders, strategists, scholars such as Governor Michael Dukakis, former Prime Minister of Italy Enrico Letta, Nazli Choucri, Beth Noveck, Alex Pentland, John Clippinger, David Silbersweig, Thomas Patterson, Nguyen Anh Tuan, and others.

On December 12, 2023, BGF will publish BGF CEO Nguyen Anh Tuan’s essay “Building Spiritual Values for AI World Society.” BGF collaborates closely with spiritual leaders and religious figures to gather esteemed values, contributing to the creation of the spiritual values framework for AIWS. Notably, Amma, a revered spiritual leader, actively supports and participates in the development of AIWS Spiritual Values.

Governor Michael Dukakis, Estonian President Toomas Hendrik Ilves and Nguyen Anh Tuan at the launch of AI World Society on December 12, 2017 at Harvard University Loeb House