Global Enlightenment Mountain as a highlight of AIWS City in 2023

The “Global Enlightenment Mountain” not only stands as a symbolic landmark but also serves as a revolutionary initiative, heralding the reinvention of Silicon Valley. In a departure from the conventional tech-centric landscape, this mountain embodies a paradigm shift by placing a strong emphasis on ethical considerations, global collaboration, and societal well-being.

Global Collaboration Center: Based in Boston, a world intellectual capital and home to Harvard University, MIT, and other top universities, the mountain redefines the very essence of innovation, transforming into a hub for global collaboration. It serves as a dynamic space for fostering partnerships, encouraging collaborations, and facilitating the exchange of knowledge among international research centers, experts, researchers, marketers, and policymakers. This collaborative ethos goes beyond geographical constraints, underlining a collective and inclusive approach to effectively address the multifaceted challenges posed by the AI era.

Societal Impact Focus: Reinventing Silicon Valley, the mountain places a strong emphasis on understanding and mitigating the societal impacts of technology. Its initiatives, events, and educational programs prioritize addressing issues such as job displacement, digital inequality, and the ethical use of AI in diverse cultural contexts.

Tech for Humanity: Redefining the narrative of Silicon Valley, the mountain embraces a “tech for humanity” philosophy. It showcases how technological advancements can be aligned with ethical considerations, cultural sensitivity, and a deep understanding of human values, reinforcing the idea that innovation should serve the betterment of humanity.

Cultural and Spiritual Integration: Departing from the purely technological focus, the mountain integrates cultural and spiritual dimensions. It acknowledges the diverse cultural backgrounds of its global collaborators and incorporates spiritual values into its ethos, fostering an inclusive and holistic approach to technology and innovation.

In essence, the Global Enlightenment Mountain stands as a beacon of change, reinventing Silicon Valley’s trajectory by prioritizing ethical, global, and societal considerations. It embodies a vision where technological innovation is inseparable from a commitment to human values and the well-being of societies worldwide.

Continued troubles in the Red Sea, European summit: Roundup on the Four Pillars

Minh Nguyen is the Editor of the Boston Global Forum and a Shinzo Abe Initiative Fellow. She writes the Four Pillars column in the BGF Weekly newsletter.


It seems the Red Sea has become an ever-growing problem: two companies, shipping giant Maersk and oil giant BP, have decided to stop shipping through the Red Sea due to recent attacks. While the Pillars have been defending themselves and responding to distress calls amid Houthi missile attacks, there is still a large threat to civilian shipping. While not wanting to escalate tensions in the Middle East further, it is necessary that the Pillars come together and find a resolution to the Houthi issue, even a forceful one, as it threatens the global economy, especially trade between Europe and Asia. The US has announced Operation Prosperity Guardian to defend against these attacks.

India, a Pillar, has also been providing naval assistance in different parts of the world. The Indian navy intervened against a Somali pirates’ hijacking earlier this week. In the Pacific, India has sent a warship to Manila to bolster ties after recent clashes between China and the Philippines.

At the European summit, the EU decided to launch membership talks with Ukraine and Moldova, as well as to grant Georgia candidate status. While the Ukrainian counter-offensive hasn’t been successful, there have been some encouraging reports recently: it appears that Russia has lost 87% of the troops it had before the war. The UK has been selected as the headquarters for a fighter plane project with Japan and Italy, two other Pillars.

USS Carney in the Mediterranean Sea on Oct. 23, 2018. Credit: US Navy

Horizon Search — Converging Minds: Charting the Future of Natural AI

Published by Horizon Search; December 16, 2023

In a paradigm-shifting endeavor, the Boston Global Forum heralded a new era in AI development on December 12, 2023, by hosting a roundtable that brought together luminaries from academia and industry. This gathering was not just a meeting of minds but a pivotal juncture marking the birth of a revolutionary approach to artificial intelligence: Natural AI. This concept, rooted in computational physics, biology, and neuroscience, was crystallized in a significant letter titled ‘A Natural AI Based on The Science of Computational Physics, Biology, and Neuroscience: Policy and Societal Significance,’ collectively endorsed by many attendees.

The letter and corresponding discussions represent a seismic shift in the AI landscape, advocating for a nature-based AI that promises to redefine our relationship with technology. Facilitated by the Boston Global Forum in collaboration with the Active Inference Institute, this initiative boldly positions Natural AI at the forefront of global discourse, aiming to pivot AI development towards a path that is inherently ethical, sustainable, and in harmony with the principles of nature. Celebrating its 11th anniversary, the Boston Global Forum underscores this roundtable as a cornerstone event, setting a transformative agenda for Natural AI to be the guiding light of its mission in the years to come.

David Silbersweig, the Stanley Cobb Professor of Psychiatry at Harvard Medical School, called for an AI that is in harmony with human cognition and the broader natural environment, referencing Karl Friston’s free energy principle. His assertion that understanding AI in the context of natural systems can lead to a technology that enhances our world provided a thought-provoking start to the discussions.

“The more we place [AI] within and have an understanding of natural systems and the human mind-brain, the more likely it is that we’re going to be able to have an AI that will resonate with, that will work cooperatively with the human mind-brain and larger natural systems and indeed the whole physical world,” — David Silbersweig

Alex Pentland, a distinguished Professor at MIT and director of MIT Connection Science, highlighted the free energy principle as a foundational concept for AI development. He describes AI as a probability engine, noting the potential of integrating various sensory inputs to mimic human-like cognition. Pentland also explores alternative AI mechanisms, such as models based on C. elegans neurons, emphasizing the development of AI that complements human interaction and decision-making while humans retain control.

“If you treat the AI as another individual and follow the sort of integration rules that we have with people, then you get something where it’s just part of the environment that it doesn’t have agency. You still have the agency.” — Alex Pentland

Alex Pentland speaks on the principles guiding artificial intelligence

John Clippinger, a BioForm Labs co-founder and an MIT Media Lab research scientist, calls for a paradigm shift from an enlightenment to an “entanglement” mentality in AI development. Addressing the technical aspects of AI, he discusses large language models and transformers, acknowledging their complexity as statistical engines. He suggests that these models can lead to the inference of causal structures, such as Markov blankets, with intentional capabilities and representational abilities.

A critical concern for Clippinger is the integration of science and policy, especially in the context of AI. He advocates for a global platform, stressing the importance of preventing the monopolization of AI by commercial interests. This viewpoint stems from his experience in digital identity and data policy, where he has observed the challenges in achieving successful policy integration, particularly in the U.S.

Thomas Kehler, co-founder, CEO, and Chief Scientist of CrowdSmart.ai, brought to the table a focus on the potential of collective intelligence and the concept of adaptive learning. He began by referencing the insights of Judea Pearl, a foundational figure in AI, to critique the current narrow view of AI that heavily relies on historical data. Kehler notes that while large language models excel at generating plausible outputs from this data, this approach only scratches the surface of AI’s potential.

Kehler emphasizes that human intelligence surpasses the mere processing of historical data. It encompasses imaginative thinking about the future and the potential for collective intelligence – the idea that humans can achieve together what they cannot alone. He points out that current AI models fall short of facilitating collaboration and amplifying our collective ability to envision and solve complex problems.

A significant part of Kehler’s discussion revolves around the concept of adaptive learning, as proposed in frameworks like the free energy principle and active inference. Unlike traditional models, which require extensive training before deployment, adaptive learning models can learn instantaneously from experiences. Kehler asserts that intelligence is a shared, constantly evolving concept not adequately captured by existing AI models.

“We can create an augmented collective intelligence that is truly more intelligent than either machine or human alone.” — Thomas Kehler

Furthermore, Kehler touches upon the concept of emergent self-learning, which he sees as inherent in the nature of life and reflected in phenomena like the cooperative movements of birds. He believes active inference captures this principle and can lead AI into new realms of possibility and exploration.

Cameron Kerry, a former Acting Secretary of Commerce for the United States and a distinguished visiting fellow at The Brookings Institution, provided a balanced perspective, highlighting the importance of maintaining ethical boundaries in AI development.

“Let’s not anthropomorphize these models, because that I think can lead to false confidence in AI, false assumptions about the capabilities of the models,” — Cameron Kerry

Kerry notes that current AI models, often described as “probability engines,” are limited in their scope, echoing Stuart Russell’s observation that these models require an extensive number of iterations to achieve what a young child can do in far fewer steps. This comparison highlights a fundamental gap in the efficiency and inferential capabilities of current AI systems compared to human cognition.

Reflecting on the history of AI, Kerry draws parallels between today’s efforts and those of pioneers like Marvin Minsky, who sought to replicate human information processing in computers. Kerry sees the current focus on understanding the biological aspects of intelligence and incorporating them into AI as a continuation of this long-standing goal.

Nazli Choucri, Professor of Political Science at MIT, brought attention to the role of temporality in AI, an often-neglected dimension that has pivotal implications for decision-making and policy. She highlights how decision-making time frames have become increasingly compressed, particularly in the realm of AI and technology. This poses significant challenges, as it requires rapid processing and response in scenarios that might traditionally have allowed for more extended contemplation and analysis.

“What we don’t have is a sense of matching temporal representation with either biological phenomenon or geological phenomenon or conflict phenomenon or decision phenomenon.” — Nazli Choucri

Francesco Lapenta, Director of the Institute of Future and Innovation Studies at John Cabot University, reflects on his own experience of working in AI for 15 years, noting the challenges and the long process involved in making the broader public aware of the complex body of work that underlies AI. He emphasizes that the conversation about AI is not new but has been ongoing for over three decades. However, he points out that the general public’s understanding of these discussions, especially regarding the link between technology and biology in the context of AI, remains limited.

“AI should be complementary to the cognitive, biological, physical qualities of humanity and not the other way around.” — Francesco Lapenta

Paul Nemitz, Principal Adviser on the Digital Transition at the European Commission, voiced the need for AI to maintain the primacy of humanity over technology and democracy over business models. He also brings attention to the concept of multiple intelligences, as proposed by Howard Gardner, and shares his view that we are still at the nascent stages of AI development. However, he acknowledges that AI challenges us to reconsider what it means to be human.

“I would like to see a commitment among all of us and also in the letter, that we are in this not just to develop the greatest ever technology which can do all the things which humans can do, but to maintain the primacy of humanity over technology, the primacy of democracy over business models.” — Paul Nemitz

Nemitz notes his unfamiliarity with the methodology of active inference and the specifics of the institute mentioned in the letter, reflecting a need for more clarity about their contributions to the field compared to established academic institutions. Despite this, he appreciates the letter for introducing elements of plurality and a more human-centered approach to AI. This approach, as he sees it, would include safeguards and drivers that emulate how humans function as social beings within a democracy.

Nam Pham, a Program Specialist at Harvard Kennedy School, acknowledges the significant progress AI has made in recent years and notes that many institutions and governments have started to recognize the need for oversight and control over technological advancements. He observes that policy development tends to lag behind technological advancements, posing a challenge in ensuring technology remains a servant to human needs and interests.

“We need to keep in mind that if we could speed up the train of policy with the visions and the work of the Boston Global Forum, maybe policy can catch up with technology to ensure that technology will serve human beings.” — Nam Pham

Martin Nkafu Nkemnkia, from the Pontifical Lateran University, expressed the eagerness of the Association of African Universities to participate in the global AI discourse, as they are already discussing the role of AI in the future of higher education on the continent. He highlights the advantage of engaging the Association as a collective entity rather than approaching individual universities separately, which would streamline collaboration and the representation of African educational institutions in the global AI discourse.

“We want to catch the train while it is still possible. And then, work with you, we just have to formulate how we want to work with you.” — Martin Nkafu Nkemnkia

In his closing remarks, Tuan Nguyen, co-founder and CEO of the Boston Global Forum, provided an update on the progress and future plans involving the integration and application of artificial intelligence (AI) within various sectors and regions. He outlines the update in three main parts: science and technology applications, policy development, and public engagement with a spiritual dimension.

Nguyen starts by sharing exciting news about the involvement of universities in Vietnam, particularly the University of Information and Communication Technology. This university is significantly engaged in the field, planning substantial investments in health, cybersecurity, and AI, supported by the Ministry of Defense. He also mentions Amrita University in India as another active participant in these initiatives.

He then highlighted the upcoming AIWS (Artificial Intelligence World Society) roundtable and the importance of refining and expanding the initiatives, incorporating feedback from this discussion. Nguyen announces a forthcoming conference on April 30th at Harvard University’s Loeb House, which will focus not only on science and technology but also on policy and public engagement.

A notable aspect of Nguyen’s vision is the incorporation of spiritual values into the conversation about AI. He talks about working closely with spiritual leaders and religious figures from various faiths who have shown support and interest in the AIWS initiative. This unique approach aims to integrate spiritual values into the development and application of AI, recognizing the human and ethical dimensions of technological progress. He concludes by highlighting the achievements of the Boston Global Forum and its contribution to the global discourse on AI, expressing optimism for continued collaboration and contribution in the coming years.

Pope calls for treaty regulating AI, warning of potential for ‘technological dictatorship’

Pope Francis has called for an international treaty to regulate the use of Artificial Intelligence, warning that the new technology risks causing a “technological dictatorship” which would threaten peace and democracy.

The 86-year-old pontiff says he wants world leaders to agree to a “binding international treaty” on AI developed within an ethical framework. Francis made the appeal in his annual message for the World Day of Peace, which is marked by the Catholic Church every January 1.

“The unique human capacity for moral judgment and ethical decision-making is more than a complex collection of algorithms, and that capacity cannot be reduced to programming a machine, which as ‘intelligent’ as it may be, remains a machine,” Francis wrote.

“Any number of urgent questions need to be asked. What will be the consequences, in the medium and long term, of these new digital technologies? And what impact will they have on individual lives and on societies, on international stability and peace?”

The pope issued a strong warning against AI-controlled weapons systems which he called a “cause for grave ethical concern” while also raising alarm about the misuse of technology including “interference in elections, the rise of a surveillance society” and growing inequalities.

“All these factors risk fuelling conflicts and hindering peace,” the pope said.

Despite his cautious message, Francis did praise the “impressive achievements of science and technology,” insisting that AI also offers “exciting opportunities.”

Read the article in full here: https://edition.cnn.com/2023/12/14/tech/pope-francis-ai-warning-technological-dictatorship/index.html

Looking ahead to 2024, Boston Global Forum will develop a special report and organize a conference that will bring together spiritual leaders and esteemed religious figures. This gathering aims to deepen the exploration and development of spiritual values for AIWS, marking a significant milestone in the journey toward an ethically grounded and enlightened AI World Society.

Webinar of International Association of University Presidents “Human Rights 2023: To Innovate, To Include” – Human Rights in AI World Society

The International Association of University Presidents (IAUP) celebrated 75 years of the Universal Declaration of Human Rights with an online webinar on December 11, 2023 (11:00 am – 12:30 pm EST).

Human Rights 2023: To Innovate, To Include

Nguyen Anh Tuan, CEO of the Boston Global Forum, addressed human rights in the Artificial Intelligence World Society.


Speakers include:

Dr. Fernando Leon Garcia, President, International Association of University Presidents

Zlatko Lagumdzija, Permanent Representative of Bosnia and Herzegovina to the United Nations (and a regular speaker at BGF events), former Prime Minister of Bosnia and Herzegovina

Cora Weiss, Peace Educator and Nobel Peace Prize nominee

Nguyen Anh Tuan, CEO of the Boston Global Forum, Co-founder of AI World Society (AIWS)

Alessandra Nilo and Juliana Cesar, Vice President of the Civil-20 engagement group of the 2024 G-20 chaired by Brazil, affiliated with Gestos – HIV+, Communication and Gender Issues NGO, Recife, Brazil

Dr. Mihir Kanade, Academic Coordinator of the University for Peace (UPEACE), the Head of its Department of International Law, and the Director of the UPEACE Human Rights Centre; independent expert of the UN Human Rights Council’s Expert Mechanism on the Right to Development

Dr. Ş. İlgü Özler, founder and director of the SUNY Global Engagement programme, New York

Beth Nielsen Chapman, two-time Grammy-nominated, Nashville-based singer and songwriter

The moderator: Ramu Damodaran, the First Chief of the United Nations Academic Impact, Co-Chair of the United Nations Centennial Initiative

Link:

https://www.youtube.com/watch?v=V9anldnH_Fo

https://www.iaup.org/event/human-rights-2023-to-innovate-to-include/

Cora Weiss, Peace Educator and Nobel Peace Prize nominee at the event