by Editor | Feb 12, 2024 | News
Minh Nguyen is the Editor of the Boston Global Forum and a Shinzo Abe Initiative Fellow. She writes the Four Pillars column in the BGF Weekly newsletter.
There has been little movement on the Four Pillars themselves this past week, but we continue to provide updates on the stories of the year.
After the EU approved an aid package to Ukraine last week, the US is looking to commit a package of its own: a foreign aid bill has advanced in the Senate, though it remains hostage to domestic political theater. Although long overdue, the aid would still be welcome. It is also important that the US sustain aid to Ukraine this year as a bridge to 2025, when European production truly comes online. In an election year, the direction of American support may change, and so Europe, especially Germany, is preparing to step up its efforts in case the US draws back. While Europe should have been more wary of what Russia could do, it may now have to go it alone and begin preparing for potential conflicts in the East.
Furthermore, European enlargement is on the horizon, with plans and projects drawn up by the Commission in recent months. While the EU, itself a Pillar, has been in internal debate over member states leaving and the effectiveness of a federal Europe, the events of recent years have shown that both NATO and EU enlargement are necessary to maintain global democracy and prosperity.

Axel Heimken / AFP
by Editor | Feb 12, 2024 | Event Updates, News
We are excited to unveil the Boston Global Forum Conference on Natural AI, scheduled to take place on April 30, 2024, at the esteemed Harvard University Loeb House. This one-day symposium promises to be a landmark gathering of luminaries, specialists, and trailblazers converging to probe the transformative capabilities of Natural AI in building the AI World Society (AIWS) and shaping our collective destiny.
The conference will unfold across five vibrant sessions, meticulously curated to dissect pivotal aspects of Natural AI and its far-reaching implications for society, ethics, and technology.
Session 1: Unveiling the Depths of Human Mind and Brain
Guided by David Silbersweig and featuring distinguished guest speakers, Session 1 will unravel the intricacies of the human mind’s innate intelligence and intricate systems. Discourses will traverse societal mental well-being, interpersonal dynamics, and the ever-evolving terrain of brain-computer interface and human-computer interaction.
Session 2: Navigating the Expanse of Computer Intelligence
Under the stewardship of Tom Kehler and John Clippinger, Session 2 will plunge into the realm of computer intelligence and intricate systems. Ethical quandaries in AI and the nuances of collective decision-making will take precedence, shedding light on pathways for conscientious AI advancement.
Session 3: Confronting Societal and Global Imperatives
A prestigious panel comprising BGF luminaries alongside governmental, policy, non-profit, and corporate titans will assemble in Session 3. Together, they will tackle the societal, political, and global challenges entrenched in the pursuit of Natural AI.
Session 4: Synthesizing Insights and Paving the Way Forward
In Session 4, participants will engage in vigorous dialogues aimed at synthesizing the insights gleaned from preceding sessions. Themes such as unified complex systems, the Free Energy Principle, and the odyssey toward a natural, ethical AI will be explored, culminating in a collective vision of how Natural AI can surmount real-world challenges. Additionally, BGF will introduce the AIWS Angel Initiative, heralding a superlative Natural AI Assistant.
Session 5: Envisioning Tomorrow and Taking Decisive Action
As the conference draws to a close, Session 5 will chart a roadmap for decisive action and collaboration. Attendees will strategize on actionable plans, forge impactful partnerships, and explore avenues to amplify the resonance of Natural AI initiatives. Furthermore, BGF will bestow the prestigious 2024 World Leader in AIWS Award, honoring those who have made exemplary contributions to Natural AI and to building a more ethical and inclusive AI World Society.
Join us on this remarkable journey as we unlock the boundless potential of Natural AI to sculpt a future brimming with promise, prosperity, and ethical stewardship.

by Editor | Feb 12, 2024 | Global Alliance for Digital Governance
The US government has created an artificial intelligence safety advisory group, including AI creators, users, and academics, with the goal of putting some guardrails on AI use and development.
The new US AI Safety Institute Consortium (AISIC), part of the National Institute of Standards and Technology, is tasked with coming up with guidelines for red-teaming AI systems, evaluating AI capacity, managing risk, ensuring safety and security, and watermarking AI-generated content.
On Thursday, the US Department of Commerce, NIST’s parent agency, announced both the creation of AISIC and a list of more than 200 participating companies and organizations. Amazon.com, Carnegie Mellon University, Duke University, the Free Software Foundation, and Visa are all members of AISIC, as well as several major developers of AI tools, including Apple, Google, Microsoft, and OpenAI.
The consortium “will ensure America is at the front of the pack” in setting AI safety standards while encouraging innovation, US Secretary of Commerce Gina Raimondo said in a statement. “Together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
In addition to the announcement of the new consortium, the Biden administration this week named Elizabeth Kelly, a former economic policy adviser to the president, as director of the newly formed US Artificial Intelligence Safety Institute (USAISI), an organization within NIST that will house AISIC.
It’s unclear whether the coalition’s work will lead to regulations or new laws. While President Joe Biden issued an Oct. 30 executive order on AI safety, the timeline for the consortium’s work is up in the air. Furthermore, if Biden loses the presidential election later this year, momentum for AI regulations could stall.
However, Biden’s recent executive order suggests some regulation is needed. “Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks,” the executive order says. “This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”
Among Biden’s goals:
- Require developers of AI systems to share their safety test results with the U.S. government.
- Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Protect US residents against AI-enabled fraud and deception.
- Establish a cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
The AI Safety Institute Consortium seeks contributions from its members in several of those areas, notably around the development of testing tools and industry standards for safe AI deployment.
Meanwhile, lawmakers have introduced dozens of AI-related bills in the US Congress during the 2023-24 session. The Artificial Intelligence Bug Bounty Act would require the Department of Defense to create a bug bounty program for the AI tools it uses. The Block Nuclear Launch by Autonomous Artificial Intelligence Act would prohibit the use of federal funds for autonomous weapons systems that can launch nuclear weapons without meaningful human intervention. In an election year, however, it is difficult to pass bills in Congress.
Several prominent figures, including Elon Musk and the late Stephen Hawking, have raised concerns about AI, including the far-off threat that AI could eventually take control of the world. Nearer-term concerns include the use of AI to create bioweapons, launch new cyberattacks, or run disinformation campaigns.
But others, including venture capitalist Marc Andreessen, have suggested that many concerns about AI are overblown.
Andreessen, in a lengthy June 6, 2023 blog post, argued that AI will save the world. He called for no regulatory barriers “whatsoever” on open-source AI development, because of the benefits to students learning to build AI systems.
However, he wrote, opportunists with a chance to profit from regulation have created a “moral panic” about the dangers of AI as a way to force new restrictions, regulations, and laws. Leaders of existing AI companies “stand to make more money if regulatory barriers are erected that form a cartel of government-blessed AI vendors protected from new startup and open source competition.”
The original article was published at Computerworld: https://www.computerworld.com/article/3712863/us-creates-advisory-group-to-consider-ai-regulation.html

by Editor | Feb 12, 2024 | News
President Sauli Niinistö, recipient of the 2018 World Leader for Peace and Security Award and a Global Enlightenment Leader, discussed his position on nuclear weapons.
In a recent interview with the Finnish newspaper IS.fi, Finnish President Sauli Niinistö made headlines with his firm stance on the nuclear weapons debate, diverging sharply from the views of Alexander Stubb, a fellow politician and presidential candidate. Niinistö's comments come at a time when the topic of nuclear armament is increasingly contentious, not just within Finland but across the globe.
During the state opening of Parliament, when asked about the possibility of revisiting the nuclear energy law and the notion of Finland harboring nuclear weapons, President Niinistö responded emphatically. “I have sometimes noted that Finland has no need to open a discussion on nuclear weapons,” he stated, underscoring his position with a clarity that left little room for ambiguity.
Niinistö further elaborated, “The fact is that NATO only keeps nuclear weapons in a few places in Europe, and none of them are particularly close to Finland. But another fact is that the nuclear deterrent is realized through many different means, including submarines, among others. As far as I understand, it’s already quite well established.”
Read the full article here and in its original Finnish here.

by Editor | Feb 12, 2024 | News
On March 16, 2024, a significant collaboration between the Boston Global Forum and the Army Innovation Park (AIP) at Telecommunication University (TCU) in Nha Trang, Vietnam, will mark a pivotal moment in the advancement of artificial intelligence (AI). This partnership, under the auspices of the Ministry of Defense of Vietnam, signals a proactive approach toward harnessing the potential of AI for societal benefit. At the heart of the collaboration lies a conference dedicated to exploring the capabilities of AIWS Angel, a groundbreaking Natural AI Assistant, as part of the AIWS Natural Initiative.
The conference represents a convergence of visionary minds, where experts from diverse backgrounds will convene to delve into the intricate nuances of Natural AI and its implications for the future. Held within the innovative ecosystem of the Army Innovation Park, renowned for its commitment to fostering technological innovation, the event promises to be a breeding ground for transformative ideas and collaborative ventures.
At its core, the discussion will revolve around AIWS Angel, a beacon of innovation poised to redefine human-computer interaction. With its naturalistic approach to AI, AIWS Angel holds the potential to revolutionize various facets of society, from enhancing productivity to promoting inclusivity and accessibility. By leveraging the power of AI in alignment with human values and ethics, this initiative seeks to pave the way towards a more harmonious coexistence between humans and intelligent machines.

Army Innovation Park in Nha Trang, Vietnam