by Editor BGF | Mar 1, 2026 | News
For decades, Iran’s strategy of prolonged hostility and confrontation with the United States and Israel has produced painful consequences: heightened regional tensions, cycles of sanctions and isolation, constrained economic opportunity, and persistent insecurity that ultimately burdens ordinary people most. The lesson is clear: hatred and hostility do not create sustainable security; they prolong crisis and narrow a nation’s future.
In the AI Age, strategies rooted in hostility become even more dangerous. Artificial intelligence accelerates decision-making, amplifies information operations, expands cyber conflict, and can intensify misperception at unprecedented speed. In such an environment, even small incidents can escalate rapidly into major crises—faster, deeper, and harder to control than in previous eras. That is why every nation must build a new foundation for security and prosperity: Reconciliation and love—without hatred, without hostility.
This is a core value of the AI World Society (AIWS). AIWS calls on societies to replace hostility with dialogue, empathy, and cooperation; to build trust rather than fear; and to pursue shared prosperity rather than zero-sum confrontation. AIWS affirms a simple principle: powerful technology must be guided by values even more powerful—human dignity, compassion, moral responsibility, and long-term stewardship.
The BGF–AIWS Family urges national leaders, especially in regions of conflict, to adopt reconciliation as a national strategy, to treat love and compassion as a civic and cultural foundation, and to make non-hostility a new measure of strength in the AI Age. Only on such a foundation can AI become a force for peace, development, and sustainable security for all humanity.

by Editor BGF | Mar 1, 2026 | News, Shaping Futures
The AI references in the State of the Union underscore a practical truth: AI is becoming infrastructure for all infrastructure—tied to data centers, electricity, supply chains, and the ability to deploy capabilities at national scale. This focus is realistic. To lead in the AI Age, a country must build AI infrastructure: compute, power, networks, data capacity, and talent—because these determine speed, innovation, and competitiveness.
But speed alone is not enough. As AI increasingly shapes economies, societies, security, and public confidence, we must build a second pillar alongside physical infrastructure: trust infrastructure, anchored in Trust Infrastructure Laws. This is central to the AI World Society (AIWS) framework: creating standards and verification mechanisms so AI can be deployed as fast and as effectively as possible, while remaining safe, transparent, accountable, and grounded in human dignity.
The key lesson is not to choose one over the other. We need both “Infrastructure Pledges” and “Trust Infrastructure Laws.”
- “Infrastructure Pledges” can mobilize investment, accelerate deployment, and expand capability.
- “Trust Infrastructure Laws” provide the guardrails that protect citizens’ rights, reduce systemic risk, and preserve democratic legitimacy.
Under the AIWS principle, the optimal balance is open, enabling conditions that lay the foundation for the fastest and most effective AI applications, guided by humanity’s highest values. That requires governance that accelerates innovation while ensuring accountability: transparent scope of deployment, auditability and traceability, risk evaluations and incident reporting, privacy and data protections, and independent oversight for high-stakes uses.
In the AI Age, national strength will be measured by two capabilities: the ability to build infrastructure that accelerates progress, and the ability to build trust infrastructure that protects values. When both pillars stand together, AI can truly become a force for prosperity, peace, and human-centered development.


by Editor BGF | Mar 1, 2026 | Shinzo Abe Initiative for Peace and Security, News
Boston Global Forum (BGF) congratulates Yasuhide Nakayama, BGF’s Representative in Japan, on his re-election to Japan’s House of Representatives on February 8, returning to the Diet via the Kinki proportional representation block.
Mr. Nakayama’s re-election comes at a pivotal moment for Japan as it navigates a rapidly changing geopolitical and technological landscape. The Kinki PR block—covering the Kansai region—remains one of Japan’s most influential electoral blocks, and Mr. Nakayama’s return reinforces the presence of experienced national-security and foreign-policy leadership in the Diet.
BGF also recognizes Mr. Nakayama’s continuing role as BGF Representative in Japan, reflecting his long-standing engagement in international cooperation and democratic partnerships.
Following the election, BGF notes that Mr. Nakayama has taken on an expanded leadership portfolio within the Liberal Democratic Party (LDP) as Director of the Global South—a strategic arena that will be increasingly decisive in the AI Age. As competition and cooperation over AI standards, digital infrastructure, trusted supply chains, and human-centered governance intensify, Japan’s relationships with developing nations will help shape whether the next global technological order is grounded in openness, trust, and shared prosperity.
In BGF’s view, this Global South portfolio positions Mr. Nakayama to help shape a new phase of Japanese diplomacy—one that treats the Global South not as a peripheral priority, but as a central partner in building trusted AI ecosystems, education and skills cooperation, resilient cyber and digital infrastructure, and inclusive economic development.

by Editor BGF | Mar 1, 2026 | News
In the AI age, national power is no longer determined only by military strength or traditional economic capacity. It increasingly depends on technological capability—especially advanced AI models, data, compute, and the ability to deploy systems at scale. This shift raises a foundational question: who ultimately decides how AI is used in matters of national destiny—governments or private companies? This question is central to the themes of “America at 250: A Beacon for the AI Age”—and to the future of democratic leadership in an era when AI becomes “infrastructure for all infrastructure.”
In practice, governments cannot move as fast as private firms in frontier AI innovation. Companies excel at attracting talent, iterating quickly, raising capital, and deploying products at global scale. For that reason, allowing private companies the freedom to innovate is essential. Excessive control can slow down national competitiveness and prevent breakthroughs that benefit society.
Yet freedom to innovate cannot mean that private actors gain private sovereignty over decisions that affect national defense and security. If companies hold strategic AI capabilities and operate with full autonomy—without structured cooperation with the state—then these companies become a powerful force that can shape a nation’s security posture without democratic accountability. The risk is not only technological. It is institutional: loss of coordination, unclear responsibility, reduced transparency, and weakened legitimacy—especially in crisis situations when national security requires reliable, timely cooperation.
The solution is not to replace private innovation with state control, nor to allow corporate autonomy to override public authority. What is needed is a modern partnership model: companies lead innovation, while governments retain sovereign decision-making in defense and high-stakes national security matters—supported by clear trust mechanisms that enable cooperation. This is precisely the kind of “architect role” democratic nations must embrace at America’s 250th: not only building capability, but also building the governance and trust structures that keep capability aligned with constitutional values.
This requires a practical framework: a public–private AI compact, minimum trust clauses in government contracts (scope limits, auditability, incident reporting, and enforceable remedies), independent oversight where feasible, and a firm principle of human-in-command for high-consequence decisions. In other words, governments should not “outsource” sovereign responsibility—and companies should not stand apart from the state when AI becomes infrastructure for national security.
In the AI era, a strong nation is one that combines the speed and creativity of the private sector with the legitimacy and accountability of democratic governance—ensuring that AI advances peace, prosperity, and sustainable security.

by Editor BGF | Mar 1, 2026 | Global Alliance for Digital Governance
The recent dispute between Anthropic and the U.S. defense establishment—described in public reporting as the “Department of War”—is a defining governance stress test for the AI age. In a public statement, Anthropic CEO Dario Amodei argued it is existentially important for democracies to use AI for defense, while drawing two explicit “red lines” he said should not appear in contracts: mass domestic surveillance and fully autonomous weapons. Anthropic framed these limits as essential to protect civil liberties and to ensure that lethal decision-making remains under meaningful human control.
At the level of democratic principle, the statement is significant. Frontier AI is increasingly “infrastructure for all infrastructure” in national security—supporting intelligence analysis, modeling and simulation, operational planning, and cyber operations. In such environments, the line between legitimate defense and rights-eroding surveillance can blur quickly. Anthropic’s position—no mass domestic surveillance and no fully autonomous weapons—seeks to define a minimal democratic boundary: defending a country must not become a pathway to undermining the constitutional values it claims to protect.
Yet even if one agrees with these red lines, the episode exposes a structural problem: if guardrails are negotiated vendor-by-vendor, democracies will drift into fragmentation. Governments can “shop” for the least restrictive provider, or companies can be punished for insisting on safeguards—producing inconsistent standards and a race to the bottom. This is not a sustainable model for democratic governance of AI in defense.
Reuters reporting indicates President Trump directed federal agencies to stop using Anthropic’s technology, with a transition period for embedded defense use—illustrating how quickly a policy clash can become an operational shock when AI is deeply integrated into government systems. Whether one sides with the vendor or the government, the practical lesson is the same: without shared rules, national security AI becomes vulnerable to political whiplash and procurement instability.
From a BGF–AIWS Family lens, the core takeaway is that democracies should not allow the “operational constitution” of defense AI to be defined through ad hoc commercial contracts. National security and defense are matters of sovereign authority and democratic accountability. Private AI vendors may innovate and supply capabilities, but they must not hold veto power over defense policy or become a parallel authority above the state. What is needed is government-level Trust Infrastructure: a common baseline of enforceable clauses that applies to all vendors and all deployments—ensuring consistency, accountability, and democratic legitimacy.
Five minimum trust clauses (aligned with the AIWS Trust Infrastructure approach):
- No mass domestic surveillance; clear boundaries for domestic data and independent oversight.
- No fully autonomous weapons; mandatory human-in-command for lethal decisions.
- Mandatory auditability and logging; traceability for accountability and investigations.
- Incident reporting and emergency shutdown; defined response playbooks and post-incident review.
- Scope limits and anti–scope creep; any expansion requires formal approval and oversight.
In conclusion, Anthropic’s statement is valuable because it articulates democratic red lines in a high-stakes domain. But to prevent recurring conflict and fragmentation, democracies must move beyond vendor-by-vendor negotiations and establish shared Trust Infrastructure standards—so AI can strengthen security without eroding the values security exists to defend.

by Editor BGF | Feb 21, 2026 | News
A measurable, auditable scorecard system—and an annual index—to strengthen safety, transparency, and accountability in the AI Age
BOSTON, MA — Boston Global Forum (BGF) today announced the launch of the AIWS Trust Rating, the flagship public-facing assessment tool within the broader AIWS Trust Infrastructure, and introduced the AIWS Trust Index, a periodic benchmarking publication that will track trust, safety, and governance readiness across sectors and jurisdictions in the AI Age.
The AIWS Trust Rating is a measurable, auditable rating system designed to help governments, health systems, companies, and democratic partners evaluate and improve trust in AI—with clear standards for safety, transparency, fairness, privacy, and incident readiness. The AIWS Trust Index will aggregate results and benchmarks over time, enabling year-to-year comparison and highlighting best practices, leadership examples, and priority gaps.
As artificial intelligence becomes “infrastructure for all infrastructure”—embedded across public services, healthcare, finance, education, and information ecosystems—BGF emphasized that trust must be built through verifiable mechanisms, not promises. The AIWS Trust Infrastructure translates democratic values into practical standards that can be measured, compared, and improved.
“AI will shape social trust, human security, and prosperity for decades to come,” said Nguyen Anh Tuan, Co-Founder, Co-Chair, and CEO of Boston Global Forum. “Democracies need common standards that are measurable, comparable, and actionable. AIWS Trust Rating provides a practical foundation for trustworthy AI, and the AIWS Trust Index will help the world track progress, compare performance, and learn from what works.”
A Scorecard Designed for Real-World Use
The AIWS Trust Rating is designed for decision-makers, procurement teams, regulators, and deployment leaders. It supports both technical verification and governance readiness, producing scores that can guide safe adoption and continuous improvement.
The rating framework evaluates AI across three layers:
- Core Trust & Safety (mandatory):
Safety & accountability, model transparency, bias and fairness auditing, privacy and data protection, and incident response readiness.
- Deployment Readiness:
Governance and oversight, evaluation and monitoring, workforce training, and procurement/vendor controls.
- Outcomes & Social Impact:
Real-world performance, equitable access, and impact on public trust and information integrity.
Ratings can be reported on a 0–100 scale and translated into A–F grades, with “red flag” indicators for unacceptable risks and a roadmap of recommended actions to improve performance.
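To make the scale-to-grade translation concrete, the sketch below shows one way such a mapping could work. The thresholds, function name, and red-flag rule are illustrative assumptions, not part of the published AIWS methodology.

```python
# Hypothetical illustration of mapping a 0-100 trust score to an A-F grade
# with "red flag" overrides, as described in the rating design.
# All thresholds and names here are assumptions for illustration only.

def trust_grade(score: float, red_flags: int = 0) -> str:
    """Translate a 0-100 trust score into a letter grade.

    Any red flag (an unacceptable-risk indicator) caps the grade at F,
    regardless of the numeric score.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if red_flags > 0:
        return "F"  # unacceptable risk overrides the numeric score
    for threshold, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= threshold:
            return grade
    return "F"

print(trust_grade(92))               # A
print(trust_grade(92, red_flags=1))  # F: red flag overrides a strong score
print(trust_grade(74))               # C
```

The key design point is that red flags are not averaged into the score: a single unacceptable risk fails the rating outright, which matches the stated intent of flagging risks rather than diluting them.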
The AIWS Trust Index: Tracking Progress and Leadership
The AIWS Trust Index will provide periodic benchmarking of trust readiness in the AI Age, including:
- comparative snapshots across sectors (e.g., healthcare, public services) and participating institutions,
- trend analysis over time to measure improvement and risk reduction,
- identification of best practices and replicable governance models, and
- public reporting that strengthens transparency and accountability.
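The trend-analysis component above can be sketched in miniature as follows. The sector names, years, and scores are invented for illustration and do not reflect any actual AIWS data or methodology.

```python
# Hypothetical sketch of year-over-year trend analysis for an index
# publication. All data below is invented for illustration only.

# scores[sector] = list of (year, average trust score on a 0-100 scale)
scores = {
    "healthcare":      [(2026, 62.0), (2027, 68.5)],
    "public_services": [(2026, 55.0), (2027, 54.0)],
}

def year_over_year_change(history):
    """Return the score change between the two most recent years."""
    ordered = sorted(history)  # sort by year
    (_, prev), (_, latest) = ordered[-2], ordered[-1]
    return latest - prev

for sector, history in scores.items():
    delta = year_over_year_change(history)
    trend = "improving" if delta > 0 else "declining"
    print(f"{sector}: {delta:+.1f} points ({trend})")
```

Even this toy version shows the value of comparable, repeated measurement: improvement and risk reduction become visible as simple deltas rather than anecdotes.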
BGF indicated that the index will be released alongside AIWS Trust Reports, with clear methodology disclosures and expert review.
Priority Applications: Government and Healthcare
BGF stated that early implementations will focus on domains where stakes are highest, including:
- Public services and government systems, to strengthen accountable, evidence-based administration in the AI Age; and
- Healthcare, where AI can save lives but must meet rigorous standards of evidence, safety, and patient trust.
The AIWS Trust Infrastructure also supports “trust dashboards” that enable ongoing monitoring—helping institutions detect drift, track incidents, and validate outcomes after deployment.
Supporting Democratic Coordination and Trusted Markets
BGF highlighted that the AIWS Trust Infrastructure is designed to support coordination across democratic allies by creating a shared language for evaluation, governance, and procurement. By aligning standards and verification methods, allies can reduce systemic risks, improve resilience against AI-driven manipulation and misinformation, and build trusted AI markets.
Next Steps
In the coming months, BGF will:
- publish a baseline AIWS Trust Rating – Core package, including evaluation criteria and reporting templates;
- convene a multi-stakeholder review process with experts from policy, academia, civil society, industry, and healthcare;
- launch pilots in priority domains—especially government services and healthcare; and
- publish the inaugural AIWS Trust Index and periodic AIWS Trust Reports to track benchmarks, progress, and best practices.
About Boston Global Forum and AIWS
Boston Global Forum (BGF) is a global think tank and convening platform advancing responsible innovation, democratic governance, and international cooperation.
AI World Society (AIWS) is BGF’s initiative to develop practical architectures for human-centered, trustworthy AI, advancing peace, security, and human dignity in the AI Age. The AIWS Trust Infrastructure provides the umbrella framework of standards, dashboards, audits, and governance mechanisms within which the AIWS Trust Rating operates and the AIWS Trust Index reports progress.
Media Contact:
Boston Global Forum – Communications Office
Email: [email protected]
Phone: +1 617 286 6589


by Editor BGF | Feb 22, 2026 | World Leader for Peace and Security, News, World Leaders in AIWS Award Updates
Addressing the summit in New Delhi, Amandeep Singh Gill, UN Under-Secretary-General and 2022 World Leader in AIWS Award Recipient, articulated a vision where AI serves as a “global equalizer” rather than a tool for further fragmentation. He warned that without deliberate intervention, the AI era could trigger a “second great divergence” between the Global North and South.
1. Advancing Inclusion: The “K-Shaped” Warning
Gill warned against a “K-shaped” AI economy, where inequality is baked into the technology’s architecture.
- Beyond Isolation: He urged developing nations not to view AI in isolation but as the next essential layer of Digital Public Infrastructure (DPI).
- The Inclusion Mandate: Citizens and nations must move from “market bystanders” to active participants. He stated: “Young people should not be passengers in this story; they must actively bend the ‘K’ toward equality.”
2. “Idiot-Savants” and the Need for Guardrails
Gill offered a nuanced view of current AI capabilities, describing them as “Idiot-Savants.”
- The Savant: Formidable at spotting patterns and mimicking human reasoning at scale.
- The Idiot: Brittle and lacking real understanding (e.g., AI mistaking blurred backgrounds for wildlife).
- Policy Implications: Because AI is a “general-purpose technology on steroids,” Gill argued that it cannot be left to market forces alone. He called for mandatory testing and human oversight for high-stakes decisions.
3. Real-World Impact: “Low-Hanging Fruit”
For countries like India, Gill identified voice-based AI systems in local languages as the most immediate path to inclusion.
- Breaking Barriers: Voice-enabled models can bypass literacy and linguistic hurdles, delivering agricultural, health, and educational services directly to the “bottom of the pyramid.”
- Sustainability: He advocated for Small Language Models (SLMs)—energy-efficient models that can run on low-bandwidth networks and inexpensive devices.
4. Sovereignty and the Data Gap
Gill highlighted a stark global asymmetry in infrastructure:
- The Compute Gap: He noted that as of last year, the entire continent of Africa had fewer than 1,000 GPUs.
- Local Data as Sovereignty: True AI sovereignty, he argued, begins with local-language datasets. Protecting these datasets is essential to ensure that AI models reflect national cultures and values rather than external biases.
5. Toward July 2026: Global AI Governance
Gill announced that the UN is preparing for a Global AI Governance Dialogue in Geneva (July 2026).
- Science-First: The UN has established an independent scientific panel of 40 experts to provide evidence-based assessments to counter both “AI hype” and “AI fear.”
- A New Architecture: He proposed a $3 billion global AI fund to support nearly 90 under-resourced countries in building talent, compute capacity, and policy frameworks.

by Editor BGF | Feb 22, 2026 | Shinzo Abe Initiative for Peace and Security, News
On February 18, 2026, Japan entered a transformative era as Sanae Takaichi was officially designated the nation’s 105th Prime Minister. Following the Liberal Democratic Party’s (LDP) landslide victory on February 8, PM Takaichi used her inaugural Diet address to set a bold, proactive tone for Japan’s future, focusing on economic revitalization, national security, and global AI leadership.
This domestic momentum now shifts to the international stage with the March 19, 2026, Washington Summit, hailed as the most significant diplomatic event of the decade. The summit serves as the formal unveiling of the Japan-U.S. “New Golden Age”—a framework integrating economic security, AI innovation, and the Social Contract for the AI Age.
1. The March 19 Summit Agenda: A Historic Reception
- State Guest Honors: President Trump will welcome PM Takaichi as a State Guest with a full ceremony and official dinner. This high-level choreography underscores the “limitless” potential of the alliance following her decisive electoral mandate.
- The “America at 250” Tribute: Japan will take center stage in the U.S. 250th-anniversary celebrations. PM Takaichi will formally present 250 cherry trees to the American people, symbolizing a renewed and enduring friendship for the next century.
2. Anchoring the AIWS Social Contract
The summit will act as a strategic bridge to the Harvard AIWS Summit on May 1st, focusing on:
- The “Gennai” & AIWS Synergy: Harmonizing Japan’s new Fundamental Plan for AI (Gennai) with the AIWS Social Contract. The goal is to move from theory to practice by implementing shared standards for AI Trust and Transparent Algorithms.
- Human-Centric Governance: Both leaders are expected to affirm a joint vision for AI that upholds human dignity and individual data rights, creating a democratic alternative to “Digital Authoritarianism.”
3. The $550 Billion “Prosperity Bridge”
The leaders will finalize specific projects under Japan’s massive investment pledge to the U.S. economy:
- Semiconductor Resilience: Joint ventures in 2-nanometer chip production to secure the hardware foundation of the global AI stack.
- Critical Minerals: A new bilateral framework for stockpiling and processing rare earths and lithium to ensure supply chain independence.
4. Security & The “JESTA” Framework
Building on her proactive security stance, PM Takaichi will discuss integrating Japan’s new JESTA (Japan Electronic System for Travel Authorization) with U.S. border technologies. This creates a “Trusted Corridor,” using AI for high-speed, high-security migration management between the two nations.

by Editor BGF | Feb 21, 2026 | News
On February 20, 2026, the U.S. Supreme Court issued a landmark 6-3 ruling striking down President Donald Trump’s “reciprocal” global tariffs. The Court ruled that the administration exceeded its authority by using the International Emergency Economic Powers Act (IEEPA) to bypass Congress’s constitutional power over taxes and trade.
In response, Treasury Secretary Scott Bessent appeared on Fox News to outline the administration’s strategy moving forward:
1. The “Draconian Alternative”
Bessent argued that while the Court restricted specific tariffs under IEEPA, it reaffirmed the President’s right to a complete embargo.
- The Leverage: Bessent noted, “The Court has made the President’s leverage more draconian… he does have the right to a full embargo. He can just cut countries off or cut whole product lines off.”
2. Seamless Transition: “The Toolbox is Full”
The administration plans to maintain tariff levels using alternative legal authorities:
- Section 122 (Trade Act of 1974): Used immediately to sign an executive order for a 10% global tariff (valid for 150 days).
- Sections 232 & 301: These security-focused authorities remain intact to ensure trade goals are met without interruption.
3. Economic Stability & Growth
Despite the ruling, Bessent projected confidence, stating that 2026 revenue estimates remain “virtually unchanged.” The administration continues to target 3.5% growth through its “parallel prosperity” agenda, linking trade, tax, and energy policy.
4. The Refund Contention
With $133B–$175B in collected tariffs now potentially illegal, a “convoluted” battle for refunds looms. Bessent signaled that the refund process could take years, urging partners to honor existing trade agreements rather than seeking immediate repayments.
