by Editor BGF | Mar 1, 2026 | Global Alliance for Digital Governance
The recent dispute between Anthropic and the U.S. defense establishment—described in public reporting as the “Department of War”—is a defining governance stress test for the AI age. In a public statement, Anthropic CEO Dario Amodei argued it is existentially important for democracies to use AI for defense, while naming two explicit “red lines” that he said no contract should cross: mass domestic surveillance and fully autonomous weapons. Anthropic framed these limits as essential to protect civil liberties and to ensure that lethal decision-making remains under meaningful human control.
At the level of democratic principle, the statement is significant. Frontier AI is increasingly “infrastructure for all infrastructure” in national security—supporting intelligence analysis, modeling and simulation, operational planning, and cyber operations. In such environments, the line between legitimate defense and rights-eroding surveillance can blur quickly. Anthropic’s position—no mass domestic surveillance and no fully autonomous weapons—seeks to define a minimal democratic boundary: defending a country must not become a pathway to undermining the constitutional values it claims to protect.
Yet even if one agrees with these red lines, the episode exposes a structural problem: if guardrails are negotiated vendor-by-vendor, democracies will drift into fragmentation. Governments can “shop” for the least restrictive provider, or companies can be punished for insisting on safeguards—producing inconsistent standards and a race to the bottom. This is not a sustainable model for democratic governance of AI in defense.
Reuters reporting indicates President Trump directed federal agencies to stop using Anthropic’s technology, with a transition period for embedded defense use—illustrating how quickly a policy clash can become an operational shock when AI is deeply integrated into government systems. Whether one sides with the vendor or the government, the practical lesson is the same: without shared rules, national security AI becomes vulnerable to political whiplash and procurement instability.
From a BGF–AIWS Family lens, the core takeaway is that democracies should not allow the “operational constitution” of defense AI to be defined through ad hoc commercial contracts. National security and defense are matters of sovereign authority and democratic accountability. Private AI vendors may innovate and supply capabilities, but they must not hold veto power over defense policy or become a parallel authority above the state. What is needed is government-level Trust Infrastructure: a common baseline of enforceable clauses that applies to all vendors and all deployments—ensuring consistency, accountability, and democratic legitimacy.
Five minimum trust clauses (aligned with the AIWS Trust Infrastructure approach):
- No mass domestic surveillance; clear boundaries for domestic data and independent oversight.
- No fully autonomous weapons; mandatory human-in-command for lethal decisions.
- Mandatory auditability and logging; traceability for accountability and investigations.
- Incident reporting and emergency shutdown; defined response playbooks and post-incident review.
- Scope limits and anti–scope creep; any expansion requires formal approval and oversight.
In conclusion, Anthropic’s statement is valuable because it articulates democratic red lines in a high-stakes domain. But to prevent recurring conflict and fragmentation, democracies must move beyond vendor-by-vendor negotiations and establish shared Trust Infrastructure standards—so AI can strengthen security without eroding the values security exists to defend.

by Editor BGF | Feb 21, 2026 | News
A measurable, auditable scorecard system—and an annual index—to strengthen safety, transparency, and accountability in the AI Age
BOSTON, MA — Boston Global Forum (BGF) today announced the launch of the AIWS Trust Rating, the flagship public-facing assessment tool within the broader AIWS Trust Infrastructure, and introduced the AIWS Trust Index, a periodic benchmarking publication that will track trust, safety, and governance readiness across sectors and jurisdictions in the AI Age.
The AIWS Trust Rating is a measurable, auditable rating system designed to help governments, health systems, companies, and democratic partners evaluate and improve trust in AI—with clear standards for safety, transparency, fairness, privacy, and incident readiness. The AIWS Trust Index will aggregate results and benchmarks over time, enabling year-to-year comparison and highlighting best practices, leadership examples, and priority gaps.
As artificial intelligence becomes “infrastructure for all infrastructure”—embedded across public services, healthcare, finance, education, and information ecosystems—BGF emphasized that trust must be built through verifiable mechanisms, not promises. The AIWS Trust Infrastructure translates democratic values into practical standards that can be measured, compared, and improved.
“AI will shape social trust, human security, and prosperity for decades to come,” said Nguyen Anh Tuan, Co-Founder, Co-Chair, and CEO of Boston Global Forum. “Democracies need common standards that are measurable, comparable, and actionable. AIWS Trust Rating provides a practical foundation for trustworthy AI, and the AIWS Trust Index will help the world track progress, compare performance, and learn from what works.”
A Scorecard Designed for Real-World Use
The AIWS Trust Rating is designed for decision-makers, procurement teams, regulators, and deployment leaders. It supports both technical verification and governance readiness, producing scores that can guide safe adoption and continuous improvement.
The rating framework evaluates AI across three layers:
- Core Trust & Safety (mandatory):
Safety & accountability, model transparency, bias and fairness auditing, privacy and data protection, and incident response readiness.
- Deployment Readiness:
Governance and oversight, evaluation and monitoring, workforce training, and procurement/vendor controls.
- Outcomes & Social Impact:
Real-world performance, equitable access, and impact on public trust and information integrity.
Ratings can be reported on a 0–100 scale and translated into A–F grades, with “red flag” indicators for unacceptable risks and a roadmap of recommended actions to improve performance.
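To make the scoring logic above concrete, here is a minimal sketch of how a 0–100 score might translate into an A–F grade with a red-flag override. The grade cutoffs and the rule that any red flag caps the grade at F are illustrative assumptions, not BGF's published methodology.

```python
# Illustrative sketch of the 0-100 -> A-F translation described above.
# Cutoffs and the red-flag override are assumptions, not BGF methodology.

def trust_grade(score: float, red_flags: int = 0) -> str:
    """Map a 0-100 trust score to an A-F letter grade.

    Any 'red flag' (an unacceptable risk) caps the grade at F
    regardless of the numeric score -- an assumed policy choice.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if red_flags > 0:
        return "F"
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# Example: a strong numeric score is still failed by a red flag.
print(trust_grade(92))               # "A"
print(trust_grade(92, red_flags=1))  # "F"
```

The override reflects the release's framing of red flags as markers of "unacceptable risks" rather than one weighted factor among many.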
The AIWS Trust Index: Tracking Progress and Leadership
The AIWS Trust Index will provide periodic benchmarking of trust readiness in the AI Age, including:
- comparative snapshots across sectors (e.g., healthcare, public services) and participating institutions,
- trend analysis over time to measure improvement and risk reduction,
- identification of best practices and replicable governance models, and
- public reporting that strengthens transparency and accountability.
BGF indicated that the index will be released alongside AIWS Trust Reports, with clear methodology disclosures and expert review.
Priority Applications: Government and Healthcare
BGF stated that early implementations will focus on domains where stakes are highest, including:
- Public services and government systems, to strengthen accountable, evidence-based administration in the AI Age; and
- Healthcare, where AI can save lives but must meet rigorous standards of evidence, safety, and patient trust.
The AIWS Trust Infrastructure also supports “trust dashboards” that enable ongoing monitoring—helping institutions detect drift, track incidents, and validate outcomes after deployment.
Supporting Democratic Coordination and Trusted Markets
BGF highlighted that the AIWS Trust Infrastructure is designed to support coordination across democratic allies by creating a shared language for evaluation, governance, and procurement. By aligning standards and verification methods, allies can reduce systemic risks, improve resilience against AI-driven manipulation and misinformation, and build trusted AI markets.
Next Steps
In the coming months, BGF will:
- publish a baseline AIWS Trust Rating – Core package, including evaluation criteria and reporting templates;
- convene a multi-stakeholder review process with experts from policy, academia, civil society, industry, and healthcare;
- launch pilots in priority domains—especially government services and healthcare; and
- publish the inaugural AIWS Trust Index and periodic AIWS Trust Reports to track benchmarks, progress, and best practices.
About Boston Global Forum and AIWS
Boston Global Forum (BGF) is a global think tank and convening platform advancing responsible innovation, democratic governance, and international cooperation.
AI World Society (AIWS) is BGF’s initiative to develop practical architectures for human-centered, trustworthy AI, advancing peace, security, and human dignity in the AI Age. The AIWS Trust Infrastructure provides the umbrella framework of standards, dashboards, audits, and governance mechanisms within which the AIWS Trust Rating operates and the AIWS Trust Index reports progress.
Media Contact:
Boston Global Forum – Communications Office
Email: [email protected]
Phone: +1 617 286 6589


by Editor BGF | Feb 22, 2026 | World Leader for Peace and Security, News, World Leaders in AIWS Award Updates
Addressing the summit in New Delhi, Amandeep Singh Gill, UN Under-Secretary-General and 2022 World Leader in AIWS Award Recipient, articulated a vision where AI serves as a “global equalizer” rather than a tool for further fragmentation. He warned that without deliberate intervention, the AI era could trigger a “second great divergence” between the Global North and South.
1. Advancing Inclusion: The “K-Shaped” Warning
Gill warned against a “K-shaped” AI economy, where inequality is baked into the technology’s architecture.
- Beyond Isolation: He urged developing nations not to view AI in isolation but as the next essential layer of Digital Public Infrastructure (DPI).
- The Inclusion Mandate: Developing nations must move from “market bystanders” to active participants in the AI economy. He stated: “Young people should not be passengers in this story; they must actively bend the ‘K’ toward equality.”
2. “Idiot-Savants” and the Need for Guardrails
Gill offered a nuanced view of current AI capabilities, describing them as “Idiot-Savants.”
- The Savant: Formidable at spotting patterns and mimicking human reasoning at scale.
- The Idiot: Brittle and lacking real understanding (e.g., models mistaking blurred backgrounds for wildlife).
- Policy Implications: Because AI is a “general-purpose technology on steroids,” Gill argued that it cannot be left to market forces alone. He called for mandatory testing and human oversight for high-stakes decisions.
3. Real-World Impact: “Low-Hanging Fruit”
For countries like India, Gill identified voice-based AI systems in local languages as the most immediate path to inclusion.
- Breaking Barriers: Voice-enabled models can bypass literacy and linguistic hurdles, delivering agricultural, health, and educational services directly to the “bottom of the pyramid.”
- Sustainability: He advocated for Small Language Models (SLMs)—energy-efficient models that can run on low-bandwidth networks and inexpensive devices.
4. Sovereignty and the Data Gap
Gill highlighted a stark global asymmetry in infrastructure:
- The Compute Gap: He noted that as of last year, the entire continent of Africa had fewer than 1,000 GPUs.
- Local Data as Sovereignty: True AI sovereignty, he argued, begins with local-language datasets. Protecting these datasets is essential to ensure that AI models reflect national cultures and values rather than external biases.
5. Toward July 2026: Global AI Governance
Gill announced that the UN is preparing for a Global AI Governance Dialogue in Geneva (July 2026).
- Science-First: The UN has established an independent scientific panel of 40 experts to provide evidence-based assessments to counter both “AI hype” and “AI fear.”
- A New Architecture: He proposed a $3 billion global AI fund to support nearly 90 under-resourced countries in building talent, compute capacity, and policy frameworks.

by Editor BGF | Feb 22, 2026 | Shinzo Abe Initiative for Peace and Security, News
On February 18, 2026, Japan entered a transformative era as Sanae Takaichi was officially designated the nation’s 105th Prime Minister. Following the Liberal Democratic Party’s (LDP) landslide victory on February 8, PM Takaichi used her inaugural Diet address to set a bold, proactive tone for Japan’s future, focusing on economic revitalization, national security, and global AI leadership.
This domestic momentum now shifts to the international stage with the March 19, 2026, Washington Summit, hailed as the most significant diplomatic event of the decade. The summit serves as the formal unveiling of the Japan-U.S. “New Golden Age”—a framework integrating economic security, AI innovation, and the Social Contract for the AI Age.
1. The March 19 Summit Agenda: A Historic Reception
- State Guest Honors: President Trump will welcome PM Takaichi as a State Guest with a full ceremony and official dinner. This high-level choreography underscores the “limitless” potential of the alliance following her decisive electoral mandate.
- The “America at 250” Tribute: Japan will take center stage in the U.S. 250th-anniversary celebrations. PM Takaichi will formally present 250 cherry trees to the American people, symbolizing a renewed and enduring friendship for the next century.
2. Anchoring the AIWS Social Contract
The summit will act as a strategic bridge to the Harvard AIWS Summit on May 1st, focusing on:
- The “Gennai” & AIWS Synergy: Harmonizing Japan’s new Fundamental Plan for AI (Gennai) with the AIWS Social Contract. The goal is to move from theory to practice by implementing shared standards for AI Trust and Transparent Algorithms.
- Human-Centric Governance: Both leaders are expected to affirm a joint vision for AI that upholds human dignity and individual data rights, creating a democratic alternative to “Digital Authoritarianism.”
3. The $550 Billion “Prosperity Bridge”
The leaders will finalize specific projects under Japan’s massive investment pledge to the U.S. economy:
- Semiconductor Resilience: Joint ventures in 2-nanometer chip production to secure the hardware foundation of the global AI stack.
- Critical Minerals: A new bilateral framework for stockpiling and processing rare earths and lithium to ensure supply chain independence.
4. Security & The “JESTA” Framework
Building on her proactive security stance, PM Takaichi will discuss integrating Japan’s new JESTA (Japan Electronic System for Travel Authorization) with U.S. border technologies, creating a “Trusted Corridor” that uses AI for high-speed, high-security migration management between the two nations.

by Editor BGF | Feb 21, 2026 | News
On February 20, 2026, the U.S. Supreme Court issued a landmark 6-3 ruling striking down President Donald Trump’s “reciprocal” global tariffs. The Court ruled that the administration exceeded its authority by using the International Emergency Economic Powers Act (IEEPA) to bypass Congress’s constitutional power over taxes and trade.
In response, Treasury Secretary Scott Bessent appeared on Fox News to outline the administration’s strategy moving forward:
1. The “Draconian Alternative”
Bessent argued that while the Court restricted specific tariffs under IEEPA, it reaffirmed the President’s right to a complete embargo.
- The Leverage: Bessent noted, “The Court has made the President’s leverage more draconian… he does have the right to a full embargo. He can just cut countries off or cut whole product lines off.”
2. Seamless Transition: “The Toolbox is Full”
The administration plans to maintain tariff levels using alternative legal authorities:
- Section 122 (Trade Act of 1974): Invoked immediately via executive order to impose a 10% global tariff (valid for 150 days).
- Sections 232 & 301: These security-focused authorities remain intact to ensure trade goals are met without interruption.
3. Economic Stability & Growth
Despite the ruling, Bessent projected confidence, stating that 2026 revenue estimates remain “virtually unchanged.” The administration continues to target 3.5% growth through its “parallel prosperity” agenda, linking trade, tax, and energy policy.
4. The Refund Contention
With $133B–$175B in tariff revenue collected under the now-invalidated authority, a “convoluted” battle over refunds looms. Bessent signaled that the refund process could take years, urging partners to honor existing trade agreements rather than seek immediate repayments.
