Boston Global Forum (BGF) Recommendation to Iran

Implementing the “No Hostility Doctrine” for Peace, Normal Relations, and Prosperity

Iran is entering a new era. BGF recommends that Iran’s emerging leadership adopt and implement a clear national doctrine of no hostility toward any country as the cornerstone of a successful transition toward democracy, economic renewal, and international normalization. This doctrine is not merely a foreign-policy adjustment—it is a strategic foundation for restoring dignity to the Iranian people through peace, the rule of law, and opportunity.

1) Make “No Hostility” the national strategic turning point

BGF advises Iran to formally declare—clearly, consistently, and publicly—that:

  • Iran will not be hostile to the United States, Israel, or any country.
  • Iran will pursue disputes only through diplomacy, dialogue, and international law.
  • Iran will uphold non-aggression and respect for sovereignty.
  • Iran’s security policy will be defensive, focused on protecting the Iranian people and Iran’s borders—not on ideological confrontation.
  • Iran seeks to become a reliable partner for regional stability, trade, science, education, health, and cultural cooperation.

Why this matters: It ends the “permanent enemy” narrative that has fueled isolation, sanctions, militarization, and internal repression—blocking prosperity and democracy.

2) Convert doctrine into credibility: actions within the first 90 days

BGF advises the new leadership to pair the doctrine with concrete, verifiable actions—because credibility is built by behavior, not words.

A. Open diplomatic channels immediately
  • Establish direct and indirect communications with the United States and all relevant regional actors to reduce the risk of escalation and begin a pathway to normalization.
  • Create a crisis hotline mechanism to prevent miscalculation.
B. Enforce a “no external destabilization” policy
  • Announce and implement a clear policy of regional calm and non-interference.
  • Align all security agencies with a single principle: Iran’s security must be achieved through stability, not external confrontation.
C. Launch transparency and anti-corruption public reporting
  • Publish key state decisions, budgets, and procurement, especially in high-risk sectors.
  • Establish an independent anti-corruption mechanism and public oversight dashboards.
D. Protect human rights and the rule of law
  • Implement measurable steps to protect civil liberties, due process, and equal citizenship.
  • Begin a lawful review process for political detainees and ensure courts operate independently.
E. Invite international technical cooperation
  • Welcome cooperation on humanitarian needs, economic stabilization, public health, energy reliability, and institutional reform.
  • Prioritize partnerships that improve daily life quickly and visibly.

3) Align foreign policy with domestic renewal

BGF recommends a simple national message: Iran’s renewal will be built at home.
That means focusing the state on jobs, education, health, freedom, anti-corruption, and a lawful democratic system.

BGF advice: Treat peace and normalization as economic infrastructure.

  • No hostility → lower risk → more investment → better jobs → stronger middle class → stronger democracy.

4) Establish a “Trust and Legitimacy Program” to support the doctrine

To sustain the No Hostility Doctrine beyond a single moment, BGF recommends Iran build institutional trust through:

  • Transparency by default (budgets, procurement, public services)
  • Accountability mechanisms (independent audits, anti-corruption authority, citizen complaint channels)
  • Human-centered governance (rights protections, due process, equal citizenship)
  • Public trust metrics to measure institutional performance and integrity

This is consistent with BGF’s broader vision of trust infrastructure for modern governance.

5) A message to the world—and to Iranians

BGF recommends that Iran deliver a final, unifying message:

Iran seeks a future defined by dignity through peace—a nation open to cooperation, committed to stability, and devoted to the wellbeing of its people.

BGF closing recommendation:
To succeed, Iran’s new leadership should treat the No Hostility Doctrine as a binding national commitment—translated into immediate actions, lawful institutions, and measurable improvements in the lives of Iranians. Peace is not a slogan; it is the pathway to democracy and prosperity.

Causal Inference + AIWS: A Practical Path to a New Iran

Following reports circulating internationally about a major turning point in Iran’s leadership, Professor Judea Pearl (UCLA)—the pioneer of Causal Inference and the 2020 AIWS World Leader Award recipient—posted on X that Iran’s Supreme Leader Ayatollah Khamenei had died, linking to reporting from the Times of Israel.

In moments of regime transition, the greatest danger is acting on emotion, rumor, and short-term correlation—rather than on causal reality. Causal Inference offers a disciplined approach: map the drivers of instability and target the highest-leverage interventions that reduce violence and increase legitimacy. AIWS adds the moral foundation: reconciliation and love—without hatred, without hostility—guided by human dignity, trust, transparency, and accountability.

BGF recommends an “AIWS Transition Package” for a New Iran, built on four causal priorities:

  1. Legitimacy before power: Establish a time-limited National Transitional Council with broad representation (women, youth, provinces, minorities, experts) and a clear timetable for constitutional reform and elections. This reduces the “legitimacy vacuum → factional conflict” pathway.
  2. A No Hostility Doctrine: Iran’s new leadership should clearly announce: no hostility toward the United States, Israel, or any country, and commit to resolving disputes through diplomacy and international law. This breaks the cycle “hostility → isolation → economic collapse → radicalization.”
  3. 100-day stabilization: Protect essential services—electricity, water, hospitals, banking, food supply—and launch immediate transparency on budgets and procurement. Stabilizing daily life prevents social breakdown.
  4. Truth, reconciliation, and rule of law: Avoid revenge cycles. Create lawful accountability for grave crimes, while prioritizing national reconciliation and equal citizenship.

Finally, BGF encourages the U.S., Israel, and allied partners to support a stable transition through a structured “Friends of a New Iran” pathway—humanitarian support, technical assistance for elections and anti-corruption systems, and step-by-step normalization tied to verifiable reforms.

In the AI Age, peace is not idealism—it is risk management. A New Iran can be built by combining causal reasoning with AIWS values: reconciliation, dignity, trust, and a future without hostility.

STATEMENT (BGF-AIWS Family) — From the Consequences of Hostility to a Foundation of Reconciliation and Love in the AI Age

For decades, Iran’s strategy of prolonged hostility and confrontation with the United States and Israel has produced painful consequences: heightened regional tensions, cycles of sanctions and isolation, constrained economic opportunity, and persistent insecurity that ultimately burdens ordinary people most. The lesson is clear: hatred and hostility do not create sustainable security; they prolong crisis and narrow a nation’s future.

In the AI Age, strategies rooted in hostility become even more dangerous. Artificial intelligence accelerates decision-making, amplifies information operations, expands cyber conflict, and can intensify misperception at unprecedented speed. In such an environment, even small incidents can escalate rapidly into major crises—faster, deeper, and harder to control than in previous eras. That is why every nation must build a new foundation for security and prosperity: Reconciliation and love—without hatred, without hostility.

This is a core value of the AI World Society (AIWS). AIWS calls on societies to replace hostility with dialogue, empathy, and cooperation; to build trust rather than fear; and to pursue shared prosperity rather than zero-sum confrontation. AIWS affirms a simple principle: powerful technology must be guided by values even more powerful—human dignity, compassion, moral responsibility, and long-term stewardship.

BGF-AIWS Family urges national leaders, especially in regions of conflict, to adopt reconciliation as a national strategy, to treat love and compassion as a civic and cultural foundation, and to make non-hostility a new measure of strength in the AI Age. Only on such a foundation can AI become a force for peace, development, and sustainable security for all humanity.

AI in the State of the Union: We Need Both “Infrastructure Pledges” and “Trust Infrastructure Laws”

The AI references in the State of the Union underscore a practical truth: AI is becoming infrastructure for all infrastructure—tied to data centers, electricity, supply chains, and the ability to deploy capabilities at national scale. This focus is realistic. To lead in the AI Age, a country must build AI infrastructure: compute, power, networks, data capacity, and talent—because these determine speed, innovation, and competitiveness.

But speed alone is not enough. As AI increasingly shapes economies, societies, security, and public confidence, we must build a second pillar alongside physical infrastructure: trust infrastructure, anchored in Trust Infrastructure Laws. This is central to the AI World Society (AIWS) framework: creating standards and verification mechanisms so AI can be deployed as fast and as effectively as possible, while remaining safe, transparent, accountable, and grounded in human dignity.

The key lesson is not to choose one over the other. We need both “Infrastructure Pledges” and “Trust Infrastructure Laws.”

  • “Infrastructure pledges” can mobilize investment, accelerate deployment, and expand capability.
  • “Trust infrastructure laws” provide the guardrails that protect citizens’ rights, reduce systemic risk, and preserve democratic legitimacy.

Under the AIWS principle, the optimal balance is: open, enabling conditions that build the foundation for the fastest and most effective AI applications—guided by humanity’s highest values. That requires governance that accelerates innovation while ensuring accountability: transparent scope of deployment, auditability and traceability, risk evaluations and incident reporting, privacy and data protections, and independent oversight for high-stakes uses.

In the AI Age, national strength will be measured by two capabilities: the ability to build infrastructure that accelerates progress, and the ability to build trust infrastructure that protects values. When both pillars stand together, AI can truly become a force for prosperity, peace, and human-centered development.

The Nakayama Re-election: BGF Representative in Japan Strengthens Japan’s Global South Outreach in the AI Age

Boston Global Forum (BGF) congratulates Yasuhide Nakayama, BGF’s Representative in Japan, on his re-election to Japan’s House of Representatives on February 8, returning to the Diet via the Kinki proportional representation block.

Mr. Nakayama’s re-election comes at a pivotal moment for Japan as it navigates a rapidly changing geopolitical and technological landscape. The Kinki PR block—covering the Kansai region—remains one of Japan’s most influential electoral blocs, and Mr. Nakayama’s return reinforces the presence of experienced national-security and foreign-policy leadership in the Diet.

BGF also recognizes Mr. Nakayama’s continuing role as BGF Representative in Japan, reflecting his long-standing engagement in international cooperation and democratic partnerships.

Following the election, BGF notes that Mr. Nakayama has taken on an expanded leadership portfolio within the Liberal Democratic Party (LDP) as Director of the Global South—a strategic arena that will be increasingly decisive in the AI Age. As competition and cooperation over AI standards, digital infrastructure, trusted supply chains, and human-centered governance intensify, Japan’s relationships with developing nations will help shape whether the next global technological order is grounded in openness, trust, and shared prosperity.

In BGF’s view, this Global South portfolio positions Mr. Nakayama to help shape a new phase of Japanese diplomacy—one that treats the Global South not as a peripheral priority, but as a central partner in building trusted AI ecosystems, education and skills cooperation, resilient cyber and digital infrastructure, and inclusive economic development.

AI Companies and the State: Freedom to Innovate—But No “Private Sovereignty” Over National Security

In the AI age, national power is no longer determined only by military strength or traditional economic capacity. It increasingly depends on technological capability—especially advanced AI models, data, compute, and the ability to deploy systems at scale. This shift raises a foundational question: who ultimately decides how AI is used in matters of national destiny—governments or private companies? This question is central to the themes of “America at 250: A Beacon for the AI Age”—and to the future of democratic leadership in an era when AI becomes “infrastructure for all infrastructure.”

In practice, governments cannot move as fast as private firms in frontier AI innovation. Companies excel at attracting talent, iterating quickly, raising capital, and deploying products at global scale. For that reason, allowing private companies the freedom to innovate is essential. Excessive control can slow down national competitiveness and prevent breakthroughs that benefit society.

Yet freedom to innovate cannot mean that private actors gain private sovereignty over decisions that affect national defense and security. If companies hold strategic AI capabilities and operate with full autonomy—without structured cooperation with the state—then these companies become a powerful force that can shape a nation’s security posture without democratic accountability. The risk is not only technological. It is institutional: loss of coordination, unclear responsibility, reduced transparency, and weakened legitimacy—especially in crisis situations when national security requires reliable, timely cooperation.

The solution is not to replace private innovation with state control, nor to allow corporate autonomy to override public authority. What is needed is a modern partnership model: companies lead innovation, while governments retain sovereign decision-making in defense and high-stakes national security matters—supported by clear trust mechanisms that enable cooperation. This is precisely the kind of “architect role” democratic nations must embrace at America’s 250th: not only building capability, but also building the governance and trust structures that keep capability aligned with constitutional values.

This requires a practical framework: a public–private AI compact, minimum trust clauses in government contracts (scope limits, auditability, incident reporting, and enforceable remedies), independent oversight where feasible, and a firm principle of human-in-command for high-consequence decisions. In other words, governments should not “outsource” sovereign responsibility—and companies should not stand apart from the state when AI becomes infrastructure for national security.

In the AI era, a strong nation is one that combines the speed and creativity of the private sector with the legitimacy and accountability of democratic governance—ensuring that AI advances peace, prosperity, and sustainable security.

Anthropic vs. the U.S. “Department of War”: Right Red Lines—But Democracies Need Government-Level Trust Infrastructure

The recent dispute between Anthropic and the U.S. defense establishment—described in public reporting as the “Department of War”—is a defining governance stress test for the AI age. In a public statement, Anthropic CEO Dario Amodei argued it is existentially important for democracies to use AI for defense, while drawing two explicit “red lines” he said should not appear in contracts: mass domestic surveillance and fully autonomous weapons. Anthropic framed these limits as essential to protect civil liberties and to ensure that lethal decision-making remains under meaningful human control.

At the level of democratic principle, the statement is significant. Frontier AI is increasingly “infrastructure for all infrastructure” in national security—supporting intelligence analysis, modeling and simulation, operational planning, and cyber operations. In such environments, the line between legitimate defense and rights-eroding surveillance can blur quickly. Anthropic’s position—no mass domestic surveillance and no fully autonomous weapons—seeks to define a minimal democratic boundary: defending a country must not become a pathway to undermining the constitutional values it claims to protect.

Yet even if one agrees with these red lines, the episode exposes a structural problem: if guardrails are negotiated vendor-by-vendor, democracies will drift into fragmentation. Governments can “shop” for the least restrictive provider, or companies can be punished for insisting on safeguards—producing inconsistent standards and a race to the bottom. This is not a sustainable model for democratic governance of AI in defense.

Reuters reporting indicates President Trump directed federal agencies to stop using Anthropic’s technology, with a transition period for embedded defense use—illustrating how quickly a policy clash can become an operational shock when AI is deeply integrated into government systems. Whether one sides with the vendor or the government, the practical lesson is the same: without shared rules, national security AI becomes vulnerable to political whiplash and procurement instability.

From a BGF–AIWS Family lens, the core takeaway is that democracies should not allow the “operational constitution” of defense AI to be defined through ad hoc commercial contracts. National security and defense are matters of sovereign authority and democratic accountability. Private AI vendors may innovate and supply capabilities, but they must not hold veto power over defense policy or become a parallel authority above the state. What is needed is government-level Trust Infrastructure: a common baseline of enforceable clauses that applies to all vendors and all deployments—ensuring consistency, accountability, and democratic legitimacy.

Five minimum trust clauses (aligned with the AIWS Trust Infrastructure approach):

  1. No mass domestic surveillance; clear boundaries for domestic data and independent oversight.
  2. No fully autonomous weapons; mandatory human-in-command for lethal decisions.
  3. Mandatory auditability and logging; traceability for accountability and investigations.
  4. Incident reporting and emergency shutdown; defined response playbooks and post-incident review.
  5. Scope limits and anti–scope creep; any expansion requires formal approval and oversight.

In conclusion, Anthropic’s statement is valuable because it articulates democratic red lines in a high-stakes domain. But to prevent recurring conflict and fragmentation, democracies must move beyond vendor-by-vendor negotiations and establish shared Trust Infrastructure standards—so AI can strengthen security without eroding the values security exists to defend.

Boston Global Forum Launches AIWS Trust Rating and Announces the AIWS Trust Index Under the AIWS Trust Infrastructure

A measurable, auditable scorecard system—and an annual index—to strengthen safety, transparency, and accountability in the AI Age

BOSTON, MA — Boston Global Forum (BGF) today announced the launch of the AIWS Trust Rating, the flagship public-facing assessment tool within the broader AIWS Trust Infrastructure, and introduced the AIWS Trust Index, a periodic benchmarking publication that will track trust, safety, and governance readiness across sectors and jurisdictions in the AI Age.

The AIWS Trust Rating is a measurable, auditable rating system designed to help governments, health systems, companies, and democratic partners evaluate and improve trust in AI—with clear standards for safety, transparency, fairness, privacy, and incident readiness. The AIWS Trust Index will aggregate results and benchmarks over time, enabling year-to-year comparison and highlighting best practices, leadership examples, and priority gaps.

As artificial intelligence becomes “infrastructure for all infrastructure”—embedded across public services, healthcare, finance, education, and information ecosystems—BGF emphasized that trust must be built through verifiable mechanisms, not promises. The AIWS Trust Infrastructure translates democratic values into practical standards that can be measured, compared, and improved.

“AI will shape social trust, human security, and prosperity for decades to come,” said Nguyen Anh Tuan, Co-Founder, Co-Chair, and CEO of Boston Global Forum. “Democracies need common standards that are measurable, comparable, and actionable. AIWS Trust Rating provides a practical foundation for trustworthy AI, and the AIWS Trust Index will help the world track progress, compare performance, and learn from what works.”

A Scorecard Designed for Real-World Use

The AIWS Trust Rating is designed for decision-makers, procurement teams, regulators, and deployment leaders. It supports both technical verification and governance readiness, producing scores that can guide safe adoption and continuous improvement.

The rating framework evaluates AI across three layers:

  1. Core Trust & Safety (mandatory):
    Safety & accountability, model transparency, bias and fairness auditing, privacy and data protection, and incident response readiness.
  2. Deployment Readiness:
    Governance and oversight, evaluation and monitoring, workforce training, and procurement/vendor controls.
  3. Outcomes & Social Impact:
    Real-world performance, equitable access, and impact on public trust and information integrity.

Ratings can be reported on a 0–100 scale and translated into A–F grades, with “red flag” indicators for unacceptable risks and a roadmap of recommended actions to improve performance.
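The score-to-grade mapping described above can be sketched in code. This is an illustrative sketch only, not BGF’s published methodology: the letter-grade thresholds and the rule that any red flag caps the grade at F are assumptions introduced here for clarity.

```python
# Illustrative sketch of a 0-100 score to A-F grade mapping with a
# "red flag" override. Thresholds and the override rule are assumptions,
# not BGF's published AIWS Trust Rating methodology.

def trust_grade(score: float, red_flags: int = 0) -> str:
    """Map a 0-100 trust score to an A-F letter grade.

    Any unresolved red flag (an unacceptable risk) caps the grade
    at F regardless of the numeric score.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if red_flags > 0:
        return "F"
    for threshold, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= threshold:
            return grade
    return "F"
```

Under these assumed thresholds, a score of 85 with no red flags would report as a B, while a score of 95 with one open red flag would still report as an F.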

The AIWS Trust Index: Tracking Progress and Leadership

The AIWS Trust Index will provide periodic benchmarking of trust readiness in the AI Age, including:

  • comparative snapshots across sectors (e.g., healthcare, public services) and participating institutions,
  • trend analysis over time to measure improvement and risk reduction,
  • identification of best practices and replicable governance models, and
  • public reporting that strengthens transparency and accountability.

BGF indicated that the index will be released alongside AIWS Trust Reports, with clear methodology disclosures and expert review.

Priority Applications: Government and Healthcare

BGF stated that early implementations will focus on domains where stakes are highest, including:

  • Public services and government systems, to strengthen accountable, evidence-based administration in the AI Age; and
  • Healthcare, where AI can save lives but must meet rigorous standards of evidence, safety, and patient trust.

The AIWS Trust Infrastructure also supports “trust dashboards” that enable ongoing monitoring—helping institutions detect drift, track incidents, and validate outcomes after deployment.

Supporting Democratic Coordination and Trusted Markets

BGF highlighted that the AIWS Trust Infrastructure is designed to support coordination across democratic allies by creating a shared language for evaluation, governance, and procurement. By aligning standards and verification methods, allies can reduce systemic risks, improve resilience against AI-driven manipulation and misinformation, and build trusted AI markets.

Next Steps

In the coming months, BGF will:

  • publish a baseline AIWS Trust Rating – Core package, including evaluation criteria and reporting templates;
  • convene a multi-stakeholder review process with experts from policy, academia, civil society, industry, and healthcare;
  • launch pilots in priority domains—especially government services and healthcare; and
  • publish the inaugural AIWS Trust Index and periodic AIWS Trust Reports to track benchmarks, progress, and best practices.

About Boston Global Forum and AIWS

Boston Global Forum (BGF) is a global think tank and convening platform advancing responsible innovation, democratic governance, and international cooperation.
AI World Society (AIWS) is BGF’s initiative to develop practical architectures for human-centered, trustworthy AI, advancing peace, security, and human dignity in the AI Age. The AIWS Trust Infrastructure provides the umbrella framework of standards, dashboards, audits, and governance mechanisms within which the AIWS Trust Rating operates and the AIWS Trust Index reports progress.

Media Contact:
Boston Global Forum – Communications Office
Email: [email protected]
Phone: +1 617 286 6589

Amandeep Singh Gill at India AI Impact Summit 2026: AI as a Leap Forward, Not a Divide

Addressing the summit in New Delhi, Amandeep Singh Gill, UN Under-Secretary-General and 2022 World Leader in AIWS Award Recipient, articulated a vision where AI serves as a “global equalizer” rather than a tool for further fragmentation. He warned that without deliberate intervention, the AI era could trigger a “second great divergence” between the Global North and South.

1. Advancing Inclusion: The “K-Shaped” Warning

Gill warned against a “K-shaped” AI economy, where inequality is baked into the technology’s architecture.

  • Beyond Isolation: He urged developing nations not to view AI in isolation but as the next essential layer of Digital Public Infrastructure (DPI).
  • The Inclusion Mandate: People must move from “market bystanders” to active participants. He stated: “Young people should not be passengers in this story; they must actively bend the ‘K’ toward equality.”

2. “Idiot-Savants” and the Need for Guardrails

Gill offered a nuanced view of current AI capabilities, describing them as “Idiot-Savants.”

  • The Savant: Formidable at spotting patterns and mimicking human reasoning at scale.
  • The Idiot: Brittle and lacking real understanding (e.g., AI confusing blurred backgrounds for wildlife).
  • Policy Implications: Because AI is a “general-purpose technology on steroids,” Gill argued that it cannot be left to market forces alone. He called for mandatory testing and human oversight for high-stakes decisions.

3. Real-World Impact: “Low-Hanging Fruit”

For countries like India, Gill identified voice-based AI systems in local languages as the most immediate path to inclusion.

  • Breaking Barriers: Voice-enabled models can bypass literacy and linguistic hurdles, delivering agricultural, health, and educational services directly to the “bottom of the pyramid.”
  • Sustainability: He advocated for Small Language Models (SLMs)—energy-efficient models that can run on low-bandwidth networks and inexpensive devices.

4. Sovereignty and the Data Gap

Gill highlighted a stark global asymmetry in infrastructure:

  • The Compute Gap: He noted that as of last year, the entire continent of Africa had fewer than 1,000 GPUs.
  • Local Data as Sovereignty: True AI sovereignty, he argued, begins with local-language datasets. Protecting these datasets is essential to ensure that AI models reflect national cultures and values rather than external biases.

5. Toward July 2026: Global AI Governance

Gill announced that the UN is preparing for a Global AI Governance Dialogue in Geneva (July 2026).

  • Science-First: The UN has established an independent scientific panel of 40 experts to provide evidence-based assessments to counter both “AI hype” and “AI fear.”
  • A New Architecture: He proposed a $3 billion global AI fund to support nearly 90 under-resourced countries in building talent, compute capacity, and policy frameworks.