At a time of profound threats to the biosphere, democratic institutions, and global stability, scientifically informed perspectives and policies are needed to counter conspiratorial narratives and outright distrust of science and civil discourse. While the Industrial Machine Age and free-market capitalism engendered an era of extraordinary material wealth for the “industrialized world”, they have also imperiled all life on the planet. Furthermore, the failure of “free markets” and industrialized democracies to govern effectively and equitably has fostered a growing nihilism and fatalistic acquiescence to authoritarianism. The nightmare of an all-knowing, all-seeing machine intelligence, exemplified by recent advances in Artificial Intelligence such as ChatGPT, seems to further confirm such an inevitability.
But this need not be the case. Our group, the Coalition for Collective Intelligence Commons (CCIC), was recently convened at the MIT Media Lab on the premise that we are transitioning from a 17th-century “abiotic”, or mechanistic, view of Nature and ourselves to one based upon biotic principles, that is, a physics of living things. From the convergence of computational biology, the complexity sciences, neuroscience, genomics, and physics have emerged science- and evidence-based computational methods for reimagining our democratic and economic institutions in concert with Nature and human dignity. This approach has recently cohered into a framework for a scale-free and domain-free AI based on the principles of living things. Known as Active Inference, Bayesian belief modeling, and free energy minimization, this work builds upon a long history of cybernetics, information theory, computation, and the complexity sciences. It is best known recently through the work of Judea Pearl and the computational neuroscientist Karl Friston.
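To make the free energy idea concrete, here is a minimal toy sketch (not any CCIC model; all numbers are illustrative assumptions): an agent with a prior belief over two hidden states observes an outcome and updates its belief by minimizing variational free energy. In this simple discrete case, the belief that minimizes free energy is exactly the Bayesian posterior, and the minimum value of free energy equals the agent’s “surprise” (negative log evidence).

```python
import math

# Illustrative two-state world: prior belief and observation likelihoods
# are arbitrary assumptions chosen for this sketch.
prior = [0.5, 0.5]            # p(s): prior belief over hidden states
likelihood = [0.9, 0.2]       # p(o=1 | s) for each state
obs = 1                       # the agent observes o = 1

def free_energy(q):
    """Variational free energy F = E_q[log q(s) - log p(o, s)]."""
    f = 0.0
    for s in range(2):
        p_o_given_s = likelihood[s] if obs == 1 else 1 - likelihood[s]
        joint = prior[s] * p_o_given_s          # p(o, s)
        if q[s] > 0:
            f += q[s] * (math.log(q[s]) - math.log(joint))
    return f

# Crude minimization: scan candidate beliefs q = (x, 1 - x).
best_q, best_f = None, float("inf")
for i in range(1, 100):
    q = [i / 100, 1 - i / 100]
    f = free_energy(q)
    if f < best_f:
        best_q, best_f = q, f

# Exact Bayesian posterior, for comparison.
evidence = sum(prior[s] * (likelihood[s] if obs == 1 else 1 - likelihood[s])
               for s in range(2))
posterior = [prior[s] * likelihood[s] / evidence for s in range(2)]

print(best_q, posterior)            # minimizing F recovers the posterior
print(best_f, -math.log(evidence))  # at the minimum, F equals surprise
```

The brute-force scan stands in for the gradient-based schemes used in real Active Inference models; the point is only that belief updating can be cast as minimizing a single quantity, free energy.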
The purpose of the CCIC is to form an entity for researchers, scientists, ecological and democratic activists, and new “biotic” companies to ensure that such technologies are not captured by a powerful few but instead serve the global commons, the Greater Good, through science- and evidence-based policies, firms, and institutions that strengthen democratic and open societies.
John Clippinger and the CCIC at the MIT Media Lab, March 6-7, 2023
The upcoming Shinzo Abe Initiative Conference on April 5, 2023, in Tokyo features a lineup of esteemed speakers from various fields. The conference, titled “Make the Economy of Japan Great in the Age of Global Enlightenment,” aims to address the challenges faced by the Japanese economy in the current global landscape and provide recommendations for sustainable growth and prosperity, with the aid of AI.
The speakers at the conference are Cameron Kerry, Former US Acting Secretary of Commerce, Governor Michael Dukakis, Chair of the Boston Global Forum (BGF), Adam Posen, President of the Peterson Institute for International Economics, Alex Pentland, MIT professor, Masazumi Wakatabe, Deputy Governor of the Bank of Japan, Nguyen Anh Tuan, CEO of BGF, Yasuhide Nakayama, former State Minister, Ministry of Defense and Foreign Affairs, and Koichi Hamada, Yale professor and Special Economic Adviser to Prime Minister Shinzo Abe. Ambassador Etsuro Honda, an architect of Abenomics, will also be in attendance.
The discussions at the conference will be moderated by Ambassador Ichiro Fujisaki, a prominent Japanese diplomat who served as the Ambassador of Japan to the United States from 2008 to 2012.
The conference is an event that brings together leaders, policymakers, and experts to discuss Japan’s economic and political future and explore opportunities for growth.
The Shinzo Abe Initiative was founded on July 10, 2022, by Global Enlightenment Leaders of the Boston Global Forum in honor of Prime Minister Shinzo Abe, following his assassination. Abe spearheaded several economic reforms during his tenure.
Masazumi Wakatabe, Deputy Governor of the Bank of Japan
Cameron Kerry at the 10th Anniversary Conference of Boston Global Forum, November 22, 2022
Vietnamese writers Michelle Nguyen and Nguyen Dinh Thang, as well as renowned singer Bui Thi Thuy, have teamed up with AIWS-The Age of Global Enlightenment to promote education and raise awareness of the initiative’s 7-layer model for building a society deeply integrated with AI.
The initiative seeks to promote a more ethical, inclusive, and sustainable world by encouraging the development of AI technologies aligned with ethical principles that can help to solve some of the world’s most pressing problems. Its 7-layer model includes the Social Contract for the AI Age and the Global Enlightenment Economy, as well as Global Enlightenment Education, which promotes the idea that “every citizen is an innovator” and aims to bridge the AI divide across all areas of society.
Michelle Nguyen, an accomplished writer, is thrilled to work with AIWS to promote global awareness and education. She firmly believes in the potential of technology and AI to create positive change in the world and is eager to explore new ways to use these technologies to promote ethical and sustainable development. Similarly, Nguyen Dinh Thang, a prominent business leader with expertise in IT and banking industries and a passion for promoting access to quality education for all, is enthusiastic about the project.
Together, Michelle Nguyen and Nguyen Dinh Thang have just published the book “AI in the Age of Global Enlightenment,” inspired by the book “Remaking the World – Toward an Age of Global Enlightenment.” The book explores the potential of AI to promote ethical and sustainable development, in line with the principles of AIWS.
Renowned singer Bui Thi Thuy is also excited to be part of this initiative. She believes that music can be a powerful tool for promoting the core value of “every citizen is an innovator and creator” and is looking forward to using her platform to raise awareness of the pioneering concepts of AIWS and to inspire people to think critically about the future of technology and AI.
Together, these talented artists and writers are making a significant contribution to the global conversation about the future of technology and AI. Through their work with AIWS-The Age of Global Enlightenment, they are helping to remake the world towards an Age of Global Enlightenment. They hope that their efforts will inspire others to think critically about the role of technology and AI in shaping the future of our world and work towards a more ethical and inclusive society.
Distinguished women scholars came together on February 28, 2023, at the BGF High-level Dialogue to discuss a framework of regulations for AI assistants and ChatGPT. The panel included Nazli Choucri from MIT; Martha Minow, former dean of Harvard Law School; Ruth L. Okediji, Director of the Berkman Center at Harvard Law School; Margaret Hagan from Stanford Law School; Shyamal Sharma, Visiting Research Scholar at Brandeis University and Founder of Relational Society; and Caroline Irma Maria Nevejan, Chief Science Officer with the City of Amsterdam.
The scholars explored the ethical and legal implications of AI assistants and chatbots, as well as the challenges associated with their development and deployment. They stressed the importance of a regulatory framework that ensures responsible use and development of these technologies.
The panelists concluded that a comprehensive regulatory framework is necessary to balance the potential benefits and risks of AI assistants and chatbots. They emphasized the importance of involving diverse stakeholders in the development of this framework, including technology developers, policymakers, civil society organizations, and academia.
The event was moderated by Governor Michael Dukakis, Chair of BGF, and Nguyen Anh Tuan, CEO of BGF. Keynote speakers included Vint Cerf, known as the father of the internet, and MIT professor Alex Pentland, widely recognized as one of the world’s most influential data scientists.
Distinguished women scholars discussing at the BGF High-level Dialogue on Regulation for AI Assistants and ChatGPT on February 28, 2023
The Dialogue included distinguished leaders and scholars: the Honorable Governor Michael Dukakis of Massachusetts, Co-founder and Chairman of the Boston Global Forum (BGF); MIT Professors Nazli Choucri and Alex “Sandy” Pentland; Harvard Professors Thomas Patterson, Dr. David Silbersweig, Martha Minow, and Ruth L. Okediji; Executive Director of the Legal Design Lab and lecturer at Stanford Law School Margaret Hagan; Caroline Irma Maria Nevejan, Chief Science Officer with the City of Amsterdam; Vint Cerf, known as “Father of the Internet”; and Zlatko Lagumdžija, former Prime Minister of Bosnia and Herzegovina. Moderators: Governor Michael Dukakis and Nguyen Anh Tuan.
The emergence of OpenAI’s ChatGPT and similar AI-enabled applications (termed AI Assistants in the Social Contract for the AI Age) poses both potential benefits and risks for humanity and a sustainable democratic global order. AI is the new frontier in international relations, one that calls for a new post-nuclear global order. While AI in itself is not new, we are at the dawn of a new AI era in which much is unknown about the many shapes and directions that Natural Language Processing and General Purpose Technology, two mainstays of AI and AI-enabled applications, may take in the near future. For this reason, and especially given the relative lack of public knowledge about and transparency of rapid developments in the field, it is increasingly critical for global communities such as ours to “think on our feet” about how AI can best be optimized to benefit the human condition and to prevent or mitigate potential harm, whether intentional or not, through regulation toward the common good.

AI platforms and AI-enabled media draw on input data at a scale that far surpasses the intelligence and agility of their human creators (i.e., the most sophisticated and technologically savvy individuals who orchestrated the design of AI), with enormous scope for unfathomable societal impact in real time. The situation requires that like-minded nation states and multidisciplinary scientific communities, as well as technology and other industry leaders in the private sector, collaborate to develop and implement a robust Shared Framework for AI Governance, as well as a Pact for Strategic Deterrence of Misuse of AI by rogue states and other bad actors (see here for a recent article on AI and the future of geopolitics in Foreign Affairs by industry leader and former Google CEO Eric Schmidt). The discussion centered on AI governance and alternative approaches to regulating the field.
Major Approaches Discussed
Develop and implement a cascading menu of regulatory options, analogous to the human-AI interface in smart cars, ranging from fully self-driving mode to minimally AI-assisted human driving and anywhere in between
Audit trails and transparent fixes to unintended behavior or misuse of AI
Attend to corporate responsibility in the regulatory framework: AI entrepreneurs are essential players who respond to incentives and are at risk of the perversions common to poorly regulated markets, but who can also be trusted partners when engaged through shared objectives, shared values, and reasonable regulatory standards that promote growth and innovation
Intelligent safeguards in AI design, as in circuit breakers for electricity, for preventing and interrupting rogue (adverse) events and misuse
Invest in the conscious cultivation of human solidarity, empathy, and compassion, the fundamental human values that are the very essence of a social contract
Challenge our assumptions and actions about what, why, and how to regulate
Address the previous point from a systems perspective, i.e., consider AI as one of many intersecting and interrelated components of global society, and apply systems thinking, starting from the “what” and “how,” to build a dynamic, comprehensive regulatory framework
Test input assumptions and data when designing algorithms for preventing bias and other errors in the design of AI
Adopt “Do not implement until all is known about the option” as a standard practice, where “all” means certain crucial aspects such as data privacy and copyright (an example from the Netherlands)
Engage domain experts and/or interest groups organized as a participatory community in the design phase of an AI application, e.g., physicians and the American Medical Association co-creating a clinically/health-related AI product with team/s of AI technology design experts
Keep the Four Pillars (US, Japan, European Alliance, India) of Liberal Democracy when developing regulation
Align with the Global Alliance for Digital Governance
Consider Businesses, Nations, Geopolitical Regions, etc. as distinct stakeholder groups when developing regulations, as well as the “What” “How” “Why” “When” aspects of a framework
Consider the GDPR, the IEEE standards, the Social Contract for the AI Age etc. as existing models, with an awareness that context matters and will require adoption of best practices with modifications necessary to suit a different local context
Overall, a code of conduct, a playbook that governs responsible use of AI for the common good, is imperative, here and now!
Nguyen Anh Tuan, Co-moderator of the BGF High-level Dialogue on ChatGPT and AI Assistants