The Dialogue included distinguished leaders and scholars: the Honorable Governor Michael Dukakis of Massachusetts, Co-founder and Chairman of the Boston Global Forum (BGF); MIT Professors Nazli Choucri and Alex “Sandy” Pentland; Harvard Professors Thomas Patterson, Dr. David Silbersweig, Martha Minow, and Ruth L. Okediji; Margaret Hagan, Executive Director of the Legal Design Lab and Lecturer at Stanford Law School; Caroline Irma Maria Nevejan, Chief Science Officer with the City of Amsterdam; Vint Cerf, known as the “Father of the Internet”; and Zlatko Lagumdžija, former Prime Minister of Bosnia and Herzegovina. Moderators: Governor Michael Dukakis and Nguyen Anh Tuan.
Emerging Consensus
The emergence of OpenAI’s ChatGPT and similar AI-enabled applications (referred to in the Social Contract for the AI Age as AI Assistants) poses both potential benefits and risks for humanity and a sustainable democratic global order. In general, AI is the new frontier in international relations, one that calls for a new post-nuclear global order. While AI itself is not new, we are at the dawn of a new AI era in which much remains unknown about the many shapes and directions that Natural Language Processing and General Purpose Technology, two mainstays of AI and AI-enabled applications, may take in the near future. For this reason, and especially given the relative lack of public knowledge about, and transparency of, rapid developments in the field, it is increasingly critical for global communities such as ours to “think on our feet” about how AI can best be optimized to benefit the human condition and to prevent or mitigate potential harm, whether intentional or not, through regulation toward the common good.

AI platforms and AI-enabled media draw on input data that far surpass the intelligence and agility of their human creators, i.e., the most sophisticated and technologically savvy individuals who orchestrated the design of AI, with an enormous scope for unfathomable societal impact in real time. This situation requires that like-minded nation states and multidisciplinary scientific communities, as well as technology and other industry leaders in the private sector, collaborate to develop and implement a robust Shared Framework for AI Governance as well as a Pact for Strategic Deterrence of Misuse of AI by rogue states and other bad actors (see a recent article on AI and the future of geopolitics in Foreign Affairs by industry leader and former Google CEO Eric Schmidt). The discussion centered on AI governance and alternative approaches to regulating the field.
Major Approaches Discussed
- Develop and implement a cascading menu of regulatory options, analogous to the human-AI interface in smart cars, ranging from a fully self-driving mode to a minimally AI-assisted human-driver mode and anywhere in between
- Audit trails and transparent fixes to unintended behavior or misuse of AI
- Attention to corporate responsibility in the regulatory framework, with AI entrepreneurs treated as essential players who respond to incentives and are susceptible to the common perversions of poorly regulated markets, but who can also be trusted partners when engaged through shared objectives, shared values, and reasonable regulatory standards that promote growth and innovation
- Intelligent safeguards in AI design, as in circuit breakers for electricity, for preventing and interrupting rogue (adverse) events and misuse
- Invest in the conscious cultivation of human solidarity, empathy, and compassion, the fundamental human values that are the very essence of a social contract
- Challenge our assumptions and actions as to the “What,” “Why,” and “How” of regulation
- The previous point could be addressed from a systems perspective, i.e., by considering AI as one of many intersecting and interrelated components of the human universe (global society) and applying systems thinking, starting from the “What,” the “How,” and so on, toward a dynamic, comprehensive regulatory framework
- Test input assumptions and data when designing algorithms for preventing bias and other errors in the design of AI
- Adopt “Do not implement until all is known about the option” as a standard practice, where “all” means certain crucial aspects such as data privacy, copyright issues, etc. (an example from The Netherlands)
- Engage domain experts and/or interest groups organized as a participatory community in the design phase of an AI application, e.g., physicians and the American Medical Association co-creating a clinical/health-related AI product with teams of AI technology design experts
- Keep the Four Pillars (US, Japan, European Alliance, India) of Liberal Democracy when developing regulation
- Align with the Global Alliance for Digital Governance
- Consider Businesses, Nations, Geopolitical Regions, etc. as distinct stakeholder groups when developing regulations, as well as the “What” “How” “Why” “When” aspects of a framework
- Consider the GDPR, the IEEE standards, the Social Contract for the AI Age, etc. as existing models, with an awareness that context matters and best practices will require modification to suit different local contexts
- Overall, a code of conduct, a playbook that governs responsible use of AI for the common good, is imperative, here and now!