On February 8, 2023, the Boston Global Forum and the Global Alliance for Digital Governance announced concept notes for a Regulation Framework for ChatGPT and AI Assistants:
- Transparency: AI Assistants and ChatGPT should be transparent in their operations and decision-making processes, so that individuals can understand and review the data and algorithms used.
- Accountability: There should be clear lines of responsibility and accountability for the actions and decisions made by AI Assistants and ChatGPT, and mechanisms for individuals to seek redress if they are affected by adverse outcomes.
- Privacy: AI Assistants and ChatGPT must protect the privacy of individuals and their data, and not misuse or disclose personal information without consent. This is elaborated further in the Social Contract for the AI Age.
- Bias and Discrimination: AI Assistants and ChatGPT should be designed and operated in a way that minimizes the potential for bias and discrimination, and actively works to eliminate them.
- Security: AI Assistants and ChatGPT must be designed with robust security measures to protect against hacking, other forms of cybercrime, and exploitable loopholes.
- Ethical Use: AI Assistants and ChatGPT should operate in accordance with ethical principles, as introduced in the 7-layer model of the AI World Society (AIWS).
- Continuous Improvement: Regulations and laws for AI Assistants and ChatGPT should be flexible and adaptable, allowing for ongoing refinement and improvement in response to new developments and emerging challenges.
- International Cooperation: Given the global nature of AI technology, there should be international cooperation and coordination on regulations and laws for AI Assistants and ChatGPT, to ensure consistent standards and practices across borders, in line with the Social Contract for the AI Age.
- Public Engagement: Regulations and laws for AI Assistants and ChatGPT should involve active public engagement and consultation, to ensure that the perspectives and needs of individuals and society are taken into account.
- Independent Oversight: There should be independent oversight bodies, such as regulatory agencies or watchdog groups, to monitor the deployment and use of AI Assistants and ChatGPT and to ensure compliance with regulations, laws, and principles. The Global Alliance for Digital Governance (GADG) is recommended as a watchdog institution.