By Michael Dukakis, Nguyen Anh Tuan, Alex Pentland
Artificial intelligence (AI) and automated systems are increasingly affecting our daily lives. Banking algorithms decide who is eligible for housing or financial loans; healthcare algorithms make decisions on coverage and standard of care; companies use hiring algorithms to sort résumés from potential employees. While these innovations make life more convenient, they pose risks to the public and are often rife with bias and discrimination.
Further, there has been substantial investment in the development and adoption of AI, but nowhere near as much money or energy has been put toward safeguards, protections, regulations, or even a standard code of ethics.
Earlier this month, the White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights,” which aims to ensure the use of AI is fair, equitable and nondiscriminatory. As members of the Boston Global Forum, we applaud President Biden and the OSTP for advancing this important measure, which protects people from threats, defines guardrails on technology to reinforce civil rights, civil liberties and privacy, and promotes equal opportunity by ensuring access to critical resources and services.
The Blueprint outlines five common-sense protections with respect to AI to which all citizens should be entitled:
- AI should be safe and effective;
- It shouldn’t discriminate;
- It shouldn’t violate data privacy;
- We should know when AI is being used;
- We should be able to opt out and talk to a human when we encounter a problem.
It’s not binding legislation, but rather a set of recommendations for government agencies and technology companies using AI. It’s also a great tool to educate the public as well as organizations responsible for protecting and advancing our civil rights and civil liberties.
This is a necessary first step for our country, but the effort must be a global one.
On the world stage, bad actors in other nations are increasingly using AI to spread disinformation and propaganda through deepfakes and other manipulated media – all of which are in direct conflict with the values of democracy and freedom.
Last year, the Boston Global Forum and World Leadership Alliance – Club de Madrid brought prominent international leaders together to explore ideas and strategies for a Global Law and Accord on Artificial Intelligence and Digital Rights.
The group established the Global Alliance for Digital Governance (GADG) to coordinate resources among governments, international organizations, corporations, think tanks, civil society and influencers in pursuit of AI and a digital sphere for good, synthesizing those resources to maximize their impact. GADG is not an organization but a network for sharing resources and fostering cooperation among governments. At the core of this imperative is establishing a common understanding of policy and practice, anchored in general principles to help maximize the “good” and minimize the “bad” associated with AI:
- Fairness and justice for all: The first principle is already agreed upon in the international community as a powerful aspiration. It is the expectation that all entities – private and public – treat others, and are themselves treated, with fairness and justice.
- Responsibility and accountability for policy and decision making —private and public: The second principle recognizes the power of the new global ecology that will increasingly span all entities worldwide—private and public, developing and developed.
- Precautionary principle for innovations and applications: The third principle is well established internationally. It does not impede innovation but supports it. It does not push for regulation but supports initiatives to explore the unknown with care and caution.
- Ethics-in-AI: Fourth is the principle of ethical integrity – for the present and the future. Different cultures and countries may have different ethical systems, but everyone, everywhere recognizes and adopts some basic ethical precepts. At issue is incorporating the commonalities into a global ethical system for all phases, innovations, and manifestations of artificial intelligence.
At home and abroad, we must move toward a framework, an ecosystem, and a social contract for the AI age. Without adequate guidelines and useful directives, the undisciplined use of AI poses risks to the wellbeing of individuals and creates fertile ground for economic, political, social, and criminal exploitation. As we build consensus on principles and practices among members of the global society, we will generate and enhance social benefits and wellbeing for all, shared by all.