How Biden’s new executive order tackles AI risks, and where it falls short

Nov 5, 2023 | Global Alliance for Digital Governance

The comprehensive, even sweeping, set of guidelines for artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, shows that the U.S. government is attempting to address the risks posed by AI.

The order is only a step, however, and it leaves unresolved the issue of comprehensive data privacy legislation. Without such laws, people are at greater risk of AI systems revealing sensitive or confidential information.

The executive order directs the Department of Commerce to develop guidance for labeling AI-generated content. Federal agencies will be required to use AI watermarking – technology that marks content as AI-generated to reduce fraud and misinformation – though it’s not required for the private sector.

The executive order also recognizes that AI systems can pose unacceptable risks of harm to civil and human rights and the well-being of individuals: “Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms.”

What the executive order doesn’t do

A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order’s directives in light of existing consumer privacy and data rights statutes.

Without strong data privacy laws in the U.S. like those other countries have adopted, the executive order may do little to push AI companies to strengthen data privacy. In general, it's difficult to measure the impact that decision-making AI systems have on data privacy and freedoms.

It’s also worth noting that algorithmic transparency is not a panacea. For example, the European Union’s General Data Protection Regulation mandates “meaningful information about the logic involved” in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book: it assumes that if people understand how algorithmic decision-making works, they can understand how the system affects them. But knowing how an AI system works doesn’t necessarily tell you why it made a particular decision.

With algorithmic decision-making becoming pervasive, the White House executive order and the international summit on AI safety highlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.


https://www.pbs.org/newshour/politics/analysis-how-bidens-new-executive-order-tackles-ai-risks-and-where-it-falls-short

https://theconversation.com/biden-administration-executive-order-tackles-ai-risks-but-lack-of-privacy-laws-limits-reach-216694
