Building the Framework for an AI International Accord: The EU is considering a ban on AI for mass surveillance and social credit scores

Apr 18, 2021 | News

The Boston Global Forum is building the framework for an AI International Accord. On April 28, it will convene the AI International Accord Panel with the attendance of Ambassador Stavros Lambrinidis; Gabriela Ramos, Assistant Director-General for the Social and Human Sciences of UNESCO; and Governor Michael Dukakis, Co-founder and Chairman of the Boston Global Forum.

The EU is considering a partial AI ban that will be formally announced on April 21.

The proposed ban is the first of its kind.

Europe’s legislative body will likely focus on “high risk” AI systems and could fine companies up to €20m or 4% of revenue if they don’t comply.

“High risk” refers to applications like mass surveillance and social credit scoring that can impact safety and privacy. AI for areas like manufacturing and energy would likely be good to go.

The regulations include:

  • A surveillance ban on AI systems that track people indiscriminately
  • A ban on social credit scores that track individual behaviors, impact hiring and judiciary decisions, and rate trustworthiness
  • Bias prevention measures like human oversight in testing datasets
  • Notifications sent to people when they are interacting with an AI system

The catch: Experts say the rules are vague and leave room for loopholes.

The rules also come as AI investment in the US is hotter than ever:

  • The National Security Commission on Artificial Intelligence recently called for $32B in annual nonmilitary federal spending on AI research
  • Already in 2021, 442 US VC deals with AI startups were worth a combined $11.65B

US companies would likely be subject to the EU’s new rules. Though the draft could change come April 21, we have a feeling few companies will be big fans.