Professor Christo Wilson comments on the Belfer Center report on Attacking Artificial Intelligence

Sep 8, 2019 | News

In August 2019, the Belfer Center for Science and International Affairs at Harvard Kennedy School published the paper “Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It” by Marcus Comiter.

Christo Wilson, Associate Professor at the Khoury College of Computer Sciences, Northeastern University, a member of the Social Contract 2020 team, and a Michael Dukakis Leadership Fellow, comments on the report:

“This is a pretty decent report. My only quibble is their term ‘AI attack’: in the cybersecurity community, these attacks are known as ‘adversarial learning’. If you look up the former term you won’t find anything, but the latter term will turn up all the relevant academic literature. The resident expert on adversarial learning at Northeastern is Alina Oprea, formerly of RSA Labs. I’m happy to make an introduction if it’s useful.

The problems outlined in the report are very real. Machine learning, wonderful as it can be, is also surprisingly brittle and downright dumb. One of my favorite examples is the case where a researcher fabricated an odd-looking pair of sunglasses: when worn, they caused a facial recognition system to erroneously identify the wearer as a celebrity. Adversarial learning is one of the hottest topics in cybersecurity at the moment, and as much work is being done to identify new attack vectors as to harden machine learning systems against them. It’s still early days though, and there’s a lot of work to be done on both sides.”
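The sunglasses case Wilson mentions is an instance of an adversarial example: a small, deliberately crafted change to an input that flips a model’s prediction even though a person barely notices the difference. The sketch below is not the technique from that study; it is a minimal illustration of the general idea using the Fast Gradient Sign Method in PyTorch, where model, image, label, and epsilon are hypothetical placeholders for a trained classifier and one labeled input.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel a tiny step in the
    direction that most increases the classifier's loss, yielding an
    image that looks unchanged to a human but may be misclassified."""
    image = image.clone().detach().requires_grad_(True)  # track gradients w.r.t. pixels
    loss = F.cross_entropy(model(image), label)          # loss for the true label
    loss.backward()                                      # gradient of the loss w.r.t. the image
    adversarial = image + epsilon * image.grad.sign()    # step each pixel by +/- epsilon
    return adversarial.clamp(0.0, 1.0).detach()          # keep pixel values in range

# Hypothetical usage: `classifier` is any trained image classifier,
# `x` is a (1, 3, H, W) tensor in [0, 1], and `y` is its true class index.
# x_adv = fgsm_perturb(classifier, x, torch.tensor([7]), epsilon=0.03)
```

Hardening models against perturbations like this, for example by mixing such crafted inputs back into training, is the defensive side of the same research area Wilson describes.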