Musk and Pinker Disagree on Threat Posed by AI

Mar 3, 2018 | News

Last week, a brief feud erupted between two of the brightest thinkers on the topic of human progress: Harvard psychologist Steven Pinker and tech CEO Elon Musk. It began with a recent Wired interview with Pinker about his new book Enlightenment Now, in which he said that “if Elon Musk was really serious about the AI threat he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.” Pinker was responding to an earlier claim by Musk that AI is one of the greatest existential threats currently facing humanity.

In a tweet, Musk responded: “Wow, if even Pinker doesn’t understand the difference between functional/narrow AI (eg. car) and general AI, when the latter *literally* has a million times more compute power and an open-ended utility function, humanity is in deep trouble.” Musk’s car company, Tesla, is actively developing self-driving technology, and he recently stepped down from the board of OpenAI, citing that very conflict of interest.

This is just one of the more public examples of a serious, ongoing debate throughout the tech sector: is the technology we’re developing an existential threat? And what defines AI in the first place? These questions were the subject of a recent talk by Max Tegmark, author of Life 3.0, who has devoted much of his own work to answering them.

Although the brief spat between Pinker and Musk seems to have simmered down, the question at its core remains open. AIWS was founded to address questions such as these. Recognizing the potential dangers of AI and related technologies, we bring together leaders and researchers to discuss how to ensure they benefit society.