Why the overestimation of Artificial Intelligence is dangerous

Jan 6, 2019 | News

“Artificial Intelligence has great potential to change our lives, but it seems that the term is being overused,” said Zachary Lipton, assistant professor at Carnegie Mellon University.

Billions of dollars are being invested in AI startups and in AI projects at large companies. The problem is that genuine opportunities are being overshadowed by those who overstate what the technology can do.

At the MIT Technology Review’s EmTech conference, Zachary Lipton warned that hype is blinding the public to the technology’s limitations. It is becoming increasingly difficult to distinguish real progress from exaggeration.

The AI technique known as deep learning is very powerful at image recognition and voice translation, and it now helps power everything from self-driving cars to translation applications on smartphones.

But the technology still has significant limitations. Many deep learning models work well only when trained on huge amounts of data, and they often struggle when real-world conditions change rapidly.

In his presentation, Lipton also highlighted the tendency of AI boosters to claim human-like capabilities for the technology. The risk is that hype will lead people to place unwarranted trust in algorithms for high-stakes tasks such as autonomous driving and clinical diagnosis.

“Policymakers don’t read the scientific literature,” warned Lipton, “but they do read the clickbait that goes around.” The media’s role here is complicated, because coverage often fails to distinguish genuine advances from PR tricks.

Lipton is not the only researcher ringing a warning bell. In a recent essay, “Artificial Intelligence – The Revolution Hasn’t Happened Yet,” Michael Jordan, a professor at the University of California, Berkeley, argues that AI is all too often bandied about as “an intellectual wildcard,” which makes it harder to think critically about the technology’s potential impact.

As AI is deployed for use by business, industry, and private citizens, it is essential that AI technologies remain benevolent and free from the risk of misuse, error, or loss of control, according to Layer 7 of the AIWS 7-Layer Model developed by the Michael Dukakis Institute. Through Layer 7, and the Model as a whole, AIWS hopes to ensure that inviting AI into our lives will have positive effects.