Is AI Our Final Invention?

May 31, 2020 | News

The future of artificial intelligence can look either exciting or worrisome, depending on whom you ask. Some believe that AI may one day overtake humanity, while others believe we are far from AI achieving anything close to human intelligence. One author recently wrote a book addressing the potentially apocalyptic view of AI. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, shared insights into his book on a recent AI Today podcast. Despite the book’s title, he is more on the fence than it would imply.

AI is becoming an increasing part of our daily lives. From intelligent assistants to facial recognition, AI technology is starting to permeate many of our personal interactions. Though we have not yet achieved the grand vision of a single intelligent system capable of learning any task, so-called Artificial General Intelligence (AGI), we are increasingly living in a world where the everyday person uses AI on a daily basis. With this sudden boost in AI usage, people are beginning to question just how safe the technology actually is. In many instances, AI and machine learning have the potential to provide significant benefit. However, we have reason to be concerned about AI systems that go wrong.

Barrat, however, sees a problem with managing AI. He points out that it is entirely possible for the “good guys” to accidentally become “bad guys,” and that AI may have unintended consequences when the technology is applied inappropriately. Barrat notes that even companies with great technology will use their market strength to suppress competing technologies and perspectives, and that the notion of “good” in a corporate or government setting can be complex. He also points out that what makes AI unique is that it learns from experience, and that experience might include human bias. Barrat details how the data you feed a system can alter its outcomes: if you feed a system photos of doctors who all look alike, the system might infer that all doctors are white men. This can lead to problems down the line and flawed system logic.
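To make the data-bias point concrete, here is a minimal sketch, not taken from Barrat’s book or the article, of a toy frequency-based model trained on a skewed sample. The data, attribute names, and model are all hypothetical illustrations.

```python
# A minimal sketch of how skewed training data leads to biased predictions.
# We "train" a toy frequency model on examples where every doctor happens
# to share one demographic attribute, then watch it reject a counter-example.

from collections import Counter

# Hypothetical training data: (demographic_attribute, occupation) pairs.
# Every "doctor" example in this skewed sample shares the same attribute.
training_data = [
    ("white_male", "doctor"),
    ("white_male", "doctor"),
    ("white_male", "doctor"),
    ("white_female", "nurse"),
    ("black_female", "nurse"),
]

# Estimate P(occupation | attribute) purely by counting the sample.
pair_counts = Counter(training_data)
attr_totals = Counter(attr for attr, _ in training_data)

def predict(attribute: str, occupation: str) -> float:
    """Estimated probability that a person with this attribute has this
    occupation, based only on the skewed training sample."""
    if attr_totals[attribute] == 0:
        return 0.0  # attribute never seen: the model has no evidence at all
    return pair_counts[(attribute, occupation)] / attr_totals[attribute]

print(predict("white_male", "doctor"))    # 1.0 -- spurious certainty
print(predict("black_female", "doctor"))  # 0.0 -- bias baked into the data
```

Because every “doctor” example in the sample shares one demographic attribute, the model assigns full confidence to that spurious correlation, which is exactly the failure mode Barrat describes.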

The original article can be found here.

Regarding AI ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure the extent to which a government’s AI activities respect human values and contribute to the constructive use of AI. The Index assesses government performance across four components: transparency, regulation, promotion, and implementation.
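For illustration, here is a minimal sketch of how such a four-component index could be represented in code. The component names come from the article, but the 0–10 scale, the equal weighting, and the example scores are assumptions; the article does not describe how AIWS combines the components.

```python
# A hypothetical representation of a four-component ethics index.
# Component names follow the article; everything else is illustrative.

from dataclasses import dataclass

@dataclass
class EthicsIndexScore:
    transparency: float    # assumed 0-10 scale
    regulation: float
    promotion: float
    implementation: float

    def composite(self) -> float:
        """Average the four components. Equal weighting is an assumption
        for illustration, not the published AIWS methodology."""
        parts = (self.transparency, self.regulation,
                 self.promotion, self.implementation)
        return sum(parts) / len(parts)

# Hypothetical scores for an unnamed government:
score = EthicsIndexScore(transparency=7.0, regulation=5.5,
                         promotion=6.0, implementation=4.5)
print(f"Composite index: {score.composite():.2f}")  # Composite index: 5.75
```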