According to AI News, leading figures in AI innovation are warning of a potential AI catastrophe that could lead to a lockdown on research.
The autonomous robotics industry has been developing at a remarkable speed in recent years, and at the same time has caused considerable harm across multiple incidents. Autonomous vehicles account for a large share of these, including the fatality involving an Uber self-driving vehicle. As autonomous AI systems proliferate, researchers will bear increasing responsibility for the safety of the people who use them.
“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US,” said Andrew Moore, the new head of AI at Google Cloud, speaking at the Artificial Intelligence and Global Security Initiative.
There is broad agreement that AI should not be used in military weapons; however, that outcome seems inevitable, since “there will always be players willing to step in.” “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” said Russian President Vladimir Putin.
Concerned about both accidental and adversarial uses of AI, Governor Michael Dukakis, Chairman of the Michael Dukakis Institute (MDI), believes a global accord is needed to ensure the rapidly growing technology is used responsibly by governments around the world. For that reason, he co-founded the Artificial Intelligence World Society (AIWS), a project that aims to bring scientists, academics, government officials, and industry leaders together to keep AI a benign force serving humanity’s best interests. MDI is currently developing the concept of AI-Government and the AIWS Index in Ethics as two components of AIWS.