Two years ago, arms control advocates had reason for hope when scores of countries met at the United Nations in Geneva to discuss the future of lethal autonomous weapons systems, or LAWS. The main goal was to limit or regulate military AI. At the time, Russia was strongly against these efforts, arguing that "it is hardly acceptable for the work on LAWS to restrict the freedom to enjoy the benefits of autonomous technologies being the future of humankind," and that "the difficulty of making a clear distinction between civilian and military developments of autonomous systems based on the same technologies is still an essential obstacle in the discussion on LAWS." (source: defenseone.com)
Fast forward to today, and Russia is seemingly changing its stand on this issue. At least, that is the signal sent by Russian Security Council Secretary Nikolai Patrushev, who said this week, "we believe that it is necessary to activate the powers of the global community, chiefly at the UN venue, as quickly as possible to develop a comprehensive regulatory framework that would prevent the use of the specified [new] technologies for undermining national and international security."
This reversal of course is striking given Russia's strengthened support for military applications of AI over the past two years. President Vladimir Putin has even said that AI is necessary in weapons production.
Russian officials now seem interested in standards for AI. What caused Russia to change its stand? Regardless of the answer, the AI World Society welcomes Russia's softened approach to this important matter. We hope that our Comprehensive Report and Guidelines on AI Ethics could be helpful.