Regulation of AI Should Reflect Current Experience

Mar 7, 2020 | News

The rapid proliferation of applications of artificial intelligence and machine learning—or AI, for short—coupled with the potential for significant societal impact has spurred calls around the world for new regulation.

The European Union and China are developing their own rules, and the Organisation for Economic Co-operation and Development (OECD) has developed principles that enjoy the support of its members plus a handful of other countries. In January, the U.S. Office of Management and Budget (OMB) also issued its own draft guidance, ensuring the United States a seat at the table during this ongoing, multi-year, international conversation.

The U.S. guidance—covering “weak” or narrow AI applications of the kind we experience today—reflects a light-touch approach to regulation, consistent with a desire to reward U.S. ingenuity. Critics say the White House is embracing “permissionless innovation,” under which products or services are developed and circulated without prior approval from regulators. Supporters argue that this dynamic, boundary-pushing approach is preferable to the restrictive precautionary principle.

The original article can be found here.

In the context of AI regulation, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society (AIWS) and the AIWS Innovation Network (AIWS-IN) to promote ethical norms and practices in the development and use of AI. AIWS will identify, publish, and promote principles for the virtuous application of AI, and AIWS-IN will develop applications consistent with these principles for use in healthcare, education, transportation, national security, and other areas.