Harvard Professor David Silbersweig, a Boston Global Forum Board Member, recommends discussing the letter from AI experts warning: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks."
Leaders from OpenAI, Google DeepMind, Anthropic, and other AI labs warn that future systems could be as deadly as pandemics and nuclear weapons.
Coverage of their call appeared in the New York Times, the Boston Globe, and NPR.
Before this call, on May 29, 2023, the Boston Global Forum released its Statement on Actions to Build an AI Legal Framework and an AI International Accord for Global Security.
BGF invites distinguished leaders, thinkers, innovators, and civil society organizations to join these actions.
Let us unite our efforts in building a future where data and AI technologies are leveraged for the benefit of all, adhering to the fundamental principles of transparency, responsibility, and innovation.
Please send your ideas and comments to [email protected]