Minister Hayashi will be a keynote speaker at the 2nd BGF Shinzo Abe Initiative Conference in Tokyo on April 5, 2023, centered on the theme “Make the Economy of Japan Great in the Age of Global Enlightenment.” The conference will bring together experts and policymakers to discuss key issues related to Japan’s economic and political future, building on the legacy of former Prime Minister Shinzo Abe.
As the Minister of Foreign Affairs of Japan, Yoshimasa Hayashi will likely emphasize the importance of strong economic cooperation between Japan and the United States. Building on the legacy of former Prime Minister Shinzo Abe, Minister Hayashi may highlight the need for continued efforts to promote trade and investment between the two countries. In addition, given that Japan is set to host the G7 Summit in 2023, Minister Hayashi may also discuss Japan’s role in promoting global economic growth and stability.
Overall, Minister Hayashi’s speeches and presentations are expected to focus on Japan’s commitment to promoting economic cooperation and addressing key global challenges through active engagement with the United States and other G7 members, and his participation in the BGF Shinzo Abe Initiative Conference will provide a platform for him to share his vision for Japan’s future economic development.
The Second BGF High-Level Dialogue on Regulation Framework for ChatGPT, GPT4 and AI Assistants featured three distinguished speakers who shared their valuable insights on the legal and regulatory challenges of using AI assistants in various industries. The event was held on March 22nd, 2023, and was live streamed on YouTube.
Cameron Kerry, former Acting Secretary of the Department of Commerce, opened his remarks by noting how much has changed in the AI arena since the BGF in-person conference in December 2022. He cited Google’s announcement of its generative AI assistant Bard just the previous day, public remarks by Bill Gates on rapid developments in AI assistants, and research from the University of Pennsylvania on AI’s impact on jobs. He underscored the need for humility when approaching AI regulation, invoking the metaphor of “crossing the river and feeling the stones underfoot” for the pathway to regulating AI. He also shared a word about ongoing initiatives at the Brookings Institution on how AI could be used to improve research in various fields. He spoke about the need for regulatory oversight of AI assistants at both the design and policy-development stages to ensure that they are used ethically and responsibly, and emphasized the importance of transparency and accountability in the development and use of these technologies, so that any potential for discrimination in AI design, implicit or otherwise, is mitigated to prevent physical and emotional harm and to promote equity. He stressed the urgency of capacity building for AI research and policy development, noting that existing models could inform new policies. On the regulation of intellectual property for AI in the U.S., he called attention to integrating existing patent law into new IP policies for AI, invoking Abraham Lincoln’s observation that patent rights fueled human genius. He observed that normative decision-making must remain with humans so that human genius can thrive.
John Clippinger, Founder of Brattle Research Corporation, Research Scientist at the MIT Media Lab, and former Co-Director of the Harvard Law Lab, discussed the potential impact of AI assistants on privacy and data protection. He highlighted the need for “Privacy by Design” in AI, a concern he had first flagged back when Facebook launched, and for Collective Intelligence for the common good.
Tom Kehler, an AI pioneer and Chief Scientist of CrowdSmart, spoke about the challenges of regulating AI assistants in industries such as finance and healthcare. He opened his remarks with Judea Pearl’s observation on expert systems, “You are much smarter than your data,” and went on to emphasize the need to strike a balance between innovation and regulation so that AI assistants are developed and used in an ethical and responsible manner. He noted the need for balance in new ecosystems where humans and machines work together and where humans remain at the last mile, ahead of machines. He offered several examples of collective reasoning for problem solving, including a local initiative on reducing gun violence in Cincinnati and NATO, observing that such examples show a path forward for collective reasoning toward more cooperative dialogue and action, even though the contexts differ.
Cam Kerry speaks at the Second BGF High-level Dialogue
Tom Kehler and John Clippinger speak at the Second BGF High-level Dialogue
On March 25, Ambassador Stavros Lambrinidis, a Global Enlightenment Leader and Distinguished Contributor to the book “Remaking the World – Toward an Age of Global Enlightenment”, spoke at “Frontline Europe: a continent’s struggle for relevance, unity and value”, a European Conference at Harvard that brought together political leaders, world-class experts, and practitioners from across Europe to explore and discuss the challenges facing the continent today. From the geopolitical challenges posed by the war in Ukraine to today’s business opportunities and outlook in Europe, through the evolution of the European Union and the transatlantic relationship, the conference delved deep into the issues shaping the future of Europe.
Stavros Lambrinidis has served as the Ambassador of the European Union to the United States since March 1, 2019. Previously, he served as the European Union Special Representative for Human Rights, Foreign Affairs Minister of Greece, and Member of the European Parliament. He has received numerous recognitions for his work on human rights and privacy, including the Electronic Privacy Information Center’s “Champion of Freedom Award” in 2020 and the Boston Global Forum’s “World Leader in Artificial Intelligence World Society Award” in 2021. He is a member of the President’s Council on International Activities at Yale University and a former president of the DC Bar Association’s Human Rights Committee.
The Second BGF High Level Dialogue on Regulation Framework for ChatGPT, GPT4 and AI Assistants provided valuable insights into the complex issues surrounding the use of AI assistants. Speakers emphasized the importance of developing an ethical and responsible framework for the use of these technologies and highlighted the need for regulatory oversight to ensure that they are used in a way that benefits society as a whole.
Dr. David Silbersweig of Harvard Medical School, a renowned psychiatrist and neuroscientist, spoke about the potential impact of AI assistants on mental health. He emphasized the need to develop an ethical framework for the use of AI in psychiatry and mental health care to ensure that patients receive the best possible care. Dr. Silbersweig also discussed the potential risks associated with the use of AI in mental health care and called for caution in implementing these technologies.
Professor Ruth L. Okediji, an expert in intellectual property law and Co-Director of the Berkman Klein Center at Harvard Law School, spoke about the legal and regulatory implications of using AI assistants. She highlighted the need to strike a balance between innovation and regulation to ensure that AI assistants are developed and used in an ethical and responsible manner. Professor Okediji also discussed the importance of protecting intellectual property rights in the development and use of AI assistants.
Professor Ruth L. Okediji speaks at the Second BGF High-level Dialogue
Professor David Silbersweig speaks at the Second BGF High-level Dialogue
The following is the speech of Yasuhide Nakayama, Coordinator of the Shinzo Abe Initiative for Peace and Security, at the Second BGF High-level Dialogue on Regulation Framework for ChatGPT, GPT4, and AI Assistants, March 22, 2023.
In 1945, Japan and the United States marked the end of their war. Today, 78 years later, the two nations that once fought against each other have formed the world’s most admired alliance, the Japan-US alliance. And today, on the grand stage of the WBC finals, they are battling it out in a baseball game.

Currently, Russia is waging an aggressive war against Ukraine, and it is crucial to stop this invasion as soon as possible. As someone living in Japan, I predict that what is happening in Europe today could potentially occur in the Taiwan Strait tomorrow. Collaboration between the Chinese People’s Liberation Army and Russia seems to be escalating every day, with military spending and activities increasing compared to the past. Two years ago in the fall, the Russian military conducted military exercises as far west as Hawaii. As you may know, the South China Sea has become a military base for China. While AI stands for artificial intelligence, it also has the meaning of “artificial island” in China. If China’s People’s Liberation Army were to launch a JL-3 missile towards the United States from the South China Sea, it is important to understand that the White House is already within its range. Furthermore, preparations and technologies for battles in space, cyberspace, and electromagnetic waves are being aggressively developed.

It is easy to predict that countries will use AI for military purposes. No matter how many rules are established, humans are not gods and can lie. Until this issue is resolved, the battle between good and evil may never end. However, we must not give up on this. The responsibility for creating artificial intelligence lies with humanity. In order to prevent malicious AI from doing harm, we must nurture benevolent AI. There is no time to rest in this battle and competition.
When AI is implemented in society, laws and regulations will be necessary.
Here are some thoughts on the laws and regulations needed for the social implementation of AI:
Transparency and accountability
Even if AI is programmed by humans, it can make autonomous judgments based on algorithms and data. Therefore, it is necessary to make the decisions and reasoning of AI transparent. If AI makes a mistake, it is important to clarify who is responsible for it.
Protection of privacy
AI may collect and use personal information, so laws such as the Personal Information Protection Act and the Data Protection Act will be necessary. It is also important to clearly state the purpose of collecting personal information and obtain the individual’s consent.
Elimination of bias and discrimination
Like humans, AI can exhibit biases and discriminatory thinking. Therefore, rules are needed to prevent biased or discriminatory decision-making by AI.
Ensuring safety
If AI makes important decisions, it is important to ensure that the decisions are safe. For example, if an autonomous vehicle causes an accident, it is necessary to clarify who is responsible.
Consideration of social impact
Rules are needed to minimize the impact of AI on society. For example, if jobs are automated by AI, policies will be needed to support people affected by the automation.

In summary, laws and regulations are necessary for the social implementation of AI, including transparency and accountability, protection of privacy, elimination of bias and discrimination, ensuring safety, and consideration of social impact. These regulations will make the social implementation of AI safer and more sustainable.
Yesterday, Japan’s Prime Minister Fumio Kishida visited Kyiv, Ukraine, and held talks with President Zelensky. This year’s G7 Summit is also planned to be held in Hiroshima, Japan, Prime Minister Kishida’s home city. In this sense, I believe the Prime Minister’s commitment to nuclear disarmament is among the strongest of any politician in the world.

As a personal dream of mine, I would like to bring a United Nations Asia-Pacific headquarters to Japan. New York has the UN headquarters, and Geneva, Switzerland, has the UN European office. Yet despite the presence of populous countries such as India and China in Asia, there is no comparable place for everyone to gather and deliberate under the United Nations. I feel that Hiroshima, as the first city to suffer an atomic bombing, is the most suitable location for a United Nations Asia-Pacific headquarters because of its historical significance.

In Japanese, “AI” can also mean “love.” It is therefore important for the social implementation of AI that AI be made to understand and appreciate “love.” Lastly, I believe that when a malevolent AI, or an AI that antagonizes humanity, emerges in this world, the only ones capable of fighting it will be benevolent AI and God. I am very much looking forward to meeting you all in Japan in April.
Mr. Yasuhide Nakayama speaks at the Second BGF High-level Dialogue