by Editor | Sep 1, 2019 | News
The Rīga Conference has become a unique venue for constructive dialogue on international security issues between leading global decision makers. The event is organized jointly by the Latvian Transatlantic Organisation (LATO), the Ministry of Defence of the Republic of Latvia, and the Ministry of Foreign Affairs of the Republic of Latvia. The conference facilitates debates at various levels, engaging high-level politicians, diplomats, and experts, as well as local and international media. By attracting key international players and tackling the most pressing issues that our societies currently face, The Rīga Conference demonstrates its commitment to thinking and working in a global context.
Over the course of the last decade, The Rīga Conference has built its name and reputation across the region. Highlights have included bringing the Prime Ministers of Latvia, Poland, Estonia, Lithuania, and Finland together in the same room to discuss economic growth in a time of austerity; the presentation by Georgian President Mikheil Saakashvili in 2008; and the address by President of the United States George W. Bush in 2006.
At this year’s conference, The Rīga Conference will, among other things, consider the EU’s role in the rise of geoeconomics; security challenges in the information age and hybrid warfare; the transatlantic relationship as a critical axis of global stability; relations between Russia and the West, as well as the prospects of the Eastern Partnership countries; and, in a plenary session, political power in the digital age. The Rīga Conference 2019 will take place on October 11 and 12 at the National Library of Latvia.
The Founder of The Rīga Conference is the former President of Latvia, Vaira Vike-Freiberga.
by Editor | Sep 1, 2019 | News
Governor Michael Dukakis, Co-founder and Chairman of the Boston Global Forum, will lead a discussion on a framework for peace and security in the 21st century at the Consulate General of Greece in Boston. Professors from Harvard, MIT, and the US Naval War College will attend and take part in the discussion. The world needs a framework to keep peace and security in the 21st century.
Today, there are many challenges and threats to peace in the world: extreme nationalism, new dictatorships, totalitarianism, fake news, cyberattacks, global powers’ power projection onto their smaller neighbours, illegal threats to other states’ sovereignty, violations of international law, the use of centralised power and massive populations to create unfair competition in the economy, arms races, and more.
What roles can the US, the EU, and other democratic countries play against non-democratic powers that exploit authoritarianism and nationalism?
The roundtable will take place at 5:30 pm on September 9, 2019, the World Reconciliation Day, at the Consulate General of Greece in Boston, 86 Beacon Street, Boston, MA 02108. The Greek Consul General in Boston, Stratos Efthymion, will co-host this roundtable.
by Editor | Sep 1, 2019 | News
Dr. Michel Servoz, Special Adviser to the President of the European Commission, Jean-Claude Juncker, for Robotics, Artificial Intelligence, and the Future of Labour, joins the Social Contract 2020 Team to discuss:
- Data protection and the potential to come to some kind of agreement/understanding on an international data rule book: the issue of data use is also very important from a European perspective to enable access to AI by small companies;
- Tax: in the light of the recently announced French digital tax (which will soon appear in several other EU countries), and also in the light of announcements by some presidential hopefuls in the US;
- Digital money: big differences across countries: China is ahead of everyone else, while Europe is very conservative on the issue. Again, the world needs a rule book to set some international standards (as we have for banking, e.g. Basel III);
- Algorithms as law: the issue is to move from general principles on the use of AI to concrete rules on what to do and not do; general principles have been adopted by different countries and corporations, but they are not very concrete and do not foresee means of redress; how do we establish enforceable principles without creating a bureaucratic burden for investors and researchers?
- Dictatorship: issues concerning the competitive position of the big players: all the big players are US- or China-based, enjoying quasi-monopoly positions in some sectors; European companies are very small in this field, and emerging economies are largely absent from AI. The latter will create dangerous imbalances, or seriously worsen some existing ones. How can we make sure that AI benefits the economic development of all countries?
by Editor | Sep 1, 2019 | News
You hear a lot these days about the sheer transformative power of AI.
There’s pure intelligence: DeepMind’s algorithms readily beat humans at Go and StarCraft, and DeepStack triumphs over humans at no-limit hold’em poker. Often, these silicon brains generate gameplay strategies that don’t resemble anything from a human mind.
There’s astonishing speed: algorithms routinely surpass radiologists in diagnosing breast cancer, eye disease, and other ailments visible from medical imaging, essentially collapsing decades of expert training down to a few months.
Although AI’s silent touch is mainly felt today in the technological, financial, and health sectors, its impact across industries is rapidly spreading. At the Singularity University Global Summit in San Francisco this week, Neil Jacobstein, Chair of AI and Robotics, painted a picture of a better AI-powered future for humanity that is already here.
The bottom line: people who will be impacted by AI need to be in the room at the conception of an AI solution. People will be displaced by the new technology, and ethical AI has to consider how to mitigate human suffering during the transition. Just because AI looks like magic fairy dust “doesn’t mean that you’re home free,” the panelists said. You, the sentient human, bear the burden of being responsible for how you decide to approach the technology.
The original article can be found here.
According to the Michael Dukakis Institute for Leadership and Innovation (MDI), ethical AI is also an important topic for the Artificial Intelligence World Society (AIWS). AI will need to be developed constructively to help everyone achieve well-being and happiness while upholding ethical norms, especially avoiding bias and enhancing transparency.
by Editor | Sep 1, 2019 | News
Enterprises are putting a lot of time, money, and resources behind their nascent Artificial Intelligence (AI) efforts, banking on the fact that they can automate the way applications leverage the massive amounts of customer and operational data they keep. The challenge is not just bringing machine learning into the datacenter; it has to fit into the workflow without impeding it. For many, that’s easier said than done.
Dotscience, a startup comprised of veterans from the DevOps world, dropped out of stealth mode this week and published a report showing that enterprises may not be reaping the rewards from the dollars they are putting behind their AI projects. According to the report, based on a survey of 500 IT professionals, more than 63 percent of businesses are spending anywhere from $500,000 to $10 million on AI programs, while more than 60 percent also said they are confronting challenges with the operations of these programs. Another 64.4 percent of those deploying AI in their environments found that it takes between seven and 18 months to move an AI workload from idea to production.
There’s a need to ensure not only that machine learning developers can collaborate and produce reproducible code, but also that they can easily track models and data, trace a model from its training data back to the raw data (provenance), view relationships between parameters and metrics, and monitor models to ensure they are behaving as expected. In addition, they need to be able to attach external S3 datasets and to attach to any system, from a laptop or a GPU-powered machine to datacenter hardware and cloud instances.
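The provenance requirement described above can be sketched in a few lines: each artifact (raw data, training set, model) is identified by a content hash and records its parents, so a model can always be traced back to the raw data it was derived from. This is an illustrative sketch only, not Dotscience’s actual product or API; the names (`ProvenanceLog`, `fingerprint`, the file names and metrics) are all hypothetical.

```python
import hashlib

def fingerprint(payload: bytes) -> str:
    """Content hash that immutably identifies an artifact's bytes."""
    return hashlib.sha256(payload).hexdigest()[:12]

class ProvenanceLog:
    """Append-only record linking each artifact to the inputs it was built from."""

    def __init__(self):
        self.entries = {}  # fingerprint -> {name, parents, meta}

    def record(self, name: str, payload: bytes, parents=(), **metadata) -> str:
        """Register an artifact, remembering which artifacts produced it."""
        fp = fingerprint(payload)
        self.entries[fp] = {"name": name, "parents": list(parents), "meta": metadata}
        return fp

    def lineage(self, fp: str) -> list:
        """Trace an artifact back through its parents to the raw inputs (provenance)."""
        chain = [fp]
        for parent in self.entries[fp]["parents"]:
            chain.extend(self.lineage(parent))
        return chain

log = ProvenanceLog()
raw = log.record("raw_events.csv", b"...raw bytes...")
train = log.record("train_set.parquet", b"...cleaned bytes...", parents=[raw])
model = log.record("churn_model.pkl", b"...weights...", parents=[train], lr=0.01)

# The model traces back through its training set to the raw data.
print([log.entries[f]["name"] for f in log.lineage(model)])
```

In a real system the hashes, parameters, and metrics would be stored alongside the run (much as the report describes), which is what makes models reproducible and auditable rather than one-off artifacts.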
The original article can be found here.
The end-to-end integration of AI applications with enterprise systems is essential for any business, a point also highlighted in the AI Ethics report by the AI World Society (AIWS), which calls for developing AI algorithms and data management under ethical principles and practices.
by Editor | Aug 25, 2019 | Statements, News
Japan and the US have expressed alarm after South Korea formally withdrew from an intelligence-sharing deal with Tokyo.
Analysts said the move could jeopardise efforts to track and curtail North Korea’s nuclear weapons programme.
Senior officials exchanged angry statements on Friday amid what experts are calling the worst crisis in Japan-South Korea relations in decades, with Prime Minister Shinzo Abe saying the decision had “damaged mutual trust”.
Mr. Abe, before leaving to join the G7 summit in France, said that Tokyo “will continue to closely coordinate with the US to ensure regional peace and prosperity, as well as Japan’s security”.
“The situation is escalating, and it’s hard to see how the spiralling conflict can be stopped,” said Koichi Ishizaka, an expert on intercultural communication and a professor at Rikkyo University in Tokyo.
Mr. Ishizaka said Mr. Abe probably also feels like he can score domestic political points by taking a hard stance on South Korea, despite the close cultural ties between the two countries.
“Although cordial exchange between the people is working for a brighter future, politics has taken a step back and has not caught up with that,” he said.
The original article can be found here.
The Boston Global Forum honored Prime Minister Shinzo Abe with the World Leader for Peace and Cybersecurity Award on Global Cybersecurity Day, December 12, 2015, at Harvard University Faculty Club.
by Editor | Aug 25, 2019 | News
We live in times of high-tech euphoria marked by instances of geopolitical doom-and-gloom. There seems to be no middle ground between the hype surrounding cutting-edge technologies such as Artificial Intelligence (AI) and their impact on security and defence, and anxieties over their potentially destructive consequences. AI, arguably one of the most important and divisive inventions in human history, is now being glorified as the strategic enabler of the 21st century and the next domain of military disruption and geopolitical competition. The race in technological innovation, justified by significant economic and security benefits, is widely recognised as likely to make early adopters the next global leaders.
Technological innovation and defence technologies have always occupied central positions in national defence strategies; this emphasis on techno-solutionism in military affairs is nothing new. Unsurprisingly, Artificial Intelligence is often discussed as a potentially disruptive weapon, likened to prior transformative technologies such as nuclear and cyber, and placed in the context of national security. However, this definition is problematic and frames AI as one-dimensional. In reality, AI has broad, dual-use applications and is more appropriately compared to enabling technologies such as electricity or the combustion engine.
Growing competition in deep-tech fields such as AI is undoubtedly affecting the global race for military superiority. Leading players such as the US and China are investing heavily in AI research, accelerating its use in defence. In Russia, the US, and China, political and strategic debate over AI revolutionising strategic calculations, military structures, and warfare is now commonplace. In Europe, however, less attention is being paid to the weaponisation of AI and its military applications. Nevertheless, the European Union’s (EU) European Defence Fund (EDF) has earmarked between 4% and 8% of its 2021-2027 budget for disruptive defence technologies and high-risk innovation, the expectation being that such investment will boost Europe’s long-term technological leadership and defence autonomy.
In 2018, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society (AIWS) to collaborate with governments, think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of AI, helping everyone achieve well-being and happiness within ethical norms. This effort will guide AI development to serve and strengthen democracy, human rights, and the rule of law for a better world society.
by Editor | Sep 15, 2019 | News
The Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation will organize the AI World Society Conference at 1:30 pm on September 23, 2019, at the Harvard University Faculty Club, to introduce and discuss the Social Contract 2020 and rules and laws for AI and the Internet.

The keynote speaker is Professor Alex “Sandy” Pentland, who directs the MIT Connection Science and Human Dynamics labs and previously helped create and direct the MIT Media Lab and the Media Lab Asia in India. He is one of the most-cited scientists in the world, and Forbes recently declared him one of the “7 most powerful data scientists in the world”, along with the Google founders and the Chief Technology Officer of the United States. He co-led the World Economic Forum discussion in Davos that led to the EU privacy regulation GDPR and was central in forging the transparency and accountability mechanisms in the UN’s Sustainable Development Goals. He has received numerous awards and prizes, such as the McKinsey Award from Harvard Business Review, the 40th Anniversary of the Internet award from DARPA, and the Brandeis Award for his work in privacy.
Professor Pentland is a co-founder of the Social Contract 2020 and a member of the Board of Thinkers of the Boston Global Forum.
Following the speech of Professor Pentland, Professor Christo Wilson, Northeastern University, Harvard Law School Fellow, and Michael Dukakis Leadership Fellow, will present solutions for AI transparency.
Then, Paul F. Nemitz and Michel Servoz will present Rules and International Laws of AI World Society.
Paul F. Nemitz is the Principal Adviser in the Directorate-General for Justice and Consumers. He was appointed by the European Commission on 12 April 2017, following a six-year appointment as Director for Fundamental Rights and Citizens’ Rights in the same Directorate-General. As Director, Nemitz led the reform of data protection legislation in the EU, the negotiations of the EU-US Privacy Shield, and the negotiations with major US Internet companies on the EU Code of Conduct against incitement to violence and hate speech on the Internet. He is also a member of The Social Contract 2020 team.
Mr. Michel Servoz is the Special Adviser to the President of the European Commission and Senior Adviser for “Robotics, Artificial Intelligence and the Future of European Labour Law” at the European Political Strategy Centre (EPSC).
Scholars from Harvard, MIT, and Tufts will join the AI World Society Conference as participants.
Download the agenda for this conference here.
by Editor | Aug 25, 2019 | News
After news emerged that a multi-disciplinary team led by the University of Surrey successfully filed the first ever patent applications for inventions autonomously created by AI without a human inventor, Ian Bolland caught up with team leader Professor Ryan Abbott about the potential knock-on effect for life sciences.
Professor Abbott began by explaining the potential effects the filing will have on life sciences and other industries.
“These filings are important to any area of research and development as well as any area that relies on patents. Patents are more important in the life sciences than in many other areas, particularly for drug discovery. AI has also been used extensively in the drug discovery process for a long time for tasks like screening of compounds and in silico analysis. These tasks can be the foundation for patent filings.
“As AI is becoming increasingly sophisticated, it is likely to play an increasing role in R&D including in the life sciences. It is an exciting prospect that AI may be able to improve the efficiency of some historically very inefficient practices. Pharma and tech companies are likely to develop AI to automate more and more of the drug discovery process.”
Abbott believes that, given current trends and its continuing, progressive involvement in R&D, AI is going to become more autonomous. According to the Michael Dukakis Institute for Leadership and Innovation (MDI), AI can be an important tool for relieving people of resource constraints and arbitrary or inflexible rules in R&D, but AI algorithms should follow ethical principles that promote fairness and avoid unjust effects on people.
The original article can be found here.