Researchers train a model to reach human-level performance at recognizing abstract concepts in video.
The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.
Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them.
Their model did as well as or better than humans at two types of visual reasoning tasks — picking the video that conceptually best completes the set, and picking the video that doesn’t fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. Researchers replicated their results on two datasets for training AI systems in action recognition: MIT’s Multi-Moments in Time and DeepMind’s Kinetics.
Beyond AI for reasoning, causal inference is another important topic in AI. Professor Judea Pearl pioneered the theory of causal and counterfactual inference based on structural models. In 2020, Professor Pearl was honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). Going forward, Professor Pearl will also contribute to Causal Inference for AI transparency, one of the important AIWS topics on AI Ethics.
Paul Nemitz, a member of the AIWS Standards Committee and co-author of the AIWS-G7 Summit Initiative 2019, has launched his new book “The Human Imperative – Power, Freedom and Democracy in the Age of Artificial Intelligence”.
The book is about power in the age of Artificial Intelligence (AI). It looks at what the new technical powers that have accrued over the last decades mean for the freedom of people and for our democracies. Our starting point is that AI must not be considered in isolation, but rather in a very specific context: the concentration of economic and digital technological power that we see today. Analysis of the effects of AI requires that we take a holistic view of the business models of digital technologies and of the power they exercise today. The rise of technology and the power of control and manipulation associated with it leads, in our firm conviction, to the need to reconsider the principle of the human being, to ensure humans collectively benefit from this technology, that they control it, and that a humane future determined by man remains possible.
The Policy Lab on Transatlantic Approaches on Digital Governance: A New Social Contract in the Age of Artificial Intelligence, will be held in a virtual format over a three-day period between the 16th and the 18th of September, hosted by The World Leadership Alliance-Club de Madrid (WLA-CdM) and the Boston Global Forum (BGF).
The COVID-19 outbreak and ensuing global health crisis have significantly accelerated the deployment and decentralization of digital technologies and Artificial Intelligence (AI). Their role in almost every facet of today’s life merits in-depth analysis, particularly timely in the current context, when their use will present us with both challenges and opportunities, concerns and solutions.
Our Policy Lab will bring the governance experience of World Leadership Alliance-Club de Madrid Members, democratic former Presidents and Prime Ministers from over seventy countries, together with the knowledge of experts and scholars in a multi-stakeholder, multidisciplinary platform aimed at generating action-oriented analysis and policy recommendations for the development of a new social contract on digital governance. All this, from a Transatlantic perspective and with the experience of a ravaging, global pandemic that has underlined the need to strengthen international cooperation and the multilateral system as we build a digital future for all. Through the voice and agency of WLA-CdM Members, WLA-CdM and BGF will bring the results of this discussion to the global conversation steered by the UN as part of its 75th anniversary and other major action-oriented discussions taking place on this most pressing topic.
Vint Cerf, Father of the Internet; Nguyen Anh Tuan, CEO of BGF; the former Prime Minister of Finland; the former President of Latvia; the former Prime Minister of Bosnia and Herzegovina; and professors from Harvard and MIT will speak at Session I: The AIWS Social Contract 2020 and AIWS Innovation Network: A Platform for Transatlantic Cooperation, on September 17th at 9:30 EDT.
We write to offer our respect and to thank you for your work to promote peace and cyber security. You have worked tirelessly to make cybersecurity a priority in Japan and in the international community.
You supported the Ethics Code of Conduct for Cyber Peace and Security conceived by the Boston Global Forum in December 2015. You also supported the BGF-G7 Summit Initiative – Ise-Shima Norms 2016, when Japan hosted the G-7 Summit. Among the Summit Initiatives was the joint Boston Global Forum-G7 effort to prevent cyberwar and fake news.
You also promoted a human-centric view of Artificial Intelligence that led to the OECD AI Principles which were adopted by the G-20 nations at the Ministerial Summit in Osaka. You also put forward the concept of Data Free Flow with Trust at the World Economic Forum in 2019, a new model for data governance that OECD Secretary General Angel Gurria described as “ambitious and timely.”
And through Abenomics, the “three arrows” of monetary easing, fiscal stimulus, and structural reforms, unemployment fell and the ratio of public debt to GDP leveled off. You brought women into the workforce, grew the economy, and kept deflation under control.
You have long been a great friend of the Boston Global Forum. We were honored to give you the 2015 World Leader in Cybersecurity award for your exemplary leadership and your contributions in promoting cybersecurity in Japan and Asia.
The Boston Global Forum looks forward to future collaboration with the government of Japan as we promote the New Social Contract in the Age of AI, as well as the AI World Society City as a practical model.
Wishing you good health and continued success.
Yours sincerely,
Governor Michael Dukakis
Chair, the Boston Global Forum
Nguyen Anh Tuan
CEO, the Boston Global Forum
Marc Rotenberg
Director, Center for AI and Digital Policy at Michael Dukakis Institute
“The Tetrad project, including the open-source Tetrad software package and the now standard reference book, ‘Causation, Prediction, and Search’ (1993), are the basis for the modern theory of causal discovery,” said Chris Meek, principal researcher at Microsoft Research. “The ideas and software that grew from this project have fundamentally shifted how researchers explore and interpret observational data.” The book has almost 8,000 citations.
The Tetrad project was started nearly 40 years ago by Glymour, then a professor of history and philosophy of science at the University of Pittsburgh and now Alumni University Professor Emeritus of Philosophy at CMU, and his doctoral students, Richard Scheines, now Bess Family Dean of the Dietrich College of Humanities and Social Sciences and a professor of philosophy at CMU, and Kevin Kelly, now professor of philosophy at CMU.
Fundamental to the work was providing a set of general principles, or axioms, for deriving testable predictions from any causal structure. For example, consider the coronavirus. Exposure to the virus causes infection, which in turn causes symptoms (Exposure –> Infection –> Symptoms). Since not all exposures result in infections, and not all infections result in symptoms, these relations are probabilistic. But if we assume that exposure can only cause symptoms through infection, the testable prediction from the axiom is that Exposure and Symptoms are independent given Infection. That is, knowing whether someone was exposed is informative about whether they will develop symptoms, but once we know whether they are infected, exposure adds no extra information. This claim can be tested statistically with data.
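The prediction above can be checked with a toy simulation. This sketch is illustrative only; the probabilities and the helper functions are hypothetical and are not taken from the Tetrad software. Among infected people, exposure should carry (almost) no extra information about symptoms, even though exposure is informative marginally.

```python
import random

random.seed(0)
n = 200_000
data = []
for _ in range(n):
    exposure = random.random() < 0.3                          # 30% of people exposed
    infection = random.random() < (0.5 if exposure else 0.1)  # exposure raises infection risk
    symptoms = infection and random.random() < 0.7            # only infection causes symptoms
    data.append((exposure, infection, symptoms))

def p_symptoms(infected, exposed):
    """P(Symptoms | Infection=infected, Exposure=exposed), estimated from the sample."""
    subset = [s for e, i, s in data if i == infected and e == exposed]
    return sum(subset) / len(subset)

def p_marginal(exposed):
    """P(Symptoms | Exposure=exposed), ignoring infection status."""
    subset = [s for e, _, s in data if e == exposed]
    return sum(subset) / len(subset)

# Given infection status, exposure adds essentially nothing:
print(p_symptoms(True, True), p_symptoms(True, False))   # both close to 0.7
# But marginally, exposure IS informative about symptoms:
print(p_marginal(True), p_marginal(False))                # roughly 0.35 vs 0.07
```

Real conditional-independence tests (as implemented in causal discovery software) formalize this comparison with significance tests rather than eyeballing frequencies, but the logic is the same.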
“Causality, with its focus on modeling and reasoning about interventions, can … take the field [of AI] to the next level … the CMU group including Peter Spirtes, Clark Glymour, Richard Scheines and Joseph Ramsey was at the center of the development, not just in terms of algorithm development, but crucially also by providing Tetrad, the de facto standard in causal discovery software,” said Bernhard Schölkopf, Amazon Distinguished Scholar, chief machine learning scientist, and director of the Max Planck Institute for Intelligent Systems in Germany.
It is useful to note that their contributions, together with Judea Pearl’s Causality, form the two most important bodies of work on causality using Bayesian networks and AI. Professor Judea Pearl received the Turing Award in 2011. In 2020, Professor Pearl was also honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). He currently contributes to Causal Inference for AI transparency, one of the important AIWS.net topics on AI Ethics.
Much of artificial intelligence (AI) in common use is dedicated to predicting people’s behavior. It tries to anticipate your next purchase, your next mouse-click, your next job move. But such techniques can run into problems when they are used to analyze data for health and development programs. If we do not know the root causes of behavior, we could easily make poor decisions and support ineffective and prejudicial policies.
AI, for example, has made it possible for health-care systems to predict which patients are likely to have the most complex medical needs. In the United States, risk-prediction software is being applied to roughly 200 million people to anticipate which patients would benefit from extra medical care now, based on how much they are likely to cost the health-care system in the future. It employs predictive machine learning, a class of self-adaptive algorithms that improve their accuracy as they are provided new data. But as health researcher Ziad Obermeyer and his colleagues showed in a recent article in Science magazine, this particular tool had an unintended consequence: black patients who had more chronic illnesses than white patients were not flagged as needing extra care.
What went wrong? The algorithm used insurance claims data to predict patients’ future health needs based on their recent health costs. But the algorithm’s designers had not taken into account that health-care spending on black Americans is typically lower than on white Americans with similar health conditions, for reasons unrelated to how sick they are—such as barriers to health-care access, inadequate health care, or lack of insurance. Using health-care costs as a proxy for illness led the predictive algorithm to make recommendations that were accurate for white patients—lower health-care spending was the consequence of fewer health conditions—but perpetuated racial biases in care for black patients. The researchers notified the manufacturer, which ran tests using its own data, confirmed the problem, and collaborated with the researchers to remove the bias from the algorithm.
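A toy simulation makes the mechanism concrete. The numbers and variable names below are hypothetical and are not taken from the actual commercial tool studied by Obermeyer and colleagues: two groups with identical illness distributions generate very different costs when one faces access barriers, so a cost-based proxy systematically under-flags the group with less access.

```python
import random

random.seed(1)

def simulate(access, n=50_000):
    """Patients drawn from the same illness distribution; `access` scales how
    much true illness translates into billed health-care cost."""
    rows = []
    for _ in range(n):
        illness = random.gauss(5.0, 2.0)                         # true health need
        cost = max(0.0, illness * access + random.gauss(0.0, 0.5))
        rows.append((illness, cost))
    return rows

def mean(values):
    values = list(values)
    return sum(values) / len(values)

group_a = simulate(access=1.0)   # full access to care
group_b = simulate(access=0.6)   # barriers to access

print(mean(i for i, _ in group_a), mean(i for i, _ in group_b))  # similar illness burden
print(mean(c for _, c in group_a), mean(c for _, c in group_b))  # very different cost
```

Any algorithm that ranks patients by predicted cost would then allocate extra care mostly to group A, despite equal underlying need, which is the shape of the bias the researchers documented.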
This story illustrates one of the perils of certain types of AI. No matter how sophisticated, predictive algorithms and their users can fall into the trap of equating correlation with causation—in other words, of thinking that because event X precedes event Y, X must be the cause of Y. A predictive model is useful for establishing the correlation between an event and an outcome. It says, “When we observe X, we can predict that Y will occur.” But this is not the same as showing that Y occurs because of X. In the case of the health-care algorithm, higher rates of illness (X) were correctly correlated with higher health-care costs (Y) for white patients. X caused Y, and it was therefore accurate to use health-care costs as a predictor of future illness and health-care needs. But for black patients, higher rates of illness did not in general lead to higher costs, and the algorithm would not accurately predict their future health-care needs. There was correlation but not causation.
This matters as the world increasingly turns to AI to help solve pressing health and development challenges. Relying solely on predictive models of AI in areas as diverse as health care, justice, and agriculture risks devastating consequences when correlations are mistaken for causation. Therefore, it is imperative that decision makers also consider another AI approach—causal AI, which can help identify the precise relationships of cause and effect. Identifying the root causes of outcomes is not causal AI’s only advantage; it also makes it possible to model interventions that can change those outcomes, by using causal AI algorithms to ask what-if questions. For example, if a specific training program is implemented to improve teacher competency, by how much should we expect student math test scores to improve? Simulating scenarios to evaluate and compare the potential effect of an intervention (or group of interventions) on an outcome avoids the time and expense of lengthy tests in the field.
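The teacher-training question can be sketched as a what-if simulation over a small structural causal model. All coefficients below are made up for illustration: each variable is a function of its causes plus noise, and we answer the what-if question by setting the intervention variable directly and comparing average outcomes.

```python
import random

random.seed(2)

def student_score(training_hours):
    """Structural equations: training -> teacher competency -> student score."""
    competency = 50.0 + 0.8 * training_hours + random.gauss(0.0, 5.0)
    return 0.5 * competency + random.gauss(0.0, 8.0)

def expected_score(training_hours, n=100_000):
    """Estimate the average score under the intervention do(training = training_hours)."""
    return sum(student_score(training_hours) for _ in range(n)) / n

baseline = expected_score(0)
with_program = expected_score(20)  # intervene: do(training = 20 hours)
lift = with_program - baseline
print(round(lift, 1))  # analytically, the lift is 0.5 * 0.8 * 20 = 8 points
```

In practice the structural equations are not known in advance; causal AI methods estimate them (or the relevant intervention effects) from data and assumptions, but the simulate-the-intervention step works just as shown.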
It is useful to note the foundational contributions of Professor Judea Pearl to AI and causality, for which he received the 2011 Turing Award. In 2020, Professor Pearl was also honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF). He currently contributes to Causal Inference for AI transparency, one of the important AIWS.net topics on AI Ethics.
On August 21, 2020, at the United Nations 2045 Roundtable organized by the United Nations Academic Impact, Mr. Nguyen Anh Tuan, Co-founder and CEO of the Boston Global Forum and Co-founder of AIWS.net, presented AI World Society City (AIWS City) after keynote speaker Vint Cerf presented “The People Centered Economy: The New AI and Internet Ecosystem for Work and Life”. AIWS City is a virtual city that applies the standards of “the Social Contract 2020”, “People Centered Economy”, “Trustworthy Economy”, “Intellectual Society-Thoughtful Civic Society”, and “AI-Government”, together with AIWS Value, to foster Vint Cerf’s idea that “all people can create value for each other”.
Here is a brief overview of AIWS City:
Purpose:
To practice concepts from “the Social Contract 2020, A New Social Contract in the Age of AI”, the People Centered Economy, and the Intellectual Society-Thoughtful Civic Society.
To use the Internet and AI to shape brighter futures and to create an ecosystem for work and life with the philosophy of the People Centered Economy.
AIWS City is an application and practice of the thoughts and ideas of Vint Cerf.
Principles and Concepts:
Vint Cerf’s idea: “All people can create value for each other. A good economy has an ecosystem of organizations that lets that happen, in the most meaningful and fulfilling ways.”
AIWS City is a virtual city that applies the standards of “the Social Contract 2020”, “People Centered Economy”, “Trustworthy Economy”, “Intellectual Society-Thoughtful Civic Society”, and “AI-Government”.
The features of Intellectual Society-Thoughtful Civil Society are knowledge, critical thinking and social responsibility.
AIWS City can assist citizens to become more thoughtful by enhancing knowledge, critical thinking, and social responsibility.
AIWS created the concept of AIWS Value: traditional value (products, services, data, etc.) plus social value (contributions), with mechanisms to recognize and exchange both traditional and social values.
AIWS City will operate based on AIWS Value in order to create a good Ecosystem of the People Centered Economy – “all people can create value for each other”.
Model of AIWS City:
AIWS City includes: the City Government, Citizens, Companies, and the Intellectual Society-Thoughtful Civil Society.
Government: uses AI-Government (government assisted by AI, Data Science, and the Internet).
Builds infrastructure for AI-Government based on the Internet and Data Science (AI).
Creates social programs for citizens and supports special education programs for them.
Companies: apply Trustworthy Economy and support the People Centered Economy.
Citizens: create value from the ecosystem of AIWS City.
Intellectual Society-Thoughtful Civil Society in AIWS City: promote “the Social Contract 2020” and AIWS Value, as well as collaborate to create the Ecosystem of AIWS City.
Implementation:
Key people: Nguyen Anh Tuan, Alex Pentland, Vint Cerf, Michael Dukakis, and Board Members of Michael Dukakis Institute, key leaders of AIWS.net.
Ramu Damodaran, Chief of the United Nations Academic Impact and Editor-in-Chief of the United Nations Chronicle, thoughtfully moderated the discussion among keynote speaker Vint Cerf and other speakers and participants, including Governor Michael Dukakis, Nguyen Anh Tuan, former PM of Bosnia and Herzegovina Zlatko Lagumdzija, Professor Jun Murai (father of the Internet in Japan), Professor Nazli Choucri (MIT), and Professor David Silbersweig (Harvard). The United Nations 2045 Roundtable contributed new concepts and models for Unleashed 2045: Reinventing the United Nations at 100, such as:
The People Centered Economy: The New AI and Internet Ecosystem for Work and Life: “All people can create value for each other. A good economy has an ecosystem of organizations that lets that happen, in the most meaningful and fulfilling ways.”
AI World Society City (AIWS City), a virtual city that applies the standards of “the Social Contract 2020”, “People Centered Economy”, “Intellectual Society-Thoughtful Civic Society”, and “AI-Government”, with AIWS Value to foster Vint Cerf’s idea that “All people can create value for each other”.
On August 21, 2020, Vint Cerf, Father of the Internet, Vice President and Chief Internet Evangelist of Alphabet-Google, was the keynote speaker of the United Nations 2045 Roundtable.
He spoke about the need to create a new ecosystem of the Internet and AI that is for and serves people, and that does not abuse or harm humans through advanced technology; people should be at the center. These ideas of Vint Cerf will be incorporated into the Social Contract 2020 version 2.0, which will be announced on September 9, 2020, with Vint Cerf as a co-author.
The Boston Global Forum and the Michael Dukakis Institute honored Vint Cerf with the World Leader in AIWS Award in 2019. He is also a Mentor of AIWS.net.