by Editor | Apr 12, 2020 | News
Michael Dukakis was discharged from a Los Angeles hospital where he was admitted on March 24 with bacterial pneumonia. In a note to The National Herald he wrote, “doing a lot better after nine days in the hospital and wrapping up the academic quarter here at UCLA. Hope we’ll be heading back to Boston soon, but I think that will be awhile. All the best, and stay in touch.”
Dukakis is 86 years old and in excellent health. He is a distinguished professor of Political Science at Northeastern University in Boston, where he teaches in the fall semester of each year; in the spring semester he teaches at UCLA. He and his wife Kitty live in a house near the University.
He had experienced respiratory symptoms and was twice tested for the coronavirus but the results came back negative. He was then diagnosed with bacterial pneumonia.
Dukakis served twice as governor of Massachusetts, first from 1975 to 1979 and again from 1983 to 1991. In 1988 he became the first Greek-American to run for President of the United States when he received the nomination of the Democratic Party. Dukakis enjoys teaching at Northeastern, and he remains active in the Democratic Party and in politics at the local and national levels. He encourages young people, including Greek-Americans, to get involved in politics and public life.
The original article can be found here.
Congratulations to Governor Michael Dukakis, co-founder and Chairman of the Boston Global Forum, co-founder and Chairman of the Michael Dukakis Institute, and co-founder of the AI World Society Innovation Network (AIWS.net).
by Editor | Apr 12, 2020 | News
It’s too early to quantify the economic impact of the global COVID-19 pandemic, but the outbreak, compounded by the U.S.-China trade war, is already impacting global supply chains and businesses linked to the world’s second-biggest economy. As I sit here in Singapore monitoring the spread of the outbreak in Asia and beyond, the mounting human cost is also of deep concern to me.
But even amid adversity comes the opportunity for innovation and invention. Chinese tech companies Alibaba, Tencent and Baidu have opened their artificial intelligence (AI) and cloud computing technologies to researchers to quicken the development of virus drugs and vaccines. U.S.-based medical startups are using AI to rapidly identify thousands of new molecules that could be turned into potential cures. Others are using the same technology for early warning and detection by analyzing global airline ticketing data.
This has all served as a reminder for me, an AI founder, of the immense potential of AI for improving efficiency, growth and productivity. AI-enabled automation can really make a difference for traditional services and offline businesses transitioning to digital and online channels.
If increasing productivity while lowering cost is vital for your business, then you might benefit from taking another look at these particular areas where AI can really help: automating processes, gaining customer and competitive insight through data, and improving customer and employee engagement.
The original article can be found here.
In light of AI’s impact on world society, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society Innovation Network (AIWS.net) to monitor AI developments and uses by governments, corporations, and non-profit organizations, and to assess whether they comply with the norms and standards codified in the AIWS Social Contract 2020.
by Editor | Apr 12, 2020 | News
The statistical branch of Artificial Intelligence has enamored organizations across industries, spurred an immense amount of capital dedicated to its technologies, and entranced numerous media outlets for the past couple of years. All of this attention, however, will ultimately prove unwarranted unless organizations, data scientists, and various vendors can answer one simple question: can they provide Explainable AI?
Although the ability to explain the results of Machine Learning models—and produce consistent results from them—has never been easy, a number of emergent techniques have recently appeared to open the proverbial ‘black box’ that renders these models so difficult to explain.
One of the most useful involves modeling real-world events with the adaptive schema of knowledge graphs and, via Machine Learning, gleaning whether they’re related and how frequently they take place together.
When the knowledge graph environment becomes endowed with an additional temporal dimension that organizations can traverse forwards and backwards with dynamic visualizations, they can understand what actually triggered these events, how one affected others, and the critical aspect of causation necessary for Explainable AI.
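The idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real knowledge-graph system: the event names, timestamps, and time window are all invented. It records time-stamped events, counts how often two events co-occur within a window, and checks which event consistently comes first when traversing the timeline, since temporal precedence is a prerequisite for one event triggering another.

```python
from collections import Counter
from itertools import combinations

# Hypothetical event log: (timestamp, event_name). In a real knowledge
# graph these would be nodes linked by typed, time-stamped edges.
events = [
    (1, "engine_alert"), (2, "part_failure"), (3, "cooling_replacement"),
    (10, "engine_alert"), (11, "part_failure"), (12, "cooling_replacement"),
    (20, "part_failure"), (21, "cooling_replacement"),
    (30, "engine_alert"),
]

WINDOW = 5  # events within this many time units count as co-occurring

pair_counts = Counter()   # how often two events co-occur
precedence = Counter()    # how often the first event precedes the second

for (t1, e1), (t2, e2) in combinations(events, 2):
    if e1 != e2 and abs(t1 - t2) <= WINDOW:
        pair_counts[frozenset((e1, e2))] += 1
        earlier, later = (e1, e2) if t1 < t2 else (e2, e1)
        precedence[(earlier, later)] += 1

# Traversing the temporal dimension: an edge is a candidate trigger only
# if its source event consistently precedes its target event.
print(pair_counts[frozenset(("part_failure", "cooling_replacement"))])  # 3
print(precedence[("part_failure", "cooling_replacement")])              # 3
```

Here every co-occurrence of a part failure and a cooling replacement has the failure first, which is what makes the failure a plausible trigger rather than merely a correlate.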
Investments in AI may well hinge upon such visual methods for demonstrating causation between events analyzed by Machine Learning.
As Judea Pearl’s renowned The Book of Why affirms, one of the cardinal statistical concepts upon which Machine Learning is based is that correlation isn’t tantamount to causation. Part of the pressing need for Explainable AI today is that in the zeal to operationalize these technologies, many users are mistaking correlation for causation—which is perhaps understandable because aspects of correlation can prove useful for determining causation.
Causation is the foundation of Explainable AI. It enables organizations to understand that when given X, they can predict the likelihood of Y. In aircraft repairs, for example, causation between events might empower organizations to know that when a specific part in an engine fails, there’s a greater probability for having to replace cooling system infrastructure.
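The “given X, predict the likelihood of Y” step in the aircraft example is, at bottom, a conditional-probability estimate. The sketch below uses made-up repair records, not real maintenance data; note that this computation alone establishes only an association, and the causal reading requires the event-level reasoning described above.

```python
# Hypothetical repair records: did a specific engine part fail, and was
# the cooling system subsequently replaced? Figures are illustrative.
records = [
    {"part_failed": True,  "cooling_replaced": True},
    {"part_failed": True,  "cooling_replaced": True},
    {"part_failed": True,  "cooling_replaced": False},
    {"part_failed": False, "cooling_replaced": False},
    {"part_failed": False, "cooling_replaced": True},
    {"part_failed": False, "cooling_replaced": False},
]

def p_given(records, outcome, condition):
    """Estimate P(outcome | condition) from the records."""
    matching = [r for r in records if r[condition]]
    return sum(r[outcome] for r in matching) / len(matching)

p_y_given_x = p_given(records, "cooling_replaced", "part_failed")
print(f"P(replace cooling | part failed) = {p_y_given_x:.2f}")  # 0.67
```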
The original article can be found here.
Regarding AI and causality, AI World Society (AIWS.net) has created a new section on Modern Causal Inference in 2020. This section will be led by Professor Judea Pearl, a pioneering figure in AI causality and the author of the well-known book The Book of Why. Professor Pearl’s work will contribute to Causal Inference for AI transparency, which is one of the important AIWS topics.
by Editor | Apr 5, 2020 | Event Updates
An online preliminary policy discussion on Transatlantic Approaches on Digital Governance: A New Social Contract in the Artificial Intelligence (AI) Age will be held under the AI World Society Innovation Network (AIWS.net) beginning April 28, 2020.
A face-to-face policy discussion will be held in Boston on September 16-18, 2020.
The World Leadership Alliance-Club de Madrid (WLA-CdM) in partnership with the Boston Global Forum (BGF) is organizing a Transatlantic and multi-stakeholder dialogue on global challenges and policy solutions in the context of the need to create a new social contract on digital technologies and Artificial Intelligence (AI).
Over the years, Transatlantic relations have been characterized by close cooperation and continuous work for common interests and values. This cooperation has been essential to enhancing the multilateral system, considering the principles shared by both sides on democracy, the rule of law, and fairness.
By comparing American and European approaches to the creation of a new social contract for the AI age and digital governance, under the critical eye of former democratic Heads of State or Government, this policy dialogue will stimulate new thinking and bring out ideas from representatives of governments, academic institutions and think tanks, tech companies, and civil society from both regions.
At the same time, the discussion will generate a space to encourage and strengthen Transatlantic cooperation on a new social contract for digital governance within the framework of needed reforms of the multilateral system, and it will serve as a platform to establish a Transatlantic Alliance for Digital Governance. In addition, the policy discussion aims to explore the creation of an initiative to monitor governments as well as companies in their use of AI and to generate an AI Ethics Index at all levels.
Given the world health emergency of the first months of the year related to the COVID-19 pandemic and its impact on all actors and spheres of life, digital technologies and artificial intelligence have been strong allies in facing the situation across multiple dimensions (scientific, health, social, etc.). However, digital technologies also bring new challenges under these circumstances. New communication channels have contributed to the rapid spread of fake news about COVID-19, generating disinformation, increasing confusion, influencing society’s perception, and raising collective concern. On other occasions, the new tools used to track and fight the virus could imply a violation of privacy rights.
The relevance of the topic leads us to include a global health security component in the Policy Lab, analyzing the implications of artificial intelligence and new technologies in this regard, as well as the response of governments, international organizations, companies, and society. The situation has demonstrated that a Social Contract on digital governance and the renewal of multilateralism and global cooperation mechanisms are more necessary than ever.
by Editor | Apr 5, 2020 | News
Vint Cerf, recipient of the 2019 World Leader in AI World Society Award, a mentor of AIWS.net, and one of the fathers of the Internet, tweeted:
“Good news – VA Public Health has certified my wife and me as no longer contagious with COVID19. Recovering!”
AIWS.net congratulates Mr. Vint Cerf on this wonderful news and hopes the world will defeat COVID-19 soon.
The Boston Global Forum and Michael Dukakis Institute for Leadership and Innovation honored Mr. Vint Cerf as World Leader in AI World Society (AIWS) at the AIWS-G7 Summit Initiative Conference on April 25, 2019 at Loeb House, Harvard University.
At this event, the Boston Global Forum presented the AIWS-G7 Summit Initiative to the French Government, host country of the G7 Summit 2019. This initiative conceived of AI-government, AI-citizens, and a smart democracy with deeply applied AI.
by Editor | Apr 5, 2020 | News
Questionable decisions made by governments, courts, or organizations can be reviewed and judged by AI, through a system with a transparent methodology and algorithms and trusted, unbiased data. AIWS.net is developing a Judging System to meet this demand.
Several questionable decisions have been made recently. In one of them, on April 2, 2020, a Pakistani court overturned the murder conviction in the killing of Wall Street Journal reporter Daniel Pearl. Professor Judea Pearl, 2020 World Leader in AI World Society (AIWS), a mentor of AIWS.net, and father of journalist Daniel Pearl, tweeted:
“It is a mockery of justice. Anyone with a minimal sense of right and wrong now expects Faiz Shah, prosecutor general of Sindh to do his duty and appeal this reprehensible decision to the Supreme Court of Pakistan.”
And: “To all readers who shared hopeful thoughts with us, we are grateful and happy to inform you that the government of Sindh has ordered that Daniel’s murder suspects will be kept in detention for another 90 days, pending an appeal. Thanks for being with us.”
Ahmed Omar Saeed Sheikh had been facing a death sentence. The Karachi court instead reduced his sentence to seven years, after hearing an appeal last month.
Saeed was found guilty in 2002 of orchestrating the kidnapping, terrorism and murder, according to The Wall Street Journal. The newspaper reported that the court overturned the terrorism and murder convictions and downgraded the kidnapping conviction.
Many members of AIWS.net criticize this decision by the Pakistani court and propose that AIWS.net apply the Judging System to review it and then send an appeal statement to Pakistani Prime Minister Imran Khan.
by Editor | Apr 5, 2020 | News
From its epicenter in China, the novel coronavirus has spread to infect 414,179 people and cause no fewer than 18,440 deaths in at least 160 countries over a three-month span beginning in January 2020. These figures are from the World Health Organization (WHO) situation report of March 25th. Accompanying the tragic loss of life that the virus has caused is its impact on the global economy, which has reeled from the effects of the pandemic.
Due to the lockdown measures imposed by several governments, economic activity has slowed around the world, and the Organization for Economic Cooperation and Development (OECD) has stated that the global economy could see its worst growth rate since 2009. The OECD has warned that growth could slow to 2.4%, potentially dragging many countries into recession. COVID-19 has, in a short period of time, emerged as one of the biggest challenges the 21st-century world has faced. Further complicating the response to this challenge are the grey areas surrounding the virus itself, in terms of how it spreads and how to treat it.
As research details emerge, the data pool grows exponentially, beyond the capacity of human intelligence alone to handle. Artificial intelligence (AI) is adept at identifying patterns in big data, and this piece will elucidate how it has become one of humanity’s ace cards in handling this crisis. Using China as a case study, it will show how the country’s success with AI as a crisis-management tool demonstrates the technology’s utility and justifies the financial investment it has required to evolve over the last few years.
Advancements in AI application such as natural language processing, speech recognition, data analytics, machine learning, deep learning, and others such as chatbots and facial recognition have not only been utilized for diagnosis but also for contact tracing and vaccine development. AI has no doubt aided the control of the COVID-19 pandemic and helped to curb its worst effects.
In light of AI’s development and its essential applications to society, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society Innovation Network (AIWS.net) for the purpose of promoting ethical norms and practices in the development and use of AI in different areas, especially health care and medical support.
by Editor | Apr 5, 2020 | News
Learning algorithms, which are becoming an increasingly ubiquitous part of our lives, do precisely what they are designed to do: find patterns in the data they are given. The problem, however, is that even if patterns in the data were to lead to very accurate predictions, the processes by which the data is collected, relevant variables are defined, and hypotheses are formulated may depend on structural unfairness found in our society. Algorithms based on such data may well serve to introduce or perpetuate a variety of discriminatory biases, and thereby maintain the cycle of injustice.
For instance, it is well known that statistical models predict recidivism at higher rates among certain minorities in the United States [1]. To what extent are these predictions discriminatory? What is a sensible framework for thinking about these issues? A growing community of experts with a variety of perspectives is now addressing issues like this, in part by defining and analyzing data science problems through the lens of fairness and transparency, and proposing ways to mitigate harmful effects of algorithmic bias [2-7].
Adjusting for biases in data for the purposes of learning and inference is a well-studied issue in statistics. Take, for example, the perennial problem of missing data when trying to analyze polls. If we wish to determine who will win a presidential election, we could ask a subset of people for whom they will vote and try to extrapolate these findings to the general population. The difficulty arises, however, when one group of a candidate’s supporters is systematically underrepresented in polls (perhaps because those supporters are less likely to trust mainstream polling). This creates what is known as selection bias in the data available from the poll. If not carefully addressed in statistical analysis, selection bias may easily yield misleading inferences about the voting patterns of the general population. Common methods of adjusting for systematically missing data include imputation methods and weighting methods. Imputation aims to learn a model that can correctly predict missing entries, even those missing due to complex patterns, and then “fill in” those entries using the model. Weighting aims to find a weight that quantifies how “typical” a fully observed element in the data is, and then uses these weights to adjust the parts of the data that are fully observed so that they more closely approximate the true underlying population.
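The weighting idea can be sketched with a toy simulation (all numbers invented): the electorate is split 50/50, but one candidate’s supporters respond to the poll far less often, so the raw poll overstates the other candidate’s share; weighting each respondent by the inverse of their response probability recovers an estimate close to the true split. In practice the response probabilities would themselves have to be estimated.

```python
import random
random.seed(0)

# Hypothetical electorate: a true 50/50 split between candidates A and B,
# but B's supporters are much less likely to answer the poll.
population = ["A"] * 5000 + ["B"] * 5000
response_prob = {"A": 0.8, "B": 0.32}

respondents = [v for v in population if random.random() < response_prob[v]]

# Naive estimate: ignores who is missing, so it overstates A's support.
naive = respondents.count("A") / len(respondents)

# Inverse-probability weighting: each respondent stands in for
# 1 / P(respond) voters like them.
weights = {v: 1 / p for v, p in response_prob.items()}
total_weight = sum(weights[v] for v in respondents)
weighted = sum(weights[v] for v in respondents if v == "A") / total_weight

print(f"naive A share:    {naive:.2f}")     # biased well above 0.50
print(f"weighted A share: {weighted:.2f}")  # close to the true 0.50
```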
Alternatively, causal inference problems occur when we want to use data to guide decision-making. For example, should we eat less fat or fewer carbohydrates? Should we cut or raise taxes? Should we give a particular type of medication to a particular type of patient? Answering questions of this type introduces analytic challenges not present in outcome prediction tasks. Consider the problem of using data collected from electronic health records to determine which medications to assign to which patients. The difficulty is that medications within the hospital are assigned with the aim of maximizing patient welfare. In particular, patients who are sicker are more likely to get certain types of medication, and sicker patients are also more likely to develop adverse health outcomes or even die. In other words, an observed correlation between receiving a particular type of medication and an adverse health event may be due to a poor choice of medication, or it may be a spurious correlation due to underlying health status. These sorts of correlations are so common in data that they have led to a common refrain in statistics: correlation does not imply causation.
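The medication example can be sketched with a toy simulation (all probabilities invented for illustration): sicker patients are more likely to be treated and more likely to suffer an adverse outcome, so the naive comparison makes a genuinely helpful medication look harmful; stratifying on the underlying health status, the confounder, reverses the picture.

```python
import random
random.seed(1)

# Toy model: sicker patients are more likely to be treated, and
# treatment truly *reduces* adverse-outcome risk by 10 points.
patients = []
for _ in range(10000):
    sick = random.random() < 0.5
    treated = random.random() < (0.8 if sick else 0.2)
    base_risk = 0.6 if sick else 0.2
    adverse = random.random() < (base_risk - 0.1 if treated else base_risk)
    patients.append((sick, treated, adverse))

def adverse_rate(rows):
    return sum(adverse for _, _, adverse in rows) / len(rows)

treated_rows = [p for p in patients if p[1]]
untreated_rows = [p for p in patients if not p[1]]

# Naive comparison: treated patients fare *worse* overall, purely
# because the sick are over-represented among them (confounding).
print("naive:", adverse_rate(treated_rows), "vs", adverse_rate(untreated_rows))

# Stratifying on health status recovers the true benefit in each stratum.
for sick in (True, False):
    t = adverse_rate([p for p in treated_rows if p[0] == sick])
    u = adverse_rate([p for p in untreated_rows if p[0] == sick])
    print(f"sick={sick}: treated {t:.2f} vs untreated {u:.2f}")
```

Here stratification works only because health status is observed; when the confounder is unmeasured, the formal machinery of causal inference is needed to decide whether the effect is identifiable at all.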
The original article can be found here.
Regarding AI and causality, AI World Society (AIWS.net) has created a new section on Modern Causal Inference in 2020. This section will be led by Professor Judea Pearl, one of the pioneers in mathematizing causal modeling in the empirical sciences. Professor Pearl’s work will contribute to Causal Inference for AI transparency, one of the important AIWS topics, which aim to identify, publish, and promote principles for the virtuous application of AI in domains including healthcare, education, transportation, national security, and other areas.
by Editor | Mar 29, 2020 | News
Professor Judea Pearl is the father of the “Causal Revolution”, a breakthrough in AI that helps machine learning answer the question “Why?” and addresses the problem of automating human-level intelligence (sometimes called “strong AI”).
The Boston Global Forum and Michael Dukakis Institute honored him as 2020 World Leader in AIWS.
We introduce some of Professor Judea Pearl’s ideas:
“Current machine learning systems operate, almost exclusively, in a statistical, or model-free mode, which entails severe theoretical limits on their power and performance. Such systems cannot reason about interventions and retrospection and, therefore, cannot serve as the basis for strong AI. To achieve human level intelligence, learning machines need the guidance of a model of reality, similar to the ones used in causal inference tasks. To demonstrate the essential role of such models, I will present a summary of seven tasks which are beyond reach of current machine learning systems and which have been accomplished using the tools of causal modeling.
I believe that causal reasoning is essential for machines to communicate with us in our own language about policies, experiments, explanations, theories, regret, responsibility, free will, and obligations—and, eventually, to make their own moral decisions.”
Professor Judea Pearl received the 2011 Turing Award for developing a calculus for probabilistic and causal reasoning, work that helps machine learning answer the question “Why?”.
Yoshua Bengio received the Turing Award in March 2019 and wrote an article in Wired in August 2019 expressing his interest in building algorithms that understand the “Why”.