by Editor | Apr 26, 2020 | News
“What we need now is critical thinking. We have brains. God gave us brains. We should be using them. Critical thinking is hard work.”
On April 22, 2020, Vint Cerf, Vice President and Chief Evangelist of Google, Father of the Internet, and World Leader in AIWS Award recipient, shared his perspectives on “Pandemic geopolitics and recovery post-COVID” as part of a live video discussion moderated by David Bray, Atlantic Council GeoTech Center Director, on the role of tech, data, and leadership in the global response to and recovery from COVID-19.
by Editor | Apr 26, 2020 | News
Blockchain and artificial intelligence (AI) solve different tasks, but they can work together to improve many processes in the financial services industry, from customer service to loan application reviews and payment processing.
Adopting AI and blockchain technologies can make your financial sector smarter and help it to perform more effectively. Blockchains can provide transparency and data aggregation; they also enforce contract terms. Meanwhile, AI can automate decision-making and improve internal bank processes.
AI and blockchain technology can revolutionize critical processes in the financial services industry. However, these technologies have to be carefully calibrated and integrated with existing operations. The low level of dependence between blockchain and AI technologies is helpful, as you can begin by introducing only one of these technologies to your banking processes. This will allow you to focus on the most important things, such as creating a clear road map, developing an MVP and introducing your product to the market faster than your competitors.
The original article can be found here.
To support the application of AI across world society, including in financial services, the Artificial Intelligence World Society Innovation Network (AIWS.net) created the AIWS Young Leaders program, which includes Young Leaders and Experts from Australia, Austria, Belgium, Britain, Canada, Denmark, Estonia, France, Finland, Germany, Greece, India, Italy, Japan, Latvia, the Netherlands, New Zealand, Norway, Poland, Portugal, Russia, Spain, Sweden, Switzerland, the United States, and Vietnam.
by Editor | Apr 26, 2020 | News
Artificial intelligence owes a lot of its smarts to Judea Pearl. In the 1980s he led efforts that allowed machines to reason probabilistically. Now he’s one of the field’s sharpest critics. In his latest book, “The Book of Why: The New Science of Cause and Effect,” he argues that artificial intelligence has been handicapped by an incomplete understanding of what intelligence really is.
Three decades ago, a prime challenge in artificial intelligence research was to program machines to associate a potential cause to a set of observable conditions. Pearl figured out how to do that using a scheme called Bayesian networks. Bayesian networks made it practical for machines to say that, given a patient who returned from Africa with a fever and body aches, the most likely explanation was malaria. In 2011 Pearl won the Turing Award, computer science’s highest honor, in large part for this work.
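The kind of inference described above can be illustrated with Bayes' rule, the building block of Pearl's Bayesian networks. The sketch below is a minimal toy version of the malaria example; all probabilities are illustrative assumptions, not medical data.

```python
# Toy Bayesian inference in the spirit of Pearl's Bayesian networks.
# Observing evidence (fever and body aches) updates the probability
# of a hypothesized cause (malaria).

def posterior(prior, p_evidence_given_cause, p_evidence_given_not_cause):
    """P(cause | evidence) via Bayes' rule."""
    p_evidence = (prior * p_evidence_given_cause
                  + (1 - prior) * p_evidence_given_not_cause)
    return prior * p_evidence_given_cause / p_evidence

# Hypothetical numbers for a traveler returning from a malaria region:
p_malaria = 0.02              # prior P(malaria)
p_symptoms_if_malaria = 0.9   # P(fever & aches | malaria)
p_symptoms_otherwise = 0.05   # P(fever & aches | no malaria)

p = posterior(p_malaria, p_symptoms_if_malaria, p_symptoms_otherwise)
print(f"P(malaria | fever, aches) = {p:.2f}")  # prior 0.02 rises to about 0.27
```

A network chains many such updates together, so evidence can propagate through dozens of interrelated variables rather than a single cause-effect pair.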
But as Pearl sees it, the field of AI got mired in probabilistic associations. These days, headlines tout the latest breakthroughs in machine learning and neural networks. We read about computers that can master ancient games and drive cars. Pearl is underwhelmed. As he sees it, the state of the art in artificial intelligence today is merely a souped-up version of what machines could already do a generation ago: find hidden regularities in a large set of data. “All the impressive achievements of deep learning amount to just curve fitting,” he said recently.
In his new book, Pearl, now 81, elaborates a vision for how truly intelligent machines would think. The key, he argues, is to replace reasoning by association with causal reasoning. Instead of the mere ability to correlate fever and malaria, machines need the capacity to reason that malaria causes fever. Once this kind of causal framework is in place, it becomes possible for machines to ask counterfactual questions — to inquire how the causal relationships would change given some kind of intervention — which Pearl views as the cornerstone of scientific thought. Pearl also proposes a formal language in which to make this kind of thinking possible — a 21st-century version of the Bayesian framework that allowed machines to think probabilistically.
Pearl expects that causal reasoning could provide machines with human-level intelligence. They’d be able to communicate with humans more effectively and even, he explains, achieve status as moral entities with a capacity for free will — and for evil.
The original article can be found here.
In support of AI ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society (AIWS.net) to promote ethical norms and practices in the development and use of AI. In addition, the Michael Dukakis Institute and the Boston Global Forum honored Professor Judea Pearl as the 2020 World Leader in AI World Society for his contributions to AI causality. Going forward, Professor Pearl will also contribute to Causal Inference for AI transparency, one of the important AIWS topics on AI ethics.
by Editor | Apr 19, 2020 | Event Updates
At 9:00 am (Boston time) on April 22, 2020, Professor Alex Pentland will speak at the online AIWS Roundtable, “Solutions to reopen in democratic countries.”
Professor Alex Pentland is the director of MIT Connection Science and a co-founder of AIWS.net.
Moderator: Dr. Lyndon Haviland, Adviser to the United Nations Secretary-General
Participants: senior business leaders of global corporations who are alumni of the Harvard Business School AMP program.
Professor Alex Pentland is one of the seven most powerful data scientists according to Forbes magazine’s ranking. He will present on using personal data to fight the coronavirus without violating privacy rights, so that the economy and society can reopen.
by Editor | Apr 19, 2020 | News
The world’s current growing pandemic of the Wuhan virus amplifies the paramount and ultimate goal of our AI initiative. Our vision is to establish a universal AI value system ensuring that advances in AI technology make lives better rather than giving ruling authorities more tools to oppress people or exacerbate human suffering.
The first of our AI Universal World Guidelines states:
“Right to Transparency: All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.”
While our world is still preoccupied with fighting the Wuhan virus, it has become clear to all that had the ruling power of China been more forthcoming with its people and with the world, this pandemic could have been avoided, or its deadly impact could have been far more limited. As early as November 2019 in Wuhan, China, there had been many red flags about this new deadly disease; doctors had rung the alarm bell. Unfortunately for all of us, the Chinese government not only kept these warnings secret but also punished the doctors and suppressed any truthful voice among its people. In Asian culture, there is a saying that “treating an illness is like fighting a fire.” Time is of the essence. Any delay causes irreparable harm.
During the most critical period of November and December 2019, China repeatedly announced that there was nothing to worry about and that it had everything under control, while Chinese citizens were dying in the thousands. Furthermore, the World Health Organization simply repeated the official version of the Chinese government. This lack of transparency from a national government and from an international organization has no doubt contributed terribly to the avoidable suffering of millions of people across our globe.
Every government has a different set of values. The more values we hold in common, the better. We hope that in the near future, at least in the AI World Society, we will all share basic universal principles, among which transparency is valued, respected, and commonly practiced.
by Editor | Apr 19, 2020 | News
Professor John Quelch is a co-founder of the Boston Global Forum, Vice Provost of the University of Miami, Dean of the Miami Herbert Business School, and the Leonard M. Miller University Professor.
Recently, Professor Quelch contributed a video advising people on how to prevent the coronavirus using the 7Cs.
by Editor | Apr 19, 2020 | News
Most C-suite executives know they need to integrate AI capabilities to stay competitive, but too many of them fail to move beyond the proof of concept stage. They get stuck focusing on the wrong details or building a model to prove a point rather than solve a problem. That’s concerning because, according to our research, three out of four executives believe that if they don’t scale AI in the next five years, they risk going out of business entirely. To fix this, we offer a radical solution: Kill the proof of concept. Go right to scale.
We came to this solution after surveying 1,500 C-suite executives across 16 industries in 12 countries. We discovered that while 84% know they need to scale AI across their businesses to achieve their strategic growth objectives, only 16% of them have actually moved beyond experimenting with AI. The companies in our research that were successfully implementing full-scale AI had all done one thing: they had abandoned proof of concepts.
To scale value in the AI era, the key is to think big and start small: prioritize advanced analytics, governance, ethics, and talent from the start. It also demands planning. Decide what value looks like for you, now and three years from now. Don’t sacrifice your future relevance by being so focused on delivering for today that you aren’t prepared for the next wave. Understand how AI is changing your industry and the world, and have a plan to capitalize on it.
This is certainly new territory, but there is still time to get ahead if you lay the groundwork now. What you don’t have time to do is waste it proving a concept that already exists as consensus.
The original article can be found here.
According to the Artificial Intelligence World Society Innovation Network (AIWS.net), AI can be an important tool to serve and strengthen democracy, human rights, and the rule of law. In this effort, the Michael Dukakis Institute for Leadership and Innovation (MDI) invites participation and collaboration with think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.
by Editor | Apr 19, 2020 | News
“Correlation is not causation.”
Though true and important, the warning has hardened into the familiarity of a cliché. Stock examples of so-called spurious correlations are now a dime a dozen. As one example goes, a Pacific island tribe believed flea infestations to be good for one’s health because they observed that healthy people had fleas while sick people did not. The correlation is real and robust, but fleas do not cause health, of course: they merely indicate it. Fleas on a fevered body abandon ship and seek a healthier host. One should not seek out and encourage fleas in the quest to ward off sickness.
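The flea story can be simulated in a few lines. In the toy model below (all numbers invented for illustration), health causes fleas, not the other way around: observing fleas strongly predicts health, yet forcing fleas onto everyone leaves health exactly where it was.

```python
import random

random.seed(1)

def sample(give_fleas=None):
    """One islander in a toy model where health causes fleas.
    If give_fleas is set, we intervene and put fleas on the host directly."""
    healthy = random.random() < 0.7
    if give_fleas is None:
        # Fleas prefer healthy hosts and abandon a fevered body.
        fleas = random.random() < (0.9 if healthy else 0.1)
    else:
        fleas = give_fleas
    return healthy, fleas

N = 100_000
data = [sample() for _ in range(N)]
p_healthy_given_fleas = sum(h for h, f in data if f) / sum(f for _, f in data)
p_healthy_baseline = sum(h for h, _ in data) / N

# Intervention: seek out and encourage fleas, as the tribe's logic suggests.
forced = [sample(give_fleas=True) for _ in range(N)]
p_healthy_if_forced = sum(h for h, _ in forced) / N

print(f"P(healthy | fleas)     = {p_healthy_given_fleas:.2f}")  # high: fleas indicate health
print(f"P(healthy)             = {p_healthy_baseline:.2f}")
print(f"P(healthy | do(fleas)) = {p_healthy_if_forced:.2f}")    # unchanged from baseline
```

The correlation is real and robust in the observational data; only the intervention exposes that it runs backwards.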
The rub lies in another observation: that the evidence for causation seems to lie entirely in correlations. But for seeing correlations, we would have no clue about causation. The only reason we discovered that smoking causes lung cancer, for example, is that we observed correlations in that particular circumstance. And thus a puzzle arises: if causation cannot be reduced to correlation, how can correlation serve as evidence of causation?
The Book of Why, co-authored by the computer scientist Judea Pearl and the science writer Dana Mackenzie, sets out to give a new answer to this old question, which has been around—in some form or another, posed by scientists and philosophers alike—at least since the Enlightenment. In 2011 Pearl won the Turing Award, computer science’s highest honor, for “fundamental contributions to artificial intelligence through the development of a calculus of probabilistic and causal reasoning,” and this book sets out to explain what all that means for a general audience, updating his more technical book on the same subject, Causality, published nearly two decades ago. Written in the first person, the new volume mixes theory, history, and memoir, detailing both the technical tools of causal reasoning Pearl has developed as well as the tortuous path by which he arrived at them—all along bucking a scientific establishment that, in his telling, had long ago contented itself with data-crunching analysis of correlations at the expense of investigation of causes. There are nuggets of wisdom and cautionary tales in both these aspects of the book, the scientific as well as the sociological.
Professor Pearl’s book was also highly praised by Dr. Vint Cerf, Chief Internet Evangelist at Google and World Leader in AI World Society (AIWS) award recipient: “Pearl’s accomplishments over the last 30 years have provided the theoretical basis for progress in artificial intelligence… and they have redefined the term ‘thinking machine.’” In 2020, AI World Society (AIWS.net) created a new section on Modern Causal Inference, which will be led by Professor Judea Pearl. Professor Pearl’s work will contribute to AI transparency, one of the important AIWS topics aimed at identifying, publishing, and promoting principles for the virtuous application of AI in domains including healthcare, education, transportation, and national security, among other areas.
The original article can be found here.
by Editor | Apr 12, 2020 | News
Reps. Adam Schiff (D-CA) and Steve Chabot (R-OH), the co-chairs of the Congressional Freedom of the Press Caucus, released the following statement on a Pakistani appeals court’s decision to commute the sentences of four men convicted of murdering journalist Daniel Pearl:
“It is deeply disturbing that a Pakistani appeals court recently commuted the sentences of four men convicted of brutally murdering Wall Street Journal reporter Daniel Pearl in 2002, a decision that would drastically reduce their sentences. We welcome the announcement that this decision will be further appealed and urge the Pakistani Supreme Court in the strongest terms to ensure this miscarriage of justice does not stand.
“This is also cause to remember and honor the tremendous personal risks that journalists like Daniel Pearl take all around the world to tell stories that must be told. It is our responsibility to stand with them. In 2010, Congress passed the Daniel Pearl Freedom of the Press Act to hold those who persecute reporters accountable. With journalists increasingly under threat around the world, it’s time to strengthen the penalties on those who attack journalists, not commute the sentences of cold blooded murderers.”
On April 9, 2020, the CEO of the Boston Global Forum, Mr. Nguyen Anh Tuan, sent a letter to the Pakistani Prime Minister protesting the April 2, 2020 decision of the high court in Pakistan’s Sindh province, which ordered the release of four men convicted of participating in the 2002 kidnapping and murder of Wall Street Journal reporter Daniel Pearl.
Professor Judea Pearl, father of journalist Daniel Pearl, wrote on Twitter: “We are grateful to Congressmen Adam Schiff and Steve Chabot for taking a strong, bipartisan stand on behalf of justice and press freedom.” Professor Pearl is a Chancellor’s Professor at UCLA and a Mentor of AIWS.net.