Machine Learning Is Living in the Past

Current machine learning platforms largely fail to provide time-series predictions because “correlations that have held in the past may simply not continue to hold in the future,” the London-based company causalLens notes. That’s a particular problem in areas like finance and business where time-series data types are ubiquitous.

Those correlations tend to rest on isolated data points, unsuited to capturing context or complex relationships. In one example, an algorithm is given access to a data set of dairy commodity prices in order to predict the price of cheese. The algorithm may conclude that butter prices serve as a reliable guide to the cost of Limburger.

What eludes the algorithm is a fundamental fact about dairy pricing: the hidden common cause of price spikes for both cheese and butter is the cost of milk. A sudden change in the price of butter driven by something other than milk, such as a consumer shift toward olive oil, therefore says nothing about cheese. Hence the correlation between butter and cheese is spurious and cannot be used to predict the latter's price.
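The confounding described above is easy to demonstrate in a few lines of Python. The sketch below is a hypothetical simulation (not causaLens's method): a shared milk price drives both butter and cheese prices, so the two look strongly correlated observationally, but once butter moves for a reason unrelated to milk, the correlation vanishes.

```python
import random


def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


random.seed(0)
n = 2000

# Hidden common cause: the price of milk.
milk = [random.gauss(10.0, 2.0) for _ in range(n)]

# Butter and cheese each depend on milk, plus independent noise.
butter = [1.5 * m + random.gauss(0.0, 0.5) for m in milk]
cheese = [2.0 * m + random.gauss(0.0, 0.5) for m in milk]

# Observationally, butter and cheese look tightly linked.
r_observed = pearson(butter, cheese)

# Intervention: butter prices shift for reasons unrelated to milk
# (say, a demand shock); cheese still tracks milk as before.
butter_shocked = [random.gauss(15.0, 2.0) for _ in range(n)]
r_intervened = pearson(butter_shocked, cheese)

print(f"correlation (observed):   {r_observed:.2f}")   # near 1
print(f"correlation (intervened): {r_intervened:.2f}")  # near 0
```

Causal-AI tooling formalizes this intervention step (Pearl's do-operator): a model that had learned only the butter-cheese correlation would fail the moment butter moves independently of milk.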

In a recent research bulletin, the company touts its “causal AI” framework as looking beyond correlations to learn obvious relationships and then “propose plausible hypotheses about more obscure chains of causality.” The approach allows data scientists to add domain knowledge and real-world context to improve predictive analytics. Causal AI proponents also argue their approach makes better use of data and yields more accurate predictions through the framework’s ability to simulate different scenarios.

The original article can be found here.

It is worth noting that the foundations of causal inference in AI were laid by Professor Judea Pearl, who received the Turing Award in 2011. In 2020, Professor Pearl was also honored as a World Leader in AI World Society (AIWS.net) by the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF).

Prime Minister Shinzo Abe says no need for new emergency declaration as coronavirus cases surge

The government does not think it necessary to declare a state of emergency over the novel coronavirus again, Prime Minister Shinzo Abe has said, despite a nationwide surge in the number of cases.

“We’re doing careful monitoring with a strong sense of tension, but we’re not in a situation that immediately warrants the issuance of a fresh state of emergency declaration,” Abe told reporters.

“We ask the public to take full precautions” against COVID-19, he said after a meeting on the situation with economic revitalization minister Yasutoshi Nishimura, health minister Katsunobu Kato and others.

Referring to the conditions that could help spread the virus, Abe said, “We ask people to avoid the 3Cs and to refrain from speaking loudly.” The 3Cs refer to confined spaces, crowded places and close-contact settings.

He also said virus testing capacity has not yet been maxed out despite a recent surge in the number of tests, and pledged further efforts toward the early detection and treatment of infected people.

The original article can be found here.

Boston Global Forum honored Prime Minister Abe with the World Leader for Peace and Security Award on Global Cybersecurity Day December 12, 2020 at Harvard University Faculty Club.

WLA-CdM and BGF co-organize the Online AIWS Roundtable: “Digital Technologies, Elections and Democracy in times of COVID-19”

The World Leadership Alliance-Club de Madrid (WLA-CdM) in partnership with the Boston Global Forum (BGF) is organizing a Policy Lab on Transatlantic Approaches to Digital Governance: A New Social Contract in the Age of Artificial Intelligence that will take place on 16-18 September 2020.

In preparation for the Policy Lab, WLA-CdM and BGF will be organizing a series of preliminary online roundtables that seek to fuel and enrich deliberations within the Policy Lab. The first of these roundtables took place on 12 May 2020 and focused on the deployment of digital technologies in response to the COVID-19 pandemic and their implications for privacy rights.

A second online roundtable, on Digital Technologies, Elections and Democracy in times of the COVID-19 pandemic, will be held on 28 July 2020 and will analyse how digital technologies can contribute to protecting democracies and guaranteeing free, fair and transparent elections in times of global emergencies.

Objectives of this event:

  • To contribute to the global discussions on how digital technologies and Artificial Intelligence can promote stable democracies in times of global crises.
  • To collect ideas for the Social Contract 2020 version 1.0, launched in May 2020.

WLA-CdM and BGF will co-organize a Virtual Conference in September 2020 to discuss The Social Contract 2020

The World Leadership Alliance-Club de Madrid (WLA-CdM) in partnership with the Boston Global Forum (BGF) is organizing a Policy Lab on Transatlantic Approaches to Digital Governance: A New Social Contract in the Age of Artificial Intelligence that will take place on 16-18 September 2020.

Due to social distancing measures, we have decided to organize a virtual conference instead. The key content of this conference is “The Social Contract 2020: A New Social Contract in the Age of AI.”

Twenty former presidents and prime ministers will take part, joined by distinguished thinkers, legislators, and business leaders.

The magnitude and relevance of the COVID-19 pandemic has, naturally, upended the original plans for the Policy Lab on Transatlantic Approaches to Digital Governance: A New Social Contract in the Age of Artificial Intelligence, which was originally designed to take place in the spring of 2020. However, this initiative is now more important than ever as we seek to engage in multi-stakeholder discussions on the interaction between artificial intelligence and emerging technologies on the one hand, and the measures and policies adopted by governments, international organizations, companies and society on the other, in times of global crises such as the one spawned by COVID-19.

The spread and penetration of digital technologies has been transforming society, the way in which we work, communicate and participate in different public and private spaces for some time now. The COVID-19 outbreak and ensuing global health crisis has significantly accelerated this process, imposing rapid and widespread digitalization even in the political sphere.

Since the beginning of the COVID-19 pandemic, we have seen online party meetings and even parliamentary sessions conducted via videoconference. We have also seen the complexity of adapting certain governance interactions to a virtual format, owing to internal public administration rules and regulations. Likewise, we have seen increased interest in moving voting itself online, with a clear awareness of, and concern over, the opportunities and risks this would entail in the context of elections.

Beyond the AI hype cycle: Trust and the future of AI

There’s no shortage of promises when it comes to AI. Some say it will solve all problems while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Altered Carbon, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us as creators and consumers of AI technology to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

The ability to do this successfully is largely dependent on user data. System performance, reliability, and user confidence in AI model output are affected as much by the quality of the model design as by the data going into it. Data is the fuel that powers the AI engine, converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance, and the driver’s ability to compete, an AI system trained with incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data stewardship practices by AI developers and vendors are critical for building effective AI models as well as creating customer acceptance, satisfaction, and retention.

Responsible data stewardship establishes a chain of trust that extends from consumers to the companies collecting user data and those of us building AI-powered systems. It’s our responsibility to know and understand privacy laws and policies and consider security and compliance during the primary design phase. We must have a deep understanding of how the data is used and who has access to it. We also need to detect and eliminate hidden biases in the data through comprehensive testing.

The original article can be found here.

Regarding AI ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure the extent to which a government’s AI activities respect human values and contribute to the constructive use of AI. The Index assesses government performance across four components: transparency, regulation, promotion and implementation.

Japanese State Minister criticized China at the Democratic Alliance on Digital Governance Conference

Yasuhide Nakayama, Member of the House of Representatives of Japan, State Minister for Foreign Affairs, and Head of the Japan Liberal Democratic Party’s Foreign Affairs Division, raised concerns about and criticized China’s threats to peace and security in the world, especially with respect to the maritime sovereignty of Vietnam, the Philippines, and Japan.

He said that democratic countries fear the Chinese Communist Party, and he hopes that the leaders of all democratic countries will ally and consolidate to face China’s threats together.

Yasuhide Nakayama is a Mentor of AI World Society Innovation Network (AIWS.net).

 

“China’s Government uses many ‘weapons’ to attack the US”

At the Democratic Alliance on Digital Governance Conference, organized by the Boston Global Forum, Mr. Nam Pham, Assistant Secretary of Business Development and International Trade, Government of Massachusetts, raised concerns about China-US relations and called for a rule-based reshaping of the relationship.

The costs to the US of its relationship with the PRC are both economic and related to human rights.

Moreover, he argued, these are not merely the strategies of individual companies; rather, China’s government has strategies for almost all of its companies to steal US technologies.

China has taken aggressive actions in the sovereign maritime territories of Japan, Vietnam and the Philippines, as well as in a land conflict with India at their border. China also hid information about COVID-19, contributing to a pandemic around the world.

China uses many weapons to attack the US and Western countries. These weapons are not guns, but they are still very dangerous; if left unchecked over time, they will destroy the economies and values of the US and Western countries.

He concluded that the US government needs to quickly review and reshape its relationship with China before it is too late.

 

UN2045: Building a Trustworthy Economy

Today’s financial systems are not trusted by citizens because they are inherently unstable and winner-take-all; for most people, the system offers only failure. New digital technologies now allow the fine-grained feedback needed to build systems that are dramatically more stable, that reward everyone’s contribution to society, and that provide everyone with a realistic opportunity to build a good life.

Professor Alex Pentland, Faculty Founding Director of MIT Connection Science, Mentor of AI World Society Innovation Network (AIWS.net), and a co-author of the Social Contract 2020, has conceived the “Trustworthy Economy”, based on data science, digital technologies and AI. This is very meaningful for the UN 2045 Initiative.

The United Nations Academic Impact and the Boston Global Forum will co-organize the UN 2045 Roundtable “Building a Trustworthy Economy” at 10:00 AM EDT on July 24. The keynote speaker is Professor Alex Pentland of MIT, and the moderator is Ramu Damodaran, Chief of Academic Impact of the United Nations and Editor-in-Chief of UN Chronicle Magazine. AIWS.net hosts this event as an AIWS Roundtable.

The History of AI Initiative also recognizes the Trustworthy Economy roundtable as a History of AI event.

CENTER FOR AI AND DIGITAL POLICY Update – EU Privacy Decision Will Have Global Consequences

The European Court of Justice this week sided with the Austrian privacy advocate Max Schrems and found that “Privacy Shield,” the framework for data transfers from Europe to the United States, does not protect the personal information of Europeans. The decision will have far-reaching implications for trans-Atlantic trade, tech, data protection, and democratic governance. To continue transfers of personal data from Europe to the US — key to the continued growth of the US tech industry — the US will need to update domestic privacy laws and establish a data protection agency. Several bills now pending in Congress would do this, though the prospects for passage in an election year remain unclear.

This is the second successful challenge that Schrems has brought to the Court of Justice. In 2015, the Court struck down “Safe Harbor” after Schrems argued that the first EU-US framework lacked sufficient safeguards for personal data. US and EU negotiators then put together Privacy Shield, but many doubted the Court of Justice would endorse the revised data transfer policy, particularly after Europe enacted the General Data Protection Regulation, a comprehensive new privacy law to protect the personal information of Europeans.

The “Schrems I” decision arose in 2015 against the backdrop of the 2013 Snowden disclosures and concern that US intelligence agencies had overly easy access to the personal data gathered by US tech firms. Those concerns remain in the European Court’s opinion in “Schrems II.” Current US surveillance law contains few safeguards for non-US persons, and the US remains one of the few democratic countries in the world without a data protection agency. But the second Schrems decision comes in 2020, when there is also growing concern about the fairness of Artificial Intelligence techniques, the unregulated use of face surveillance, and the recognition that mass surveillance curbs democratic freedoms and solidifies authoritarian governments. Europe itself has made strengthening democratic institutions and adherence to the rule of law top priorities for the next several years. So the impact of the Schrems II decision will likely reach beyond EU-US relations. Other governments also collect and process the personal data of Europeans; the decision of the European court will have global consequences.

The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.

 

Marc Rotenberg, Director

Center for AI and Digital Policy at Michael Dukakis Institute