AIWS-IN Roundtable started 02/02/2020 with AIWS-IN Roundtable on UN 2045

The World Leadership Alliance – Club de Madrid (WLA-CdM), in partnership with the Boston Global Forum (BGF), is organizing a transatlantic and multi-stakeholder dialogue on global challenges and policy solutions in the context of the need to create a new social contract on digital technologies and Artificial Intelligence (AI).

To gather ideas and opinions from global leaders and distinguished thinkers for the policy dialogue at Harvard and MIT from April 27 to 29, and to contribute to the United Nations 2045 project, the Boston Global Forum, the World Leadership Alliance-Club de Madrid, and United Nations Academic Impact are co-organizing the AIWS Innovation Network Roundtable, an online discussion on the AIWS Innovation Network (AIWS-IN). The Roundtable began on February 2, 2020 with a discussion between Mr. Nguyen Anh Tuan, CEO of BGF and co-founder of AIWS-IN, and Mr. Ramu Damodaran, Chief of United Nations Academic Impact. Governor Michael Dukakis, co-founder of AIWS-IN, moderates the Roundtable, whose participants include Professor Alex Sandy Pentland of MIT, Professor David Silbersweig of Harvard, and Professor Nazli Choucri of MIT, all co-founders of AIWS-IN, as well as Professor Joseph Nye of Harvard. The Roundtable will conclude on April 20, 2020. The discussion focuses on balancing centers of power and the new great powers. AI Assistants, a new center of power, will be discussed to identify solutions, regulations, and practices for managing and governing them. This is part of the Social Contract 2020.

EPIC Petitions FTC for Regulations on AI Use

The Electronic Privacy Information Center (EPIC) filed a petition with the Federal Trade Commission (FTC) today seeking regulations for the use of artificial intelligence (AI) technologies.

EPIC – a public interest group that focuses on data privacy issues – said it decided to file the FTC petition after filing complaints regarding the use of AI in employment screening, as well as the secret scoring of young athletes.

“The injuries caused to consumers by commercial AI use are substantial and frequently unavoidable,” EPIC wrote. “In assessing privacy-related injuries to consumers, the Commission typically focuses on the sensitivity of the personal data at issue, the relationship between the consumer and the business(es) engaged in the challenged practice, and the consumers’ knowledge of and agency over the practice,” the group said.

EPIC also pointed to sentiments of support at the FTC for updated regulations on AI use, including quoting FTC Commissioner Rebecca Kelly Slaughter as saying last month, “it is imperative for the FTC to take all action within its authority right now to protect consumers in this space.”

Additionally, EPIC took note of the Office of Management and Budget’s (OMB) Guidance for Regulation of AI, pointing out that it “applies to all Federal agencies,” while incorporating many aspects of the Organization for Economic Cooperation and Development (OECD) Principles on AI.

“Only by initiating a rulemaking on AI can the Federal Trade Commission preserve the rights of American consumers, defend American values, protect privacy, and promote civil rights,” EPIC wrote.

The Electronic Privacy Information Center (EPIC) is led by Marc Rotenberg, its President and Executive Director. He is also a mentor and an important contributor to the AI World Society Innovation Network (AIWS-IN), a part of AIWS that identifies, publishes, and promotes principles for the virtuous application of AI in domains including healthcare, education, transportation, national security, and other areas.

The original article can be found here.

AI meets operations

One of the biggest challenges operations groups will face over the coming year will be learning how to support AI- and ML-based applications. On one hand, ops groups are in a good position to do this; they’re already heavily invested in testing, monitoring, version control, reproducibility, and automation. On the other hand, they will have to learn a lot about how AI applications work and what’s needed to support them. There’s a lot more to AI Operations than Kubernetes and Docker. The operations community has the right language, and that’s a great start; I do not mean that in a dismissive sense. But on inspection, AI stretches the meanings of those terms in important but unfamiliar directions.

Three things need to be understood about AI.

First, the behavior of an AI application depends on a model, which is built from source code and training data. A model isn’t source code, and it isn’t data; it’s an artifact built from the two. Source code is relatively less important compared to typical applications; the training data is what determines how the model behaves, and the training process is all about tweaking parameters in the application so that it delivers correct results most of the time.
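A minimal sketch can make the point above concrete: the artifact that operations teams must version and deploy is neither the source code nor the training data, but the parameters produced from both. The data and training routine here are purely illustrative.

```python
def train(data, epochs=200, lr=0.01):
    """Fit y = w * x by stochastic gradient descent.

    The returned w is the "model": an artifact derived from both the
    training code and the training data, but identical to neither.
    """
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of squared error
    return w

training_data = [(1, 2.0), (2, 4.1), (3, 5.9)]  # hypothetical data: y ≈ 2x
model = train(training_data)  # the artifact to version, test, and deploy
print(round(model, 1))        # close to 2.0
```

Change the data and the artifact changes, even though the source code is untouched; this is why versioning the code alone is not enough.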

Second, the behavior of AI systems changes over time. Unlike a web application, they aren’t strictly dependent on the source. Models almost certainly react to incoming data; that’s their point. They may be retrained automatically. They almost certainly grow stale over time: users change the way they behave (often the model itself is responsible for that change), and the model’s picture of the world grows outdated.
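One operational response to staleness is to monitor a model’s rolling accuracy and flag it for retraining once it drifts below a baseline. The sketch below illustrates the idea; the thresholds and window size are hypothetical.

```python
from collections import deque

class DriftMonitor:
    """Flag a model for retraining when its rolling accuracy degrades."""

    def __init__(self, baseline=0.90, window=100, tolerance=0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)  # rolling record of hits/misses

    def record(self, prediction, actual):
        self.recent.append(prediction == actual)

    def needs_retraining(self):
        if not self.recent:
            return False
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.baseline - self.tolerance

monitor = DriftMonitor()
for p, a in [(1, 1), (0, 1), (1, 1), (0, 0)]:  # toy prediction/label pairs
    monitor.record(p, a)
print(monitor.needs_retraining())  # 3/4 = 0.75 < 0.85, so True
```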

Last, and maybe most important: AI applications are, above all, probabilistic. Given the same inputs, they don’t necessarily return the same results each time. This has important implications for testing. We can do unit testing, integration testing, and acceptance testing—but we have to acknowledge that AI is not a world in which testing whether 2 == 1+1 counts for much. And conversely, if you need software with that kind of accuracy (for example, a billing application), you shouldn’t be using AI. In the last two decades, a tremendous amount of work has been done on testing and building test suites. Now, it looks like that’s just a start. How do we test software whose behavior is fundamentally probabilistic? We will need to learn.
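One way to test probabilistic behavior, sketched below with a toy stand-in for a real model, is to assert a statistical property over many runs rather than an exact output: the test checks an observed accuracy against a tolerance band instead of checking that 2 == 1+1.

```python
import random

def noisy_model(x, accuracy=0.9, rng=random):
    """Toy stand-in for an AI component: doubles x, but errs ~10% of the time."""
    return x * 2 if rng.random() < accuracy else x * 2 + 1

def statistical_test(runs=10_000, seed=42):
    """Assert a tolerance band on observed accuracy, not an exact output."""
    rng = random.Random(seed)  # fixed seed keeps the test reproducible
    hits = sum(noisy_model(3, rng=rng) == 6 for _ in range(runs))
    observed = hits / runs
    assert 0.88 <= observed <= 0.92, f"accuracy drifted: {observed:.3f}"
    return observed

print(f"{statistical_test():.3f}")  # close to 0.900
```

Note the tradeoff: a wider band tolerates more model variance but catches less degradation, so the band itself becomes a quantity operations teams must tune.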

To support collaboration between AI applications and operations, the Artificial Intelligence World Society Innovation Network (AIWS-IN) created the AIWS Young Leaders program, which includes Young Leaders and Experts from Australia, Austria, Belgium, Britain, Canada, Denmark, Estonia, France, Finland, Germany, Greece, India, Italy, Japan, Latvia, Netherlands, New Zealand, Norway, Poland, Portugal, Russia, Spain, Sweden, Switzerland, United States, and Vietnam.

The original article can be found here.

AI healthcare companies set for exponential growth

Companies specialising in artificial intelligence (AI) in healthcare are in “rude health” and are set for exponential growth over the next five years, according to a new report.

The report by adviser and broker finnCap outlines the companies that are employing AI to its best advantage and where its application should have a meaningful business benefit.

AI has potential applications across life sciences, including drug discovery, clinical trials and patient care, in addition to potential improvements in speed and efficiency of company operations.

Many trials are still unsuccessful because drugs fail to show efficacy and safety and AI is seen as a way of improving the chances of success by screening for various factors that could affect outcomes.

The global market was worth $2.1bn in 2018, with exponential growth to $36.1bn predicted by 2025, a compound annual growth rate of 50.2%.
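The reported growth rate can be checked from the two market figures. A quick calculation, using the standard compound annual growth rate (CAGR) formula:

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate linking start to end."""
    return (end / start) ** (1 / years) - 1

# $2.1bn in 2018 growing to $36.1bn by 2025, i.e. over 7 years
rate = cagr(2.1, 36.1, 2025 - 2018)
print(f"{rate:.1%}")  # roughly 50%, consistent with the reported 50.2%
```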

AI presents various new challenges, and the pharmaceutical industry has highlighted many technologies in the past that promised to drive productivity, but nothing has yet worked on a large scale.

Nevertheless, the authors believe that AI is likely to become a greater differentiator in the next 5-10 years and the report presents case studies and real-world examples of the benefits it could provide.

Regarding AI’s impact on society and healthcare, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society Innovation Network (AIWS-IN) to help people achieve well-being and happiness, relieve them of resource constraints and arbitrary or inflexible rules and processes, and solve important issues such as the SDGs.

The original article can be found here.

How AI Will Change The Way We Work In 2020

If there is one technology that has become the buzzword of this decade, it would be artificial intelligence (AI).

At the beginning of the 2010s, consumer natural-language processing (NLP) allowed us to talk to our phones and control smart home appliances reliably. At the time, many people expected NLP to explode into other domains, but it never really materialized, whether because of poor implementations or a focus on other types of development.

However, over the next decade, we can expect to see NLP put to use in complex software to lower the barrier to entry. For example, customer relationship management (CRM) software, which is crucial for any business, is finding higher adoption among salespeople thanks to conversational AI. The application of AI in different software also helps in identifying repetitive tasks and automating them, thereby improving employee productivity.

The organizations that will be most successful using AI over the next decade are the ones implementing single-vendor technology platforms today. If data is scattered in applications using different data models, it’s going to be difficult to work with. But when all data is on a single platform, it’s much easier to feed it into a machine-learning algorithm. The more data that’s available, the more useful the predictions and machine-learning models are going to be.

Regarding AI development and its impact on our society, the Michael Dukakis Institute for Leadership and Innovation (MDI) established the Artificial Intelligence World Society Innovation Network (AIWS-IN) to monitor AI developments and uses by governments, corporations, and non-profit organizations to assess whether they comply with the norms and standards codified in the AIWS Social Contract 2020.

The original article can be found here.

Why Apple And Microsoft Are Moving AI To The Edge

Artificial intelligence (AI) has traditionally been deployed in the cloud, because AI algorithms crunch massive amounts of data and consume massive computing resources.  But AI doesn’t only live in the cloud. In many situations, AI-based data crunching and decisions need to be made locally, on devices that are close to the edge of the network.

AI at the edge allows mission-critical and time-sensitive decisions to be made faster, more reliably and with greater security. The rush to push AI to the edge is being fueled by the rapid growth of smart devices at the edge of the network – smartphones, smart watches and sensors placed on machines and infrastructure. Earlier this month, Apple spent $200 million to acquire Xnor.ai, a Seattle-based AI startup focused on low-power machine learning software and hardware. Microsoft offers a comprehensive toolkit called Azure IoT Edge that allows AI workloads to be moved to the edge of the network.

Will AI continue to move to the edge? What are the benefits and drawbacks of AI at the edge versus AI in the cloud? To understand what the future holds for AI at the edge, it is useful to look back at the history of computing and how the pendulum has swung from centralized intelligence to decentralized intelligence across four paradigms of computing.

To support AI technology and its applications, the Artificial Intelligence World Society Innovation Network (AIWS-IN) created the AIWS Young Leaders program, which includes Young Leaders and Experts from Australia, Austria, Belgium, Britain, Canada, Denmark, Estonia, France, Finland, Germany, Greece, India, Italy, Japan, Latvia, Netherlands, New Zealand, Norway, Poland, Portugal, Russia, Spain, Sweden, Switzerland, United States, and Vietnam.

The original article can be found here.

AIWS Social Contract 2020 contributes principles to shape a more peaceful, secure, and prosperous world

Thinkers and organizations have called for building a new social contract for the digital age.

Some examples are “Do we need a new social contract?” by Maurizio Bussolo and Marc Fleurbaey at the Brookings Institution on April 11, 2019; “A social contract to transform our world by 2030” at the World Economic Forum, Open Forum Davos 2016; “A Trial by Fire and the New Social Contract” by Dr. Kai-Fu Lee on June 13, 2019; “We need to build a new social contract for the digital age” by a group from the Queensland University of Technology’s School of Law; and Professor Thomas Kochan of MIT and other groups calling for a new social contract for labor in the AI age.

To meet the call for a new social contract in the AI age, the Boston Global Forum created concepts and principles for one, named the AIWS Social Contract 2020. The first conference to discuss the AIWS Social Contract 2020 was the AIWS Conference on September 23, 2019 at the Harvard University Faculty Club. Mr. Nguyen Anh Tuan, CEO of the Boston Global Forum, then presented it at the Riga Conference on October 12, 2019, and he and Professor Alex Sandy Pentland presented it at the World Leadership Alliance-Club de Madrid conference on October 21, 2019. To create a platform for distinguished thinkers, innovators, and young leaders to practice and apply the principles of the AIWS Social Contract 2020, the Michael Dukakis Institute for Leadership and Innovation created the AIWS Innovation Network, officially launched at the Global Cybersecurity Day Symposium on December 12, 2019 at Loeb House, Harvard University.

This is the first social contract to shape a new world of politics, society, public policy, and economy, with new concepts of AI-governments, AI-citizens, AI Assistants, seven layers and seven centers of power, and ways to balance centers of power and great powers in the world.

The World Leadership Alliance-Club de Madrid will co-organize the policy dialogue to discuss the AIWS Social Contract 2020 from April 27 to 29, 2020 at Harvard University and MIT.

Michael And Kitty Dukakis Endorse Markey For Senate

Former Gov. Michael Dukakis and Kitty Dukakis are backing Sen. Edward Markey in his primary contest against Rep. Joseph Kennedy III.

The former three-term governor and 1988 Democratic presidential nominee released an endorsement video on Tuesday, crediting Markey’s work over the years on energy innovation and climate change.

Markey described them as the “true founders of the grassroots movement” and said he was grateful for their support. “Michael and Kitty Dukakis are the embodiment of the bold, progressive values that have made the Democratic party what it is today,” Markey said.

The original article can be found here.

Governor Michael Dukakis is a co-founder and chairman of the Boston Global Forum, as well as a co-founder of AI World Society Innovation Network (AIWS-IN). He also is a co-author of the AIWS Social Contract 2020.

Social Contract 2020: Toward Safety, Security, & Sustainability for AI World

Professor Nazli Choucri, MIT

Co-founder of the AIWS Innovation Network

January 21, 2020

Preface

Advances in AI, the internet, social media, and threats to cybersecurity jointly shape a new worldwide ecosystem for which there is no precedent. At issue is building new dimensions, even principles, that would shape the future of international law.

I. INTRODUCTION – NEW GLOBAL ECOLOGY

The term “artificial intelligence” refers to the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, translation between languages, self-driving cars, and so forth.  Almost everyone recognizes that advances in AI have already altered conventional ways of viewing the world around us. This is creating new realities for everyone – as well as new possibilities.

These advances are powerful in many ways. They have created a new global ecology; yet they remain opaque and must be better understood. We have created new tradeoffs that must be assessed. We must now focus on critical principles and essential supporting practices for the new and emerging Social Contract 2020.

We must now re-think and consolidate the best practices for human development, recognizing the power and the value of the individual and of society.

 

II. NEW REALITY – AND NEW UNKNOWNS

Advances in AI are far more rapid than we appreciate. Fully understanding the scale of the AI domain remains elusive. We have seen a shift from executing instructions written by humans to replicating humans, outperforming humans, and transcending humans.

We are at the beginning of a new era, a world of mind-machine convergence with biological drivers for both mind and machine. Also elusive is the management of embedded insecurities in applications of this new ubiquitous technology and the imperatives of safety and security.

When all is said and done, AI remains devoid of consciousness, empathy, and perhaps select other human features, such as ethics, that are fundamental to humanity and the social order. Its current logic is situated at the frontiers of biological intelligence and machine intelligence. While it is generally anchored in past data, it has made possible whole new sources and forms of design space.

In sum: The world of AI today is framed by a set of unknowns — known unknowns and unknown unknowns — where technological innovation interacts with the potential for a total loss of human control.

 

III. INTERNATIONAL CONSENSUS

There is a clear awareness in the international community of the challenges and opportunities, as well as the problems and perils, of AI, and many are seeking ways of managing their approach to it. At least 20 countries have announced formal strategies to promote the use and development of AI. No two strategies are alike; however, there are common themes even among countries that focus on different aspects of AI policy. The most common themes addressed pertain to:

  • Scientific research,
  • Talent development,
  • Skills and education,
  • Public and private sector adoption,
  • Ethics and inclusion,
  • Standards and regulations, and
  • Data and digital infrastructure.

Concurrently, AI is becoming a focus for foreign policy and international cooperation – for both developed and developing states. There is a shared view that no country will be able to compete or meet the needs of its citizens without substantial AI capability.

More important, many countries are now engaged in technology leapfrogging rather than in replicating known trajectories of the past century. It is no longer expected, nor is it necessary, to replicate the stages of economic development of the West, one phase at a time. Countries now frame their own priorities and strategies.

In sum, all countries are going through a common experience of adapting to and managing unknowns. This commonality of shared elements results in a welcoming international atmosphere for a Social Contract 2020. What is the Social Contract 2020?

 

IV. FOUNDATIONS and PRINCIPLES

There is a long tradition of consensus-based social order founded on cohesion rather than on the use of force, formal regulation, or legislation. It is a necessary precursor for managing change and responding to societal needs.

The foundational questions are: what, why, who, and how?

What?

A social contract is about supporting a course of action. It is inclusive and equitable. It focuses on the relationships among people, governments, and other key entities in society.

Why?

To articulate the concerns and find common convergences. And to frame ways of addressing and managing potential threats.

Who?

In today’s world, participants in the Social Contract 2020 involve:

  • Individuals as citizens and members of a community
  • Governments who execute citizen goals
  • Corporate and private entities whose operations involve business rights and responsibilities
  • Civil society that transcends the above
  • Innovators of AI and related technologies, and
  • Analysts of ethics and responsibility.

None of the above can be “left out.”

Each of these constitutes a distinct center of power and influence.

How?

The starting point consists of three foundational principles for powerful international cooperation that provide solid anchors for the Social Contract 2020:

(1) Precautionary Principle for Innovations and Applications:

The precautionary principle is well established internationally. It does not impede innovation, but supports it. It does not push for regulation, but supports initiatives to explore the unknown with care and caution.

(2)  Fairness and Justice for All

The second principle is already agreed upon in the international community as a powerful aspiration. It is the expectation of all entities – private and public — to treat, and be treated, with fairness and justice.

(3) Responsibility and accountability for policy and decision – private and public

The third principle recognizes the power of the new global ecology that will increasingly span all entities worldwide — private and public, developing and developed.

Jointly, these basic foundations – what, why, who and how – create powerful anchors for framing and implementing the Social Contract 2020.

 

V. SOCIAL CONTRACT 2020

All participants and centers of power and influence contribute to framing the legal order in the age of AI.  And each has rights and responsibilities that must be articulated and respected. An initial framing is presented below:

(1) Individuals, Citizens, Groups:

Everyone is entitled to basic rights and dignity that are enhanced by AI and the Internet Age and that entail greater responsibility:

Data Rights and Responsibilities

Each individual has a right to privacy and is entitled to a device to access and control their own data. Individuals have a right to organize ways of managing their data, individually or collectively.

Education and Political Participation

Each individual has the right to be involved directly and effectively in political decisions. Each has access to education and knowledge pertaining to the use and impact of AI.

Responsibility:

Each individual is prohibited from exercising adverse behaviors, such as hacking and disseminating disinformation.

 

(2)  Governments:

Every government is expected to behave responsibly in the management of AI for governance and for interactions with individuals.

Governments Standards:

  • Create incentives for citizens to use AI in ways that benefit society.

 

United Nations and International Organizations:

  • Extend sphere to include AI and extend the upholding of international standards/norms/practices pertaining thereto.
  • Create and manage a universal digital currency.

 

(3) Business Entities

Business operations and related rights come with accountability and responsibility – nationally and internationally.

  • Respect independent audits for fairness, accountability, and cybersecurity.
  • Respect common AI values, standards, norms, and data ownership rules, and expect penalties for noncompliance.

 

(4) Civil Society Organizations:

Rights and responsibilities of civil society organizations include monitoring governments and firms with respect to common values.

  • Civil society organizations are responsible for compliance with common values/norms/standards/laws and expect penalties for noncompliance.
  • Support and recognize exemplary citizen contributions in AI area.

 

(5) AI Assistants:

AI assistants provide an interface to facilitate compliance with established standards.

  • Support AI users and assist them to serve the broad interests of society.
  • Engage with other power centers for mutual support and supervision.

 

VI. PREFERENCES and PERFORMANCE

The Social Contract 2020 consists of general principles and directives for its implementation. Each country is different, as would be its approach to the implementation and adoption of the Social Contract 2020. These preferences are often tradeoffs at the intersection of AI and society, and serve as adjustment mechanisms to facilitate implementation. For example:

  • Performance vs. explicability
  • Ethics vs. efficiency
  • Growth vs. sustainability
  • Convenience vs. safety
  • Power vs. accountability
  • Regulation vs. innovation
  • Security vs. stability

Social Contract 2020 helps steer societies to transcend current practices and forms of e-government by enabling and providing applications of AI to assist decision making for all critical functions – notably the provision of public services, performance of civic functions, and evaluation of public officials – supported by a Center for National Decision Making and Data (NDMD).

AI-supported public services span major critical functions, enabling automated public services assisted by AI, notably:

  • Health care and public health:

Build AI hospitals for remote, rural, and mountainous areas.

  • Education:

Build AI schools for remote, rural, and mountainous areas.

  • Law and legal services:

Build AI legal services.

  • Public transportation:

AI public transportation information and support system.

  • Public services for tourism:

AI public services for tourism.

  • Public services to support labor:

AI labor and job guidance system.