Why AI could be one of the most disruptive technologies in history

Artificial Intelligence (AI) has quickly made the transition from a technology of the future to one that surrounds us in our everyday lives. AI is being built into the products and services we use each day to transform our lives for the better, but we have yet to learn how this emerging technology will affect the future of work.

To understand the growing role AI is playing in the lives of both businesses and consumers, Jeff Wong, Ernst & Young (EY) Global Chief Innovation Officer, highlighted the following points:

AI has a significant ability to improve efficiency, doing things faster and with more accuracy. In particular, AI has saved 2.1 million hours of people’s time by automating repetitive tasks. He also stressed the obligation of leaders of companies and governments around the world to invest in their communities and workforces so that all people develop the skills that will enable them to evolve and grow with the AI change. This is strongly aligned with the Government AIWS Ethics and Practices Index, which the Michael Dukakis Institute (MDI) has developed to promote governments’ AI activities for human values and the constructive use of AI.

Is the “Populist” Tide Retreating?

Professor Joseph Nye, a member of the Boston Global Forum’s Board of Thinkers, shared his thoughts on populism at Project Syndicate:

Strong support for immigration and globalization in the US sits uneasily with the view that “populism” is a problem. In fact, the term remains vague and explains too little – particularly now, when support for the political forces it attempts to describe seems to be on the wane.

The AI World Society-G7 Summit Initiative will develop ideas on applying AI to address populism and to build a better society through deeply applied AI.

AI Research: US vs. China

In our AIWS Weekly newsletter last week, we discussed DARPA’s efforts to keep the US ahead in AI technology. This week, we take a look at the race between the US and China from the perspective of AI research output.

A new analysis by the Allen Institute for AI shows that China’s AI research publications are rapidly increasing and that “the two nations will produce an equal share of top AI publications by 2020”. It is no coincidence that the Chinese government announced a new AI strategy in 2017 that aims to rival the US by 2020. Just as DARPA plays a key role in the US’s evolving AI strategy, China’s defense ministry is investing deeply in AI innovation, according to a study by the Center for a New American Security.

Quantity does not always come with quality, however. The quality of a research publication is commonly gauged by its citation count and publication venue. On citation counts, the Allen Institute’s analysis shows that China’s AI research output has improved sharply: its share of the top 10% most-cited AI publications jumped from less than 5% in 2000 to almost 30% in 2017, while the US share slowly declined from 40% to 30% over the same period.

However, given the size of China, it would be interesting to know whether the citation data accounts for citations by authors from the same country. Concerns have also been raised about Chinese research institutions’ reputation for low-quality and even fraudulent publications.

Ethics, whether in publishing AI research or in developing AI products, is a serious issue because of AI’s potentially profound impact on the future. The AI World Society (AIWS) puts AI ethics at the focus of much of the organization’s work. Last year, we produced a set of Ethical Frameworks for AI Norms and Standards, toward building a social model that will make Artificial Intelligence (AI) safe, trustworthy, transparent, and humanistic.

On the other hand, we cannot wait until the quality of China’s AI research output is verified. “The US government needs to do more to support AI research”, said Oren Etzioni, CEO of the Allen Institute for AI.

Some of the smartest people in technology pondered how to make AI trustworthy

The New York Times recently reported on its New Work Summit, “Leading in the Age of AI,” in Half Moon Bay, California, where some of the top minds in technology shared their outlooks for AI and its applications. The summit ended with plenty of room for debate about what we should do about AI ethics.

Is government regulation the answer? That, at least, is what Amazon and Microsoft suggest. “Law is needed,” said Brad Smith, Microsoft’s president and chief legal officer.

Many employees of technology companies, however, think differently. They argue that the immediate responsibility rests with the companies themselves. “Regulation slows progress, and our species needs progress to survive the many threats that face us today,” according to employees of Clarifai, a tech company that develops AI-powered products for the Pentagon.

Other activists and researchers, like Meredith Whittaker, co-founder of the AI Now Institute, call for both ethical action from companies and regulation from governments. The latter is needed because the forces of capitalism continue to drive tech companies toward greater profits.

The debate about AI ethics has become divisive, even between company leaders and their employees. Many technology companies have already created corporate principles and set up ethics officers or review boards to ensure their systems are designed and deployed in an ethical way. Still, many employees have left their companies, disappointed by the lack of concrete action. “You functionally have situations where the foxes are guarding the henhouse,” said Liz Fong-Jones, a former Google employee who left the company late last year over this issue.

So, is ethical AI even possible?

This question remains open, but we believe that ethical AI will be impossible if we do not take into account the fact that the governments of large countries have significant influence over the development of the world. Therefore, we need a framework for cooperation between major governments, given the uncertainty and complexity of the AI ecosystem. For this reason, AIWS has proposed the Government AIWS Ethics and Practices Index, which looks at the strategies, activities, and progress of major governments (including the G7 countries and other influential countries such as Russia, China, and India) in the field of AI. We hope this effort helps move us toward a feasible solution for ethical AI.

The exciting course “Shaping Work of the Future”

We are delighted to introduce the exciting course “Shaping Work of the Future” by Professor Thomas Kochan, a member of the Editorial Board of Shaping Futures Magazine, a publication of the Michael Dukakis Institute for Leadership and Innovation. Here is a letter from Professor Thomas Kochan to Boston Global Forum’s colleagues.

Dear Boston Global Forum Colleagues,

Several years ago I participated in one of your forums.  I see you recently focused on AI and its impacts.

Given our shared interests, I would like to invite members of your network to join us in our online MOOC on Shaping Work of the Future that starts March 19.  It is free and open to all.

This year we will give special emphasis to how advances in technology (AI, Robotics, etc.) can be used to create better work, more inclusive societies, and broadly shared prosperity.  We link the course to our MIT Task Force on Work of the Future and draw in lots of experts from MIT and around the world.

Attached is a description of the course and here is a link to a video about it and a place to register.  I would very much appreciate it if you would share this with your network.  Please let me know if I can be helpful in any way.

https://www.edx.org/course/shaping-the-future-of-work-0

Best wishes,

Tom Kochan

Thomas A. Kochan

George M. Bunker Professor of Management

MIT Sloan School of Management

Co-Director, Institute for Work and Employment Research


AI tool for cancer diagnosis wins FDA breakthrough status

While Artificial Intelligence (AI) is making inroads in healthcare facilities management, as an aid to improve efficiency in such areas as scheduling, staffing and billing, AI as a diagnostic tool in patient care is still in its infancy.

Paige.AI calls itself the first publicly announced AI tool for cancer diagnosis to win FDA breakthrough status, a designation that gives its developers a speedier agency review in recognition of the product’s potential to improve treatment for life-threatening conditions or irreversibly debilitating diseases.

Launched in early 2018 with technology developed by Memorial Sloan Kettering data scientist Thomas Fuchs and colleagues, Paige.AI receives digitized slides through its licensing agreement with the cancer center. The startup said it is working to develop a portfolio of AI products across cancer subtypes, beginning with prostate cancer.

The FDA’s breakthrough designation for the AI cancer-diagnosis tool is a key milestone for the acceptance of AI technology in medical care. This breakthrough application has a strong impact on society and reflects AIWS’s vision that “AI can be a force for helping people achieve well-being and happiness, unleash their potential, obtain greater freedom, relieve them of resource constraints and arbitrary/inflexible rules and processes, and solve important issues.”

More details are available at https://paige.ai/.

China’s masses of data give it an edge in AI—but they may not forever

Last Thursday, MIT hosted a celebration for the new Stephen A. Schwarzman College of Computing, a $1 billion effort to create an interdisciplinary hub of AI research. During an onstage conversation between Schwarzman, the CEO and co-founder of investment firm Blackstone, and the Institute’s president, Rafael Reif, Schwarzman noted, as he has before, that his leading motivation for donating the first $350 million to the college was to give the US a competitive boost in the face of China’s coordinated national AI strategy.

That prompted a series of questions about the technological race between the countries. They essentially boiled down to this: When it comes to AI, more data is better, because it is a brute-force situation. How can the US outcompete China when the latter has far more people and the former cares more about data privacy? Is it, in other words, just a lost cause for the US to try to “win”?

Here was Reif’s response: “That is the state of the art today—that you need tons of data to teach a machine.” He added, “State of the art changes with research.”

Reif’s comments served as an important reminder about the nature of AI: throughout its history, the state of the art has evolved quickly. We could very well be one breakthrough away from a day when the technology looks nothing like it does now. In other words, data may not always be king.

AI World Society has created AI-Government, a new model for government with deeply applied AI built around a Decision Making Center, and looks forward to new approaches, algorithms, and methods that would require far less data and would better simulate human thinking.

China’s Huawei has big ambitions to weaken the US grip on AI leadership

In spite of tensions with the US and its allies, Huawei is rapidly building a suite of Artificial Intelligence (AI) offerings unmatched by any other company in the world. Huawei’s ambitious vision stretches from AI chips for data centers and mobile devices to deep-learning software and cloud services of the kind currently offered by Amazon, Microsoft, and Google. However, the company’s technological ubiquity, and the fact that Chinese companies are ultimately answerable to their government, are big reasons why the US views Huawei as an unprecedented national security threat.

In recent years, Huawei has planned to increase its investments in AI and integrate it throughout the company to “build a full-stack AI portfolio.” In particular, Huawei launched an AI chip for its smartphones, called Ascend, which is comparable to the iPhone’s chip and tailor-made for running machine-learning code to support face and voice recognition. Huawei also offers a cloud computing platform with 45 different AI services and plans to release its first deep-learning framework, called MindSpore, which will compete with Google’s TensorFlow and Facebook’s PyTorch.

Beyond the technology itself, governments and companies are also expected to follow ethical AI, as initiated and promoted through the AI World Society (AIWS) by the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI). Specifically, the AIWS Ethics and Practices Index is designed with technical standards to track governments’ AI activities, their respect for human values, and their contributions to the constructive use of AI. Reaching agreement on AI ethics and standards with national governments and leading countries in the G7 and OECD is the ultimate goal of AIWS, even as tension between East and West persists.

Japan’s Abe refuses to deny that he nominated Trump for Nobel Peace Prize

“I’m not saying that it is not the fact,” Abe said, adding that the US President had been working to curb North Korea’s nuclear and missile development and that he “highly” appreciated his leadership.

“I’ll continue to (offer) my utmost cooperation to President Trump to solve the North Korean nuclear and missile issues and the abduction issue, which is the most important … for Japan,”

Prime Minister Shinzo Abe said, as quoted by CNN.

The Boston Global Forum honored Prime Minister Abe with the World Leader for Peace and Cybersecurity Award on Global Cybersecurity Day, December 12, 2015, at the Harvard University Faculty Club in Cambridge, MA. At the event, Prime Minister Abe sent a video acceptance speech confirming the Japanese government’s commitment to making cyberspace safe.