Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership

“Within the past several years, Chinese researchers have achieved a track record of consistent advances in basic research and in the development of quantum technologies. The quantum ambition is intertwined with China’s national strategic objective to become a science and technology superpower. The United States must recognize the trajectory of China’s advances in these technologies and the promise of their potential military and commercial applications.” This passage is from the introduction of the report “Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership,” published in September 2018.

In the newly published report, authors Elsa B. Kania and John Costello outline the basics of quantum technology, China’s efforts in the field, and the measures the United States should pursue to preserve its technological leadership.

Having realized the strategic potential of quantum science for strengthening the nation’s economy and military, China has made it one of its top technology priorities. Despite being a latecomer to the race, China now appears to be a step ahead in “the second quantum revolution”. In recent years, China has achieved breakthroughs in the development of quantum technologies, including quantum cryptography, communications, and computing, and has reported progress in quantum radar, sensing, imaging, metrology, and navigation.

According to the report, China’s advances in quantum science could affect the future military and strategic balance. Through sustained investment, China is striving to become a true peer competitor of the United States on these military-technological frontiers. In this context, the US needs to build upon and redouble existing efforts to protect its position. One recommendation is that the Department of Defense should consider further experimentation with these technologies to leverage its advantages in innovation.

Since technology is advancing at an unprecedented pace, people should not ignore the potential harm it could cause if it gets out of hand. A set of moral standards is therefore needed to guide developers and protect human safety and prosperity, which is the purpose of the AIWS Initiative.

⇒ Read The Full Report Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership HERE

Authors: Elsa Kania, Adjunct Fellow, CNAS Technology & National Security Program; John K. Costello, Director, Office of Strategy, Policy, and Plans at the Department of Homeland Security’s (DHS) National Protection and Programs Directorate.

An instrument to deal with biases in AI algorithms

IBM recently released a tool called ‘Fairness 360’, which can detect bias in algorithms and recommend adjustments to the code.

For AI to work properly, a vast range of unbiased data is required. IBM is tackling this bias problem with an instrument called Fairness 360. According to AI News, the software is cloud-based and open source, and it works with common AI frameworks including Watson, TensorFlow, SparkML, AWS SageMaker, and AzureML. The system searches for signs of bias in algorithms and recommends solutions to correct the problems.

Humans have natural biases, and that means a developer’s bias can creep into his or her algorithm. The problem is that developers often cannot predict exactly what decisions their AI systems will make. With this IBM tool, they can see which factors their models actually rely on.
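As an illustrative sketch (not the actual Fairness 360 API), the kind of check such a tool performs can be as simple as comparing favorable-outcome rates across groups; the group labels and decisions below are made up for the example:

```python
# Toy sketch of one bias metric ("disparate impact") that fairness tools
# such as Fairness 360 can compute on a model's decisions.
def selection_rate(decisions, groups, target):
    """Fraction of people in `target` group who got a favorable decision (1)."""
    members = [d for d, g in zip(decisions, groups) if g == target]
    return sum(members) / len(members)

def disparate_impact(decisions, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates; values far below 1.0 signal bias."""
    return (selection_rate(decisions, groups, unprivileged) /
            selection_rate(decisions, groups, privileged))

# Hypothetical data: 1 = loan approved, 0 = denied; "a"/"b" are group labels.
decisions = [1, 0, 0, 1, 1, 1, 0, 1]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact(decisions, groups, "a", "b"))  # 0.5 / 0.75 ≈ 0.67
```

A ratio this far below 1.0 would prompt the tool to flag group “a” as disadvantaged and suggest mitigation.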

This tool can play a vital role in helping developers ensure accountability and transparency in further technology development, which is also the long-term aim of the AIWS Initiative.

Facebook is making an effort to suppress false news

Since the 2016 election, false news has been suppressed on Facebook, while Twitter has yet to act.

According to a recently released paper, users engaged with 570 fake news sites on Facebook, with approximately 200 million monthly engagements at the peak around the end of 2016. Two years on, Facebook’s efforts have paid off, with more human and technological resources devoted to restricting unreliable news. With more content moderators, new offices, and AI software, fake news engagement dropped significantly, to about 70 million engagements in July 2018. By contrast, engagement on Twitter held steady at 4 to 6 million per month from 2016 to 2018.

The study reveals a large volume of fake news, but it also shows Facebook’s attempts to curb the trend. As MIT Technology Review observes, Facebook seems to be moving the platform in the right direction.

On Global Cybersecurity Day, December 12, 2018, at Loeb House, Harvard University, BGF will discuss how the AIWS solution can address disinformation and fake news.

AI robots could be more than just the “terminators” we imagine

There has been much speculation about the future of AI, much of it envisioning AI taking over the human world. Recently, Michael Sulmeyer and Kathryn Dura argued otherwise in their work on AI.

Michael Sulmeyer – Director of the Cyber Security Project at the Harvard Kennedy School’s Belfer Center for Science and International Affairs – and Kathryn Dura – Joseph S. Nye, Jr. Intern for the Technology and National Security Program at the Center for a New American Security – contrasted the potential of AI with its portrayal in science fiction. They believe in “AI’s potential effects in cyberspace not by facilitating attacks but rather improving cybersecurity”.

One of the most common features of cyber defense is situational awareness, a field where AI holds an advantage. To execute a cyberattack, the attacker must know the system well. However, when the system is run by AI, “with the right combination of data, computing power and algorithms, AI can help defenders gain far greater mastery over their data and networks,” said Rob Joyce, former White House Cybersecurity Coordinator.

Another cybersecurity challenge that AI could help with is the prevalence of code reuse. Coders do not usually write their code from scratch, which is efficient but risky because no one is accountable for the integrity of the reused code. AI could help companies identify errors in code and propose alternatives.
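The article does not describe a specific system, but as a minimal sketch of automated auditing of reused code, one simple check is to fingerprint snippets against a (hypothetical) database of code with known defects; the buggy snippet below is invented for the example:

```python
import hashlib

def normalize(snippet: str) -> str:
    # Strip whitespace differences so trivially reformatted copies still match.
    return "\n".join(line.strip() for line in snippet.strip().splitlines())

def fingerprint(snippet: str) -> str:
    # Stable hash of the normalized snippet.
    return hashlib.sha256(normalize(snippet).encode()).hexdigest()

# Hypothetical database of fingerprints of snippets with known defects
# (here: a C-style assignment-instead-of-comparison bug).
KNOWN_BUGGY = {fingerprint("if (x = 0) { reset(); }")}

def audit(snippets):
    """Return the snippets whose fingerprints match known-defective code."""
    return [s for s in snippets if fingerprint(s) in KNOWN_BUGGY]
```

A real AI-based auditor would generalize beyond exact matches (e.g., learned representations of code), but the workflow – scan reused code, flag risky fragments, suggest replacements – is the same.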

It is necessary for people, especially policymakers, to carefully consider the advantages of using AI for security, since this could be a major advancement for the military. A long-term strategy for AI development should be pursued to prevail against adversaries in both the physical and cyber realms.

Policy Dialogue: Education for Shared Societies

On October 16 and 17, a policy dialogue hosted by the World Leadership Alliance will be held in Lisbon, Portugal, with the aim of spreading the messages of Shared Societies and violence prevention in the field of education.

The World Leadership Alliance – Club de Madrid (WLA-CdM) recently launched a new initiative called “Education for Shared Societies”, developed to mobilize political momentum around a Global Agenda that leverages education for all as a catalyst for addressing the challenges facing the advance of shared societies.

The policy dialogue is expected to draw on the leadership experience of more than 100 democratic former Heads of State and Government from more than 70 countries – the members of the WLA-CdM – to draft an agenda, or set of policy imperatives, for the three pillars of the Education for Shared Societies project: inclusive education for migrants and refugees, education for preventing and countering violent extremism, and digital resilience for shared societies.

At the same time, WLA-CdM has also been working with MDI to develop the AIWS 7-Layer Model to build Next Generation Democracy. This initiative is intended to provide a baseline for guiding AI development, ensuring positive outcomes and reducing the pervasive, realistic risks and related harms that AI could pose to humanity.

Condolences over the passing of President Tran Dai Quang

With deepest sorrow, the Boston Global Forum (BGF) and the Michael Dukakis Institute for Leadership and Innovation (MDI) offer their condolences over the passing of Tran Dai Quang, President of the Socialist Republic of Vietnam.

President Tran Dai Quang was a respected leader who strongly supported bringing global citizenship education to Vietnam. In 2016, Professor John Quelch, Professor John Savage, and Professor Carlos Torres – members of the Boston Global Forum’s Board of Thinkers – held official meetings at President Tran Dai Quang’s office and received his warm welcome. On December 21, 2016, he welcomed and met officially with Professor Carlos Torres, Chair of the UNESCO-UCLA Chair in Global Learning and Global Citizenship Education, and Mr. Nguyen Anh Tuan, Chair of the International Adviser Committee of the UNESCO-UCLA Chair in Global Learning and Global Citizenship Education.

President Tran Dai Quang presented a special ceramic gift about Vietnamese culture to Global Citizenship Education Network

His contribution to Viet Nam’s Citizenship Education program is particularly notable and will continue to benefit Viet Nam for many generations to come. The Boston Global Forum and the Global Citizenship Education Network were deeply touched that the President took time from his demanding duties to support and encourage the members of our organization. His support has encouraged our leadership to continue their work for world peace, security, and prosperity. Prof. John Quelch and Prof. John Savage sent condolences and expressed their deep sympathies over the loss of the President.

Highlights of the AIWS Conference 2018

On Thursday, September 20, 2018, the Michael Dukakis Institute (MDI), together with AI World, successfully organized the first AIWS Conference, on the theme of AI-Government and AI Arms Races and Norms.

Distinguished Professor Governor Michael Dukakis – Chairman of MDI moderated this conference.

Opening Remark by Governor Michael Dukakis – Chairman of the Michael Dukakis Institute for Leadership and Innovation (MDI) at the AIWS Conference

Key highlights of the conference include:

– Introducing the concepts of AI-Government, and the AIWS Index.

– Prof. Matthias Scheutz, Director of the Human-Robot Interaction Laboratory at Tufts University, presented ideas for controlling AI systems and robots to ensure safety and ethical standards.

– Prof. Joseph Nye, Harvard University addressed the problem of norms for AI.

– Prof. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), spoke on algorithmic transparency.

– Introduced the strategic collaboration between MDI and AI World – The industry’s largest conference and expo on the business and technology of AI for enterprise.

– Introduced the idea for an AIWS Square.

The concepts of AI-Government were presented for the first time during the AIWS Conference. In addition, the AIWS Index for AI Ethics of Governments is being studied by MDI and strategic partners. According to Prof. Nazli Choucri, AI-Government is the mechanism by which government operates and through which AI brings diversity into the decision-making process. Prof. Choucri’s three points on how to make AI-Government work are: provision of human oversight; improvement of responsiveness through feedback, demands, and pressures on government, together with potential corrective mechanisms by government; and prevention of excessive centralized control by any entity involved in AI for governments.

Prof. Nazli Choucri, Member of BGF’s Board of Thinkers, Cyber-politics Director of MDI, Professor of Political Science at MIT at the AIWS Conference

Prof. Matthias Scheutz, Director of the Human-Robot Interaction Laboratory at Tufts University, presented his ideas on controlling AI systems and robots to ensure safety and ethical standards for humanity. “AI and robotics technology will both ultimately require built-in ethical constraints to ensure that the technology is safe and beneficial to humans,” said Prof. Scheutz. The greatest danger arises when algorithms are out of control and people cannot determine what the systems can and will learn in the future. He also dismissed some common preventive measures as basically insufficient to safeguard AI and robotic technologies. Therefore, Prof. Scheutz believes it is essential to design AI systems from the ground up, even at the hardware level, with ethical provisions – including social and moral norms, ethical principles, and laws – that the system cannot ignore and must apply from its initial operation.

Prof. Matthias Scheutz, Member of MDI’s AIWS Standards and Practice Committee, Director of the Human-Robot Interaction Laboratory of Tufts University at the AIWS Conference

“Many of the debates around the employment of AI techniques have the same focus as the debates over the use of computing technology by government agencies back in the 1960s and 1970s,” said Prof. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC). In Prof. Rotenberg’s view, the core interest in the protection of privacy is not secrecy or confidentiality but the fairness of the processing of data on individuals. Part of the problem is that as these systems have become more sophisticated, they have also become more opaque. Because these systems are widespread and have an enormous impact on individuals’ lives, individuals have the right to know how automated decisions about them are made. Together with EPIC, Prof. Rotenberg is sending the message to the United States Congress that algorithmic transparency will be key in the AI age to fostering public participation and policy formulation.

Prof. Marc Rotenberg, Member of MDI’s AIWS Standards and Practice Committee, President of EPIC at the AIWS Conference

Prof. Joseph Nye opened his speech by talking about the expansion of Chinese firms in the US market and their ambition to surpass the US in the field of AI. Prof. Nye believes that an AI arms race and geopolitical competition in AI could have profound effects on our society; however, he calls predictions that China will be ahead of the US in AI by 2030 “uncertain” and “indeterminate”, since China’s main advantages are more data and fewer concerns about privacy. Turning to norms for AI, Prof. Nye argued that as people unleash AI in warfare and autonomous offensive systems, we should have a treaty to control it. One of his suggestions is to establish international institutions that would monitor the various AI programs in various countries.

Prof. Joseph Nye, Member of BGF’s Board of Thinkers, Distinguished Service Professor of Harvard University at the AIWS Conference

Notably, at this conference MDI officially introduced its cooperation with AI World – the industry’s largest conference and expo on the business and technology of enterprise AI. This cooperation marks the two organizations’ shared determination to develop, measure, and track the progress of ethical AI policy-making and solution adoption among governments and corporations. At this event, Eliot Weinman – Chairman of the AI World Conference and Expo – also became a member of the AIWS Standards and Practice Committee.

The AIWS Conference at Harvard Faculty Club, September 20, 2018

At this conference, MDI launched the initiative of building the AIWS Square, a place for originating and developing ideals and noble human values in the AI Age. The AIWS Square would include an AIWS House to showcase achievements of the 7 AI application layers in a society in which AI is comprehensively applied to all aspects of life. This initiative is hoped to create a symbol of human culture in the era of AI, especially for the 21st century.

Professor Matthias Scheutz: Ensuring AI Safety for humanity

At the AIWS Conference, Professor Matthias Scheutz – Director of the Human-Robot Interaction Laboratory at Tufts University – gave a keynote speech about the potential of AI and robotics technologies and called for ethical provisions in the design of AI systems from the outset to prevent accidental failures.

Prof. Matthias Scheutz, Member of MDI’s AIWS Standards and Practice Committee, Director of the Human-Robot Interaction Laboratory of Tufts University at the AIWS Conference

In his speech, Prof. Scheutz argued that the greatest risk posed by AI and robotics technologies arises when unconstrained machine learning goes out of control and AI systems acquire knowledge and pursue goals their human designers never intended. For example, if an AI program operating the power grid decides to cut off energy in certain areas to improve overall power utilization, it could leave millions of people without electricity – an accidental AI failure. He also dismissed some common preventive measures, inside and outside the system, as basically insufficient to safeguard AI and robotics technologies. Even with “emergency buttons”, the system itself might eventually adopt the goal of preventing the shutdown its human operators have set up.

According to Professor Matthias Scheutz, the best way to safeguard AI systems is to build ethical provisions directly into the learning algorithms, the reasoning algorithms, the recognition algorithms, and so on, so that the algorithms themselves embody those provisions. He also demonstrated a simple form of “ethical testing” to catch and handle ethical violations.
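As a toy sketch of this idea (not Prof. Scheutz’s actual system – the action names and scores below are invented), built-in ethical provisions can be pictured as a hard filter inside action selection that the planner cannot bypass:

```python
# Hypothetical set of actions the system must never take, regardless of
# how highly its planner scores them.
FORBIDDEN = {"cut_power_to_hospital", "disable_shutdown_switch"}

def ethically_permissible(action: str) -> bool:
    return action not in FORBIDDEN

def choose_action(candidates):
    """Pick the highest-scoring (score, action) pair that passes the filter.

    The ethical check runs inside selection itself, so even a top-scoring
    forbidden action can never be chosen.
    """
    allowed = [(score, a) for score, a in candidates if ethically_permissible(a)]
    if not allowed:
        return "halt_and_ask_human"  # fail safe when nothing passes
    return max(allowed)[0:2][1]
```

In Scheutz’s framing the provisions would live inside the learning and reasoning algorithms themselves rather than as a separate list, but the key property is the same: the constraint is not an external “emergency button” the system could learn to circumvent.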

 

Scheutz is a Professor in Cognitive and Computer Science in the Department of Computer Science, Director of the Human-Robot Interaction Laboratory and the new Human-Robot Interaction Ph.D. program, and Bernard M. Gordon Senior Faculty Fellow in the School of Engineering at Tufts University. He has more than 300 peer-reviewed publications in artificial intelligence, natural language processing, cognitive modeling, robotics, and human-robot interaction. His current research focuses on complex interactive autonomous systems with natural language and machine learning capabilities. He is a member of AIWS Standards and Practice Committee.

Watch Prof. Scheutz’s keynote speech below:

The first AIWS Conference about AI-Government in the world

At the AIWS Conference on September 20, 2018, scholars and leaders from governments, businesses, and universities gathered to discuss and exchange ideas around the theme of AI-Government.

Mr. Nguyen Anh Tuan, CEO of BGF, Director of MDI at the AIWS Conference

The concept of AI-Government was developed at the Michael Dukakis Institute for Leadership and Innovation through the collaboration of Governor Michael Dukakis, Mr. Nguyen Anh Tuan, Professor Nazli Choucri, and Professor Thomas Patterson, and was first presented at the AIWS Conference in 2018.

In the future, AI is believed to be capable of transforming the public sector by automating tasks. However, it cannot replace human governance or human decision-making; rather, it guides and informs them. Ethical standards are therefore needed to prevent harmful uses. Additionally, the AIWS Index for AI Ethics of Governments is being studied by MDI and strategic partners.

Speaking of the challenges ahead, Prof. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC) and Member of the AIWS Standards and Practice Committee, pointed to the rapidly widening gap between informed government decision-making and the reality of our technology-driven world, warning that “governments may ultimately lose control of these systems” if they do not act.

Prof. Marc Rotenberg, President of EPIC on AI-Government