Professor Joseph Nye addressed the problem of norms for AI at AIWS Conference 2018

Professor Joseph Nye, Member of the Boston Global Forum’s Board of Thinkers and Distinguished Service Professor at Harvard University, addressed the problem of norms for AI at the AIWS Conference on September 20, 2018, at the Harvard University Faculty Club.

Gov. Michael Dukakis, Prof. Joseph Nye, Nick Burns, and Nguyen Anh Tuan

Prof. Joseph Nye opened his speech by discussing the expansion of Chinese firms in the US market and China’s ambition to surpass the US in the field of AI. Prof. Nye believes that an AI arms race and geopolitical competition in AI could have profound effects on our society. However, he called the prediction that China will be ahead of the US in AI by 2030 “uncertain” and “indeterminate,” since China’s only advantages are having more data and fewer concerns about privacy. Turning to norms for AI, Prof. Nye argued that as AI is applied to warfare and autonomous offensive systems, we should have a treaty to control it. One of his suggestions is to create international institutions that would monitor the various AI programs in various countries.

A careful discussion of AI ethics is essential to ensure the future of AI and robotics

On September 20, the AIWS Conference, with the theme ‘AI-Government and AI Arms Races and Norms’, was held at the Harvard University Faculty Club by the Michael Dukakis Institute for Leadership and Innovation (MDI). The key message of the conference was the importance of moral standards for AI to safeguard humanity.

As reported by AI Trends, the conference took place at the Harvard University Faculty Club with scientists, researchers, and standard-setters in attendance. It aimed to find solutions to the root of AI’s threat: its unconstrained machine-learning mechanisms.

According to Matthias Scheutz, Director of the Human-Robot Interaction Lab at Tufts University, “We would like to ensure that AI and robotics will be used for the good of humanity. The greatest danger I see is from unconstrained machine learning, where the system can define goals not intended by the designer.”

“The best way to safeguard AI systems is to build ethical mechanisms into the algorithms themselves,” adds Dr. Scheutz. “We need to do ethical testing of the system without the system knowing it. That requires specialized hardware and virtual machine architecture.”

Besides, Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC) takes the position that, “Knowledge of AI algorithms is a fundamental right.”

Prof. Joseph Nye, Distinguished Service Professor at Harvard University, anticipated an AI arms race if the current pace continues: AI is thriving as never before, yet AI ethics is still not a priority for researchers.

“It’s not part of the job description,” said Nazli Choucri. The effort to create standards needs to be international, similar to the restrictions on nuclear weapons.

“Ethics is essential to what we are doing,” said Tom Creely, a professor at the US Naval War College. “It’s an important topic in the military. And national security is no longer just the Defense Department’s problem. We all need to be part of the conversation.” Full of potential, AI should be a valuable tool for making our lives better. It will not become destructive if we follow rules designed to protect ourselves.

At AIWS Conference 2018, MDI also introduced its partnership with the AI World Conference & Expo (including AI Trends). The partnership aims to develop, measure, and track the progress of ethical AI policy-making and solution adoption among governments and corporations.

A resolution to ban killer robots has been passed by the European Parliament

The European Parliament recently brought out a resolution to ban killer robots, calling on its Member States to adopt it to secure humanity’s future. On September 12, 2018, 82% of votes were in favor of banning lethal autonomous weapon systems (LAWS) internationally.

The resolution called for an urgent, legally binding instrument to prohibit autonomous weapons. The need for negotiation arose after the United Nations discussion in which nations could not reach a conclusion on whether to ban LAWS.

Many open letters signed by AI researchers around the world have supported the prohibition of LAWS.

Two sections of the resolution stated:

“Having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organizations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“Whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilising effects of these technologies.’”

This is remarkable progress for AI developers. It is notable that scientists, including members of the AIWS Standards and Practice Committee of MDI, are paying more attention to the ethics of AI, and this has been recognized by the European Parliament. The risk of an arms race will diminish as nations find a common voice.

Can a robot farm operate without human workers?

An emerging autonomous farm uses robots to tend rows of leafy greens under the control of software named “The Brain”.

Iron Ox recently opened its production line in San Francisco. The line is set up in an 8,000-square-foot hydroponic facility capable of producing 26,000 heads of leafy greens a year. The company hopes to run it without human labor, relying instead on robotic arms and movers.

Iron Ox developed software called “The Brain” to make its machines collaborate; it watches over the farm, monitoring its condition, and orchestrates robots and humans when needed.

Human presence is still required for certain steps, such as seeding and processing crops, but Brandon Alexander, the firm’s co-founder, looks forward to automating these steps as well. The company aims to address the shortage of agricultural labor that the farming industry has been facing.

The automation of agricultural processes will also require some monitoring regulations; the ethical framework for AI is something that MDI’s experts are actively researching and exploring.

BrainNet: A system that can connect three people’s thoughts

A group of researchers at the University of Washington in Seattle has successfully connected human brains in the first brain-to-brain network.

Thought-to-thought communication, once considered science fiction, has now become reality. In 2015, Andrea Stocco and his colleagues at the University of Washington used their gear to connect people via a brain-to-brain interface. On September 29, 2018, he announced the success of the world’s first brain-to-brain network, called BrainNet. The system allows a small group to play a Tetris-like puzzle game.

The system is built on electroencephalography (EEG), which records electrical activity in the brain, and transcranial magnetic stimulation (TMS), which transmits signals into the brain. BrainNet measures signals from a number of electrodes placed on the scalp and spots changes in brain activity: seeing a light flashing at 15 hertz, for instance, causes the brain to emit a signal at the same frequency, and when the light switches to 17 Hz, the brain signal changes accordingly.

Stocco and his team created a network that allows three people to send and receive information from their brains using EEG and TMS. In the experiment, the participants sat in separate rooms with no way to communicate conventionally. Two of them, the senders, wore EEG caps and could see the full game screen; the game is designed so that the descending block fits the row below either as-is or rotated by 180 degrees, and the senders had to decide which and broadcast that choice to the receiver. The senders controlled their brain signals by staring at LEDs on either side of the screen, one flashing at 15 Hz and the other at 17 Hz. The receiver, connected to both an EEG and TMS, could see only the upper half of the Tetris screen and the block, but not how it must be rotated; he could decide only from signals received via TMS meaning “rotate” or “do not rotate”. Since the senders could see both halves, they determined whether to rotate and transmitted the corresponding signal to the receiver.
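The frequency-tagging scheme described above can be sketched in code: a sender’s choice is encoded as the dominant frequency of their EEG signal (15 Hz for one command, 17 Hz for the other), which the receiving side recovers by frequency analysis. The sketch below is purely illustrative, not the study’s actual pipeline: it uses a naive DFT over simulated, noisy data, and the sampling rate, noise level, and decoding rule are all assumptions.

```python
import math
import random

def dominant_frequency(signal, sample_rate):
    """Return the strongest non-DC frequency component via a naive DFT."""
    n = len(signal)
    best_freq, best_power = 0.0, 0.0
    for k in range(1, n // 2):  # skip DC (k = 0), stop below Nyquist
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        power = re * re + im * im
        if power > best_power:
            best_power, best_freq = power, k * sample_rate / n
    return best_freq

# One second of simulated EEG from a sender staring at the 15 Hz LED:
# a 15 Hz oscillation plus Gaussian background noise (values are assumed).
sample_rate = 128  # Hz
random.seed(0)
eeg = [math.sin(2 * math.pi * 15 * i / sample_rate) + 0.3 * random.gauss(0, 1)
       for i in range(sample_rate)]

detected = dominant_frequency(eeg, sample_rate)
# Map the detected frequency back to the sender's intended command.
decision = "rotate" if abs(detected - 15) < abs(detected - 17) else "do not rotate"
```

With one second of data the frequency resolution is 1 Hz, enough to separate the two flicker rates; a real BrainNet-style decoder would face far noisier signals and multiple electrode channels.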

As technology becomes more influential in our daily lives and functions, we need to avoid accidental failures, since our safety, prosperity, and more depend on it. Researchers should guarantee users’ safety by following ethical standards, which is what AIWS promotes; one of its works is the AIWS 7-layer model for technology developers.

The US is aiming to make a national effort in protecting cyberspace

Advisers in the Trump administration are searching for a “cybersecurity moonshot”: a clear plan for securing the digital landscape over the next five years, inspired by the US effort behind the first moon landing. So far, however, the effort has lacked a vision for harnessing that kind of prowess toward a concrete outcome.

Technology is developing at an unprecedented speed, while cybersecurity has not kept pace, as several recent breaches show. “This current approach to cybersecurity isn’t working,” said Scott Charney, Vice Chairman of the President’s National Security Telecommunications Advisory Committee. “This is the beginning of a conversation.”

We can see many incidents, such as the repeated breaches of users’ information on Facebook and the manipulation of election systems in the US, and they show no sign of slowing down. The call for a Cyber Moonshot is therefore essential: we need to prepare for the worst this continual threat can bring. By creating a systematic plan and cyber defenses, the moonshot can establish a baseline level of confidence and readiness for when cyberattacks occur.

Through his leadership, Scott is having a profound influence on current thinking in cybersecurity technology, policy, legal matters, and international relations. He was honored as the Business Leader in Cybersecurity by BGF in December 2016.

Global Governance for Information Integrity Roundtable in Riga: Addressing information disruption on social media

On September 27, 2018, in Riga, a roundtable on Global Governance for Information Integrity, hosted by the WLA-CdM, took place at the Latvian Ministry of Foreign Affairs on the occasion of the 100th anniversary of Latvia. Mr. Nguyen Anh Tuan, CEO of BGF and Director of MDI, introduced the AIWS Initiative and AI-Government at this event.

According to Director Nguyen Anh Tuan, AI may be a good solution for preventing disinformation: untrue communication that is purposefully spread and represented as truth to elicit a response serving the perpetrator’s purpose. The AIWS and AI-Government are initiatives of MDI, aiming to create a society in which humans and AI citizens can coexist peacefully, and AI will be used for good purposes under strict control.

Global Governance for Information Integrity Roundtable focused on the first pathway to global action: protecting the integrity of political information through global governance. A discussion between global political leaders and international experts is needed to address the issue of fake news in the information space. In the era of thriving communication, social media has had a huge influence on politics. It brought about many opportunities as well as challenges concerning transparency and accountability of political information.

Human life will be improved by AI if it is controlled by standards – and humans need to prepare in advance

In the new century, humanity has produced more and more great innovations that can change its history. And when it comes to intelligence, AI stands out as the “hottest” technology trend in the world in recent years.

AI, or artificial intelligence, is understood simply as the intelligence of machines created by humans. It can think and learn as human intelligence does, and it can process data more broadly, more systematically, more scientifically, and faster than humans can. But can AI completely replace humans?

Mr. Nguyen Anh Tuan, Director of the Michael Dukakis Institute for Leadership and Innovation (MDI) and Founder and Editor-in-Chief of VietNamNet Newspaper, affirmed in the “Coffee Morning” show of a popular Vietnamese television channel, VTV3, that AI and robots cannot replace humans.

Although AI is increasingly being used in many fields and activities in daily life, humans are still irreplaceable, especially in the field of social management.

The AI-Government, an initiative launched by the MDI in June 2018, will help manage AI to serve citizens more intelligently, more automatically and more responsibly. The AI-Government is a government in which AI is widely and thoroughly applied in the management, decision making and policy making process of governing bodies rather than in just public services (human contact, streamlined payroll system, etc.).

For example, in the US-China trade war, the governments of both countries need to make the smartest decisions. Given a complete data system, AI’s intelligent, optimized algorithms can recommend smart, convincing decisions. Furthermore, AI can help us make decisions very quickly.

But it is important that AI remains a tool, an “effective assistant” offering suggestions to people while people are the ones who will consider and make final decisions. Therefore, when using AI, human intelligence needs to be one level higher. Many people think that when there are robots, they will have nothing to do anymore. On the contrary, new and more demanding jobs will open up.

It is worth mentioning that when we put AI into application, we recognize that people have many good traits but also morally ambiguous traits, while AI is very honest. In Vietnam, Luu Quang Vu’s play Green Chrysanthemum on Marsh is a typical example: Nearly 40 years ago, Luu Quang Vu thought about robots that could help people look back and adjust themselves so that they became more honest and more warm-hearted. And so, humanity will need a standard to manage and control AI in general.

That is why the MDI developed the AI World Society Initiative (AIWS Initiative), published on November 29, 2017. According to Mr. Nguyen Anh Tuan, the basic purpose of this initiative is to establish a society with the best and most effective AI application, bringing good to humans.

To illustrate the need for this, he also reiterated the fact that cyber security is a “headache” to the world today. As we did not anticipate the development of the Internet and computers, we have left “holes” that are difficult to overcome. For AI, although the same problem has not been officially declared, we still need to prepare in advance; otherwise, as Prof. Stephen Hawking said, such “holes” would be a threat to humanity in the future.

Mr. Nguyen Anh Tuan also affirmed that the MDI and its associates who are experts from Harvard University, Massachusetts Institute of Technology (MIT), etc. agree to contribute their research, ideas, or initiatives on AIWS and AI-Government to serve humanity, creating a good society where AI is not harmful to humans.

Google forbids the development of AI-based software that can be used in weapons

While critics argued that Google was stepping closer to the “business of war” due to a contract with the US Defense Department, the company has responded by banning the development of AI that could be used in weapons.

As AI becomes more and more powerful, Google’s leaders have shown their concerns by preventing the creation of AI software that can be used in weapons. The move is considered to set a new ethical guideline for technology companies around the world seeking superiority in self-driving cars, automated assistants, robotics, and military AI.

According to the Independent, Google asserted that it will not pursue AI development that could harm international law or human rights. The Independent states, however, that cybersecurity, training, veterans’ health care, search and rescue, and military recruitment are some spheres in which Google will continue to cooperate with governments.

It is unclear how the company will seek to follow its own rules under the principle. Chief executive Sundar Pichai referenced seven core tenets for AI applications, including being socially beneficial, being built and tested for safety, and avoiding creating or reinforcing unfair bias. The company will evaluate projects by examining how readily the technology developed could be adapted to harmful use.

In fact, Google’s Web tools, such as image search and automatic translation, are largely built on AI. The tools themselves could conceivably violate these ethical principles. For example, users of Google Duplex can use it to mimic a human voice over the phone to make dinner reservations.

However, the Pentagon’s technological researchers and engineers say other contractors will still compete to help develop technology for the military and national defense. According to John Everett, Deputy Director of the Information Innovation Office at the Defense Advanced Research Projects Agency, organizations are free to choose whether to participate in AI exploration.

AI should be used for good purposes. Toward this aim, MDI has built the AIWS Initiative to establish a society with the best and most effective AI application, bringing the best to humans.