A wary discussion of AI ethics is essential to securing the future of AI and robotics

On September 20, the AIWS Conference, themed ‘AI-Government and AI Arms Races and Norms’, was held at the Harvard University Faculty Club by the Michael Dukakis Institute for Leadership and Innovation (MDI). The conference’s key message was the importance of moral standards for AI for the sake of humanity.

As reported by AI Trends, the conference gathered scientists, researchers, and standards-setters. It aimed to find solutions to the root of AI’s threat: its unconstrained machine learning mechanisms.

According to Matthias Scheutz, Director of the Human-Robot Interaction Lab at Tufts University, “We would like to ensure that AI and robotics will be used for the good of humanity. The greatest danger I see is from unconstrained machine learning, where the system can define goals not intended by the designer.”

“The best way to safeguard AI systems is to build ethical mechanisms into the algorithms themselves,” added Dr. Scheutz. “We need to do ethical testing of the system without the system knowing it. That requires specialized hardware and virtual machine architecture.”

Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC), takes the position that “Knowledge of AI algorithms is a fundamental right.”

Prof. Joseph Nye, Distinguished Service Professor at Harvard University, anticipated an AI arms race if the current pace continues: AI is thriving as never before, while its ethics is still not a priority for researchers.

“It’s not part of the job description,” said Nazli Choucri. The effort to create standards needs to be international, similar to the restrictions on nuclear weapons.

“Ethics is essential to what we are doing,” said Tom Creely, a professor at the US Naval War College. “It’s an important topic in the military. And national security is no longer just the Defense Department’s problem. We all need to be part of the conversation.” AI is full of potential and should be a valuable tool for making our lives better. It need not be destructive if we follow rules designed to ensure our own protection.

At the AIWS Conference 2018, MDI also introduced its partnership with the AI World Conference & Expo (including AI Trends). The partnership aims to develop, measure, and track the progress of ethical AI policy-making and solution adoption among governments and corporations.

The European Parliament has passed a resolution calling for a ban on killer robots

The European Parliament recently issued a resolution calling for a ban on killer robots and has called on Member States to adopt it to secure humanity’s future. On September 12, 2018, 82% of the votes favored an international ban on lethal autonomous weapon systems (LAWS).

The resolution called for an urgent, legally binding instrument to prohibit autonomous weapons. The need for negotiation arose after a United Nations discussion in which nations could not reach a conclusion on whether to ban LAWS.

Many open letters signed by AI researchers around the world have supported the prohibition of LAWS.

Two sections of the resolution stated:

“Having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organizations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“Whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilising effects of these technologies.’”

This is remarkable progress for AI developers. Notably, scientists, including members of MDI’s AIWS Standards and Practice Committee, are paying more attention to the ethics of AI, and this has been recognized by the European Parliament. The risk of an arms race will diminish as nations find a common voice.

Can a robot farm operate without human workers?

At an emerging autonomous farm, robots tend rows of leafy greens under the control of software named “The Brain”.

Iron Ox recently opened its production line in San Francisco, set up in an 8,000-square-foot hydroponic facility capable of producing 26,000 heads of leafy greens a year. The company hopes to run it without human labor, relying instead on robotic arms and movers.

Iron Ox developed software called “The Brain” to get its machines to collaborate; it watches over the farm, monitoring conditions and orchestrating robots and humans as needed.

Human presence is still required for certain steps, such as seeding and crop processing, but Brandon Alexander, the firm’s co-founder, looks forward to automating these as well. The company’s aim is to fill the gap left by the farming industry’s ongoing shortage of labor.
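The article does not describe how “The Brain” makes these calls, but the basic division of labor it implies, routing each task to a robot unless it is one of the steps that still needs a person, can be sketched as follows (all names and task labels here are hypothetical, not Iron Ox’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    needs_human: bool  # e.g. seeding and crop processing still need people

def dispatch(tasks):
    """Toy coordinator: route each task to a robot unless it needs a human."""
    return {t.name: "human" if t.needs_human else "robot" for t in tasks}

jobs = [Task("harvest", False), Task("seeding", True), Task("processing", True)]
print(dispatch(jobs))
# {'harvest': 'robot', 'seeding': 'human', 'processing': 'human'}
```

As automation progresses, tasks simply flip from `needs_human=True` to `False`, which matches the trajectory Alexander describes.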

The automation of agricultural processes will also require some monitoring regulations; the ethical framework for AI is something that MDI’s experts are actively researching and exploring.

BrainNet: a system that connects the thoughts of three people

A group of researchers at the University of Washington in Seattle has successfully connected human brains in the first brain-to-brain network.

Thought-to-thought communication, once considered science fiction, has now become reality. In 2015, Andrea Stocco and his colleagues at the University of Washington used their equipment to connect people via a brain-to-brain interface. On September 29, 2018, he announced the success of the world’s first brain-to-brain network, called BrainNet. The system allows a small group to play a Tetris-like puzzle game together.

The system is built on electroencephalography (EEG), which records electrical activity in the brain, and transcranial magnetic stimulation (TMS), which delivers signals into the brain. BrainNet reads from electrodes placed on the scalp and spots changes in the brain signal: watching a light flashing at 15 hertz, for instance, causes the brain to emit a signal at the same frequency, and when the subject switches to a light flashing at 17 Hz, the brain signal changes accordingly.
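This frequency-following response is what lets the system decode which light a subject is watching: compare the spectral power of the EEG trace at the two candidate frequencies and pick the stronger one. A minimal sketch of such a detector, using a synthetic signal in place of real EEG data (function and variable names are illustrative, not BrainNet’s actual code):

```python
import numpy as np

def detect_flicker_frequency(signal, sample_rate, candidates=(15.0, 17.0)):
    """Return the candidate frequency with the most spectral power in `signal`.

    A light flickering at a steady rate evokes EEG activity at that same
    rate, so comparing power at the candidate frequencies reveals which
    light the subject was staring at.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Pick the candidate whose nearest frequency bin carries the most power.
    powers = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in candidates}
    return max(powers, key=powers.get)

# Synthetic one-second "EEG" trace dominated by a 15 Hz component plus noise.
rate = 256  # samples per second
t = np.arange(rate) / rate
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 15 * t) + 0.3 * rng.standard_normal(rate)
print(detect_flicker_frequency(trace, rate))  # 15.0
```

Real EEG decoding must also contend with artifacts and much lower signal-to-noise ratios, but the principle is the same.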

Stocco and his team created a network that allows three people to send and receive information from their brains using EEG and TMS. In the experiment, the participants sat in separate rooms with no way to communicate conventionally. Two of them, the senders, wore EEG caps and could see the full screen. The game is designed so that the descending block fits the bottom row either as-is or after a 180-degree rotation; the senders must decide which and broadcast their choice to the receiver. The senders control their brain signals by staring at LEDs on either side of the screen, one flashing at 15 Hz and the other at 17 Hz. The receiver, fitted with both EEG and TMS, can see only the upper half of the Tetris screen and the falling block, not how it should be oriented. He can act only on the signals received via TMS saying “rotate” or “do not rotate”. The senders, who can see both halves, determine whether to rotate and transmit the signal for the receiver to execute.
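The signaling scheme described above can be simulated in a few lines (a simplifying sketch with hypothetical names; in the real experiment each sender’s choice was encoded as a 15 Hz or 17 Hz EEG signature and delivered to the receiver as a TMS-induced flash):

```python
def sender_decision(block_fits_as_is: bool) -> str:
    # Each sender sees the whole board and stares at one LED:
    # one frequency encodes "keep", the other encodes "rotate".
    return "keep" if block_fits_as_is else "rotate"

def receiver_action(sender_signals: list) -> str:
    # The receiver cannot see the bottom row and acts only on the decoded
    # signals; here we take a simple majority vote (an assumption for this
    # sketch, not necessarily the paper's exact aggregation rule).
    rotations = sender_signals.count("rotate")
    return "rotate" if rotations > len(sender_signals) / 2 else "keep"

signals = [sender_decision(False), sender_decision(False)]
print(receiver_action(signals))  # rotate
```

The interesting part of BrainNet is precisely that this trivial message-passing loop runs over brains rather than wires.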

As technology becomes more influential in our daily lives, we need to avoid accidental failures, since our safety, prosperity, and more depend on it. Researchers should guarantee user safety by following ethical standards, which is what AIWS is doing; one of its works is the AIWS 7-layer model for technology developers.

The US is aiming to mount a national effort to protect cyberspace

Under the Trump administration, advisers are in search of a cybersecurity moonshot. The “Cyber Moonshot” refers to a clear plan for securing the digital landscape over the next five years, a name inspired by the US’s first moon landing, though the effort has so far lacked a vision for harnessing that prowess into outcomes.

Technology is developing at an unprecedented speed, while cybersecurity is failing to keep pace, as several recent violations show. “This current approach to cybersecurity isn’t working,” said Scott Charney, Vice Chairman of the President’s National Security Telecommunications Advisory Committee. “This is the beginning of a conversation.”

Incidents such as the repeated breaches of user information on Facebook and the manipulation of US election systems show no sign of slowing down, so the call for a Cyber Moonshot is essential. We need to prepare for the worst this continual threat can bring. By creating a systematic plan and cyber defenses, the moonshot can establish a baseline level of confidence and readiness for when cyberattacks occur.

Through his leadership, Charney has had a profound influence on current thinking in cybersecurity technology, policy, legal matters, and international relations. He was honored as Business Leader in Cybersecurity by BGF in December 2016.