Rethink Robotics suddenly closes their business

A failed deal with a Chinese company left Rethink in a dire financial situation.

Rethink Robotics was a pioneer in building robots that could safely work alongside humans in real workplaces. However, after running into financial trouble, the company closed down on Wednesday without issuing a statement.

Founded in 2008 by Professor Rodney Brooks, Rethink was once one of the world’s most influential robotics businesses. Rethink led the way in developing “cobots”, or collaborative robots, which are designed to safely work alongside humans. Its software was built for simplicity in programming and use, making the robots accessible even to people with little or no training in robotics. The cobots are equipped with sensors and software that help prevent them from accidentally harming their users.

Baxter and Sawyer are the company’s two flagship products, designed to perform highly repetitive rote tasks. A large order from China fell through when the customer abruptly withdrew, plunging the company into a cash crisis. Scott Eckert, Rethink’s chief executive, declined to name the Chinese company. The Sawyer robot had been customized for the Chinese market, and when the deal collapsed, Rethink was left with unsold robots and unpaid bills.

At the same time, Rethink faced strong rivals. One of them is Universal Robots, a Danish company owned by North Reading-based Teradyne Inc., which announced the sale of its 25,000th collaborative robot last month. “It’s tough to compete with Universal,” said Jeff Burnstein, president of the Association for Advancing Automation.

Rethink will begin to sell off its patent portfolio and other intellectual property, and the company’s 91 employees are expected to be in strong demand from other robotics firms.

Education for Shared Societies Policy Dialogue in Lisbon, Portugal

The Education for Shared Societies (E4SS) Policy Dialogue will take place on October 16-17, 2018 in Lisbon, Portugal, with the participation of about 40 former democratic heads of state and government. The dialogue is organized by the World Leadership Alliance – Club de Madrid (WLA-CdM) in partnership with the Calouste Gulbenkian Foundation.

The WLA-CdM began its Shared Societies Project (SSP) a decade ago, aiming to build peace and democracy across political, social, economic and environmental dimensions. The project has worked to foster a sense of belonging and shared responsibility among everyone in a shared society. This year, the organization is focusing on educational engagement for all.

Three main themes of the E4SS dialogue will address pressing global issues: Refugees, Migrants and Internally Displaced People (IDPs); Preventing Violent Extremism (PVE); and Digital Resilience. Educators, policy-makers, WLA-CdM Members, and experts will discuss policy changes for each area during the event.

The outcomes of this international dialogue will be presented to the E4SS Joint Steering Committee to produce an E4SS Agenda by 2019, ensuring the continuity of the project globally.

At the same time, WLA-CdM has also been working closely with MDI to develop the AIWS 7-Layer Model to build Next Generation Democracy. The initiative is intended to provide a baseline for guiding AI development toward positive outcomes and for reducing the pervasive, realistic risks and related harms that AI could pose to humanity.

Professor Nazli Choucri on the subject of AI-Government at the AIWS Conference

On September 20, the AIWS Conference took place on the theme “AI-Government and AI Arms Races and Norms”. At the conference, Professor Nazli Choucri, Cyber Politics Director of MDI, Professor of Political Science at MIT, and a member of MDI’s AIWS Standards and Practice Committee, shared her ideas on AI-Government and how to make it work.

In the context of AI’s emergence, the use of AI in government has great potential, as AI can bring a great deal of efficiency and consistency to monitoring and alignment. Alongside these benefits, however, the feasibility of AI governance poses many challenges to humanity.

We have a long history of human governance; it is no longer a strange concept to people anywhere. However, cultures vary widely across countries, whereas AI works from the data and knowledge it has learned. It is therefore extremely difficult to build one AI for every institution, as we would need a common conceptual framework across every system to make this work. Prof. Nazli Choucri emphasized two aspects of the AI world that are fundamental for the purposes of governance: data and algorithms, both of which require enormous time and effort for AI to work properly and remain transparent.

From the perspective of government, there are several tasks that must be done well to achieve a functional government. The government needs to be regulative, extractive, distributive, responsive, and symbolic, while ensuring its people’s security. She also noted that a government must manage the strain created by the ratio between the loads placed on it and its capability to perform its functions. “If we are applying AI to government, these are the generic functions. Consider that is the matter of rules, rules have to be made, to be communicated, there have to be agencies to implement them on the operational level. The interface between AI and the government abilities becomes the first stage that keeps them both together,” said Prof. Choucri.

Because the internet connects to everything, AI could be an essential tool for governance. However, there are limitations as well: AI is very good at analysis, targeting and execution, but poor at interpretation and at weighing consequences. Especially when it comes to malfunctions and accidental failures, people can be in great danger if the outcomes are not carefully considered. As a result, the ethics of AI is the primary focus of innovators and practitioners in building AI-Government.

According to Prof. Choucri, there are several ethical imperatives of AI for government that need to be carefully considered:

  • Responsibility in Use
  • Accountability in Performance
  • Avoidance of oppression
  • Prevention of technological conflict – no AI race
  • Provision of human oversight for critical AI operations

Most important of all is improving responsiveness through constant feedback on policy and keeping a check on excessive use of AI for control.

Experts anticipate the threat of AI resulting in research lockdown

According to AI News, world leaders in innovation are warning of a potential AI catastrophe that could lead to a lockdown of research.

Recently, autonomous robotics industries have been developing at a remarkable speed, and at the same time have caused considerable damage across multiple incidents. Autonomous vehicles account for a large share of these incidents, such as the fatal crash involving an Uber self-driving vehicle. As autonomous AI systems proliferate, researchers will carry a heavy responsibility for the safety of users.

“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US,” said Andrew Moore, the new head of AI at Google Cloud, speaking at the Artificial Intelligence and Global Security Initiative.

It is widely agreed that AI should not be used for military weapons; however, this seems inevitable, since “there will always be players willing to step in.” “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” said Russian President Vladimir Putin.

Concerned about accidental and adversarial use of AI, Governor Michael Dukakis, Chairman of the Michael Dukakis Institute (MDI), believes a global accord is needed to ensure the rapidly growing technology is used responsibly by governments around the world. For that reason, he co-founded the Artificial Intelligence World Society, a project that aims to bring scientists, academics, government officials and industry leaders together to keep AI a benign force serving humanity’s best interests. At the moment, MDI is developing the concept of AI-Government and the AIWS Index in Ethics as two components of AIWS.

Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership

“Within the past several years, Chinese researchers have achieved a track record of consistent advances in basic research and in the development of quantum technologies. The quantum ambition is intertwined with China’s national strategic objective to become a science and technology superpower. The United States must recognize the trajectory of China’s advances in these technologies and the promise of their potential military and commercial applications.” This is part of the introduction to the report “Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership”, published in September 2018.

In the newly published report, authors Elsa B. Kania and John Costello show the basics of quantum technology, China’s related efforts in the field, and what measures the United States should pursue to preserve its technological leadership.

Having recognized the strategic potential of quantum science for strengthening its economy and military, China has made the field a national technology priority. Despite being a latecomer to the race, China now appears to be one step ahead in “the second quantum revolution”. In recent years, China has achieved breakthroughs in the development of quantum technologies, including quantum cryptography, communications, and computing, as well as reported progress in quantum radar, sensing, imaging, metrology, and navigation.

According to the report, China’s advances in quantum science could affect the future military and strategic balance. Indeed, through these investments China is striving to become a true peer competitor to the U.S. on these military technological frontiers. In this context, the US needs to build upon and redouble existing efforts to protect its position. One of the recommendations is that the Department of Defense should consider further experimentation with these technologies to leverage its advantages in innovation.

Since technology is advancing at an unprecedented pace, people should not ignore the potential damage if it gets out of hand. A set of moral standards for developers is therefore needed to protect the safety and prosperity of humanity, which is the purpose of the AIWS initiative’s establishment.

⇒ Read The Full Report Quantum Hegemony? China’s Ambitions and the Challenge to U.S. Innovation Leadership HERE

Authors: Elsa Kania, Adjunct Fellow, CNAS Technology & National Security Program; John K. Costello, Director, Office of Strategy, Policy, and Plans at the Department of Homeland Security’s (DHS) National Protection and Programs Directorate.