On October 5, Llewellyn King, host of “White House Chronicle” and a member of the Boston Global Forum’s editorial board, had a conversation with John Savage, Professor of Computer Science at Brown University, about the development of computer science, artificial intelligence, and cybersecurity in today’s world.
John Savage is the An Wang Professor of Computer Science at Brown University. His current research interests are cybersecurity technology and policy, reliable computation with unreliable components, computational nanotechnology, efficient cache management on multicore chips, and I/O complexity.
The talk focused on the age of emerging technology and its development at an unprecedented pace. Almost every device around us is now connected, and in the future they are all likely to be automated and computerized, an incoming revolution that people should be aware of and ready for. Preparing people for such changes will require a global effort. Prof. Savage pointed to West Virginia, where local miners are being trained to become coders, as an example of adapting to this global shift.
A further concern Prof. Savage raised is safety: because most AI systems are programmed and computerized, they can be tampered with, producing mixed signals, mistakes, and accidental failures. It is necessary that we all work on addressing this issue and use AI with precision. As long as we can accomplish that, AI need not be bad news for humanity.
In December 2017, Professor John Savage was honored as the Distinguished Global Educator for Computer Science and Security by the Boston Global Forum and the Michael Dukakis Institute for his tireless dedication and contributions to computer science education, from both technological and societal perspectives.
Professor Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC) and a member of the AIWS Standards and Practice Committee, emphasized the importance of algorithmic transparency in policy formulation in the AI age at the AIWS Conference on September 20, 2018, at the Harvard University Faculty Club.
“Many of the debates around the employment of AI techniques have the same focus as the debates associated with the use of computing technology by government agencies back in the 1960s and 1970s,” said Prof. Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC). In Prof. Rotenberg’s opinion, the core interest in the protection of privacy is not secrecy or confidentiality; it is the fairness of the processing of data on individuals. Part of the problem is that as these systems have become more sophisticated, they have also become more opaque. Because these systems are widespread and have an enormous impact on the lives of individuals, individuals have the right to know how automated decisions about them are reached. Together with EPIC, Prof. Rotenberg is sending the message to the United States Congress that algorithmic transparency will be key in the AI age to fostering public participation and policy formulation.
He mentioned that the OECD (Organisation for Economic Co-operation and Development) has already commenced work on AI guidelines, and that the Japanese government, one of its members, has put forward principles for AI R&D policy. Prof. Rotenberg therefore urges swift action, warning of a rapidly growing gap between informed government decision-making and the reality of our technology-driven world: “governments may ultimately lose control of these systems” if they do not act.
The Pentagon plans to invest over two billion dollars in a program called “AI Next” to make AI systems more adaptive.
“Today, machines lack contextual reasoning capabilities and their training must cover every eventuality – which is not only costly – but ultimately impossible” said Steven Walker, the Director of the US Defense Advanced Research Projects Agency (DARPA).
With this approach in mind, Dr. Walker wants to discover how machines can acquire the human ability to improvise in unexpected situations. DARPA is set to spend billions on the new “AI Next” program to enable machines to adapt to changing situations.
This development of AI is described as “the third wave”. The first wave of AI, as DARPA explains it, allowed machines to reason over simple problems, but with a low level of certainty; the second wave made it possible to create models and train them on big data, but with minimal reasoning. The third wave is meant to permit machines to adapt to changing situations. For example, adaptive reasoning could help a computer algorithm tell the difference between the words ‘principal’ and ‘principle’ by analyzing the surrounding words to determine the context, as the sketch below illustrates.
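As a rough illustration of that example (our own minimal sketch, not DARPA’s method), a program can choose between the two spellings by comparing the surrounding words against context profiles built from a handful of labeled snippets. All of the training text below is invented for the demo; a real system would learn from a large corpus.

```python
# Minimal context-based disambiguation sketch: pick "principal" vs
# "principle" from the words around the blank. The tiny labeled
# snippets are invented for illustration only.
from collections import Counter

TRAINING = {
    "principal": ["the school principal retired",
                  "loan principal and interest payments"],
    "principle": ["a matter of moral principle",
                  "principle of least privilege in security"],
}

def build_profiles(training):
    """Count the context words seen around each spelling."""
    profiles = {}
    for word, snippets in training.items():
        counts = Counter()
        for snippet in snippets:
            counts.update(w for w in snippet.split() if w != word)
        profiles[word] = counts
    return profiles

def disambiguate(context, profiles):
    """Pick the spelling whose context profile best matches the input."""
    words = context.lower().split()
    return max(profiles,
               key=lambda spelling: sum(profiles[spelling][w] for w in words))

profiles = build_profiles(TRAINING)
print(disambiguate("the school hired a new one last year", profiles))  # principal
print(disambiguate("it violates a basic moral standard", profiles))    # principle
```

Third-wave systems aim to do this kind of contextual adaptation with far richer models, but the principle is the same: the surrounding words carry the signal.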
A survey conducted at the Joint Multi-Conference on Human-Level Artificial Intelligence indicated that 37% of respondents believe “the third wave” will be achievable within five to ten years.
Giving AI the capability to reason and adapt would be a breakthrough for the industry, but it also means giving AI significant control over itself and its actions, which, without thorough consideration, might lead to unexpected consequences. This calls for monitoring and regulation of AI. Currently, professors and researchers at MDI are working on an ethical framework for AI to guarantee the safety of AI deployment.
From December 3rd to 5th, AI World Conference and Expo 2018 will take place in Boston. The three-day conference will discuss AI strategy and applications around concerns such as implementing enterprise AI, AI in healthcare and pharma, and cognitive computing.
As AI becomes an essential part of our daily lives, it is important to keep track of the changes and developments in emerging technology. This year, expanding its collaboration with key international hosts and sponsors such as the Canadian government, the Michael Dukakis Institute, XPRIZE, MIT CSAIL, IDC, MIT Sloan Management Review, and many others, the AI World Conference and Expo will be held for the third time, with the aim of helping businesses and leaders implement and develop AI.
The conference sets out to inform business executives about AI innovations and their implementation, enabling leaders to build strategies for their companies, optimize costs, and grasp new opportunities.
The three-day conference is expected to give attendees the opportunity to explore various angles of AI implementation in healthcare, pharma, medicine, and specific business strategies.
Governor Michael Dukakis, Chairman of the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum, will attend this special event and deliver opening remarks at AI World Conference and Expo 2018. Currently, MDI is collaborating with AI World to publish reports and programs on AI-Government, including the AIWS Index and AIWS Products.
A team from The Institute of Cancer Research, London (ICR) and the University of Edinburgh has developed a way to predict how cancers will evolve, which is expected to be a great support for cancer treatment.
The team developed a new method known as Revolver (Repeated Evolution of Cancer). The technique involves identifying patterns of DNA mutations within tumours and using that information to forecast future genetic changes.
One of the biggest obstacles to curing cancer is that a tumour can evolve its own resistance to drugs. But if doctors can predict how a tumour will evolve, they can intervene earlier to increase the patient’s chances of survival. For example, when the researchers examined breast tumours carrying a sequence of errors in the genetic material that codes for the tumour-suppressing protein p53, followed by mutations in chromosome 8, they found that patients with these tumours survived for a shorter period than those whose tumours followed other trajectories of genetic change.
In the research, 768 tumour samples from 178 patients were examined; the samples spanned lung, breast, kidney, bowel, and other cancers, so that changes in each type of cancer could be accurately detected and compared.
If tumour development follows repeated patterns, this methodology could be a powerful tool for predicting its future trajectory, as the sketch below suggests.
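To make the idea concrete, here is a toy sketch (our illustration, not the published Revolver algorithm) of pattern-based trajectory prediction: tally the order in which genetic events were observed across patients, then predict the likeliest next event for a new tumour. The trajectories and event names below are invented for the demo.

```python
# Toy trajectory predictor: count how often each genetic event directly
# follows another across patients, then predict the likeliest next event.
from collections import Counter, defaultdict

# Hypothetical per-patient mutation trajectories (ordered genetic events).
trajectories = [
    ["TP53", "chr8_gain", "MYC_amp"],
    ["TP53", "chr8_gain", "PTEN_loss"],
    ["KRAS", "TP53", "chr8_gain"],
]

# transitions[a][b] = number of times event b directly followed event a.
transitions = defaultdict(Counter)
for path in trajectories:
    for current, nxt in zip(path, path[1:]):
        transitions[current][nxt] += 1

def predict_next(event):
    """Return the most frequently observed successor of `event`, if any."""
    followers = transitions.get(event)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("TP53"))       # chr8_gain (followed TP53 in all 3 patients)
print(predict_next("chr8_gain"))  # MYC_amp or PTEN_loss (a 1-1 tie here)
```

The real method works with far more patients and richer statistical machinery, but the underlying intuition is the same: trajectories that repeat across tumours are evidence of where a new tumour is likely headed.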
Rethink Robotics lost a deal with a Chinese company and fell into financial trouble.
Rethink Robotics was a pioneer in building robots able to work with humans on real tasks with assured safety. However, following its business troubles, the company closed down on Wednesday without making a statement.
Founded in 2008 by Professor Rodney Brooks, Rethink was once one of the world’s most prominent robotics businesses. It led the way in developing “cobots”, or collaborative robots, which are designed to work safely alongside humans. Its software was built to be simple to program and use, making the robots suitable even for people with little training in robotics. The cobots are equipped with sensors and software that help prevent them from accidentally harming users.
Baxter and Sawyer are the company’s two best-known products, designed to perform highly repetitive rote tasks. A big order from China turned into a cash crisis when the customer suddenly cancelled and withdrew. Scott Eckert, Rethink’s chief executive, declined to name the Chinese company. The Sawyer robot had been customized for the Chinese market, and when the deal fell through, Rethink was left with unsold robots and unpaid bills.
At the same time, Rethink faced strong rivals. One of them is Universal Robots, a Danish company owned by North Reading-based Teradyne Inc., which announced the sale of its 25,000th collaborative robot last month. “It’s tough to compete with Universal,” said Jeff Burnstein, president of the Association for Advancing Automation.
Rethink will begin to sell off its patent portfolio and other intellectual property, and the company’s 91 employees are expected to be in strong demand from other robotics firms.
The Education for Shared Societies (E4SS) Policy Dialogue will take place on October 16-17, 2018 in Lisbon, Portugal, with the participation of about 40 former democratic heads of state and government. The dialogue is organized by the World Leadership Alliance – Club de Madrid (WLA-CdM) in partnership with the Calouste Gulbenkian Foundation.
The WLA-CdM began its Shared Societies Project (SSP) a decade ago, aiming to build peace and democracy across political, social, economic, and environmental dimensions. The project works to foster a sense of belonging and shared responsibility among everyone in a shared society. This year, the organization is focusing on educational engagement for all.
Three main strands of the E4SS will address pressing global issues: Refugees, Migrants and Internally Displaced People (IDPs); Preventing Violent Extremism (PVE); and Digital Resilience. Policy changes for each area will be discussed by educators, policy-makers, WLA-CdM Members, and experts during the event.
The outcomes of this international dialogue will be presented to the E4SS Joint Steering Committee to produce an E4SS Agenda by 2019, ensuring the continuity of the project globally.
At the same time, WLA-CdM has been working closely with MDI to develop the AIWS 7-Layer Model to build Next Generation Democracy. This initiative is hoped to provide a baseline for guiding AI development toward positive outcomes and for reducing the pervasive and realistic risks, and related harms, that AI could pose to humanity.
On September 20, the AIWS Conference on the theme “AI-Government and AI Arms Races and Norms” took place. At the conference, Professor Nazli Choucri, Cyber Politics Director of MDI, Professor of Political Science at MIT, and a member of MDI’s AIWS Standards and Practice Committee, shared her ideas on AI-Government and how to make it work.
In the context of AI’s emergence, the use of AI in government has great potential, as AI can bring a great deal of efficiency and consistency to monitoring and alignment. Alongside these benefits, however, the feasibility of AI governance poses many challenges.
We have a long history of human governance; it is no longer a strange concept to people anywhere. However, cultures vary across countries, whereas AI works from the data and knowledge it has learned; it is therefore extremely difficult to have one AI for every institution, as we would need a common framework for every system to make this work. Prof. Nazli Choucri emphasized two aspects of the AI world that are fundamental for the purposes of governance: data and algorithms, both of which require a huge amount of time and effort if AI is to work properly and remain transparent.
From the perspective of government, several functions must be performed well to achieve a functional government: it needs to be regulative, extractive, distributive, responsive, and symbolic, and to ensure its people’s security. She also noted that a government has to manage the ratio between the loads placed on it and its capacity to perform its functions. “If we are applying AI to government, these are the generic functions. Consider that it is a matter of rules: rules have to be made and communicated, and there have to be agencies to implement them at the operational level. The interface between AI and government capabilities becomes the first stage that keeps them both together,” said Prof. Choucri.
Because the internet now connects everything, AI could be an essential tool for governance. However, there are limitations as well: AI is very good at analysis, targeting, and execution but poor at interpretation and at weighing consequences. Especially when malfunctions and accidental failures occur, people can be in great danger if the outcomes are not carefully considered. As a result, the ethics of AI is a primary focus of innovators and practitioners building AI-Government.
According to Prof. Choucri, several ethical imperatives of AI for government need to be carefully considered:
Responsibility in Use
Accountability in Performance
Avoidance of Oppression
Prevention of Technological Conflict – No AI Race
Provision of Human Oversight for Critical AI Operations
Most important of all is the improvement of responsiveness through constant feedback on policy and checks on the excessive use of AI for control.
According to AI News, world leaders in innovation are warning of potential AI catastrophes that could lead to a lockdown of research.
Recently, autonomous robotics industries have been developing at a remarkable speed, and at the same time have caused considerable damage across multiple incidents. Autonomous vehicles account for a large share of these incidents, such as the fatality involving an Uber self-driving vehicle. Soon, as autonomous AI systems multiply, researchers will carry a great deal of responsibility for user safety.
“If an AI disaster happens – and that would, for instance, be an autonomous car killing people due to serious bugs – then at some point AI systems are going to be locked down for development, at least in the US,” said Andrew Moore, the new head of AI at Google Cloud at the Artificial Intelligence and Global Security Initiative.
It is widely agreed that AI should not be used for military weapons; however, this seems inevitable, since “there will always be players willing to step in.” “[AI] comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world,” said Russian President Vladimir Putin.
Concerned about accidental and adversarial uses of AI, Governor Michael Dukakis, Chairman of the Michael Dukakis Institute (MDI), believes a global accord is needed to ensure the rapidly growing technology is used responsibly by governments around the world. For that reason, he co-founded the Artificial Intelligence World Society, a project that aims to bring scientists, academics, government officials, and industry leaders together to keep AI a benign force serving humanity’s best interests. At the moment, MDI has been developing the concept of AI-Government and the AIWS Index in Ethics as two components of AIWS.