The Global Cybersecurity Day 2018: Cameron Hickey on fake news

Cameron Hickey, Technology Manager at the Information Disorder Lab (IDLab) at the Shorenstein Center, gave a talk on fake news during the Global Cybersecurity Day 2018 at Loeb House, Harvard.

Cameron Hickey is the Technology Manager at the Information Disorder Lab (IDLab) at the Shorenstein Center. He is an Emmy Award-winning journalist who has reported on science and technology for the PBS NewsHour and NOVA, and covered political, social, and economic issues for many outlets, including The New York Times, American Experience, Al Jazeera, and Bill Moyers.

Cameron Hickey's presentation on fake news at the Global Cybersecurity Day 2018 Symposium also drew much of the audience's attention. His talk raised awareness of the definition and classification of information disorder and its challenging impact on our society.

After ten years reporting on technology for the PBS NewsHour, he shifted his focus over the last two years to disinformation on social media. During that time, he found that the phenomenon falls into three types, distinguished by degree and intent: dis-information, mis-information, and mal-information, rather than just the common term "fake news".

He also explained and categorized information disorder into seven forms:

  • Satire or Parody
  • Misleading Content
  • Imposter Content
  • Fabricated Content
  • False Connection
  • False Context
  • Manipulated Content

In addition, there are a number of junk pages on the internet containing hate speech, plagiarism, fakery, clickbait, misleading content, and more.

One of the key solutions for dealing with these domains is to track and identify the source of the false information. There are also common characteristics found on junk websites, which Mr. Hickey compiled after six months of intensive content monitoring:

  • Coordination comes in many flavors – bots vs real people
  • Information disorder is rarely obvious – nuanced content is equally pervasive and problematic
  • Hate and bigotry driving the conversation – Islamophobia, sexism, anti-immigrant sentiment, and more
  • The Network remains reactive

This issue is extremely challenging since many people cannot distinguish false content from fact. Furthermore, they are swayed by advertising systems run by political parties seeking votes or by enterprises seeking to generate revenue on social platforms like Facebook and Google, without knowing that the news suggested to them has been manipulated.

Watch the full speech by Cameron Hickey at the Global Cybersecurity Day 2018

Introduction of the AIWS Ethics and Practices Index

On December 12, 2018, the Global Cybersecurity Day 2018 took place at Loeb House, Harvard University, MA, organized by the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI). One of the most important parts of the event was the introduction of the AIWS Ethics and Practice Index, delivered by Dr. Thomas Creely, Member of the AIWS Standards and Practice Committee.

Dr. Thomas Creely is a member of the AIWS Standards and Practice Committee. He serves as an Associate Professor of Ethics and Director of the Ethics & Emerging Military Technology Graduate Program at the US Naval War College.

On behalf of the authoring group, Dr. Thomas Creely presented the AIWS report on AI ethics and introduced the Government AIWS Ethics Index. The index measures the extent to which a government, in its Artificial Intelligence (AI) activities, respects human values and contributes to the constructive use of AI amid its unprecedented pace of development. The report was produced in an effort to reach a common accord on respect for norms, laws, and conventions in the AI world, across the diversity of approaches and frameworks in different countries.

There are four main categories in the Index:

  1. Transparency: Substantially promotes and applies openness and transparency in the use and development of AI, including data sets, algorithms, intended impacts, goals, and purposes.
  2. Regulation: Has laws and regulations that require government agencies to use AI responsibly; that are aimed at requiring private parties to use AI humanely and that restricts their ability to engage in harmful AI practices; and that prohibit the use of AI by government to disadvantage political opponents.
  3. Promotion: Invests substantially in AI initiatives that promote shared human values; refrains from investing in harmful uses of AI (e.g., autonomous weapons, propaganda creation and dissemination).
  4. Implementation: How seriously governments execute their AI regulations and laws toward good ends; respects and commits to widely accepted principles and rules of international law.

Further discussion took place after Dr. Creely's presentation, with questions on the future of AI. Though it is difficult to anticipate how AI will change humanity, it is believed that we have great scientists and scholars who are continuously working to prepare us for whatever is coming.

Watch the full speech by Dr. Thomas Creely at the Global Cybersecurity Day 2018

An innovative neural network design could be a solution to many challenges in AI

David Duvenaud, an AI researcher at the University of Toronto, and his collaborators at the university and the Vector Institute have designed a brand-new prototype neural network that can overcome the limitations of previous models.

At first, his idea was to create a deep-learning algorithm that could predict a person's health over time. However, medical-record data are complicated: each check-up produces a different record, taken for different reasons and with different measurements. Conventional machine-learning methods struggle to model continuous processes, especially ones that are not measured often. Because such a method finds patterns in data by stacking layers of simple computational nodes, its discrete layers keep it from producing exact outcomes. More specifically, a traditional machine-learning model follows the common process known as supervised learning: it collects many layers of data to work out a formula that can then be applied to other problems with similar traces. For instance, it may mistake a cat for a dog because both have floppy ears, and since dogs and cats come in many varieties with diverse features, it can produce inaccurate results.

In response to this difficulty, they let the network find formulas that match the description of each stage of the process, where each stage corresponds to a layer of data. Taking the example of differentiating the two pets above, the first stage might take in all the pixels and use a formula to pick out which ones are most indicative of cats versus dogs. A second stage might use another formula to build larger patterns from groups of pixels and tell whether the picture shows whiskers or ears. Each subsequent stage would identify another feature of the animal, and after enough layers the network identifies the animal in the picture. This step-by-step breakdown allows a neural net to build more sophisticated models and produce more accurate predictions.
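To make the contrast concrete, the sketch below is a hypothetical illustration, not the researchers' actual code: it sets a tiny stack of discrete layers next to a continuous-depth variant whose hidden state evolves according to a single formula integrated over time, with a simple fixed-step Euler loop standing in for a proper ODE solver. All weights, sizes, and step counts are assumptions made up for illustration.

```python
import numpy as np

# Illustrative sketch only: a tiny discrete-layer network versus a
# continuous-depth ("ODE-style") network. Hypothetical weights and sizes.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# --- Discrete network: a fixed stack of layers, one "formula" per stage ---
W1 = rng.normal(size=(4, 8)) * 0.5   # stage 1: pixels -> low-level patterns
W2 = rng.normal(size=(8, 8)) * 0.5   # stage 2: low-level -> larger patterns
W3 = rng.normal(size=(8, 2)) * 0.5   # final stage: patterns -> cat/dog scores

def discrete_net(x):
    h = relu(x @ W1)      # each layer is one discrete step
    h = relu(h @ W2)
    return h @ W3

# --- Continuous-depth network: the hidden state follows dh/dt = f(h, t),
#     approximated here with a simple fixed-step Euler integration ---
Wf = rng.normal(size=(8, 8)) * 0.1

def f(h, t):
    # one shared "formula" describing how the hidden state changes over time
    return np.tanh(h @ Wf)

def continuous_net(x, t0=0.0, t1=1.0, steps=20):
    h = relu(x @ W1)               # encode the input into the hidden state
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):         # Euler steps stand in for an ODE solver
        h = h + dt * f(h, t)
        t += dt
    return h @ W3                  # read out class scores

x = rng.normal(size=(1, 4))        # a stand-in for pixel features
print(discrete_net(x), continuous_net(x))
```

The point of the continuous variant is that "depth" becomes an integration interval rather than a fixed number of layers, which is why irregularly timed observations, such as medical check-ups, are easier to accommodate.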

Yet, applied to the medical field, this still requires classifying health records over time into discrete steps, for instance periods of years or months. Making the steps ever finer just to fit the records runs into the same problems as the traditional model. To make actual breakthroughs, the team still needs to dig deeper into the method with more experiments and research.

“The paper will likely spur a whole range of follow-up work, particularly in time-series models, which are foundational in AI applications such as health care,” said Richard Zemel, the research director at the Vector Institute.

No matter how far the algorithms advance, there is still a risk that the rate at which AI advances will outpace the continuing development of ethical and regulatory frameworks. Layer 4 of the AIWS 7-Layer Model developed by MDI focuses on policies, laws, and legislation, nationally and internationally, that govern the creation and use of AI and that are necessary to ensure AI is never used for malicious purposes.

The first AI World Society House and AI World Society Innovation Program in Vietnam

On December 16, 2018, Nguyen Anh Tuan, Director of the Michael Dukakis Institute for Leadership and Innovation (MDI) and Co-founder and Chief Executive Officer of the Boston Global Forum (BGF), discussed with leaders and scholars from Dalat University (Vietnam) the establishment of the AI World Society House and AI World Society Innovation Program at Dalat University.

On November 22, 2017, the Artificial Intelligence World Society (AIWS) was established by the Michael Dukakis Institute for Leadership and Innovation (MDI) with the goal of advancing the peaceful development of AI to improve the quality of life for all humanity. Ever since, MDI has been working constantly to fulfill the mission of the AIWS.

Recently, the Michael Dukakis Institute for Leadership and Innovation (MDI) announced its partnership with Dalat University (DLU) to build the AIWS House and design the AIWS Innovation Program at DLU, as a "nucleus" for DLU to become a pioneer in the research, teaching, and application of AI in Vietnam. In this collaborative support mechanism, DLU will operate and manage the activities of the AIWS House, and MDI will advise and supervise to ensure quality, efficiency, and achievement of goals. Nguyen Anh Tuan, Director of MDI, represented MDI in working with DLU's leaders and will be in charge of this project.

Leaders from Dalat University expressed their gratitude for MDI's assistance. They hoped that the establishment of the AIWS House and the AIWS Innovation Program at Dalat University will attract leading AI professors and scientists to teach and share knowledge with the university's lecturers and students in particular, and to develop AI application programs and initiatives for socio-economic development in Vietnam in general.

EU’s first draft on AI ethics guidelines

The European Commission (EC) published its first draft of AI ethics guidelines and is seeking public feedback.

The draft, composed by a group of 52 experts from academia, business, and civil society, serves as a guideline for AI developers to follow.

“AI can bring major benefits to our societies, from helping diagnose and cure cancers to reducing energy consumption. But, for people to accept and use AI-based systems, they need to trust them, know that their privacy is respected, that decisions are not biased,” said EC Vice-President and Commissioner for the Digital Single Market Andrus Ansip.

There are two key elements in the guideline for creating trustworthy AI: one is to respect rights and regulation, ensuring an ‘ethical purpose’; the other is the robustness and reliability of the AI. The 37-page document covers issues of bias and the importance of human values, and points out the potential benefits as well as the threats AI brings.

The draft guidelines are open for comment for one month, until 18 January 2019, and the final version will be presented in March.

Developing rules and order for AI is also an aim the AIWS is working toward. The AIWS has continuously built the AIWS 7-Layer Model, a set of ethical standards for AI to guarantee that this technology is safe, humanistic, and beneficial to society.

Minister Taro Kono, Ministry of Foreign Affairs of Japan at the Global Cybersecurity Day 2018

On December 12, 2018, the Global Cybersecurity Day 2018 was held at Loeb House, Harvard University, by the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI). At the event, MDI had the honor of hosting Japanese Minister for Foreign Affairs Taro Kono as a guest speaker.

Taro Kono is a Japanese politician belonging to the Liberal Democratic Party. He is a member of the House of Representatives, and has served as Minister for Foreign Affairs since a Cabinet re-shuffle by Prime Minister Shinzo Abe on 3 August 2017.

Although absent from Loeb House, he delivered his speech virtually to the audience. Mr. Taro Kono congratulated the Boston Global Forum on its achievements and expressed his enthusiasm for this year's Global Cybersecurity Day. In addition, he gave his view on the current situation, in which rapid change brings both the benefits and the threats of emerging technologies, especially in terms of cybersecurity.

In his speech, he emphasized the need for ethical standards in technology innovation. If ethics are not prioritized, the result could be unexpected losses to the economy and to society as a whole, since the technology itself can be misused for malicious purposes by bad actors. Minister Taro Kono mentioned that Japan is making cybersecurity one of its top priorities to protect safe trade and transactions in cyberspace, and he hopes to join a global effort to protect people's safety in cyberspace.

Vaira Vike-Freiberga’s Statement on the Imperial Springs International Forum

On the occasion of the 40th anniversary of China's opening up and reform process, the 2018 Imperial Springs International Forum (ISIF), under the theme "Advancing Reform and Opening-Up, Promoting Win-Win Cooperation", took place in Guangdong on December 10-11 with the presence of the Vice-President of the People's Republic of China, Wang Qishan, and around 30 prominent leaders and distinguished experts from around the world.

“I am proud to report that the 2018 Imperial Springs International Forum (ISIF) was a great success.” This is the official statement from Vaira Vike-Freiberga, President of the World Leadership Alliance – Club de Madrid and former President of Latvia, on the 2018 Imperial Springs International Forum.

The Imperial Springs International Forum has become an important platform for dialogue between China and the rest of the world, where leaders and experts constructively discuss ways to enhance global governance.

The Forum was jointly organized by the Chinese People's Association for Friendship with Foreign Countries (CPAFFC), the People's Government of Guangdong Province, the Australia China Friendship Association (ACFEA), and the World Leadership Alliance – Club de Madrid. The event left attendees impressed by its dialogue.

“I strongly believe that in our global system, it is important for China to understand more about the world and our partners, and also for the world to understand China,” said Dr. Chau Chak Wing, Chair of the Asia-Pacific Region World Leadership Alliance – Club de Madrid President’s Circle.

“As an Australian businessman doing business in China, I am proud to play a role in supporting the ‘opening up’ of China as it means more opportunities for Australia and the world,” he added.

The concept of the AI World Society Cultural Value

The AI Age will bring with it an AI Age Culture.

With this in mind, the mission of AI World Society (AIWS) is to bring out the best and minimize the worst traits for humanity.

The AIWS strives to foster positive cultural values of the AI Age:

  • Humanity, tolerance, sincerity, integrity and honesty.
  • Create a life in which people can live honestly with others and themselves.
  • Arouse emotions of the heart in each person, thereby encouraging a good spiritual life, a noble and beautiful soul, and human love, away from the cruelty and narrow-mindedness hidden among us.
  • Arouse individual responsibility so that we may develop a world in which power and money are not used to subjugate other nations and individuals.
  • Those with good and noble intentions, such as intellectuals, creators and volunteers who contribute their time, effort, and dedication to society will always have a good material life.
  • There will be equal opportunities for access to information and knowledge for every citizen, and equal opportunities to dedicate, contribute and maximize the contributions of each citizen.
  • The human evaluation scales are creative intelligence, humanity, dedication, and contribution to society.
  • Respect all life on the earth, especially AI citizens as good friends and powerful assistants, considering AI as a part of the life, intelligence and soul of humanity.
  • Respect and honor the highest values of human beings: creation, creativity, tectonics, invention, noble hearts and a willingness to live wholeheartedly for the people. For the community the values are charity and benevolence as well as dedicating intelligence, time, effort, and wealth, to contribute to a prosperous, loving and civilized society.
  • Appreciate the creativity of AI citizens in all areas, in accordance with AIWS principles, charters and ethical standards.
  • Encourage and respect policies, laws, conventions, solutions, initiatives, and cultural and artistic works that can turn the values of the AIWS into a reality where the majority of citizens and governments accept the AIWS 7-Layer Model Standards.

Dukakis credits Bush with helping to end Cold War


Former Massachusetts Gov. Michael Dukakis, who lost to George H.W. Bush in the 1988 presidential election, said Saturday that his former political foe's legacy was his effort to help end the Cold War.

“Obviously we disagreed pretty strongly on domestic policy and I wasn’t thrilled with the kind of campaign he ran, but I think his greatest contribution was in negotiating the end of the Cold War with (Soviet leader) Mikhail Gorbachev,” Dukakis told The Associated Press in a telephone interview.

“What’s ironic and so troubling, just as he’s passing on, we’re heading into another stupid Cold War again,” Dukakis noted.

He also credited Bush, who died Friday night at age 94, with working with other countries and the United Nations in the first Gulf War.

“When it came to the international side of things, he was a very wise and thoughtful man,” said Dukakis, adding that he’s read Bush’s memoir, which addresses why his administration didn’t ultimately try to topple Iraqi dictator Saddam Hussein.

Dukakis, 85, blames himself for his election loss as the Democratic nominee, saying he didn’t respond aggressively to a Bush campaign ad featuring a convicted murderer named Willie Horton who raped a woman and stabbed her partner while out of prison on a Massachusetts furlough program.

In hindsight, Dukakis said he failed during the campaign to draw attention to the leniency of the federal furlough program that was in place while Bush was vice president.

“Look, it was my fault for not mounting a very strong defense to that and I don’t blame anybody but myself for that,” he said. “I should have done a much, much better job with dealing with that.”

Dukakis said he and Bush never became friends, but they met a handful of times after the election, including in December 1988 at the vice president's residence. Dukakis said he never raised the issue of the Willie Horton ad with Bush.

Dukakis praised Bush for being willing to work with Democrats — unlike, he said, fellow Republican President Donald Trump. He recalled how Bush called governors from both parties to the University of Virginia for three days to try to craft a consensus public education program. The chairman of the National Governors Association at the time was then-Arkansas Gov. Bill Clinton, who later defeated Bush in the 1992 presidential election.

“The interplay between Clinton and Bush was really kind of interesting,” Dukakis said. “I think probably most of us knew we were looking at the two candidates in the next presidential election.”

By Susan Haigh

AP News

Professor Joseph Nye addressed the problem of norms for AI at AIWS Conference 2018

Professor Joseph Nye, Member of Boston Global Forum’s Board of Thinkers and Distinguished Service Professor of Harvard University, addressed the problem of norms for AI at AIWS Conference on September 20, 2018 at Harvard University Faculty Club.

Gov. Michael Dukakis, Prof. Joseph Nye, Nick Burns, and Nguyen Anh Tuan

Prof. Joseph Nye opened his speech by talking about the expansion of Chinese firms in the US market and their ambition to surpass the US in the field of AI. Prof. Nye believes that an AI arms race and geopolitical competition in AI could have profound effects on our society. However, he says the prediction that China will be ahead of the US in AI by 2030 is "uncertain" and "indeterminate", since China's main advantages are having more data and fewer concerns about privacy. Talking about norms for AI, Prof. Nye thinks that as people unleash AI, leading toward warfare and the autonomy of offensive systems, we should have a treaty to control it. One of his suggestions is to have international institutions that would essentially monitor the various AI programs in various countries.

A careful discussion of AI ethics is essential to ensure the future of AI and robotics

On September 20, the AIWS Conference, with the theme ‘AI-Government and AI Arms Races and Norms’, was held at the Harvard University Faculty Club by the Michael Dukakis Institute for Leadership and Innovation (MDI). The key message of the conference was the importance of moral standards for AI, for humanity's sake.

As reported by AI Trends, the conference took place at the Harvard University Faculty Club with scientists, researchers, and standard-setters in attendance. It aimed to find solutions to the root of AI's threat: its unconstrained machine-learning mechanisms.

According to Matthias Scheutz, Director of the Human-Robot Interaction Lab at Tufts University, “We would like to ensure that AI and robotics will be used for the good of humanity. The greatest danger I see is from unconstrained machine learning, where the system can define goals not intended by the designer.”

“The best way to safeguard AI systems is to build ethical mechanisms into the algorithms themselves,” adds Dr. Scheutz. “We need to do ethical testing of the system without the system knowing it. That requires specialized hardware and virtual machine architecture.”

Besides, Marc Rotenberg, President of the Electronic Privacy Information Center (EPIC) takes the position that, “Knowledge of AI algorithms is a fundamental right.”

Prof. Joseph Nye, Distinguished Service Professor at Harvard University, anticipated a future AI arms race at the current pace, with AI thriving like never before while AI ethics is still not a priority for researchers.

“It’s not part of the job description,” said Nazli Choucri. The effort to create standards needs to be international, similar to the restrictions on nuclear weapons.

“Ethics is essential to what we are doing,” said Tom Creely, a professor at the US Naval War College. “It’s an important topic in the military. And national security is no longer just the Defense Department’s problem. We all need to be part of the conversation.” AI, full of potential, should be a valuable tool for making our lives better. It will not be destructive if we follow rules that ensure our own protection.

At the AIWS Conference 2018, MDI also introduced its partnership with the AI World Conference & Expo (including AI Trends). The partnership aims to develop, measure, and track the progress of ethical AI policy-making and solution adoption among governments and corporations.

Agreement to ban killer robots has been passed by the European Parliament

The European Parliament recently passed a resolution calling for a ban on killer robots and has called on Member States to adopt it to secure humanity's future. On September 12, 2018, 82% of the votes were in favor of an international ban on lethal autonomous weapon systems (LAWS).

The resolution called for an urgent, legally binding instrument to prohibit autonomous weapons. The need for negotiation arose after United Nations discussions in which nations could not reach a conclusion on whether or not to ban LAWS.

Many open letters signed by AI researchers around the world have supported the prohibition of LAWS.

Two sections of the resolution stated:

“Having regard to the open letter of July 2015 signed by over 3,000 artificial intelligence and robotics researchers and that of 21 August 2017 signed by 116 founders of leading robotics and artificial intelligence companies warning about lethal autonomous weapon systems, and the letter by 240 tech organizations and 3,089 individuals pledging never to develop, produce or use lethal autonomous weapon systems,” and

“Whereas in August 2017, 116 founders of leading international robotics and artificial intelligence companies sent an open letter to the UN calling on governments to ‘prevent an arms race in these weapons’ and ‘to avoid the destabilizing effects of these technologies.’”

This is remarkable progress for AI developers. It is notable that scientists, including members of MDI's AIWS Standards and Practice Committee, are paying more attention to the ethics of AI, and this has been recognized by the European Parliament. The risk of an arms race will decrease as nations find a common voice.

Can a robot farm operate without human workers?

At an emerging autonomous farm, robots tend rows of leafy greens under the control of software named “The Brain”.

Iron Ox recently opened its production line in San Francisco, set up in an 8,000-square-foot hydroponic facility with a capacity of 26,000 heads of leafy greens a year. The company hopes to run it without human labor, relying instead on robotic arms and movers.

Iron Ox developed software called “The Brain” to get its machines to collaborate; it watches over the farm, monitoring conditions and orchestrating robots and humans when needed.

Human presence is still required for certain steps, such as seeding and processing of crops, but Brandon Alexander, the firm's co-founder, looks forward to automating these as well. The company is doing this to address the shortage of agricultural labor, since the farming industry has been facing a shortfall of workers.

The automation of agricultural processes will also require monitoring and regulation; the ethical framework for AI is something that MDI's experts are actively researching and exploring.

BrainNet: a system that can connect the thoughts of three people

A group of researchers at the University of Washington in Seattle has successfully connected human brains in the first brain-to-brain network.

Thought-to-thought communication, once considered science fiction, is now becoming reality. In 2015, Andrea Stocco and his colleagues at the University of Washington used their equipment to connect people via a brain-to-brain interface. On September 29, 2018, they announced the success of the world's first brain-to-brain network, called BrainNet. The system allows a small group of people to play a Tetris-like puzzle game together.

The system is built on electroencephalography (EEG) to record electrical activity in the brain and transcranial magnetic stimulation (TMS) to transmit signals into the brain. BrainNet reads signals from a number of electrodes placed on the scalp and spots changes in the brain's activity: for instance, watching a light flashing at 15 hertz causes the brain to emit a signal at the same frequency, and when the light switches to 17 Hz, the brain signal changes as well.

Stocco and his team created a network that allows three people to send and receive information directly between their brains using EEG and TMS. In the experiment, the participants sat in separate rooms with no way to communicate conventionally. Two of them, the senders, wore EEG caps and could see the full screen; the game is designed so the descending block must either be rotated by 180 degrees or left alone to fit the row below, and the senders had to decide which and broadcast their choice to the receiver. The senders controlled their brain signals by staring at LEDs on either side of the screen, one flashing at 15 Hz and the other at 17 Hz. The receiver, connected to both an EEG and a TMS device, could see only the upper half of the Tetris screen and the block, but not how it should be rotated; the only guidance was the TMS signal saying “rotate” or “do not rotate”. Because the senders could see both halves of the screen, they could determine whether the block needed rotating and transmit that decision to the receiver, who executed the action.
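To make the signaling idea concrete, here is a minimal, hypothetical sketch, not the BrainNet team's actual pipeline, of how a 15 Hz versus 17 Hz flicker response might be decoded from an EEG-like trace with a simple Fourier transform; the sampling rate, duration, and toy signal model are all assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative sketch only (not the BrainNet team's code): deciding between
# the two flicker frequencies described above (15 Hz = "rotate",
# 17 Hz = "do not rotate") from a simulated EEG trace, using an FFT.

FS = 250          # assumed sampling rate in Hz
DURATION = 2.0    # assumed seconds of signal per decision
t = np.arange(0, DURATION, 1.0 / FS)

def simulate_eeg(flicker_hz, noise=1.0, rng=np.random.default_rng(0)):
    """A toy EEG trace: a weak oscillation at the flicker frequency plus noise."""
    return 0.5 * np.sin(2 * np.pi * flicker_hz * t) + noise * rng.normal(size=t.size)

def decode_choice(signal, candidates=(15.0, 17.0)):
    """Pick the candidate frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

# A sender staring at the 15 Hz LED should be decoded as "rotate".
eeg = simulate_eeg(flicker_hz=15.0)
choice = decode_choice(eeg)
print("rotate" if choice == 15.0 else "do not rotate")
```

The design choice here mirrors the description above: the sender's intention is carried only by which flicker frequency dominates the recorded signal, so a frequency-domain comparison is enough to recover a one-bit "rotate" or "do not rotate" message.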

As technology becomes ever more influential in our daily lives, we need to avoid accidental failures, because our safety, prosperity, and more depend on it. Researchers should guarantee users' safety by following ethical standards, which is what the AIWS is working on; one of its contributions is the AIWS 7-Layer Model for technology developers.

The US is aiming to make a national effort in protecting cyberspace

Under the Trump administration, advisers are searching for a cybersecurity moonshot. The Cyber Moonshot refers to a clear plan for securing the digital landscape over the next five years, an idea inspired by the first US moon landing, though the effort has so far lacked a vision for turning that ambition into outcomes.

Technology is developing at an unprecedented speed, while cybersecurity is not keeping pace, as several recent breaches show. “This current approach to cybersecurity isn’t working,” said Scott Charney, Vice Chairman of the President’s National Security Telecommunications Advisory Committee. “This is the beginning of a conversation.”

We can see many incidents, such as the constant breaches of users' information on Facebook and the manipulation of election systems in the US, and they do not seem to be slowing down. The call for a Cyber Moonshot is therefore essential: we need to prepare for the worst this continual threat can bring. By creating a systematic plan and cyber defenses, the moonshot can establish a baseline level of confidence and readiness for when a cyberattack occurs.

Through his leadership, Scott Charney has had a profound influence on current thinking in cybersecurity technology, policy, legal matters, and international relations. He was honored as a Business Leader in Cybersecurity by BGF in December 2016.

Global Governance for Information Integrity Roundtable in Riga: Addressing information disruption on social media

On September 27, 2018, in Riga, a roundtable on Global Governance for Information Integrity hosted by the WLA-CdM took place at the Latvian Ministry of Foreign Affairs on the occasion of the 100th anniversary of Latvia. Mr. Nguyen Anh Tuan, CEO of BGF and Director of MDI, introduced the AIWS Initiative and AI-Government at the event.

According to Director Nguyen Anh Tuan, AI may be a good solution for preventing disinformation, a type of untrue communication that is purposefully spread and represented as truth to elicit a response that serves the perpetrator's purpose. The AIWS and AI-Government are initiatives of MDI aiming to create a society in which humans and AI citizens can co-exist peacefully and AI is used for good purposes under strict control.

Global Governance for Information Integrity Roundtable focused on the first pathway to global action: protecting the integrity of political information through global governance. A discussion between global political leaders and international experts is needed to address the issue of fake news in the information space. In the era of thriving communication, social media has had a huge influence on politics. It brought about many opportunities as well as challenges concerning transparency and accountability of political information.

Human life will be improved by AI if it is controlled by standards – and humans need to prepare in advance

In the new century, humanity keeps producing great innovations that can change its history, including the most brilliant inventions. And when it comes to intelligence, we think of AI, the “hottest” technology trend in the world in recent years.

AI, or artificial intelligence, is understood simply as the intelligence of machines created by humans. This intelligence can think and learn like human intelligence, and it can process data at a broader, more systematic, more scientific level, and faster than humans can. But can AI completely replace humans?

Mr. Nguyen Anh Tuan, Director of the Michael Dukakis Institute for Leadership and Innovation (MDI) and Founder and Editor-in-Chief of VietNamNet Newspaper, confirmed on the “Coffee Morning” show of a popular Vietnamese television channel, VTV3, that AI and robots cannot replace humans.

Although AI is increasingly being used in many fields and activities in daily life, humans are still irreplaceable, especially in the field of social management.

The AI-Government, an initiative launched by the MDI in June 2018, will help manage AI to serve citizens more intelligently, more automatically and more responsibly. The AI-Government is a government in which AI is widely and thoroughly applied in the management, decision making and policy making process of governing bodies rather than in just public services (human contact, streamlined payroll system, etc.).

For example, given the US-China trade war, the government of both countries needs to make the smartest decisions. Given the full data system, AI’s intelligent and optimized algorithms will recommend smart, convincing decisions. Furthermore, AI can help us make decisions very quickly.

But it is important that AI remains a tool, an “effective assistant” offering suggestions to people while people are the ones who will consider and make final decisions. Therefore, when using AI, human intelligence needs to be one level higher. Many people think that when there are robots, they will have nothing to do anymore. On the contrary, new and more demanding jobs will open up.

It is worth mentioning that when we put AI into application, we recognize that people have many good traits but also morally ambiguous traits, while AI is very honest. In Vietnam, Luu Quang Vu’s play Green Chrysanthemum on Marsh is a typical example: Nearly 40 years ago, Luu Quang Vu thought about robots that could help people look back and adjust themselves so that they became more honest and more warm-hearted. And so, humanity will need a standard to manage and control AI in general.

That is why the MDI developed the AI World Society Initiative (AIWS Initiative), published on November 29, 2017. According to Mr. Nguyen Anh Tuan, the basic purpose of this initiative is to establish a society with the best and most effective AI application, bringing good to humans.

To illustrate the need for this, he also pointed out that cybersecurity is a “headache” for the world today. Because we did not anticipate the development of the Internet and computers, we have left “holes” that are difficult to close. For AI, although the same problem has not yet officially arisen, we still need to prepare in advance; otherwise, as Prof. Stephen Hawking warned, such “holes” could become a threat to humanity in the future.

Mr. Nguyen Anh Tuan also affirmed that MDI and its associates, experts from Harvard University, the Massachusetts Institute of Technology (MIT), and elsewhere, have agreed to contribute their research, ideas, and initiatives on AIWS and AI-Government to serve humanity, creating a good society in which AI is not harmful to humans.