The world's first AI news anchor from China is called into question

The “world’s first” artificial intelligence (AI) news anchor, which debuted on Xinhua News Agency with the appearance and voice of Zhang Zhao, a real Xinhua anchor, has gone viral online. However, there are doubts about whether this Chinese news anchor is really AI.

The technology, which mimics a person’s voice and appearance, was developed by Sogou, a search engine company, together with China’s state press agency, Xinhua. Modeled on the agency’s presenter Zhang Zhao, the system learns from real news footage and can read from text. “’He’ learns from live broadcasting videos by himself and can read texts as naturally as a professional news anchor,” claimed Xinhua. The agency also said the anchor “will work 24 hours a day on its official website and various social media platforms, reducing news production costs and improving efficiency”.

Though impressive in appearance, this news anchor has been questioned as a true example of AI. Will Knight, senior editor for AI at MIT Technology Review, has cast doubt on it, describing the technology as little more than a digital puppet that reads a script. “That’s certainly impressive, but it’s a very narrow example of machine learning. You can call it an ‘AI anchor,’ but that’s a little confusing.”

This kind of technology is expected to support animation, special effects, and video games, but it also raises worries that it could be used to spread misinformation or damage reputations. To get ahead of the scourge of disinformation in the digital world, the Boston Global Forum believes that governments, businesses, social organizations, individuals, and communities need to work together to identify vulnerabilities and build strategic solutions. On December 12, 2018, the Fourth Annual Global Cybersecurity Day, with a theme of using AI to combat disinformation, will be held at Harvard University with the participation of influential delegates and cybersecurity leaders.

Congress needs to know the potential and risks of artificial intelligence

A new AI policy initiative has been launched, focused on expanding the legal and academic scholarship around AI ethics and regulation. It will also host a boot camp for members of the US Congress to help them learn more about the technology.

When Mark Zuckerberg testified before Congress, experts quickly recognized the need for lawmakers to understand AI more thoroughly. Zuckerberg’s questioners had little knowledge of the technology, which allowed Facebook’s chief executive to get away with sweeping claims that AI would solve the company’s problems.

For that reason, Dipayan Ghosh, a research fellow at the Harvard Kennedy School (HKS), has emphasized the need to educate US politicians about major technology issues, and AI in particular. On November 14th, Ghosh launched the New AI Policy Initiative with Tom Wheeler, a senior research fellow at HKS and former chairman of the US Federal Communications Commission under President Obama. The project, sponsored by HKS’s Shorenstein Center on Media, Politics, and Public Policy, aims to expand legal and academic scholarship on AI ethics and regulation and to give Congress the information it needs for effective decision making and appropriate AI strategy.

According to Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund, “They hold, in many ways, the responsibility of communicating history to the corporations and other companies that are developing these technologies.” However, lawmakers’ knowledge of the subject is insufficient for effective decision making. Ghosh believes that if a politician were asked whether AI is part of the disinformation problem, they would deny it.

The initiative will take the form of a boot camp in Washington, DC, next February, exclusively for members of Congress and their staff, to foster better policy discussion. The course will focus on AI ethics and regulation, aiming to prevent potential risks and foster the technology’s benefits.

It is essential for decision makers to know more about this emerging technology in order to formulate appropriate policies and regulations for AI. Under Layer 3 of the AIWS 7-Layer Model, the AIWS Standards and Practice Committee will engage with governments, corporations, universities, and other relevant organizations to facilitate understanding of AI threats and challenges.

Bruce Schneier: Real IoT security can only be achieved through government regulation

At the Aspen Cyber Summit on November 8, 2018, cybersecurity guru Bruce Schneier emphasized the need for government-imposed penalties to ensure people’s safety online.

During a panel discussion at the Aspen Cyber Summit, renowned technologist Bruce Schneier argued that without government regulation, companies are unlikely to deliver real security. “Looking at every other industry, we don’t get security unless it is done by the government,” said Schneier. “I challenge you to find an industry in the last 100 years that has improved security without being told by the government.” His opinion was supported by other panelists, including Johnson & Johnson CISO Marene Allison. He also pointed to the lack of transparency between businesses and customers about how their data is processed and used. Moreover, a number of logistical hurdles in securing data will arise in both the short and the long term.

“The lifespan for consumer goods is much more than our phones and computers, this is a very different way of maintaining lifecycle,” Schneier said. “We have no way of maintaining consumer software for 40 years.”

The IoT security question can only be answered by the government, but, as the panelists noted, any long-term solution will require a shift in culture and perception from manufacturers, retailers and consumers.

Photo: Security Expert Bruce Schneier

Bruce Schneier is an internationally renowned security technologist, a “security guru” according to The Economist. He is also the author of 14 books on general security topics, computer security, and cryptography. In 2015, Schneier received the Business Leader in Cybersecurity Award from the Boston Global Forum (BGF) for dedicating his career to the betterment of technology security and privacy. The award was presented on Global Cybersecurity Day, which is observed annually on December 12.

In 2018, the Fourth Annual Global Cybersecurity Day Symposium will be held at Harvard University. This year’s theme revolves around the current state of cyber issues and the threat posed by disinformation, as well as effective defense mechanisms against these activities using Artificial Intelligence (AI). Delegates, cybersecurity leaders, and other citizens of the world who participate in the day’s programs will be linked together online in real time.

Club de Madrid supports the Paris Call for Trust and Security in Cyberspace

On November 12th at the UNESCO Internet Governance Forum (IGF), President Emmanuel Macron launched the Paris Call for Trust and Security in Cyberspace. The WLA-Club de Madrid is one of the early supporters of the Paris Call, alongside other civil society organizations, private companies, and States.

With the rise of IoT and Big Data, many incidents have occurred concerning the protection of users’ information. Treaties, regulations, and codes of conduct are therefore needed, and many are making progress. The recent high-level declaration by the President of France on developing common principles for securing cyberspace and ensuring people’s safety has received the backing of many States, as well as private companies and civil society organizations.

The Call emphasizes the necessity of a strengthened multi-stakeholder approach and of additional efforts to reduce risks to the stability of cyberspace and to build up confidence, capacity, and trust. Supporters of the Paris Call are therefore committed to work together to:

– Prevent and recover from malicious cyber activities that threaten or cause significant, indiscriminate or systemic harm to individuals and critical infrastructure;

– Prevent activity that intentionally and substantially damages the general availability or integrity of the public core of the Internet;

– Strengthen our capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities;

– Prevent ICT-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sector;

– Develop ways to prevent the proliferation of malicious ICT tools and practices intended to cause harm;

– Strengthen the security of digital processes, products and services, throughout their lifecycle and supply chain;

– Support efforts to strengthen an advanced cyber hygiene for all actors;

– Take steps to prevent non-State actors, including the private sector, from hacking-back, for their own purposes or those of other non-State actors;

– Promote the widespread acceptance and implementation of international norms of responsible behavior as well as confidence-building measures in cyberspace.

Link to the full text of the Paris Call for Trust and Security in Cyberspace

In 2014, the World Leadership Alliance-Club de Madrid (WLA-CdM) launched The Next Generation Democracy (NGD) Project with the goal of “enabling democracy to meet the expectations and needs of all citizens and preserve their freedom and dignity while securing a sustainable future.” Since 2017, WLA-CdM has been partnering with AIWS to promote the development of AI in concert with the goal of the Next Generation Democracy. To align the development of AI with the NGD initiative, the AIWS has developed the AIWS 7-Layer Model. This model establishes a set of responsible norms and best practices for the development, management, and uses of AI so that this technology is safe, humanistic and beneficial to society.

Concepts of AIWS and AI-Government

By Michael Dukakis, Nguyen Anh Tuan, Nazli Choucri, Thomas Patterson, David Silbersweig, John Savage

For the 13th International Symposium “Intelligent Systems – 2018”,
St. Petersburg, Russia, October 22–24, 2018

The Artificial Intelligence World Society (AIWS) is a set of values, ideas, concepts and protocols for standards and norms whose goal is to advance the peaceful development of AI to improve the quality of life for all humanity. It was conceived by the Michael Dukakis Institute for Leadership and Innovation (MDI) and established on November 22, 2017.

AIWS has developed the AIWS 7-Layer Model. This model establishes a set of norms and best practices for the development, management, and uses of AI so that this technology is safe, humane, and beneficial to society. AIWS recognizes that we live in a chaotic world with differing, and sometimes conflicting, goals, values and norms. Hence, the 7-Layer Model is aspirational and even idealistic. Nonetheless, it provides a baseline for guiding AI development to ensure positive outcomes and to reduce pervasive and realistic risks and related harms that AI could pose to humanity. The Model is based on the assumption that humans are ultimately accountable for the development and use of AI, and must therefore preserve that accountability. Hence, it stresses transparency of AI reasoning, applications, and decision making, which will lead to auditability and validation of the uses of AI systems.

  • Layer 1: Charter and Principles: To create a society of AI for a better world and to ensure peace, security, and prosperity
  • Layer 2: Ethical Frameworks: Guidelines for the Role of AI in Building the Next Generation Democracy
  • Layer 3: Standards: Standards for the Management of AI Resources and Development 
  • Layer 4: Laws and Legislation: Laws for the Role of AI in Building the Next Generation Democracy
  • Layer 5: International Policies, Conventions, and Norms: Global Consensus
  • Layer 6: Public Services and Policymaking: Engage and Assist Political Leaders
  • Layer 7: Business Applications for All of Society: Engage and Assist Businesses

AI-Government is a component of the AIWS, and two layers of the 7-Layer Model pertain to it. E-Government is the use of communication and information technology to improve the performance of public sector agencies. AI-Government transcends E-Government by applying AI to assist decision making for all critical public sector functions – notably the provision of public services, the performance of civic functions, and the evaluation of public officials. At the core of AI-Government is the National Decision Making and Data Center (NDMD), which collects, stores, analyzes, and applies massive amounts of data relevant to the provision of public services and the evaluation of public programs and officials. It does not replace governance by humans or human decision-making processes; rather, it guides and informs them while providing an objective basis for service provision and evaluation.
