International Relations in the Cyber Age: The Co-Evolution Dilemma

Professor Nazli Choucri of MIT, Board Member of the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation and an active member of the AIWS Standards and Practice Committee, has launched a new book: "International Relations in the Cyber Age".

“The international system of sovereign states has evolved slowly since the seventeenth century, but the transnational global cyber system has grown rapidly in just a few decades. Two respected scholars – a computer scientist and a political scientist – have joined their complementary talents in a bold and important exploration of this crucial co-evolution.”

– Joseph S. Nye, Harvard Kennedy School and author of The Future of Power

“Many have observed that the explosive growth of the Internet and digital technology have reshaped longstanding global structures of governance and cooperation. International Relations in the Cyber Age astutely recasts that unilateral narrative into one of co-evolution, exploring the mutually transformational relationship between international relations and cyberspace.”

– Jonathan Zittrain, George Bemis Professor of International Law and Professor of Computer Science, Harvard University

“Cyber architecture is now a proxy for political power. A leading political scientist and pioneering Internet designer masterfully explain how ‘high politics’ intertwine with Internet control points that lack any natural correspondence to the State. This book is a wake-up call about the collision and now indistinguishability between two worlds.”

– Laura DeNardis, Professor, American University, and author of The Global War for Internet Governance

“This book uniquely combines the perspectives of an Internet pioneer (Clark) and a leading political scientist with expertise in cybersecurity (Choucri) to produce a very rich account of how cyberspace impacts international relations, and vice versa. It is a valuable contribution to our understanding of Internet governance.”

– Jack Goldsmith, Henry Shattuck Professor, Harvard Law School

 

About this book

A foundational analysis of the co-evolution of the internet and international relations, examining resultant challenges for individuals, organizations, firms, and states.

In our increasingly digital world, data flows define the international landscape as much as the flow of materials and people. How is cyberspace shaping international relations, and how are international relations shaping cyberspace? In this book, Nazli Choucri and David D. Clark offer a foundational analysis of the co-evolution of cyberspace (with the internet as its core) and international relations, examining resultant challenges for individuals, organizations, and states.

The authors examine the pervasiveness of power and politics in the digital realm, finding that the internet is evolving much faster than the tools for regulating it. This creates a “co-evolution dilemma”—a new reality in which digital interactions have enabled weaker actors to influence or threaten stronger actors, including the traditional state powers. Choucri and Clark develop new methods of analysis. One, “control point analysis,” examines control in the internet age; they apply it to a variety of situations, including major actors in the international and digital realms: the United States, China, and Google. Another is network analysis of international law for cyber operations. A third measures the propensity of states to expand their influence in the “real” world compared with expansion in the cyber domain. In so doing, they lay the groundwork for a new international relations theory that reflects the reality in which we live—one in which the international and digital realms are inextricably linked and evolving together.

Authors

Nazli Choucri

Nazli Choucri is Professor of Political Science at MIT, Faculty Affiliate at the MIT Institute for Data, Systems, and Society, Director of the Global System for Sustainable Development (GSSD), and the author of Cyberpolitics in International Relations (MIT Press).

David D. Clark

David D. Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Lab and a leader in the design of the Internet since the 1970s.

TECH COMPANIES SHAPING THE RULES GOVERNING AI

IN EARLY APRIL, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.”

One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI.

Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.

When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. “We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.”

The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit.

Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.”

Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and it will retain a right to a royalty-free license to any intellectual property developed.

Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says.

Microsoft used some of its power when Washington state considered proposals to restrict facial recognition technology. The company’s cloud unit offers such technology, but it has also said that technology should be subject to new federal regulation.

In February, Microsoft loudly supported a privacy bill being considered in Washington’s state Senate that reflected its preferred rules, which included a requirement that vendors allow outsiders to test their technology for accuracy or biases. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.

By April, Microsoft found itself fighting against a House version of the bill it had supported, after the addition of firmer language on facial recognition. The House bill would have required that companies obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft’s director of government affairs, testified against that version of the bill, saying it “would effectively ban facial recognition technology [which] has many beneficial uses.” The House bill stalled. With lawmakers unable to reconcile differing visions for the legislation, Washington’s attempt to pass a new privacy law collapsed.

In a statement, a Microsoft spokesperson said that the company’s actions in Washington sprang from its belief in “strong regulation of facial recognition technology to ensure it is used responsibly.”

Shankar Narayan, director of the technology and liberty project of the ACLU’s Washington chapter, says the episode shows how tech companies are trying to steer legislators toward their favored, looser, rules for AI. But, Narayan says, they won’t always succeed. “My hope is that more policymakers will see these companies as entities that need to be regulated and stand up for consumers and communities,” he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.

Washington lawmakers—and Microsoft—hope to try again for new privacy and facial recognition legislation next year. By then, AI may also be a subject of debate in Washington, DC.

Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon) and Representative Yvette Clarke (D-New York) introduced bills dubbed the Algorithmic Accountability Act. The legislation includes a requirement that companies assess whether AI systems and their training data have built-in biases or could harm consumers through discrimination.

Mutale Nkonde, a fellow at the Data and Society research institute, participated in discussions during the bill’s drafting. She is hopeful it will trigger discussion in DC about AI’s societal impacts, which she says is long overdue.

The tech industry will make itself a part of any such conversations. Nkonde says that when talking with lawmakers about topics such as racial disparities in face analysis algorithms, some have seemed surprised, and said they have been briefed by tech companies on how AI technology benefits society.

Google is one company that has briefed federal lawmakers about AI. Its parent Alphabet spent $22 million, more than any other company, on lobbying last year. In January, Google issued a white paper arguing that although the technology comes with hazards, existing rules and self-regulation will be sufficient “in the vast majority of instances.”

Metzinger, the German philosophy professor, believes the EU can still break free from industry influence over its AI policy. The expert group that produced the guidelines is now devising recommendations for how the European Commission should invest the billions of euros it plans to spend in coming years to strengthen Europe’s competitiveness.

Metzinger wants some of it to fund a new center to study the effects and ethics of AI, and similar work throughout Europe. That would create a new class of experts who could keep evolving the EU’s AI ethics guidelines in a less industry-centric direction, he says.

Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?

It takes just 3.7 seconds of audio to clone a voice. This impressive—and a bit alarming—feat was announced by Chinese tech giant Baidu. A year ago, the company’s voice cloning tool, called Deep Voice, required 30 minutes of audio to do the same. This illustrates just how fast the technology to create artificial voices is accelerating. In a short time, the capabilities of AI voice generation have expanded and become more realistic, which makes the technology easier to misuse.

Capabilities of AI Voice Generation

Like all artificial intelligence algorithms, the more data voice cloning tools such as Deep Voice receive to train with, the more realistic the results. When you listen to several cloning examples, it’s easier to appreciate the breadth of what the technology can do, including switching the gender of the voice and altering accents and styles of speech.

Google unveiled Tacotron 2, a text-to-speech system that leverages the company’s deep neural network and speech generation method WaveNet. WaveNet analyzes a visual representation of audio called a spectrogram to generate audio, and it is used to generate the voice for Google Assistant. This iteration of the technology is so good that it’s nearly impossible to tell which voice is AI-generated and which is human. The algorithm has learned how to pronounce challenging words and names that would have been a tell-tale sign of a machine, as well as how to better enunciate words.
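The spectrogram these systems work with is simply a time-frequency picture of the audio. A minimal sketch of how one is computed (an illustrative NumPy example, not Google's implementation; the frame and hop sizes are assumptions):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: slice the waveform into overlapping
    windowed frames and take the FFT of each one."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One FFT per frame; keep only the non-negative frequencies.
    return np.abs(np.fft.rfft(frames, axis=1))

# One second of a 440 Hz tone sampled at 8 kHz: the energy
# concentrates in the frequency bin closest to 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)              # (frames, frame_len // 2 + 1)
print(spec.argmax(axis=1)[0])  # 14 -> 14 * 31.25 Hz = 437.5 Hz, nearest bin to 440 Hz
```

A text-to-speech system like Tacotron 2 runs this pipeline in reverse: it predicts a spectrogram from text, and a WaveNet-style vocoder turns that picture back into a waveform.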

These advances in Google’s voice generation technology have allowed for Google Assistant to offer celebrity cameos. John Legend’s voice is now an option on any device in the United States with Google Assistant such as Google Home, Google Home Hub, and smartphones. The crooner’s voice will only respond to certain questions such as “What’s the weather” and “How far away is the moon” and is available to sing happy birthday on command. Google anticipates that we’ll soon have more celebrity cameos to choose from.

In another example of just how precise the technology has become, a Jordan Peterson (the author of 12 Rules for Life) AI model sounds just like him rapping Eminem’s song “Lose Yourself.” The creator of the AI algorithm used just six hours of Peterson talking (taken from readily available recordings of him online) to train the machine learning algorithm to create the audio. It takes short audio clips and learns how to synthesize speech in the style of the speaker. Take a listen, and you’ll see just how successful it was.

This advanced technology opens the door for companies such as Lyrebird to provide new services and products. Lyrebird uses artificial intelligence to create voices for chatbots, audiobooks, video games, text readers and more. They acknowledge on their website that “with great innovation comes great responsibility,” underscoring how important it is for pioneers of this technology to take great care to avoid its misuse.

How This Technology Could Be Misused

Like other new technologies, artificial voice can have many benefits but can also be used to mislead individuals. As the AI algorithms get better and it becomes difficult to discern what’s real and what’s artificial, there will be more opportunities to use it to fabricate the truth.

According to research, our brains don’t register significant differences between real and artificial voices. In fact, it’s harder for our brains to distinguish fake voices than to detect fake images.

Now that these AI systems require only a short amount of training audio to create a viable artificial voice that mimics the speaking style and tone of an individual, the opportunity for abuse increases. So far, researchers have not been able to identify a neural distinction for how the brain tells real from fake. Consider how artificial voices might be used in an interview, news segment or press conference to make listeners believe they are listening to an authority figure in the government or the CEO of a company.

Raising awareness that this technology exists, and of how sophisticated it is, will be the first step to safeguard listeners from falling for artificial voices when they are used to mislead us. The real fear is that people can be fooled into acting on something fake because it sounds like it’s coming from somebody real. Some people are attempting to find a technical solution to safeguard us. However, a technical solution will not be 100% foolproof. Our ability to critically assess a situation, evaluate the source of information and verify its validity will become increasingly important.

Can optical computing be the next breakthrough in AI acceleration?

As neural networks and machine learning continue to take on new challenges, from analyzing images posted on social media to driving cars, there’s increasing interest in creating hardware that is tailored for AI computation.

The limits of current computer hardware have triggered a quasi-arms race, enlisting a growing array of small and large tech companies that want to create specialized hardware for artificial intelligence computation. But one startup aims to create the next breakthrough in AI acceleration by changing the most fundamental technology that has been powering computers for the past several decades.

A team of scientists at Boston-based Lightelligence is ditching electronics for photonic computing to achieve orders-of-magnitude improvements in the speed, latency and power consumption of computing AI models. Photonic or optical computing, the science of using lasers and light to store, transfer and process information, has been around for decades. But until now, it has been mostly limited to the optical fiber cables used in networking.

The folks at Lightelligence believe that optical computing will solve AI’s current hardware hurdles. And they have an optical AI chip to prove it.

The limits of electronic AI hardware

Artificial intelligence is one of the fastest growing sectors of the tech industry. After undergoing several “winters,” AI has now become the bread and butter of many applications in the digital and physical world. The AI industry is projected to be worth more than $150 billion by 2024.

“There has been a huge explosion of artificial intelligence innovation in the past five years,” says Dr. Yichen Shen, co-founder and CEO of Lightelligence. “And what we think will happen in the next ten years is that there will be an explosion of use cases and application scenarios for machine learning and artificial neural networks.”

Deep learning and neural networks, the currently dominant subset of AI, rely on analyzing large sets of data and performing expensive computations at fast speeds. But current hardware structures are struggling to keep up with the growing demands of this expanding sector of the AI industry. Chips and processors aren’t getting faster at the same pace that AI models and algorithms are progressing.

Lightelligence is one of many companies developing AI accelerator hardware. But as Shen says, other companies working in the field are basing their work on electronics, which is bound by Moore’s Law.

“This means their performance still relies on Boolean algebra transistors to do AI computation. We think that in the long run, it will still be bound by Moore’s Law, and it’s not the best of solutions,” Shen says.

Formulated by Intel co-founder Gordon Moore, Moore’s Law holds that technological advances will continuously enable us to reduce the size and price of transistors, the main component of computing chips, every 1.5-2 years. This basically means that you can pack more computing power into the same space at a lower price. This is the principle that has been making phones and laptops stronger and faster over the past few decades.

But Moore’s Law is hitting a wall. “Now it takes five years to get to a smaller node and down the road it can take longer—ten or twenty years—to get to three nanometers or smaller nanometers,” Shen says. “We want to change the way we do computing by replacing electronics by photonics. In principle we can do the calculations much faster and in a much more power efficient way.”

The optical AI accelerator

In 2017, Shen, then a PhD student doing research on nano-photonics and artificial intelligence under MIT professor Marin Soljacic, co-authored a paper that introduced the concept of neural networks that ran fully on optical computing hardware. The proposition promised to enhance the speed of AI models.

A few months later, Shen founded Lightelligence with the help of Soljacic. The prototype of the optical AI accelerator, which Lightelligence released earlier this year, is the size of a printed circuit board (PCB).

“Instead of using digital electronics, we use optical signals to do AI computation. Our main purpose is to accelerate AI computing by orders of magnitude in terms of latency, throughput and power efficiency,” Shen says.

The company has designed the device to be compatible with current hardware and AI software. The accelerator can be installed on servers and devices that support the PCI-e interface and supports popular AI software frameworks, including Google’s TensorFlow and Facebook’s PyTorch.

There are still no plans to create fully fledged optical computers, but the technology is certainly suitable for specific types of computation. “One of the algorithms that photonics is very good at implementing is matrix multiplication,” says Soljacic.

Matrix multiplication is one of the key calculations involved in neural networks, and being able to speed it up will help create faster AI models. According to Shen, the optical AI accelerator can perform any matrix multiplication, regardless of size, in a single clock cycle, while electronic chips take at least a few hundred cycles to perform the same operation.
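To see why this matters, note that a single neural-network layer is essentially one large matrix multiplication plus a simple element-wise step. A minimal NumPy sketch (the sizes are illustrative assumptions) shows the operation an optical chip would accelerate:

```python
import numpy as np

# One dense layer: activations (batch x inputs) times weights
# (inputs x outputs). The np.matmul step is the bottleneck that
# an optical accelerator would perform in roughly constant time,
# regardless of matrix size.
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 784))   # batch of 32 flattened 28x28 images
W = rng.standard_normal((784, 128))  # layer weights
b = np.zeros(128)                    # layer biases

h = np.maximum(x @ W + b, 0.0)  # matrix multiply + bias, then ReLU
print(h.shape)  # (32, 128)
```

A deep network repeats this pattern layer after layer, which is why accelerating the matrix multiply alone can speed up the whole model.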

“With neural networks, depending on the algorithm, there might be other components and operations involved. It depends on the AI algorithm and application, but we’ll be able to improve the performance from one to two orders of magnitude, ten to hundred times faster,” Shen says.

The company tested the optical AI accelerator on MNIST, a dataset of handwritten digits used to benchmark the performance of machine learning algorithms. The hardware performed much faster than other state-of-the-art AI accelerator chips.

Soljacic explains that the proposition of optical neural networks was set forth decades ago, but at the time it didn’t gain traction. The past few years, however, have seen two important changes.

“First, neural networks became hugely important. That provides a large motivation to people to develop specialized hardware for neural networks which didn’t exist thirty years ago. And more importantly now finally we have the same fabrication that made electronics manufacturing so successful, the CMOS processing, also available for photonics,” Soljacic says.

This means that you can integrate thousands of optical devices on the same chip at the same cost that it would take to integrate electronic devices. “That is something we didn’t have five to ten years ago, and that is what is enabling the technology for all of this, the ability to mass produce at a very low cost,” Soljacic says.

Manufacturing optical chips is also less expensive than electronic devices, Shen adds. “For photonics, you don’t need a 7nm or a 3nm node to do it. We can use older and cheaper nodes to manufacture it,” he says.

Solving the latency problem of AI hardware


Why is it important to improve the speed of AI calculations? In many settings, neural networks must perform their tasks in a time-critical fashion. This is especially true at the edge, where AI models must respond to real-time changes to their environment.

One of the use cases where low-latency AI computation is critical is self-driving cars. Autonomous vehicles rely on neural networks to make sense of their environment, detect objects, find their way on roads and streets and avoid collisions.

“We humans can drive a car using only our eyes as a guide, purely vision-based, and we can drive easily at 90-100 mph. At this point, autonomous vehicles are not able to drive at that speed if they only rely on cameras. One of the main reasons is that the AI models that process video information are not as fast as needed,” Shen says. “Since we can decrease the time it takes to process data and run AI computations on the videos, we think we’ll be able to allow the cars to drive at a much faster speed. Potentially, we’ll be able to catch up with human performance.”

To be clear, it takes more than super-fast neural networks to solve the challenges of self-driving cars in open environments. Namely, unlike human drivers, neural networks are very limited in their ability to deal with situations they haven’t been trained for. But being able to improve the speed of processing neural networks will certainly put the industry at a better position than it currently is.

There are plenty of other areas where low-latency neural network computations can become a game changer, especially where AI models are deployed in the physical world. Examples include drone operations and robotic surgery, both of which involve safety issues that need real-time response from the AI models that control the hardware.

“Low latency AI is critical at the edge, but also has applications in the cloud. Our earlier models are more likely to be deployed in the cloud, not only hyperscale data centers, but also the local data centers or local server rooms in buildings,” Shen says.

Servers that can run AI models with super-low latency will become more prominent as our network infrastructure evolves. Namely, the development of 5G networks will pave the way for plenty of real-time AI applications that perform their tasks at the edge but run their neural network computations in local servers. Augmented reality apps, transportation and medicine are some of the areas that stand to gain from super-fast AI servers.

Making AI calculations power-efficient

Current AI technologies are very electricity-hungry, a problem that is manifesting itself both in the cloud and at the edge. Cloud servers and data centers currently account for around 2 percent of power consumption in the U.S. According to some forecasts, data centers will consume one fifth of the world’s electricity by 2025.

A substantial part of the cloud’s power goes into neural network computation, giving the AI industry an environmental problem.

“People already upload billions of photos per day to the cloud, where AI algorithms scan these photos for pornography, violence and similar content. Today, it’s one billion photos. Tomorrow it’s going to be one billion movies. That part of the energy consumption and the cost of systems running on artificial intelligence is going to be growing very rapidly, and that’s what we’re going to help with,” Soljacic says.

Switching from electronic to optical hardware can reduce the energy consumption of AI models considerably. “The reason that electronic chips generate heat is that the electronic signals go through copper wires and cause heat loss. That’s where the major power costs are. In contrast, light doesn’t heat up things like electronics do,” Shen says.

“In addition to the heat loss through the interconnect, just thinking about traditional electronic circuits, a portion of the power consumption is just leakage. It produces no work other than heat. That could easily be a third of a chip’s power budget. There’s really no benefit from that. The photonic circuit has none of that,” says Maurice Steinman, VP of engineering at Lightelligence.

At the edge, optical AI computing can help take some of the burden off devices where weight is a constraint.

“For instance, if you want to add substantial artificial intelligence to a drone, then you’ll have to add a GPU that consumes 1 kWh and requires a huge and heavy battery,” Soljacic says.

“Right now, the power costs of the AI chips account for about 20 percent of the electricity consumption in self-driving cars. This increases the size of the batteries, which in turn increases the power consumption of the cars,” Shen adds.

Using optical AI accelerators will help reduce the power consumption and the weight of these devices.

The future of AI hardware accelerators

Optical computing is not the only technique that could address the hardware pain points of current AI models. Other technologies that might help improve the speed and efficiency of AI models are quantum computing and neuromorphic chips.

Quantum computers are still years away, but once they become a reality, they will change not only the AI industry but many other aspects of digital life, including finance and cybersecurity.

As for neuromorphic chips, they are computing devices that try to imitate the structure of the brain to specialize for AI tasks. Neuromorphic chips are slowly gaining traction.

It will be interesting to see which one of these trends will manage to clinch the main spot for the future of AI computing.

International Relations in the Cyber Age, the first book ever by a political scientist and a computer scientist

Professor Nazli Choucri of MIT, Board Member of the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation and an active member of the AIWS Standards and Practice Committee, and David Clark, a pioneer of the Internet, also of MIT, have launched a new book: "International Relations in the Cyber Age".

This is the first book of its kind co-authored by a political scientist and a computer scientist.

Contents
I      Cyberspace and International Relations

  1. Context and Co-Evolution
  2. Cyberspace: Layers and Interconnections
  3. International Relations: Levels of Analysis
  4. The Cyber-IR System: Integrating Cyberspace and International Relations
  5. Co-Evolution and Complexity in Twenty-First-Century International Relations

II     Complexities of Co-Evolution

  1. Control Point Analysis: Locating Power and Leverage
  2. The Power over Control Points: Cases in Context
  3. Cybersecurity and International Complexities
  4. Distributed Internet Governance: Private Authority and International Order
  5. The Co-Evolution Dilemma: Complexity of Transformation and Change
  6. Alternative Futures: Trends and Contingencies
  7. Imperatives of Co-Evolution

Excerpts on interactions between cyberspace and international relations
From Chapter 1 of International Relations in the Cyber Age:

Almost everyone recognizes that cyberspace is a fact of daily life. Given its ubiquity, scale and scope, cyberspace – with the Internet at its core and the experiences it enables – has become a central feature of the world we live in and has created a fundamentally new reality for almost everyone, almost everywhere. Today, the influence of cyberspace is evident in all aspects of contemporary society, in almost all parts of the world.

One result is a powerful disconnect between 20th century theories of international relations and the realities of the 21st century. Theories of international relations are largely anchored in the post-World War II era of the last century. Today, we see increasing interconnections and joint evolution of two domains fundamental to the 21st century world, the international system with its actors and entities, structures and processes, and cyberspace, with its rapidly growing uses and users, and the formal and informal institutions that seek to provide forms of order in the new arena. The pace of evolution is rapid, and has not received sufficient attention as a topic of research.

If knowledge is power, as is commonly argued, then harnessing the power of knowledge becomes an intensely political activity. The flow of data now defines the international landscape as much as the flow of material and people. The implications of data flows have now become an important issue in world politics, central to theory, policy, and practice. More often than not, communications in world politics harbor some form of contention over how data—knowledge, commercial content, criminal content, or speech—flow in cyberspace. There are, of course, many ways that nations contend to excel at the creation and exploitation of knowledge. But today, almost all of these activities center on, and to a considerable extent are directly acted out in, cyberspace.
Until recently cyberspace was considered largely a matter of “low politics”, the term used to denote background conditions and routine decisions and processes. By contrast, “high politics” is about national security, core institutions, and decision systems that are critical to the state, its interests, and its underlying values. Nationalism, political participation, political contentions, conflict, violence, and war are among the most often cited aspects of high politics. But low politics do not always remain below the surface. If the cumulative effects of normal activities shift the established dynamics of interaction, then the seemingly routine becomes increasingly politicized.
Cyberspace is now a matter of high politics. This new domain of interaction is a source of vulnerability, a potential threat to national security, and a disturber of the familiar international order. So critical has cyberspace become that the United States has created a Cyber Command in the U.S. Department of Defense in recognition of potential cyber threats that can undermine the security and welfare of the nation. The new practice of turning off the Internet during times of unrest in various countries, the leakage of confidential government documents on WikiLeaks, the cyberattacks that accompanied past conflicts in Georgia and Estonia, the use of cyber-based attacks to degrade Iran’s nuclear capabilities, and the Russian interference in the 2016 United States presidential election all illustrate that state actors cannot ignore the salience of cyberspace and its capabilities. We see many incidents of power and politics, conflict and competition, violence and war—all central features of world politics—increasingly manifested via cyber venues.
Both cyberspace and international affairs are defined by their own principles and characterized by distinct features of structure and process. The cyber domain is now being shaped by power and politics, as well as by modes of leverage and control. Invariably, when issues of power and control arise, propensities for conflict and contention are not far behind. In addition, traditional views of world politics, including notions of deterrence and defense, are not readily portable to cyberspace. How can we connect cyberspace and international relations in theory, policy, and practice? How can we track who does what, when, how, and with what impacts?

If the reality of cyberspace is changing the character of international relations, so are the concerns of various states changing the character of cyberspace. It is already apparent that political pressures impinge upon the current Internet to render its function more in line with power and politics. Threats to cyber security are only one side of the proverbial coin. The other side consists of cooperation and the challenges associated with international governance, especially governance of cyberspace.

Today, the growing politicization of the two domains is creating a system of interlocking and mutual influence that may well shape all aspects of the human experience. As these domains become more interwoven, a core dilemma emerges: the two systems are changing at different rates, and elements of each are also changing at different rates. Cyberspace is evolving much faster than are the tools the state has to regulate it. The consequence is a set of ongoing challenges that are difficult to anticipate or manage – let alone regulate.

To appreciate the importance of this dilemma, consider some examples. Why do damaging uses of the Internet seem to grow much faster than our ability to identify, let alone to control or prevent, them? Why is it that the power of the state—with its monopoly over the use of force, in theory at least—seems inadequate for responding to threats from the cyber domain? Do states have the same propensity to behave in the cyber domain as they do in the traditional international arena? What is the possibility that cyberspace can “out-evolve” the tools of the state, leaving the state poorly equipped to address its needs? Then, too, how is the overall cyber domain managed? Will critical cyber-centered organizations and institutional practices evolve at the rate that cyberspace itself evolves, or will they themselves be “out-evolved”?

One way to start understanding these two domains is to look at their core structuring principles. Cyberspace is typically explained as a series of layers—for example, physical technology, data transfer, applications, information, and users. International relations is typically explained using levels of analysis—the individual, the state, and the international system, with the global level added. One can combine these two frameworks into an integrated Cyber-IR model that seeks to provide a combined view of the two domains. By positioning specific challenges—forms of cybercrime, for example—within this model, it is possible to draw some conclusions about how those challenges can best be met. For example, as we look at the layers of cyberspace, the lower layers usually manifest more generality. Generality makes contention between parties more difficult, since the parties can exploit that generality to maneuver around each other. So a problem that arises at one layer of cyberspace needs to be addressed at that layer, not by shifting the burden of mitigation to a lower layer.
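The layers-by-levels structure described above can be sketched as a simple grid. This is only an illustrative reading of the Cyber-IR model: the layer and level names follow the text, but the example challenges and their placements are hypothetical assumptions, not taken from the book.

```python
# Illustrative sketch of the Cyber-IR model: cyberspace layers crossed
# with international-relations levels of analysis. Layer and level names
# follow the excerpt; the challenge placements below are hypothetical
# examples for illustration only.

LAYERS = ["physical technology", "data transfer", "applications",
          "information", "users"]
LEVELS = ["individual", "state", "international system", "global"]

# Each challenge is positioned at the (layer, level) where it arises.
# Per the text, that is also the layer at which it should be addressed,
# rather than pushing mitigation down to a more general lower layer.
challenges = {
    "phishing fraud": ("applications", "individual"),
    "submarine-cable disruption": ("physical technology", "state"),
    "cross-border disinformation": ("information", "international system"),
}

def locate(challenge):
    """Return the grid cell (layer index, level index) for a challenge."""
    layer, level = challenges[challenge]
    return LAYERS.index(layer), LEVELS.index(level)

for name in challenges:
    li, lv = locate(name)
    print(f"{name}: layer={LAYERS[li]!r}, level={LEVELS[lv]!r}")
```

Placing a challenge in a cell makes the book's prescription concrete: a scam carried out at the applications layer, for instance, calls for an applications-layer response, not a change to the physical or data-transfer layers beneath it.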

Models of this sort do not allow us to predict the future. The interplay of the complex forces shaping the future precludes any simple prediction. But a catalog of these forces, organized around structural models of both cyberspace and international relations, can allow us to explain the range of realistic options for the future. Framing them in the context of a joint reality provides added understanding. Further, they can alert us to potentials for powerful change. These, at the highest level, are the objectives of this book.

Authors
Nazli Choucri
Nazli Choucri is Professor of Political Science at MIT, Faculty Affiliate at the MIT Institute for Data, Systems, and Society, Director of the Global System for Sustainable Development (GSSD), and the author of Cyberpolitics in International Relations (MIT Press).
David D. Clark
David D. Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Lab and a leader in the design of the Internet since the 1970s.

WILL AI ENHANCE OR HACK HUMANITY?

THIS WEEK, I interviewed Yuval Noah Harari, the author of three best-selling books about the history and future of our species, and Fei-Fei Li, one of the pioneers in the field of artificial intelligence. The event was hosted by the Stanford Center for Ethics and Society, the Stanford Institute for Human-Centered Artificial Intelligence, and the Stanford Humanities Center. A transcript of the event follows, and a video is posted below.

Nicholas Thompson: Thank you, Stanford, for inviting us all here. I want this conversation to have three parts: First, lay out where we are; then talk about some of the choices we have to make now; and last, talk about some advice for all the wonderful people in the hall.

Yuval, the last time we talked, you said many, many brilliant things, but one that stuck out was a line where you said, “We are not just in a technological crisis. We are in a philosophical crisis.” So explain what you meant and explain how it ties to AI. Let’s get going with a note of existential angst.

Yuval Noah Harari: Yeah, so I think what’s happening now is that the philosophical framework of the modern world, established in the 17th and 18th centuries around ideas like human agency and individual free will, is being challenged like never before. Not by philosophical ideas, but by practical technologies. And we see more and more questions which used to be the bread and butter of the philosophy department being moved to the engineering department. And that’s scary, partly because, unlike philosophers, who are extremely patient people (they can discuss something for thousands of years without reaching any agreement and they’re fine with that), the engineers won’t wait. And even if the engineers are willing to wait, the investors behind the engineers won’t wait. So it means that we don’t have a lot of time. And in order to encapsulate what the crisis is, maybe I can try and formulate an equation to explain what’s happening. And the equation is: B times C times D equals HH, which means biological knowledge multiplied by computing power, multiplied by data, equals the ability to hack humans. And the AI revolution or crisis is not just AI, it’s also biology. It’s biotech. There is a lot of hype now around AI and computers, but that is just half the story. The other half is the biological knowledge coming from brain science and biology. And once you link that to AI, what you get is the ability to hack humans. And maybe I’ll explain what it means, the ability to hack humans: to create an algorithm that understands me better than I understand myself, and can therefore manipulate me, enhance me, or replace me. And this is something that our philosophical baggage and all our belief in, you know, human agency and free will, and the customer is always right, and the voter knows best, it just falls apart once you have this kind of ability.

NT: Once you have this kind of ability, and it’s used to manipulate or replace you, not if it’s used to enhance you?

YNH: Also when it’s used to enhance you. The question is, who decides what is a good enhancement and what is a bad enhancement? So our immediate fallback position is to fall back on the traditional humanist ideas: that the customer is always right, the customers will choose the enhancement. Or the voter is always right, the voters will vote, there will be a political decision about the enhancement. Or if it feels good, do it. We’ll just follow our heart, we’ll just listen to ourselves. None of this works when there is a technology to hack humans on a large scale. You can’t trust your feelings, or the voters, or the customers on that. The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated. So how do you decide what to enhance? And this is a very deep ethical and philosophical question—again, one that philosophers have been debating for thousands of years—what is good? What are the good qualities we need to enhance? So if you can’t trust the customer, if you can’t trust the voter, if you can’t trust your feelings, who do you trust? What do you go by?

NT: All right, Fei-Fei, you have a PhD, you have a CS degree, you’re a professor at Stanford. Does B times C times D equal HH? Is Yuval’s theory the right way to look at where we’re headed?

Fei-Fei Li: Wow. What a beginning! Thank you, Yuval. One of the things—I’ve been reading Yuval’s books for the past couple of years and talking to you—is that I’m very envious of philosophers now, because they can propose questions but they don’t have to answer them. As an engineer and scientist, I feel like we have to now solve the crisis. And I’m very thankful that Yuval, among other people, has opened up this really important question for us. When you said the AI crisis, I was sitting there thinking: this is a field I loved and feel passionate about and have researched for 20 years, and back then it was just the scientific curiosity of a young scientist entering a PhD in AI. What happened that 20 years later it has become a crisis? It actually speaks to the evolution of AI. What got me where I am today, and got my colleagues at Stanford where we are today with Human-Centered AI, is that this is a transformative technology. It’s a nascent technology. It’s still a budding science compared to physics, chemistry, biology, but with the power of data, computing, and the kind of diverse impact AI is making, it is, like you said, touching human lives and business in broad and deep ways. And responding to those kinds of questions and crises facing humanity, I think one of the proposed solutions, which Stanford is making an effort toward, is: can we reframe the education, the research, and the dialog of AI and technology in general in a human-centered way? We’re not necessarily going to find a solution today, but can we involve the humanists, the philosophers, the historians, the political scientists, the economists, the ethicists, the legal scholars, the neuroscientists, the psychologists, and many other disciplines in the study and development of AI in the next chapter, in the next phase?

NT: Don’t be so certain we’re not going to get an answer today. I’ve got two of the smartest people in the world glued to their chairs, and I’ve got 72 more minutes. So let’s give it a shot.

FL: He said we have thousands of years!

NT: Let me go a little bit further on Yuval’s opening statement. There are a lot of crises about AI that people talk about, right? They talk about AI becoming conscious and what will that mean. They talk about job displacement; they talk about biases. And Yuval has very clearly laid out what he thinks is the most important one, which is the combination of biology plus computing plus data leading to hacking. Is that specific concern what people who are thinking about AI should be focused on?

FL: Absolutely. Any technology humanity has created, starting with fire, is a double-edged sword. It can bring improvements to life, to work, and to society, but it can bring perils, and AI has those perils. You know, I wake up every day worried about the diversity and inclusion issues in AI. We worry about fairness, or the lack of fairness, privacy, the labor market. So absolutely we need to be concerned, and because of that, we need to expand the research, the development of policies, and the dialog of AI beyond just the code and the products, into these human rooms, into the societal issues. So I absolutely agree with you that this is the moment to open the dialog, to open the research on those issues.

NT: Okay.

YNH: Even though I will just say that again, part of my fear is the dialog. I don’t fear AI experts talking with philosophers, I’m fine with that. Historians, good. Literary critics, wonderful. I fear the moment you start talking with biologists. That’s my biggest fear. When you and the biologists realize, “Hey, we actually have a common language. And we can do things together.” And that’s when the really scary things, I think…

FL: Can you elaborate on what is scaring you? That we talk to biologists?

YNH: That’s the moment when you can really hack human beings, not by collecting data about our search words or our purchasing habits, or where do we go about town, but you can actually start peering inside, and collect data directly from our hearts and from our brains.

FL: Okay, can I be specific? First of all, the birth of AI was AI scientists talking to biologists, specifically neuroscientists, right? The birth of AI was very much inspired by what the brain does. Fast-forward 60 years, and today’s AI is making great improvements in healthcare. There’s a lot of data from our physiology and pathology being collected, and machine learning is being used to help us. But I feel like you’re talking about something else.

YNH: That’s part of it. I mean, if there wasn’t a great promise in the technology, there would be no danger, because nobody would go along that path. I mean, obviously, there are enormously beneficial things that AI can do for us, especially when it is linked with biology. We are about to get the best healthcare in the world, in history, and the cheapest, available for billions of people through their smartphones. And this is why it is almost impossible to resist the temptation. And with all the issues of privacy, if you have a big battle between privacy and health, health is likely to win hands down. So I fully agree with that. And you know, my job as a historian, as a philosopher, as a social critic is to point out the dangers in that. Because, especially in Silicon Valley, people are very much familiar with the advantages, but they don’t like to think so much about the dangers. And the big danger is what happens when you can hack the brain, and that can serve not just your healthcare provider; that can serve so many things for a crazy dictator.

NT: Let’s focus on what it means to hack the brain. Right now, in some ways my brain is hacked, right? There’s an allure of this device, it wants me to check it constantly, like my brain has been a little bit hacked. Yours hasn’t because you meditate two hours a day, but mine has and probably most of these people have. But what exactly is the future brain hacking going to be that it isn’t today?

YNH: Much more of the same, but on a much larger scale. I mean, the point when, for example, more and more of your personal decisions in life are being outsourced to an algorithm that is just so much better than you. So you know, we have two distinct dystopias that kind of mesh together. We have the dystopia of surveillance capitalism, in which there is no Big Brother dictator, but more and more of your decisions are being made by an algorithm. And it’s not just decisions about what to eat or where to shop, but decisions like where to work and where to study, and whom to date and whom to marry and whom to vote for. It’s the same logic. And I would be curious to hear if you think that there is anything in humans which is by definition unhackable. That we can’t reach a point when the algorithm can make that decision better than me. So that’s one line of dystopia, which is a bit more familiar in this part of the world. And then you have the full-fledged dystopia of a totalitarian regime based on a total surveillance system. Something like the totalitarian regimes that we have seen in the 20th century, but augmented with biometric sensors and the ability to basically track each and every individual 24 hours a day.

And you know, that in the days of Stalin or Hitler was absolutely impossible, because they didn’t have the technology, but it may be possible in 20 or 30 years. So, we can choose which dystopia to discuss, but they are very close…

NT: Let’s choose the liberal democracy dystopia. Fei-Fei, do you want to answer Yuval’s specific question: in Dystopia A, the liberal democracy dystopia, is there something endemic to humans that cannot be hacked?

FL: So when you asked me that question, just two minutes ago, the first word that came to my mind is Love. Is love hackable?

YNH: Ask Tinder, I don’t know.

FL: Dating!

YNH: That’s a defense…

FL: Dating is not the entirety of love, I hope.

YNH: But the question is, which kind of love are you referring to? If you’re referring to Greek philosophical love or the loving kindness of Buddhism, that’s one question, which I think is much more complicated. If you are referring to the biological, mammalian courtship rituals, then I think yes. I mean, why not? Why is it different from anything else that is happening in the body?

FL: But humans are humans because we’re—there’s some part of us that is beyond the mammalian courtship, right? Is that part hackable?

YNH: So that’s the question. I mean, you know, most science fiction books and movies give you the answer. When the extraterrestrial evil robots are about to conquer planet Earth, and nothing can resist them, resistance is futile, at the very last moment, humans win because the robots don’t understand love.

FL: The last moment is one heroic white dude that saves us. But okay, so the two dystopias. I do not have answers to the two dystopias. But what I want to keep saying is, this is precisely why this is the moment that we need to seek solutions. This is precisely why this is the moment that we believe the new chapter of AI needs to be written by cross-pollinating efforts from humanists and social scientists to business leaders, to civil society, to governments, coming to the same table to have that multilateral and cooperative conversation. I think you really bring out the urgency and the importance and the scale of this potential crisis. But I think, in the face of that, we need to act.

YNH: Yeah, and I agree that we need cooperation that we need much closer cooperation between engineers and philosophers or engineers and historians. And also from a philosophical perspective, I think there is something wonderful about engineers, philosophically—

FL: Thank you!

YNH: — that they really cut the bullshit. I mean, philosophers can talk and talk, you know, in cloudy and flowery metaphors, and then the engineers can really focus the question. Like I just had a discussion the other day with an engineer from Google about this, and he said, “Okay, I know how to maximize people’s time on the website. If somebody comes to me and tells me, ‘Look, your job is to maximize time on this application.’ I know how to do it because I know how to measure it. But if somebody comes along and tells me, ‘Well, you need to maximize human flourishing, or you need to maximize universal love.’ I don’t know what it means.” So the engineers go back to the philosophers and ask them, “What do you actually mean?” Which, you know, a lot of philosophical theories collapse around that, because they can’t really explain that—and we need this kind of collaboration.

FL: Yeah. We need an equation for that.

NT: But Yuval, is Fei-Fei right? If we can’t explain and we can’t code love, can artificial intelligence ever recreate it, or is it something intrinsic to humans that the machines will never emulate?

YNH: I don’t think that machines will feel love. But you don’t necessarily need to feel it in order to be able to hack it, to monitor it, to predict it, to manipulate it. So machines don’t like to play Candy Crush, but they can still—

NT: So you think this device, in some future where it’s infinitely more powerful than it is right now, it could make me fall in love with somebody in the audience?

YNH: That goes to the question of consciousness and mind, and I don’t think that we have the understanding of what consciousness is to answer the question whether non-organic consciousness is possible or not possible. I think we just don’t know. But again, the bar for hacking humans is much lower. The machines don’t need to have consciousness of their own in order to predict our choices and manipulate our choices. If you accept that something like love is, in the end, a biological process in the body, and if you think that AI can provide us with wonderful healthcare by being able to monitor and predict something like the flu, or something like cancer, what’s the essential difference between flu and love? In the sense of: is this biological, and is that something else, so separated from the biological reality of the body, that even if we have a machine that is capable of monitoring or predicting flu, it still lacks something essential in order to do the same thing with love?

FL: So I want to make two comments, and this is where my engineering comes in. Personally speaking, we’re making two very important assumptions in this part of the conversation. One is that AI is so omnipotent that it has achieved a state beyond predicting anything physical; it’s getting to the consciousness level, it’s getting to even the ultimate love level of capability. And I do want to make sure that we recognize that we’re very, very, very far from that. This technology is still very nascent. Part of the concern I have about today’s AI is the super-hyping of its capability. So I’m not saying that that’s not a valid question. But I think that part of this conversation is built upon the assumption that this technology has become that powerful, and I don’t even know how many decades we are from that. The second, related assumption is that our conversation is based on a state of the world where only that powerful AI exists, or only the small group of people who have produced the powerful AI and intend to hack humans exists. But in fact, our human society is so complex, there are so many of us, right? I mean, humanity in its history has faced so much technology that, if we had left it in the hands of a bad player alone, without any regulation, multinational collaboration, rules, laws, moral codes, that technology could have, maybe not hacked humans, but destroyed humans or hurt humans in massive ways. It has happened. But by and large, our society, in a historical view, is moving to a more civilized and controlled state. So I think it’s important to look at that greater society and bring other players and people into this dialog, so we don’t talk like there’s only this omnipotent AI deciding it’s gonna hack everything to the end. And that brings me to your topic: in addition to hacking humans at the level you’re talking about, there are some very immediate concerns already: diversity, privacy, labor, legal changes, you know, international geopolitics. And I think it’s critical to tackle those now.

NT: I love talking to AI researchers, because five years ago, all the AI researchers were saying it’s much more powerful than you think. And now they’re like, it’s not as powerful as you think. Alright, so let me just ask—

FL: It’s because five years ago, you had no idea what AI is, now you’re extrapolating too much.

NT: I didn’t say it was wrong. I just said it was the thing. I want to go into what you just said. But before we do that, I want to take one question here from the audience, because once we move into the second section we’ll be able to answer it. So the question is for Yuval: How can we avoid the formation of AI-powered digital dictatorships? So how do we avoid dystopia number two? Let’s enter that. And then let’s go, Fei-Fei, into what we can do right now, not what we can do in the future.

YNH: The key issue is how to regulate the ownership of data. Because we won’t stop research in biology, and we won’t stop research in computer science and AI. So from the three components of biological knowledge, computing power, and data, I think data is the easiest—it’s also very difficult, but still the easiest kind—to regulate, to protect. Let’s place some protections there. And there are efforts now being made. And they are not just political efforts, but, you know, also philosophical efforts to really conceptualize: What does it mean to own data or to regulate the ownership of data? Because we have a fairly good understanding of what it means to own land. We had thousands of years of experience with that. We have a very poor understanding of what it actually means to own data and how to regulate it. But this is the very important front that we need to focus on in order to prevent the worst dystopian outcomes.

And I agree that AI is not nearly as powerful as some people imagine. But this is why I think we need to place the bar low, at a critical threshold. We don’t need the AI to know us perfectly, which will never happen. We just need the AI to know us better than we know ourselves, which is not so difficult, because most people don’t know themselves very well and often make huge mistakes in critical decisions. So whether it’s finance or career or love life, this shifting of authority from humans to algorithms: they can still be terrible. But as long as they are a bit less terrible than us, the authority will shift to them.

NT: In your book, you tell a very illuminating story about your own self and your own coming to terms with who you are and how you could be manipulated. Will you tell that story here about coming to terms with your sexuality and the story you told about Coca-Cola in your book? Because I think that will make it clear what you mean here very well.

YNH: Yes. So I said, I only realized that I was gay when I was 21. And I look back at the time when I was, I don’t know, 15, 17, and it should have been so obvious. It’s not like I’m a stranger. I’m with myself 24 hours a day. And I just didn’t notice any of the screaming signs saying, “You are gay.” And I don’t know how, but the fact is, I missed it. Now, even a very stupid AI today will not miss it.

FL: I’m not so sure!

YNH: So imagine: this is not a science fiction scenario of a century from now. This can happen today. You can write all kinds of algorithms that, you know, are not perfect, but are still better, say, than the average teenager. And what does it mean to live in a world in which you learn something so important about yourself from an algorithm? What happens if the algorithm doesn’t share the information with you, but shares it with advertisers? Or with governments? So if you want to, and I think we should, come down from the cloud, the heights, you know, of the extreme scenarios, to the practicalities of day-to-day life, this is a good example, because it is already happening.

NT: Well, let’s take the elevator down to the more conceptual level. Let’s talk about what we can do today, as we think about the risks of AI and the benefits of AI. And tell us, you know, sort of your punch list of what you think the most important things we should be thinking about with AI are.

FL: Oh boy, there are so many things we could do today. And I cannot agree more with Yuval that this is such an important topic. Again, I’m gonna try to speak about the efforts that have been made at Stanford, because I think this is a good representation of the many efforts we can make. So in human-centered AI, which is the overall theme, we believe that the next chapter of AI should be human-centered, and we believe in three major principles. One principle is to invest in the next generation of AI technology that reflects more of the kind of human intelligence we would like. I was just thinking about your comment about AI’s dependence on data, and how the policy and governance of data should emerge in order to regulate and govern AI’s impact. Well, we should be developing technology that can explain AI, what we call explainable AI or AI interpretability studies; we should be focusing on technology that has a more nuanced understanding of human intelligence. We should be investing in the development of less data-dependent AI technology that will take into consideration intuition, knowledge, creativity, and other forms of human intelligence. So that kind of human-intelligence-inspired AI is one of our principles.

The second principle is, again, to welcome the kind of multidisciplinary study of AI: cross-pollinating with economics, with ethics, with law, with philosophy, with history, cognitive science, and so on. Because there is so much more we need to understand in terms of the social, human, anthropological, and ethical impact, and we cannot possibly do this alone as technologists. Some of us shouldn’t even be doing this; it’s the ethicists and philosophers who should participate and work with us on these issues. So that’s the second principle. And within this, we work with policymakers. We convene the kind of dialogs of multilateral stakeholders.

Then the third, last but not least, I think, Nick, you said that at the very beginning of this conversation: that we need to promote the human-enhancing and collaborative and augmentative aspects of this technology. You have a point. Even there, it can become manipulative. But we need to start with that sense of alertness, of understanding, but still promote the kind of benevolent application and design of this technology. At least, these are the three principles that Stanford’s Human-Centered AI Institute is based on. And I just feel very proud that, within the short few months since the birth of this institute, there are more than 200 faculty involved on this campus in this kind of research, dialog, study, and education, and that number is still growing.

NT: Of those three principles, let’s start digging into them. So let’s go to number one, explainability, because this is a really interesting debate in artificial intelligence. There are some practitioners who say you should have algorithms that can explain what they did and the choices they made. Sounds eminently sensible. But how do you do that? I make all kinds of decisions that I can’t entirely explain. Like, why did I hire this person, not that person? I can tell a story about why I did it. But I don’t know for sure. If we don’t know ourselves well enough to always be able to truthfully and fully explain what we did, how can we expect a computer, using AI, to do that? And if we demand that here in the West, then there are other parts of the world that don’t demand it and may be able to move faster. So why don’t I ask you the first part of that question, and Yuval the second part. The first part is: can we actually get explainability if it’s super hard even within ourselves?

FL: Well, it’s pretty hard for me to multiply two-digit numbers, but, you know, computers can do that. So the fact that something is hard for humans doesn’t mean we shouldn’t try to get the machines to do it. Especially since, after all, these algorithms are based on very simple mathematical logic. Granted, we’re dealing with neural networks these days that have millions of nodes and billions of connections. So explainability is actually tough. It’s ongoing research. But I think this is such fertile ground. And it’s so critical when it comes to healthcare decisions, financial decisions, legal decisions. There are so many scenarios where this technology can be potentially, positively useful, but only with that kind of explainable capability. So we’ve got to try, and I’m pretty confident, with a lot of smart minds out there, that this is a crackable thing.

On top of that, I think you have a point that if we have technology that can explain the decision-making process of algorithms, it makes it harder for them to manipulate and cheat. Right? It’s a technical solution, not the entirety of the solution, but one that will contribute to the clarification of what this technology is doing.

YNH: But because, presumably, the AI makes decisions in a radically different way than humans, even if the AI explains its logic, the fear is that it will make absolutely no sense to most humans. Most humans, when they are asked to explain a decision, tell a story in narrative form, which may or may not reflect what is actually happening within them. In many cases it doesn’t; it’s just a made-up rationalization and not the real thing. Now an AI could be much different from a human in telling me. Say I applied to the bank for a loan. And the bank says no. And I ask why not? And the bank says, okay, we will ask our AI. And the AI gives this extremely long statistical analysis based not on one or two salient features of my life, but on 2,517 different data points, which it took into account and gave different weights. And why did you give this weight? And why did you give… oh, there is another book about that. And most of the data points would seem to a human completely irrelevant. You applied for a loan on Monday, and not on Wednesday, and the AI discovered that, for whatever reason, it’s after the weekend, whatever, people who apply for loans on a Monday are 0.075 percent less likely to repay the loan. So it goes into the equation. And I get this book of the real explanation. And finally, I get a real explanation. It’s not like sitting with a human banker who just bullshits me.
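[Ed.: Yuval’s hypothetical can be sketched in a few lines of code. This is a toy illustration only, not any real bank’s system; the feature names, weights, and applicant values below are all invented, with the feature count set to the 2,517 he jokingly cites. It shows why such an "explanation" is honest yet humanly useless: every contribution is reportable, but no narrative emerges.]

```python
# A toy linear scoring model: the decision is a weighted sum over
# thousands of data points, and the "explanation" is the list of
# per-feature contributions. All names and numbers are invented.
import random

random.seed(0)

N_FEATURES = 2517  # the number Yuval cites in his hypothetical
weights = {f"feature_{i}": random.uniform(-0.01, 0.01) for i in range(N_FEATURES)}
applicant = {f"feature_{i}": random.uniform(0, 1) for i in range(N_FEATURES)}

# The decision: a single weighted sum over every data point.
score = sum(weights[f] * applicant[f] for f in weights)
decision = "approve" if score > 0 else "deny"

# The "real explanation": every feature's signed contribution,
# sorted by magnitude. Even the top entries look arbitrary to a human.
contributions = sorted(
    ((f, weights[f] * applicant[f]) for f in weights),
    key=lambda kv: abs(kv[1]),
    reverse=True,
)

print(decision)
for name, c in contributions[:3]:
    print(f"{name}: {c:+.5f}")
```

The point of the sketch is that each line of this "book" is individually true and individually meaningless, which is exactly the gap between statistical and narrative explanation being discussed.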

FL: So are you rooting for AI? Are you saying AI is good in this case?

YNH: In many cases, yes. I mean, I think in many cases, it’s two sides of the same coin. I think that in many ways, the AI in this scenario will be an improvement over the human banker. Because, for example, you can really know what the decision is based on, presumably. But it’s based on something that I, as a human being, just cannot grasp. I just don’t—I know how to deal with simple narrative stories. I didn’t give you a loan because you’re gay. That’s not good. Or because you didn’t repay any of your previous loans. Okay, I can understand that. But my mind doesn’t know what to do with the real explanation that the AI will give, which is just this crazy statistical thing…

FL: So there are two layers to your comment. One is, how do you trust and comprehend AI’s explanation? The second is whether AI can actually be used to make humans more trusting, or make humans more trustworthy. On the first point, I agree with you: if AI gives you 2,000 dimensions of potential features with probabilities, it’s not understandable. But the entire history of science in human civilization is to be able to communicate the results of science in better and better ways. Right? Like, I just had my annual physical and a whole bunch of numbers came to my cell phone. And, well, first of all, my doctors, the experts, can help explain these numbers to me. Now even Wikipedia can help me explain some of these numbers. And the technology for explaining these will improve. It’s our failure as technologists if we just throw 200 or 2,000 dimensions of probability numbers at you.

YNH: But this is the explanation. And I think that the point you raised is very important. But I see it differently. I think science is getting worse and worse in explaining its theories and findings to the general public, which is the reason for things like doubting climate change, and so forth. And it’s not really even the fault of the scientists, because the science is just getting more and more complicated. And reality is extremely complicated. And the human mind wasn’t adapted to understanding the dynamics of climate change, or the real reasons for refusing to give somebody a loan. But that’s the point when you have an — and let’s put aside the whole question of manipulation and how can I trust. Let’s assume the AI is benign. And let’s assume there are no hidden biases and everything is ok. But still, I can’t understand.

FL: But that’s why people like Nick, the storytellers, have to explain… What I’m saying is, you’re right. It’s very complex.

NT: I’m going to lose my job to a computer like next week, but I’m happy to have your confidence in me!

FL: But that’s the job of the society collectively to explain the complex science. I’m not saying we’re doing a great job at all. But I’m saying there is hope if we try.

YNH: But my fear is that we just really can’t do it. Because the human mind is not built for dealing with these kinds of explanations and technologies. And it’s true for, I mean, it’s true for the individual customer who goes to the bank and the bank refused to give them a loan. And it can even be on the level, I mean, how many people today on earth understand the financial system? How many presidents and prime ministers understand the financial system?

NT: In this country, it’s zero.

YNH: So what does it mean to live in a society where the people who are supposed to be running the business… And again, it’s not the fault of a particular politician, it’s just the financial system has become so complicated. And I don’t think that economists are trying on purpose to hide something from the general public. It’s just extremely complicated. You have some of the wisest people in the world, going to the finance industry, and creating these enormously complex models and tools, which objectively you just can’t explain to most people, unless first of all, they study economics and mathematics for 10 years or whatever. So I think this is a real crisis. And this is again, this is part of the philosophical crisis we started with. And the undermining of human agency. That’s part of what’s happening, that we have these extremely intelligent tools that are able to make perhaps better decisions about our healthcare, about our financial system, but we can’t understand what they are doing and why they’re doing it. And this undermines our autonomy and our authority. And we don’t know as a society how to deal with that.

NT: Ideally, Fei-Fei’s institute will help with that. But before we leave this topic, I want to move to a very closely related question, which I think is one of the most interesting: the question of bias in algorithms, which is something you’ve spoken eloquently about. And let’s start with the financial system. So you can imagine an algorithm used by a bank to determine whether somebody should get a loan. And you can imagine training it on historical data, and historical data is racist. And we don’t want that. So let’s figure out how to make sure the data isn’t racist, and that it gives loans to people regardless of race. And probably everybody in this room agrees that that is a good outcome.

But let’s say that analyzing the historical data suggests that women are more likely to repay their loans than men. Do we strip that out? Or do we allow that to stay in? If you allow it to stay in, you get a slightly more efficient financial system. If you strip it out, you have a little more equality between men and women. How do you make decisions about which biases you want to strip out and which ones are okay to keep?
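[Ed.: in pipeline terms, "stripping out" a feature simply means the model never sees it. A minimal sketch, with entirely invented records and field names; no real lending data is implied.]

```python
# Invented training records: "gender" is the attribute under debate,
# "repaid" is the label. Which fields count as protected is a policy
# decision, not a technical one.
train = [
    {"income": 52_000, "prior_defaults": 0, "gender": "F", "repaid": True},
    {"income": 48_000, "prior_defaults": 1, "gender": "M", "repaid": False},
    {"income": 61_000, "prior_defaults": 0, "gender": "M", "repaid": True},
]

PROTECTED = {"gender"}

def features(record, protected=PROTECTED):
    """Drop protected attributes and the label before training/scoring."""
    return {k: v for k, v in record.items()
            if k not in protected and k != "repaid"}

stripped = [features(r) for r in train]
print(stripped[0])  # {'income': 52000, 'prior_defaults': 0}
```

Dropping the column is not the whole story, though: correlated proxy features can reintroduce the same bias, which is part of why de-biasing is the active research area Fei-Fei describes next.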

FL: Yeah, that’s an excellent question, Nick. I mean, I’m not going to have the answers personally, but I think you’ve touched on a really important question: first of all, machine learning system bias is a real thing. You know, like you said, it starts with data. It probably starts at the very moment we’re collecting data, with the type of data we’re collecting, all the way through the whole pipeline, and then all the way to the application. But biases come in very complex ways. At Stanford, we have machine learning scientists studying technical solutions to bias, like, you know, de-biasing data or normalizing certain decision-making. But we also have humanists debating what bias is, what fairness is, when bias is good, when bias is bad. So I think you just opened up a perfect topic for research and debate and conversation. And I also want to point out a very closely related example: a machine learning algorithm has the potential to actually expose bias. Right? You know, one of my favorite studies was a paper a couple of years ago analyzing Hollywood movies using a machine learning face-recognition algorithm, which is a very controversial technology these days, to recognize that Hollywood systematically gives more screen time to male actors than female actors. No human being can sit there and count all the frames of faces to see whether there is gender bias, and this is a perfect example of using machine learning to expose it. So in general, there’s a rich set of issues we should study, and again, bring in the humanists, bring in the ethicists, bring in the legal scholars, bring in the gender studies experts.
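[Ed.: stripped of the face-recognition model itself, the screen-time study Fei-Fei mentions reduces to an aggregation step: per-frame predictions are counted into screen-time shares. A toy sketch with an invented label list; the real study’s data and models are far more involved.]

```python
# Aggregate invented per-frame gender predictions ("M", "F", or None
# when no face is detected) into screen-time shares.
from collections import Counter

frame_labels = ["M", "M", "F", "M", None, "M", "F", "M", None, "M"]

counts = Counter(label for label in frame_labels if label is not None)
total = sum(counts.values())
shares = {label: n / total for label, n in counts.items()}

print(shares)  # {'M': 0.75, 'F': 0.25}
```

The machine does the tedious counting across millions of frames; the human contribution is deciding that the resulting disparity is worth measuring at all.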

NT: Agreed. Though, standing up for humans, I knew Hollywood was sexist even before that paper. But yes, agreed.

FL: You’re a smart human.

NT: Yuval, on that question of the loans: do you strip out the racist data? Do you strip out the gender data? Which biases do you get rid of, and which do you not?

YNH: I don’t think there is a one-size-fits-all. I mean, it’s a question where, again, we need this day-to-day collaboration between engineers and ethicists and psychologists and political scientists.

NT: But not biologists, right?

YNH: And increasingly, also biologists! And, you know, it goes back to the question, what should we do? We should teach ethics to coders as part of the curriculum. The people in the world today who most need a background in ethics are the people in the computer science departments. So it should be an integral part of the curriculum. And also the big corporations, which are designing these tools, should embed within their teams people with backgrounds in things like ethics and politics, so that they always think in terms of what biases we might inadvertently be building into our system, and what the cultural or political implications of what we’re building could be. It shouldn’t be a kind of afterthought that you create this neat technical gadget, it goes into the world, something bad happens, and then you start thinking, “Oh, we didn’t see this one coming. What do we do now?” From the very beginning, it should be clear that this is part of the process.

FL: I do want to give a shout-out to Rob Reich, who introduced this whole event. He and my colleagues, Mehran Sahami and a few other Stanford professors, have opened this course called Computers, Ethics and Public Policy. This is exactly the kind of class that’s needed. I think this quarter the offering has more than 300 students signed up.

NT: Fantastic. I wish that course had existed when I was a student here. Let me ask an excellent question from the audience that ties into this: How do you reconcile the inherent trade-offs between explainability and the efficacy and accuracy of algorithms?

FL: Great question. This question seems to assume that if you can explain, you’re less good or less accurate?

NT: Well, you can imagine that if you require explainability, you lose some level of efficiency; you’re adding a little bit of complexity to the algorithm.

FL: So, okay, first of all, I don’t necessarily believe in that. There’s no mathematical logic to this assumption. Second, let’s assume there is a possibility that an explainable algorithm suffers in efficiency. I think this is a societal decision we have to make. You know, when we put on the seatbelt while driving, that’s a little bit of an efficiency loss, because I have to do the seatbelt movement instead of just hopping in and driving. But as a society, we decided we can afford that loss of efficiency because we care more about human safety. So I think AI is the same kind of technology. As we make these kinds of decisions going forward in our solutions, in our products, we have to balance human well-being and societal well-being with efficiency.

NT: So Yuval, let me ask you about the global consequences of this. This is something that a number of people have asked about in different ways, and we’ve touched on it but haven’t hit it head on. Imagine you have two countries, Country A and Country B. Country A says: all of you AI engineers, you have to make it explainable. You have to take ethics classes, you have to really think about the consequences of what you’re doing. You’ve got to have dinner with biologists, you have to think about love, and you have to, like, read John Locke. That’s Country A. Country B says: just go build some stuff, right? These two countries at some point are going to come into conflict, and I’m going to guess that Country B’s technology might be ahead of Country A’s. Is that a concern?

YNH: Yeah, that’s always the concern with arms races, which become a race to the bottom in the name of efficiency and domination. I mean, what is extremely problematic or dangerous about the situation now with AI is that more and more countries are waking up to the realization that this could be the technology of domination in the 21st century. So you’re not talking about just any economic competition between different textile industries or even between different oil industries, like one country deciding we don’t care about the environment at all, we’ll just go full gas ahead, while the other countries are much more environmentally aware. The situation with AI is potentially much worse, because it could really be the technology of domination in the 21st century. And those left behind could be dominated, exploited, conquered by those who forge ahead. So nobody wants to stay behind. And I think the only way to prevent this kind of catastrophic arms race to the bottom is greater global cooperation around AI. Now, this sounds utopian, because we are now moving in exactly the opposite direction, of more and more rivalry and competition. But this is part of our job, I think, as with the nuclear arms race: to make people in different countries realize that this is an arms race in which, whoever wins, humanity loses. And it’s the same with AI. If AI becomes an arms race, that is extremely bad news for all humans. And it’s easy for, say, people in the US to say, we are the good guys in this race, you should be cheering for us. But this is becoming more and more difficult in a situation where the motto of the day is America First. How can we trust the USA to be the leader in AI technology if ultimately it will serve only American interests and American economic and political domination? So I think most people, when they think arms race in AI, think USA versus China, but there are almost 200 other countries in the world. And most of them are far, far behind. And when they look at what is happening, they are increasingly terrified. And for a very good reason.

NT: The historical example you’ve made is a little unsettling. Because, if I heard your answer correctly, it’s that we need global cooperation, and if we don’t, we’re going to end up in an arms race. In the actual nuclear arms race, we tried for global cooperation from, I don’t know, roughly 1945 to 1950. And then we gave up, and we said, we’re going full throttle in the United States. And then, why did the Cold War end the way it did? Who knows, but one argument would be that the United States and its relentless buildup of nuclear weapons helped to keep the peace until the Soviet Union collapsed. So if that is the parallel, then what might happen here is we’ll try for global cooperation in 2019, 2020, and 2021, and then we’ll be off in an arms race. A, is that likely, and B, if it is, would you say, well, then the US needs to really move full throttle on AI, because it would be better for the liberal democracies to have artificial intelligence than the totalitarian states?

YNH: Well, I’m afraid it is very likely that cooperation will break down and we will find ourselves in an extreme version of an arms race. And in a way it’s worse than the nuclear arms race, because with nukes, at least until today, countries developed them but never used them. AI will be used all the time. It’s not something you keep on the shelf for some doomsday war. It will be used all the time to create potentially total surveillance regimes and extreme totalitarian systems, in one way or the other. And so, from this perspective, I think the danger is far greater. You could say that the nuclear arms race actually saved democracy and the free market and, you know, rock and roll and Woodstock and the hippies; they all owe a huge debt to nuclear weapons. Because if nuclear weapons hadn’t been invented, there would have been a conventional arms race and conventional military buildup between the Soviet bloc and the American bloc. And that would have meant total mobilization of society. If the Soviets are having total mobilization, the only way the Americans can compete is to do the same.

Now what actually happened was that you had an extreme totalitarian mobilized society in the communist bloc. But thanks to nuclear weapons, you didn’t have to do it in the United States or in West Germany or in France, because we relied on nukes. You don’t need millions of conscripts in the army.

And with AI it is going to be just the opposite, that the technology will not only be developed, it will be used all the time. And that’s a very scary scenario.

FL: Wait, can I just add one thing? I don’t know history like you do, but you said AI is different from nuclear technology. I do want to point out that it is very different, because at the same time as you’re talking about these scarier situations, this technology has wide international scientific collaboration and is being used to make transportation better, to improve healthcare, to improve education. So it’s a very interesting new time that we haven’t seen before, because while we have this kind of competition, we also have massive international scientific-community collaboration on these benevolent uses and on the democratization of this technology. I just think it’s important to see both sides of this.

YNH: You’re absolutely right here. As I said, there are also enormous benefits to this technology.

FL: And in a globally collaborative way, especially between and among scientists.

YNH: The global aspect is more complicated, because the question is, what happens if there is a huge gap in abilities between some countries and most of the world? Would we have a rerun of the 19th-century Industrial Revolution, when the few industrial powers conquered and dominated and exploited the entire world, both economically and politically? What’s to prevent that from repeating? So even without this scary war scenario, we might still find ourselves with a global exploitation regime, in which most of the benefits go to a small number of countries at the expense of everybody else.

FL: So students in the audience will laugh at this, but we are in a very different scientific research climate. The kind of globalization of technology and technique happens in a way that the 19th century, even the 20th century, never saw before. Any basic science research paper or technical technique in AI that is produced, let’s say this week at Stanford, is easily globally distributed through this thing called arXiv or a GitHub repository or—

YNH: The information is out there. Yeah.

FL: The globalization of this scientific technology travels in a different way from the 19th and 20th centuries. I don’t doubt there is confined development of this technology, maybe by some regimes. But we do have to recognize this global reach; the differences from those centuries are pretty sharp now, and we might need to take that into consideration. The scenario you’re describing is harder, I’m not saying impossible, but harder, to happen.

YNH: I’ll just say that it’s not just the scientific papers. Yes, the scientific papers are there. But if I live in Yemen, or in Nicaragua, or in Indonesia, or in Gaza, yes, I can connect to the internet and download the paper. What will I do with that? I don’t have the data, I don’t have the infrastructure. I mean, you look at where the big corporations that hold all the data of the world are coming from: they’re basically coming from just two places. Even Europe is not really in the competition. There is no European Google, or European Amazon, or European Baidu, or European Tencent. And if you look beyond Europe, you think about Central America, you think about most of Africa, the Middle East, much of Southeast Asia: yes, the basic scientific knowledge is out there, but this is just one of the components that go into creating something that can compete with Amazon or with Tencent, or with the abilities of governments like the US government or the Chinese government. So I agree that the dissemination of information and basic scientific knowledge is in a completely different place than in the 19th century.

NT: Let me ask you about that, because it’s something three or four people have asked in the questions: it seems like there could be a centralizing force to artificial intelligence, one that will make whoever has the data and the best computers more powerful, and that could then accentuate income inequality, both within countries and across the world, right? You can imagine the countries you’ve just mentioned, the United States and China, with Europe lagging behind, Canada somewhere behind them, and all of them way ahead of Central America; it could accentuate global income inequality. A, do you think that’s likely, and B, how much does it worry you?

YNH: As I said, it’s very likely, it’s already happening. And it’s extremely dangerous, because the economic and political consequences could be catastrophic. We are talking about the potential collapse of entire economies and countries, countries that depend on cheap manual labor and just don’t have the educational capital to compete in a world of AI. So what are these countries going to do? I mean, if you shift back most production from, say, Honduras or Bangladesh to the USA and to Germany, because human salaries are no longer part of the equation and it’s cheaper to produce the shirt in California than in Honduras, what will the people there do? And you can say, okay, there will be many more jobs for software engineers. But we are not teaching the kids in Honduras to be software engineers. So maybe a few of them could somehow immigrate to the US. But most of them won’t, and what will they do? And at present, we don’t have the economic answers and the political answers to these questions.

FL: I think that’s fair enough. Yuval has definitely laid out some of the critical pitfalls of this, and that’s why we need more people to be studying and thinking about this. One of the things we noticed over and over, even in this process of building the community of human-centered AI and talking to people both internally and externally, is that there are opportunities for businesses and governments around the world to think about their data and AI strategy. There are still many opportunities outside of the big players, in terms of companies and countries, to really come to the realization that it’s an important moment for their country, for their region, for their business, to transform into this digital age. And when you talk about these potential dangers and the lack of data in parts of the world that haven’t really caught up with this digital transformation, the moment is now, and we hope to, you know, raise that kind of awareness and encourage that kind of transformation.

YNH: Yeah, I think it’s very urgent. I mean, what we are seeing at the moment is, on the one hand, what you could call some kind of data colonization: the same model that we saw in the 19th century, where you have the imperial hub with the advanced technology; they grow the cotton in India or Egypt, they send the raw materials to Britain, they produce the shirts, the high-tech industry of the 19th century in Manchester, and they send the shirts back to sell them in India and outcompete the local producers. And we, in a way, might be beginning to see the same thing now with the data economy: they harvest the data in places like Brazil and Indonesia, but they don’t process the data there. The data from Brazil and Indonesia goes to California or to eastern China to be processed there. They produce the wonderful new gadgets and technologies and sell them back as finished products to the provinces or to the colonies.

Now it’s not one-to-one. It’s not the same; there are differences. But I think we need to keep this analogy in mind. And another thing that maybe we need to keep in mind in this respect, I think, is the reemergence of stone walls. Originally my speciality was medieval military history; this is how I began my academic career, with the Crusades and castles and knights and so forth. And now I’m doing all this cyborg and AI stuff. But suddenly, there is something that I know from back then: the walls are coming back. I try to kind of look at what’s happening here. I mean, we have virtual realities, we have 3G, AI, and suddenly the hottest political issue is building a stone wall, like the most low-tech thing you can imagine. And what is the significance of a stone wall in a world of interconnectivity and all that? It really frightens me that there is something very sinister there. The combination: data is flowing around everywhere so easily, but more and more countries are building walls. And also in my home country of Israel it’s the same thing: you have the, you know, the startup nation, and then the wall. And what does this combination mean?

NT: Fei-Fei, you want to answer that?

FL: Maybe we can look at the next question!

NT: You know what? Let’s go to the next question, which is tied to that. You have the people here at Stanford who will help build these companies, who will either be furthering the process of data colonization or reversing it, or who will be building, you know, the virtual walls; the world based on artificial intelligence is being created, or funded at least, by Stanford graduates. So you have all these students here in the room: how do you want them to be thinking about artificial intelligence? And what do you want them to learn? Let’s spend the last 10 minutes of this conversation talking about what everybody here should be doing.

FL: So if you’re a computer science or engineering student, take Rob’s class. If you’re a humanist, take my class. And all of you, read Yuval’s books.

NT: Are his books on your syllabus?

FL: Not on mine. Sorry! I teach hardcore deep learning. His book doesn’t have equations. But seriously, what I meant to say is that Stanford students, you have a great opportunity. We have a proud history of bringing this technology to life. Stanford was at the forefront of the birth of AI. In fact, our Professor John McCarthy coined the term artificial intelligence, came to Stanford in 1963, and started one of the two oldest AI labs in this country. Since then, Stanford’s AI research has been at the forefront of every wave of AI change. And in 2019 we’re also at the forefront of starting the human-centered AI revolution, or the writing of the new AI chapter. And we did all this for the past 60 years for you guys, for the people who come through the door and who will graduate and become practitioners, leaders, and part of civil society; that’s really what the bottom line is about. Human-centered AI needs to be written by the next generation of technologists who have taken classes like Rob’s, to think about the ethical implications, the human well-being. And it’s also going to be written by those potential future policymakers who came out of Stanford’s humanities studies and Business School, who are versed in the details of the technology, who understand the implications of this technology, and who have the capability to communicate with the technologists. No matter how much we agree and disagree, the bottom line is that we need these kinds of multilingual leaders and thinkers and practitioners. And that is what Stanford’s Human-Centered AI Institute is about.

NT: Yuval, how do you answer that question?

YNH: On the individual level, I think it’s important for every individual, whether at Stanford or not, whether an engineer or not, to get to know yourself better, because you’re now in a competition. It’s the oldest advice in all the books of philosophy: know yourself. We’ve heard it from Socrates, from Confucius, from Buddha: get to know yourself. But there is a difference, which is that now you have competition. In the days of Socrates or Buddha, if you didn’t make the effort, okay, so you missed out on enlightenment. But still, the king wasn’t competing with you. They didn’t have the technology. Now you have competition. You’re competing against these giant corporations and governments. If they get to know you better than you know yourself, the game is over. So you need to buy yourself some time, and the first way to buy yourself some time is to get to know yourself better; then they have more ground to cover. For engineers and students, I would say, and I’ll focus on engineers maybe, the two things that I would like to see coming out of the laboratories and the engineering departments are, first, tools that inherently work better in a decentralized system than in a centralized system. I don’t know how to do it. But I hope this is something that engineers can work on. I heard that blockchain is like the big promise in that area, I don’t know. But whatever it is, when you start designing the tool, part of the specification of what this tool should be like, I would say, is that this tool should work better in a decentralized system than in a centralized system. That’s the best defense of democracy.

NT: I don’t want to cut you off, because I want you to get to the second thing. But how do you make a tool work better in a democracy?

YNH: I’m not an engineer, I don’t know.

NT: Okay. Go to part two. Someone in this room, figure that out, because it’s very important.

YNH: And I can give you historical examples of tools that work better in this way or in that way. But I don’t know how to translate it into present day technology.

NT: Go to part two because I got a few more questions from the audience.

YNH: Okay, so the other thing I would like to see coming is an AI sidekick that serves me and not some corporation or government. I mean, we can’t stop the progress of this kind of technology, but I would like to see it serving me. So yes, it can hack me, but it hacks me in order to protect me. Like, my computer has an antivirus, but my brain hasn’t. It has a biological antivirus against the flu or whatever, but not against hackers and trolls and so forth. So, one project to work on is to create an AI sidekick, which I paid for, maybe a lot of money, and it belongs to me, and it follows me and it monitors me and what I do in my interactions, but everything it learns, it learns in order to protect me from manipulation by other AIs, by other outside influencers. So this is something that I think, with present-day technology, I would like to see more effort in that direction.

FL: Not to get into technical terms, but I think you would feel confident to know that budding efforts in this kind of research are happening: trustworthy AI, explainable AI, security-motivated or security-aware AI. So I’m not saying we have the solution, but a lot of technologists around the world are thinking along that line and trying to make that happen.

YNH: It’s not that I want an AI that belongs to Google or to the government that I can trust. I want an AI that I’m its master. It’s serving me.

NT: And it needs to be powerful, more powerful than my AI, because otherwise my AI could manipulate your AI.

YNH: It will have the inherent advantage of knowing me very well. So it might not be able to hack you. But because it follows me around and it has access to everything I do and so forth, it gives it an edge in this specific realm of just me. So this is a kind of counterbalance to the danger that the people—

FL: But even that would raise a lot of challenges in our society. Who is accountable? Are you accountable for your actions, or is your sidekick?

YNH: This is going to be a more and more difficult question that we will have to deal with.

NT: Alright Fei-Fei, let’s go through a couple questions quickly. We often talk about top-down AI from the big companies, how should we design personal AI to help accelerate our lives and careers? The way I interpret that question is, so much of AI is being done at the big companies. If you want to have AI at a small company or personally, can you do that?

FL: So well, first of all, one of the solutions is what Yuval just said.

NT: Probably those things were built by Facebook.

FL: So first of all, it’s true, there is a lot of investment, effort, and resources that big companies are putting into AI research and development, but it’s not that all the AI work is happening there. I want to say that academia continues to play a huge role in AI’s research and development, especially in the long-term exploration of AI. And what is academia? Academia is a worldwide network of individual students and professors thinking very independently and creatively about different ideas. So from that point of view, it’s a very grassroots kind of effort in AI research that continues to happen. And small businesses and independent research institutes also have a role to play. There are a lot of publicly available data sets. It’s a global community that is very open about sharing and disseminating knowledge and technology. So yes, please, by all means, we want global participation in this.

NT: All right, here’s my favorite question. This is from anonymous, unfortunately. If I am in eighth grade, do I still need to study?

FL: As a mom, I will tell you yes. Go back to your homework.

NT: Alright, Fei-Fei, what do you want Yuval’s next book to be about?

FL: Wow, I need to think about that.

NT: Alright. Well, while you think about that, Yuval, what area of machine learning do you want Fei-Fei to pursue next?

FL: The sidekick project.

YNH: Yeah, I mean, just what I said. Can we create the kind of AI which can serve individual people, and not some kind of big network? I mean, is that even possible? Or is there something about the nature of AI, which inevitably will always lead back to some kind of network effect, and winner takes all and so forth.

FL: Ok, his next book is going to be a science fiction book between you and your sidekick.

NT: Alright, one last question for Yuval, because we’ve got the top-voted question: without the belief in free will, what gets you up in the morning?

YNH: Without the belief in free will? I don’t think that’s the question … I mean, it’s very interesting, very central, it has been central in Western civilization because of some kind of basically theological mistake made thousands of years ago. But really it’s a misunderstanding of the human condition.

The real question is, how do you liberate yourself from suffering? And one of the most important steps in that direction is to get to know yourself better. For me, the biggest problem with the belief in free will is that it makes people incurious about themselves and about what is really happening inside themselves, because they basically say, “I know everything. I know why I make decisions; this is my free will.” And they identify with whatever thought or emotion pops up in their mind, because this is my free will. And this makes them very incurious about what is really happening inside and about the deep sources of the misery in their lives. So this is what makes me wake up in the morning: to try and understand myself better, to try and understand the human condition better. And free will is just irrelevant for that.

NT: And perhaps your sidekick will get you up in the morning. Fei-Fei, 75 minutes ago, you said we weren’t gonna reach any conclusions. Do you think we got somewhere?

FL: Well, we opened the dialog between the humanist and the technologist and I want to see more of that.

NT: Great. Thank you so much. Thank you, Fei-Fei. Thank you, Yuval. Wonderful to be here.

Wonderful meeting between the father of the Internet, Vint Cerf, and Nguyen Anh Tuan


On May 8, 2019, on behalf of the Boston Global Forum, Mr. Nguyen Anh Tuan, the CEO of the Boston Global Forum, met with Mr. Vint Cerf, “the father of the Internet” and Vice President and Chief Internet Evangelist of Google, to present him with the World Leader in AI World Society Award. Earlier, on April 25, 2019, at the Artificial Intelligence World Society – G7 Summit Conference held at Loeb House, Harvard University, the Boston Global Forum had honored him.

During the meeting, which was held at Mr. Vint Cerf’s office, the two great minds discussed the big changes happening in the world in the late 20th and early 21st centuries – the Age of the Enlightenment of the Internet and Artificial Intelligence. Together, they talked about how AI and the Internet can be utilized to do great things, and how to minimize the negative aspects and risks that AI can pose to humanity. Mr. Nguyen Anh Tuan and Mr. Vint Cerf agreed that sanctions and laws from the civilized and progressive world community are needed to prevent these threats and risks.

Currently, governments lag behind in creating laws that prevent the negative aspects of AI; therefore, there is an immediate need to connect like-minded thinkers, scholars, innovators, business leaders, non-governmental organizations, and others, to build alliances that make the world peaceful and safe and build a new democracy with artificial intelligence and the Internet. Mr. Nguyen Anh Tuan and Mr. Vint Cerf share the same enthusiasm, similar goals, and a common path with regard to the future of AI and the Internet and how these inventions can improve the lives of people around the world. The meeting opens new initiatives and programs to turn their enthusiasm and ideas into reality. Mr. Nguyen Anh Tuan and Mr. Vint Cerf arranged further meetings and discussions in Boston in July, and in other cities, to discuss how the AI World Society Summit can make meaningful contributions to humanity.

Mr. Nguyen Anh Tuan was the Director of the Teltic Informatics Center at Khanh Hoa Post and Telecom of Vietnam Post and Telecom Corporation (VNPT). At that time, he applied the Internet communication protocol TCP/IP, invented by Vint Cerf, to build the VietNet Information Highway, Vietnam’s first public computer network using TCP/IP, which provided services for the whole of Vietnam starting in January 1996, two years before Vietnam officially provided Internet services. On the strength of VietNet, Mr. Nguyen Anh Tuan was honored as one of the Top Ten Outstanding Young Talents in 1996.

White House Started Developing AI Standards


The administration wants public feedback to help shape the National Institute of Standards and Technology-led effort.

The Trump administration wants the public to weigh in on standards and tools needed to advance intelligent technology.

The White House Office of Science and Technology Policy seeks insight into developing technical standards around artificial intelligence, according to a request for information launched Wednesday. The National Institute of Standards and Technology will coordinate the RFI and all AI standards-related endeavors, as directed by the February executive order on AI leadership.

Deputy Assistant to the President for Technology Policy Michael Kratsios said in a statement that the RFI is a direct deliverable set forth by the president’s American AI Initiative.

“The information we receive will be critical to Federal engagement in the development of technical standards for AI and strengthening the public’s trust and confidence in the technology,” Kratsios said.

The executive order on AI directs NIST to issue a set of standards and tools that will guide the government in its adoption of the nascent tech, and this RFI marks the beginning of the agency’s development of those standards. NIST said it aims to gain input “through an open process” that encompasses both this new RFI and other opportunities, including a public workshop.

Through the comments received from the RFI, NIST ultimately aims to better understand the present state, plans, challenges and opportunities related to the development and availability of AI technical standards and related tools. The agency is also interested in gauging the priority areas for federal involvement in activities related to AI standards and the present and future roles agencies can play in helping develop AI standards and tools to meet America’s needs.

Some of the major areas about which NIST is seeking information include: technical standards and guidance needed to advance transparency, privacy, and other issues around the trustworthiness of AI tech; the urgency of the U.S. need for AI standards; the degree of federal agencies’ current and needed involvement in addressing the government’s needs; roadmaps and other documents about plans to develop AI; and further information about AI technical standards and tools that have already been developed, as well as about the organizations that developed them.

The document encourages respondents to define “tools” and “standards” as they wish.

The agency also defines AI technologies and systems broadly, noting in the RFI that they “are considered to be comprised of software and/or hardware that can learn to solve complex problems, make predictions or solve tasks that require human-like sensing (such as vision, speech, and touch), perception, cognition, planning, learning, communication, or physical action.”

“Examples are wide-ranging and expanding rapidly,” it said.

Comments in response to the notice must be sent to NIST via mail or email by May 31. The agency plans to post submissions on its website in the future.