International Relations in the Cyber Age: The Co-Evolution Dilemma

Professor Nazli Choucri of MIT, Board Member of the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation, and an active member of the AIWS Standards and Practice Committee, has launched a new book: “International Relations in the Cyber Age”.

“The international system of sovereign states has evolved slowly since the seventeenth century, but the transnational global cyber system has grown rapidly in just a few decades. Two respected scholars – a computer scientist and a political scientist – have joined their complementary talents in a bold and important exploration of this crucial co-evolution.”

– Joseph S. Nye, Harvard Kennedy School and author of The Future of Power

“Many have observed that the explosive growth of the Internet and digital technology have reshaped longstanding global structures of governance and cooperation. International Relations in the Cyber Age astutely recasts that unilateral narrative into one of co-evolution, exploring the mutually transformational relationship between international relations and cyberspace.”

– Jonathan Zittrain, George Bemis Professor of International Law and Professor of Computer Science, Harvard University

“Cyber architecture is now a proxy for political power. A leading political scientist and pioneering Internet designer masterfully explain how ‘high politics’ intertwine with Internet control points that lack any natural correspondence to the State. This book is a wake-up call about the collision and now indistinguishability between two worlds.”

– Laura DeNardis, Professor, American University, and author of The Global War for Internet Governance

“This book uniquely combines the perspectives of an Internet pioneer (Clark) and a leading political scientist with expertise in cybersecurity (Choucri) to produce a very rich account of how cyberspace impacts international relations, and vice versa. It is a valuable contribution to our understanding of Internet governance.”

– Jack Goldsmith, Henry Shattuck Professor, Harvard Law School

 

About this book

A foundational analysis of the co-evolution of the internet and international relations, examining resultant challenges for individuals, organizations, firms, and states.

In our increasingly digital world, data flows define the international landscape as much as the flow of materials and people. How is cyberspace shaping international relations, and how are international relations shaping cyberspace? In this book, Nazli Choucri and David D. Clark offer a foundational analysis of the co-evolution of cyberspace (with the internet as its core) and international relations, examining resultant challenges for individuals, organizations, and states.

The authors examine the pervasiveness of power and politics in the digital realm, finding that the internet is evolving much faster than the tools for regulating it. This creates a “co-evolution dilemma”—a new reality in which digital interactions have enabled weaker actors to influence or threaten stronger actors, including the traditional state powers. Choucri and Clark develop new methods of analysis. One, “control point analysis,” examines control in the internet age; they apply it to a variety of situations, including major actors in the international and digital realms: the United States, China, and Google. Another is a network analysis of international law for cyber operations. A third measures the propensity of states to expand their influence in the “real” world compared with their expansion in the cyber domain. In so doing, they lay the groundwork for a new international relations theory that reflects the reality in which we live—one in which the international and digital realms are inextricably linked and evolving together.

Authors

Nazli Choucri

Nazli Choucri is Professor of Political Science at MIT, Faculty Affiliate at the MIT Institute for Data, Systems, and Society, Director of the Global System for Sustainable Development (GSSD), and the author of Cyberpolitics in International Relations (MIT Press).

David D. Clark

David D. Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Lab and a leader in the design of the Internet since the 1970s.

TECH COMPANIES SHAPING THE RULES GOVERNING AI

IN EARLY APRIL, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.”

One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI.

Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.

When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. “We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.”

The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit.

Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.”

Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and will retain a right to a royalty-free license to any intellectual property developed.

Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says.

Microsoft used some of its power when Washington state considered proposals to restrict facial recognition technology. The company’s cloud unit offers such technology, but it has also said that technology should be subject to new federal regulation.

In February, Microsoft loudly supported a privacy bill being considered in Washington’s state Senate that reflected its preferred rules, which included a requirement that vendors allow outsiders to test their technology for accuracy or biases. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.

By April, Microsoft found itself fighting against a House version of the bill it had supported, after the addition of firmer language on facial recognition. The House bill would have required that companies obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft’s director of government affairs, testified against that version of the bill, saying it “would effectively ban facial recognition technology [which] has many beneficial uses.” The House bill stalled. With lawmakers unable to reconcile differing visions for the legislation, Washington’s attempt to pass a new privacy law collapsed.
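To make the contested requirement concrete, here is a hedged, purely illustrative sketch (my own, not drawn from the bill or from any vendor's audit process) of what confirming that a system "worked equally well for all skin tones and genders" could look like in practice: measuring accuracy separately for each demographic group and comparing the results.

```python
# Illustrative only: per-group accuracy as a minimal form of bias assessment.
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy of predictions broken down by demographic group."""
    return {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}

# Tiny hypothetical example: true labels, model predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
groups = np.array(["A", "A", "B", "B", "B", "A"])

print(per_group_accuracy(y_true, y_pred, groups))  # e.g. {'A': 1.0, 'B': 0.67}
```

An independent audit of the kind the House bill envisioned would apply this sort of comparison, at much larger scale, before a system could be deployed.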

In a statement, a Microsoft spokesperson said that the company’s actions in Washington sprang from its belief in “strong regulation of facial recognition technology to ensure it is used responsibly.”

Shankar Narayan, director of the technology and liberty project of the ACLU’s Washington chapter, says the episode shows how tech companies are trying to steer legislators toward their favored, looser, rules for AI. But, Narayan says, they won’t always succeed. “My hope is that more policymakers will see these companies as entities that need to be regulated and stand up for consumers and communities,” he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.

Washington lawmakers—and Microsoft—hope to try again for new privacy and facial recognition legislation next year. By then, AI may also be a subject of debate in Washington, DC.

Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon) and Representative Yvette Clarke (D-New York) introduced bills dubbed the Algorithmic Accountability Act. The legislation would require companies to assess whether AI systems and their training data have built-in biases, or could harm consumers through discrimination.

Mutale Nkonde, a fellow at the Data & Society Research Institute, participated in discussions during the bill’s drafting. She is hopeful it will trigger discussion in DC about AI’s societal impacts, which she says is long overdue.

The tech industry will make itself a part of any such conversations. Nkonde says that when talking with lawmakers about topics such as racial disparities in face analysis algorithms, some have seemed surprised, and said they have been briefed by tech companies on how AI technology benefits society.

Google is one company that has briefed federal lawmakers about AI. Its parent Alphabet spent $22 million, more than any other company, on lobbying last year. In January, Google issued a white paper arguing that although the technology comes with hazards, existing rules and self-regulation will be sufficient “in the vast majority of instances.”

Metzinger, the German philosophy professor, believes the EU can still break free from industry influence over its AI policy. The expert group that produced the guidelines is now devising recommendations for how the European Commission should invest the billions of euros it plans to spend in coming years to strengthen Europe’s competitiveness.

Metzinger wants some of it to fund a new center to study the effects and ethics of AI, and similar work throughout Europe. That would create a new class of experts who could keep evolving the EU’s AI ethics guidelines in a less industry-centric direction, he says.

Artificial Intelligence Can Now Copy Your Voice: What Does That Mean For Humans?

It takes just 3.7 seconds of audio to clone a voice. This impressive—and a bit alarming—feat was announced by Chinese tech giant Baidu. A year ago, the company’s voice cloning tool, called Deep Voice, required 30 minutes of audio to do the same. This illustrates just how fast the technology to create artificial voices is accelerating. In just a short time, the capabilities of AI voice generation have expanded and become more realistic, which makes it easier for the technology to be misused.


Capabilities of AI Voice Generation

As with other artificial intelligence algorithms, the more training data voice cloning tools such as Deep Voice receive, the more realistic the results. Listening to several cloning examples makes it easier to appreciate the breadth of what the technology can do, including switching the gender of a voice and altering accents and styles of speech.

Google unveiled Tacotron 2, a text-to-speech system that leverages the company’s deep neural network and speech generation method WaveNet. WaveNet analyzes a visual representation of audio called a spectrogram to generate audio, and it is used to generate the voice for Google Assistant. This iteration of the technology is so good that it’s nearly impossible to tell what is AI-generated and what is human. The algorithm has learned how to pronounce challenging words and names that would once have been a tell-tale sign of a machine, as well as how to better enunciate words.
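For readers who want to see what that spectrogram step looks like, here is a minimal sketch using the librosa and numpy libraries and a hypothetical input file sample_voice.wav. It is not Google's Tacotron 2 or WaveNet code, only the standard audio-to-mel-spectrogram conversion that such text-to-speech pipelines build on.

```python
# A minimal sketch (not Google's implementation) of turning raw audio into the
# mel spectrogram representation that a neural vocoder learns to invert into sound.
import librosa
import numpy as np

audio, sr = librosa.load("sample_voice.wav", sr=22050)  # hypothetical input file

# Short-time Fourier transform -> mel-scaled magnitude spectrogram
mel = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=1024, hop_length=256, n_mels=80
)
log_mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))  # log compression, as most TTS pipelines use

print(log_mel.shape)  # (80 mel bands, number of frames)
```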

These advances in Google’s voice generation technology have allowed for Google Assistant to offer celebrity cameos. John Legend’s voice is now an option on any device in the United States with Google Assistant such as Google Home, Google Home Hub, and smartphones. The crooner’s voice will only respond to certain questions such as “What’s the weather” and “How far away is the moon” and is available to sing happy birthday on command. Google anticipates that we’ll soon have more celebrity cameos to choose from.

Another example of just how precise the technology has become: a Jordan Peterson (the author of 12 Rules for Life) AI model sounds just like him rapping Eminem’s “Lose Yourself.” The creator of the AI algorithm used just six hours of Peterson talking (taken from readily available recordings of him online) to train the machine learning algorithm to create the audio. It takes short audio clips and learns how to synthesize speech in the style of the speaker. Take a listen, and you’ll hear just how successful it was.

This advanced technology opens the door for companies such as Lyrebird to provide new services and products. Lyrebird uses artificial intelligence to create voices for chatbots, audiobooks, video games, text readers, and more. The company acknowledges on its website that “with great innovation comes great responsibility,” underscoring how important it is for pioneers of this technology to take great care to avoid its misuse.

How This Technology Could Be Misused

Like other new technologies, artificial voices can have many benefits but can also be used to mislead. As AI algorithms get better and it becomes harder to discern what is real and what is artificial, there will be more opportunities to use them to fabricate the truth.

According to research, our brains don’t register significant differences between real and artificial voices. In fact, it’s harder for our brains to distinguish fake voices than to detect fake images.

Now that these AI systems require only a short amount of audio to train a viable artificial voice that mimics an individual’s speaking style and tone, the opportunity for abuse increases. So far, researchers have not been able to identify a neural marker for how the brain distinguishes between real and fake voices. Consider how artificial voices might be used in an interview, news segment, or press conference to make listeners believe they are listening to an authority figure in government or the CEO of a company.

Raising awareness that this technology exists, and how sophisticated it is, will be the first step in safeguarding listeners from falling for artificial voices used to mislead us. The real fear is that people can be fooled into acting on something fake because it sounds like it is coming from somebody real. Some people are attempting to find a technical solution to safeguard us, but no technical solution will be 100% foolproof. Our ability to critically assess a situation, evaluate the source of information, and verify its validity will become increasingly important.

AI World Society Summit 2019

After the great success of the AIWS–G7 Summit Conference and the AIWS-G7 Summit Initiative, the Boston Global Forum is organizing the AI World Society (AIWS) Summit to engage government leaders, thought leaders, policymakers, scholars, civic societies, and non-governmental organizations in building a peaceful, safe, new democracy for a world in which AI is deeply applied. Because prominent figures are often too busy to meet at the same time and in the same place, BGF has adopted a new format for the AIWS Summit: a combination of online and offline participation.

An alliance of civic societies, non-governmental organizations, and thought leaders for a safe, peaceful, Next Generation Democracy.

Mission:

A high-level international discussion about AI governance for a safe, peaceful, Next Generation Democracy.

Organized by the Boston Global Forum and the World Leadership Alliance-Club de Madrid, and sponsored by the government of the Commonwealth of Massachusetts.

Outcome: recommendations, initiatives, solutions, and policies for building a more peaceful, safer, and more democratic society and world with AI; and a new social and economic revolution with AI that shapes better, brighter futures of equal opportunity to contribute, transparency, and openness, in which capital and wealth cannot corrupt democracy and citizens are recognized, rewarded, and able to live well.

Format:

A combination of online and offline participation.

Moderators: Governor Michael Dukakis, and Nguyen Anh Tuan

Speakers: leaders of governments, political leaders, business leaders, prominent professors, and thought leaders. Governor Michael Dukakis will send invitation letters to speakers introducing the mission, topics, and expected outcomes of the AI World Society Summit 2019.

Speakers can send their talks as a video clip (maximum 30 minutes) or as text to the Content Team of the AI World Society Summit 2019. The Content Team will post the talks to the AI World Society Summit section of the Boston Global Forum’s website and deliver them to the other speakers and discussants; the talks will then be submitted to the G7 Summit 2019 as part of the AIWS-G7 Summit Initiative.

Time: from April 25, 2019, at the AI World Society – G7 Summit Conference, to August 5, 2019.

The first speaker is one of the fathers of the Internet, Vint Cerf, Vice President and Chief Internet Evangelist at Google.

The second speaker is Professor Neil Gershenfeld of MIT.

Can optical computing be the next breakthrough in AI acceleration?

As neural networks and machine learning continue to take on new challenges, from analyzing images posted on social media to driving cars, there is increasing interest in creating hardware that is tailored for AI computation.

The limits of current computer hardware have triggered a quasi-arms race, enlisting a growing array of small and large tech companies that want to create specialized hardware for artificial intelligence computation. But one startup aims to create the next breakthrough in AI acceleration by changing the most fundamental technology that has powered computers for the past several decades.

A team of scientists at Boston-based Lightelligence is ditching electronics for photonic computing to achieve orders-of-magnitude improvements in the speed, latency, and power consumption of computing AI models. Photonic or optical computing, the science of using lasers and light to store, transfer, and process information, has been around for decades. But until now, it has been mostly limited to the optical fiber cables used in networking.

The folks at Lightelligence believe that optical computing will solve AI’s current hardware hurdles. And they have an optical AI chip to prove it.

The limits of electronic AI hardware

Artificial intelligence is one of the fastest growing sectors of the tech industry. After undergoing several “winters,” AI has now become the bread and butter of many applications in the digital and physical worlds. The AI industry is projected to be worth more than $150 billion by 2024.

“There has been a huge explosion of artificial intelligence innovation in the past five years,” says Dr. Yichen Shen, co-founder and CEO of Lightelligence. “And what we think will happen in the next ten years is that there will be an explosion of use cases and application scenarios for machine learning and artificial neural networks.”

Deep learning and neural networks, the currently dominant subset of AI, rely on analyzing large sets of data and performing expensive computations at high speed. But current hardware structures are struggling to keep up with the growing demands of this expanding sector of the AI industry. Chips and processors aren’t getting faster at the same pace that AI models and algorithms are progressing.

Lightelligence is one of many companies developing AI accelerator hardware. But as Shen says, other companies working in the field are basing their work on electronics, which is bound by Moore’s Law.

“This means their performance still relies on Boolean algebra transistors to do AI computation. We think that in the long run, it will still be bound by Moore’s Law, and it’s not the best of solutions,” Shen says.

Established by Intel co-founder Gordon Moore, Moore’s Law maintains that technological advances will continuously enable us to reduce the size and price of transistors, the main component of computing chips, every 1.5 to 2 years. This basically means that you can pack more computing power into the same space at a lower price. This is the principle that has made phones and laptops stronger and faster over the past few decades.
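As a rough illustration (my own arithmetic, not from the article), the short snippet below shows what a doubling roughly every two years implies for transistor density over time.

```python
# A back-of-the-envelope illustration of "doubling roughly every two years".
def density_growth(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiplicative increase in transistors per unit of chip area."""
    return 2 ** (years / doubling_period_years)

for years in (2, 10, 20):
    print(f"{years:>2} years -> ~{density_growth(years):,.0f}x the transistors in the same area")
```

Twenty years of that pace means roughly a thousand times more transistors in the same area, which is why the slowdown Shen describes below is such a problem for chip designers.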

But Moore’s Law is hitting a wall. “Now it takes five years to get to a smaller node and down the road it can take longer—ten or twenty years—to get to three nanometers or smaller nanometers,” Shen says. “We want to change the way we do computing by replacing electronics by photonics. In principle we can do the calculations much faster and in a much more power efficient way.”

The optical AI accelerator

In 2017, Shen, then a PhD student doing research on nano-photonics and artificial intelligence under MIT professor Marin Soljacic, co-authored a paper that introduced the concept of neural networks running fully on optical computing hardware. The proposition promised to enhance the speed of AI models.

A few months later, Shen founded Lightelligence with the help of Soljacic. The prototype of the optical AI accelerator, which Lightelligence released earlier this year, is the size of a printed circuit board (PCB).

“Instead of using digital electronics, we use optical signals to do AI computation. Our main purpose is to accelerate AI computing by orders of magnitude in terms of latency, throughput and power efficiency,” Shen says.

The company has designed the device to be compatible with current hardware and AI software. The accelerator can be installed on servers and devices that support the PCI-e interface and supports popular AI software frameworks, including Google’s TensorFlow and Facebook’s PyTorch.
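The article does not describe Lightelligence's software interface, but as a hedged illustration of what "compatible with current AI software" usually means, frameworks such as PyTorch expose accelerators through a device abstraction, so a vendor plugin can register its own device type alongside the built-in "cpu" and "cuda" ones. The sketch below uses only stock PyTorch.

```python
# Purely illustrative, stock PyTorch only: moving a model between hardware
# back-ends is mostly a matter of moving tensors to the right device. A real
# accelerator plugin would register its own device type.
import torch

model = torch.nn.Linear(784, 256)
x = torch.randn(8, 784)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
y = model(x.to(device))
print(y.shape)   # torch.Size([8, 256])
```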

There are still no plans to create fully fledged optical computers, but the technology is certainly suitable for specific types of computation. “One of the algorithms that photonics is very good at implementing is matrix multiplication,” says Soljacic.

Matrix multiplication is one of the key calculations involved in neural networks, and being able to speed it up will help create faster AI models. According to Shen, the optical AI accelerator can perform any matrix multiplication, regardless of the size, in one CPU clock, while electronic chips take at least a few hundred clocks to perform the same operation.
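As a minimal NumPy sketch (illustrative only, not Lightelligence code), the snippet below shows why matrix multiplication dominates neural-network inference: a fully connected layer is essentially one matrix product followed by a cheap element-wise activation.

```python
# Illustrative only: a fully connected layer is one matrix product plus an
# element-wise activation, so accelerating the product accelerates the model.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(784)           # e.g. a flattened 28x28 input image
W = rng.standard_normal((256, 784))    # layer weights
b = rng.standard_normal(256)           # layer bias

y = np.maximum(W @ x + b, 0.0)         # ReLU(W @ x + b): the matrix-vector product dominates
print(y.shape)                         # (256,)
```

Offloading that product to optical hardware is exactly the operation Shen describes taking a single clock instead of hundreds.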

“With neural networks, depending on the algorithm, there might be other components and operations involved. It depends on the AI algorithm and application, but we’ll be able to improve the performance by one to two orders of magnitude, ten to a hundred times faster,” Shen says.

The company tested the optical AI accelerator on MNIST, a dataset of handwritten digits used to benchmark the performance of machine learning algorithms. The hardware performed much faster than other state-of-the-art AI accelerator chips.
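The article does not publish the benchmark figures, but as a hedged sketch of how such a comparison is typically set up, the snippet below times per-image inference of a toy two-layer network on MNIST-sized inputs; the same harness could be pointed at any accelerator back-end.

```python
# A hedged benchmark harness (my own sketch, not Lightelligence's test setup):
# time per-image inference of a toy two-layer network on MNIST-sized inputs.
import time
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 256)), rng.standard_normal(256)
W2, b2 = rng.standard_normal((256, 10)), rng.standard_normal(10)

def tiny_mlp(x):
    """Two fully connected layers: the workload is almost entirely matrix multiplication."""
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden layer with ReLU
    return h @ W2 + b2                 # class scores

batch = rng.standard_normal((1000, 784))   # stand-in for 1,000 flattened 28x28 images

start = time.perf_counter()
_ = tiny_mlp(batch)
elapsed = time.perf_counter() - start
print(f"~{elapsed / len(batch) * 1e6:.1f} microseconds per image on this CPU")
```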

Soljacic explains that the proposition of optical neural networks was put forward decades ago, but at the time it didn’t get traction. However, the past few years have seen two important changes.

“First, neural networks became hugely important. That provides a large motivation to people to develop specialized hardware for neural networks which didn’t exist thirty years ago. And more importantly now finally we have the same fabrication that made electronics manufacturing so successful, the CMOS processing, also available for photonics,” Soljacic says.

This means that you can integrate thousands of optical devices on the same chip at the same cost that it would take to integrate electronic devices. “That is something we didn’t have five to ten years ago, and that is what is enabling the technology for all of this, the ability to mass produce at a very low cost,” Soljacic says.

Manufacturing optical chips is also less expensive than electronic devices, Shen adds. “For photonics, you don’t need a 7nm or a 3nm node to do it. We can use older and cheaper nodes to manufacture it,” he says.

Solving the latency problem of AI hardware


Why is it important to improve the speed of AI calculations? In many settings, neural networks must perform their tasks in a time-critical fashion. This is especially true at the edge, where AI models must respond to real-time changes to their environment.

One of the use cases where low-latency AI computation is critical is self-driving cars. Autonomous vehicles rely on neural networks to make sense of their environment, detect objects, find their way on roads and streets and avoid collisions.

“We humans can drive a car using only our eyes as a guide, purely vision-based, and we can drive easily at 90-100 mph. At this point, autonomous vehicles are not able to drive at that speed if they rely only on cameras. One of the main reasons is that the AI models that process video information are not as fast as needed,” Shen says. “Since we can decrease the time it takes to process data and run AI computations on the videos, we think we’ll be able to allow the cars to drive at a much faster speed. Potentially, we’ll be able to catch up with human performance.”
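A quick back-of-the-envelope calculation (my own, not Lightelligence's figures) shows why that latency matters at highway speed: the distance a car covers while the model is still processing the last camera frame.

```python
# Back-of-the-envelope arithmetic: distance travelled during one inference delay.
def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Metres travelled while the perception model processes one frame."""
    speed_m_per_s = speed_mph * 1609.34 / 3600.0
    return speed_m_per_s * (latency_ms / 1000.0)

for latency_ms in (100.0, 30.0, 5.0):
    d = distance_during_latency(90.0, latency_ms)
    print(f"At 90 mph, {latency_ms:>5.0f} ms of latency = {d:.1f} m travelled 'blind'")
```

At 90 mph, shaving inference latency from 100 ms to a few milliseconds cuts the distance driven "blind" from roughly four metres to a fraction of a metre per frame.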

To be clear, it takes more than super-fast neural networks to solve the challenges of self-driving cars in open environments. Namely, unlike human drivers, neural networks are very limited in their ability to deal with situations they haven’t been trained for. But being able to improve the speed of processing neural networks will certainly put the industry at a better position than it currently is.

There are plenty of other areas where low-latency neural network computations can become a game changer, especially where AI models are deployed in the physical world. Examples include drone operations and robotic surgery, both of which involve safety issues that need real-time response from the AI models that control the hardware.

“Low latency AI is critical at the edge, but also has applications in the cloud. Our earlier models are more likely to be deployed in the cloud, not only hyperscale data centers, but also the local data centers or local server rooms in buildings,” Shen says.

Servers that can run AI models with super-low latency will become more prominent as our network infrastructure evolves. Namely, the development of 5G networks will pave the way for plenty of real-time AI applications that perform their tasks at the edge but run their neural network computations in local servers. Augmented reality apps, transportation and medicine are some of the areas that stand to gain from super-fast AI servers.

Making AI calculations power-efficient


Current AI technologies are very electricity-hungry, a problem that is manifesting itself both in the cloud and at the edge. Cloud servers and data centers currently account for around 2 percent of power consumption in the U.S. According to some forecasts, data centers will consume one fifth of the world’s electricity by 2025.

A substantial part of the cloud’s power goes into neural network computation, giving the AI industry an environmental problem.

“People already upload billions of photos per day to the cloud, where AI algorithms scan these photos for pornography, violence and similar content. Today, it’s one billion photos. Tomorrow it’s going to be one billion movies. That part of the energy consumption and the cost of systems running on artificial intelligence is going to be growing very rapidly, and that’s what we’re going to help with,” Soljacic says.

Switching from electronic to optical hardware can reduce the energy consumption of AI models considerably. “The reason that electronic chips generate heat is that the electronic signals go through copper wires and cause heat loss. That’s where the major power costs are. In contrast, light doesn’t heat up things like electronics do,” Shen says.

“In addition to the heat loss through the interconnect, just thinking about traditional electronic circuits, a portion of the power consumption is just leakage. It produces no work other than heat. That could easily be a third of a chip’s power budget. There’s really no benefit from that. The photonic circuit has none of that,” says Maurice Steinman, VP of engineering at Lightelligence.

At the edge, optical AI computing can help take some of the burden off devices where weight is a constraint.

“For instance, if you want to add substantial artificial intelligence to a drone, then you’ll have to add a GPU that consumes 1 kWh and requires a huge and heavy battery,” Soljacic says.

“Right now, the power costs of the AI chips account for about 20 percent of the electricity consumption in self-driving cars. This increases the size of the batteries, which in turn increases the power consumption of the cars,” Shen adds.

Using optical AI accelerators will help reduce the power consumption and the weight of these devices.

The future of AI hardware accelerators

Optical computing is not the only technique that could address the hardware pain points of current AI models. Other technologies that might help improve the speed and efficiency of AI models are quantum computing and neuromorphic chips.

Quantum computers are still years away, but once they become a reality, they will change not only the AI industry but many other aspects of digital life, including finance and cybersecurity.

As for neuromorphic chips, they are computing devices that try to imitate the structure of the brain to specialize for AI tasks. Neuromorphic chips are slowly gaining traction.

It will be interesting to see which one of these trends will manage to clinch the main spot for the future of AI computing.