  • 1. Introduction about Shaping Futures Magazine, Nguyen Anh Tuan
  • 2. The AIWS 7-Layer Model to Build Next Generation Democracy
  • 3. AIWS Roundtable with Prof. Iyad Rahwan on AI Ethics and autonomous vehicles
  • 4. AIWS Roundtable December 12, 2017, discussion with Max Tegmark
  • 5. AIWS Roundtable in Tokyo Fostered AI for Good of Society
  • 6. Remarks of Governor Michael Dukakis honoring President Ilves
  • 7. Keynote speech of Estonian President Toomas Hendrik Ilves
  • 8. Talk of Joseph Nye at Global Cybersecurity Day, December 12, 2017
Global Cybersecurity Day
Shaping Futures Magazine to Focus on
Artificial Intelligence and Cybersecurity

Michael Dukakis
Publisher

The Michael Dukakis Institute for Leadership and Innovation (MDI) has today launched Shaping Futures magazine. The new magazine will cover Artificial Intelligence and cybersecurity innovations, initiatives, and solutions, with a focus on creating a better world for all mankind through the ethical implementation and expansion of technology.

Print and digital copies of the magazine will be published twice annually, with articles compiled from special events, editorial content created by members and delegates participating in Michael Dukakis Institute events and symposia, and whitepapers and think-pieces.

As Editor-in-Chief of Shaping Futures, CEO of Boston Global Forum and co-founder of Artificial Intelligence World Society (AIWS), I encourage additional editorial contributions with the hope of generating great ideas in AI and cybersecurity that will positively impact world societies.

For this first issue of Shaping Futures, our editorial team has introduced the Artificial Intelligence World Society (AIWS) initiatives and some of the many achievements of Boston Global Forum (BGF) and MDI as we continue to work with top thinkers and leaders in AI and cybersecurity to create the best possible future in a world that increasingly relies on technology.

By way of background, AIWS is a collection of norms, ideas, and models regarding AI and how it will affect society. Founded on November 22, 2017, AIWS has since grown to include some of the top AI leaders, thinkers, and developers from around the world. We are working with representatives of MIT, Google, Hitachi, Harvard, Intel, and others to build the initiatives. AIWS also maintains a close working partnership with the World Leadership Alliance – Club de Madrid as it formulates its Next Generation Democracy initiative. MDI and BGF will partner with all of these institutions to create a model for AI and politics: the AIWS 7-Layer Model to Build Next Generation Democracy.

Also, 2018 marks the third annual BGF-G7 Summit Conference, assembled in collaboration with the host nation of the G7 Summit, which will meet in Charlevoix, Quebec, Canada, this year. We are honored to work with representatives of the Canadian government as we focus our energy and resources on AI and the AIWS 7-Layer Model to Build Next Generation Democracy. MDI will also announce our first-ever World Leader in Artificial Intelligence Award on April 25, to be presented to honoree Angel Gurria, Secretary General of the OECD, who has shown commendable leadership and forward thinking in AI.

As human beings, we must take action today to establish a better world with AI, as our awareness and perceptions change through interactions with AI citizens.

Tuan Anh Nguyen
Editor-in-Chief

The AIWS 7-Layer Model to Build
Next Generation Democracy

Michael Dukakis, Nazli Choucri, Allan Cytryn, Thomas Patterson,
Tuan Anh Nguyen, Derek Reveron, David Silbersweig

On November 22, 2017, the Michael Dukakis Institute for Leadership and Innovation established the Artificial Intelligence World Society (AIWS), which was then officially announced at the Cybersecurity Day Conference on December 12, 2017. AIWS is a loosely-structured cooperative, bringing together policymakers, technologists, and business leaders to consider concepts, norms, and standards for artificial intelligence (A.I.).

The goal of AIWS is both to minimize harmful or malicious uses of A.I. and to consider the ways it can be used to benefit society. To that end, AIWS has established a number of beneficial partnerships and encourages others to join this new initiative.

The Artificial Intelligence World Society (AIWS) is a set of values, ideas, concepts and protocols for standards and norms whose goal is to advance the peaceful development of AI to improve the quality of life for all humanity. It was conceived by the Michael Dukakis Institute for Leadership and Innovation (MDI) and established on November 22, 2017. The World Leadership Alliance – Club de Madrid (WLA-CdM) and the Boston Global Forum (BGF) are partnered with the MDI to collaborate and develop the AIWS initiative. The President of WLA-CdM, Vaira Vike-Freiberga, serves as co-chair of AIWS activities and conferences along with Governor Michael Dukakis.

The Next Generation Democracy (NGD) is an initiative founded by WLA-CdM with the goal of “enabling democracy to meet the expectations and needs of all citizens and preserve their freedom and dignity while securing a sustainable future.” NGD is a collaboration and forum, coordinated by WLA-CdM. AIWS has partnered with WLA-CdM to promote the development of AI to support the Next Generation Democracy initiative.

To align the development of AI with the NGD initiative, the AIWS has developed the AIWS 7-Layer Model. This model establishes a set of responsible norms and best practices for the development, management, and uses of AI so that this technology is safe, humanistic, and beneficial to society.

In developing the 7-Layer Model, the AIWS recognizes that we live in a chaotic world with differing, and sometimes conflicting, goals, values, and concepts of norms. Hence, the Model is aspirational and even idealistic. Nonetheless, it provides a baseline for guiding AI development to ensure positive outcomes and to reduce the pervasive and realistic risks, and the related harms, that AI could pose to humanity.

The Model is based on the assumption that humans are ultimately accountable for the development and use of AI, and must therefore preserve that accountability. Hence, it stresses transparency of AI reasoning, applications, and decision-making, which will lead to auditability and validation of the uses of AI systems.

Layer 1: Charter and Principles: To create a society of AI for a better world and to ensure peace, security, and prosperity

AI “society” is the society consisting of all objects that have the characteristics of Artificial Intelligence. Any object in this society is an AI Citizen. There must be rules that govern the behaviors of these AI Citizens, as there are rules that govern human members of society. The standards and requirements for an AI citizen must also include the need to manage and supervise them. AI citizens are to be transparent in structure and process, and all are to meet AIWS Standards of AI citizenship.

• AI Citizens cannot threaten or put at risk the health, safety, dignity, and freedom of any human.
• AI Citizens cannot take actions which violate the law and social norms of the societies in which they are deployed.
• The design and performance of an AI Citizen must be sufficiently transparent so as to expose its behavior and ensure that its behavior will not imperil in any way other AI Citizens or humans, nor violate the law and social norms of the societies in which they are deployed.
• The performance of AI Citizens must meet basic standards of auditability and be subject to regular audits to facilitate compliance with the above.

Layer 1 establishes a responsible code of conduct for AI Citizens to ensure that AI is safely integrated into human society.

Layer 2: Ethical Frameworks: Guidelines for the Role of AI in Building the Next Generation Democracy

The behavior of AI Citizens must be ethical by normal human and social standards. It must conform to the ethics codes of UNESCO and of the United Nations as a whole. To be considered ethical, such behaviors must be:

- Honest, open, and transparent.

- People-centric: for the people, by the people, serving the people.

- Respectful of the dignity of humans, their privacy, and the natural environment.

- Deployed in the service of individuals, groups, and governments that are themselves ethical.

- Promoting and fostering tolerance.

Layer 2 is based on the ethics codes of the UN and UNESCO. Therefore, AI citizens must, first and foremost, respect human dignity, virtue, and ethics. The ethics layer will also draw on best practices and ethics codes of top businesses and organizations involved in AI research, such as IBM, IEEE, the Berkman Center, and the MIT Media Lab.

Layer 3: Standards: Standards for the Management of AI Resources and Development

Establish the AIWS Standards and convene the Standards and Practice Committee to develop, manage, and promote standards and all critical requisites of an AI citizen.

The Committee will engage with governments, corporations, universities, and other relevant organizations to facilitate understanding of AI threats, challenges, and related issues. These entities are ultimately responsible for achieving ethical AI.

Layer 3 focuses primarily on AI development and resources, including data governance, accountability, development standards, and the responsibility of all practitioners involved directly or indirectly in creating AI.

Layer 4: Laws and Legislation: Laws for the Role of AI in Building Next Generation Democracy.

Advise political leaders in crafting the best possible rules, regulations, and legislation regarding AI technologies.

This layer will follow and apply Layers 1, 2, and 3, transforming them into legal and legislative concepts.

Layer 4 focuses on policies, laws and legislation, nationally and internationally, that govern the creation and use of AI and which are necessary to ensure that AI is never used for malicious purposes.

There is great danger in AI development devoid of appropriate ethical and regulatory frameworks. Public and private entities are already considering ways to regulate AI. Regardless of what they may accomplish, there is a further risk that the rate at which AI advances will outpace the continuing development of these frameworks. The goal of this layer is to guide leaders in these endeavors so that their work is effective and timely.

Layer 5: International Policies, Conventions, and Norms: Global Consensus

To be effective, the development of AI in support of humankind depends on a global consensus. International conventions, regulations, and agreements for AI development in support of Next Generation Democracy are therefore essential for the success of AIWS.

This layer will promote the adoption of the AIWS ethics, standards, and legislative proposals consistent with, and integrated in, international law through conventions, regulations, treaties, and agreements.

Layer 5 will focus on the global application and diffusion of AIWS-established norms and concepts. The responsible development and use of AI depends on acceptance by the global community. If even one state or actor uses AI irresponsibly or maliciously, the threat it could pose would be significant and cannot be accepted.

AIWS also calls upon the leaders of all G7 nations to sign an agreement on the Ethical Development and Deployment of AI. Such an agreement would prohibit the development of autonomous AI weapons and mandate that AI be developed only for peaceful purposes. The threat posed by a potential AI arms race is alarming, and states must take action now to prevent such a possibility.

Layer 6: Public Services and Policymaking: Engage and Assist Political Leaders

AI can itself be used to aid in achieving the legislative and policy goals that promote its peaceful and constructive use. It can assist political leaders in effective and practical decision-making by providing AI-based evaluations, data, and suggestions to solve social and political issues. This will ensure that all parties are informed and make the best decisions possible.

This layer will help to shape applications for Next Generation Democracy.

Layer 6 emphasizes the role AI should play in providing analysis and data to inform political leaders. While AI per se cannot perform the functions of leadership, it will prove an invaluable asset by providing assistance for human leaders. Examples of current AI projects for policymaking include SAM, the world’s first AI politician, created and operating in New Zealand, and GROW360 in Japan.

Layer 7: Business Applications for All of Society: Engage and Assist Businesses

As AI is deployed for use by businesses, industry, and private citizens, it is essential that AI technologies remain benevolent and free from the risks of misuse, error, or loss of control.

Therefore it is imperative to work with the private sector in developing best practices for the applications of AI in society.

Layer 7 emphasizes the applications of AI and the services it can (and does) provide to citizens. AI is already being sold to, or tested for, consumer use in a variety of sectors. This includes fully autonomous vehicles, smart home assistants (e.g., Alexa and Google Home), and others. It also includes more subtle uses in social media, aviation, and other large sectors. With AI becoming more integrated in the lives of the average citizen, the technology will increasingly change our society.

Through Layer 7, and the Model as a whole, AIWS hopes to ensure that inviting AI into our lives will have positive effects.

Plan 2020 and the Future:

By February 20, 2020, all seven layers of the Model will be completed. Future applications and users of the final Model will include the AIWS International Court, AIWS University, AIWS Healthcare, AIWS Public Transportation, AIWS Policy Makers, and AIWS Political Leaders. These applications will ensure that AIWS norms for AI are adopted broadly and responsibly.

Next Steps: Actions to be Taken

The AIWS Standards and Practice Committee is established to:

• Update and collect information on threats and potential harm posed by AI
• Connect companies, universities, and governments to find ways to prevent threats and potential harm.
• Engage in the audit of behaviors and decisions in the creation of AI
• Create both an Index and a Report on AI threats, and identify the sources of those threats.
• Create a Report on respect for, and application of, ethics codes and standards of governments, companies, universities, individuals and all others.
• Work with the UN to call for an AI Peace Treaty, similar to the Chemical Weapons Convention that prohibits the creation, stockpiling, and use of those weapons
• Work with AI experts on a consensus announcement that “AI experts will not engage in any work for, or participate in, projects developing AI weapons”

AIWS Roundtable with
Prof. Iyad Rahwan on AI Ethics and
autonomous vehicles

Governor Michael Dukakis recently held a discussion on AI ethics and autonomous vehicles with Prof. Iyad Rahwan, associate professor of Media Arts & Sciences at the MIT Media Lab and a fellow of the Michael Dukakis Leadership Fellow Program.

Prof. Rahwan expressed his concerns and shares the common goals of the Artificial Intelligence World Society (AIWS), which aims to resolve AI-related ethics problems. Prof. Rahwan’s pioneering work on autonomous vehicles has underscored the ethical questions that arise from their development. His 2016 research showed that while people are in favor of utilitarian autonomous vehicles, which minimize harm in the case of an unavoidable crash, they want others to purchase them while preferring to ride in autonomous vehicles that protect their passengers at all costs. They also said they would not use self-driving vehicles unless required to by law.

He underscored the difference between early society and today, noting that today we have machines that can learn through experiences and have minds of their own. This should require us to have certificates or comprehensive standards to regulate and control unpredictable disruptions stemming from Artificial Intelligence. Accordingly, Prof. Rahwan emphasized the importance of articulating ethics and social contracts that machines can understand as we pursue new governance algorithms.

Prof. Rahwan also discussed the two major threats posed by AI to society: the proliferation of information and the substantial change in the calculus of war brought about by autonomous weapons. Given that context, AIWS would offer special benefits in managing the demands of modern life. Responding to Prof. Rahwan’s questions about changes in today’s politics, Governor Dukakis, who has devoted his life to politics, noted that the increase of public information from new sources and new advancements such as the Internet, AI, and algorithms has caused growing public doubt: propaganda distorts the truth, and people lose faith in informational institutions and governance. Gov. Dukakis agreed that what Prof. Rahwan called the media “bubble” information problem is creating “unbelievable diversity of pressure on communications.”

The discussion between Gov. Dukakis and Prof. Rahwan centered largely on current issues and the need for standards and norms to advance the peaceful development of AI—a view shared by AIWS, the BGF, the MDI and this brilliant young researcher.

Prof. Rahwan is the director and principal investigator of the Scalable Cooperation Group, investigating what lies at the intersection of the computer and social sciences. He has investigated computational social science, collective intelligence, large-scale cooperation, and the social aspects of artificial intelligence. He is highly regarded for having led a winning team in the US State Department's Tag Challenge, and his work has appeared in major academic journals, including Science and PNAS. He is regularly featured in major media outlets such as The New York Times, The Economist, and The Wall Street Journal. For his outstanding achievements and relentless pursuit of global peace and security, he was selected to be a member of the Michael Dukakis Leadership Fellow Program in 2017-2018. The Fellowship program was established as an effort of the Boston Global Forum (BGF) and the Michael Dukakis Institute (MDI) to enrich the participants’ leadership competency and integrity in initiating solutions to global problems; engage youth in the promotion of peace and security in the world; provide opportunities for self-development; and facilitate dialogue among young high-profile leaders and policymakers around the world.

AIWS Roundtable December 12, 2017,
Discussion with Max Tegmark

Author of Life 3.0, on Cybersecurity Day 2017

On December 12, 2017, the AIWS Round Table convened following the 2017 Cybersecurity Day Conference and a presentation by Professor Max Tegmark, co-founder of the Future of Life Institute. Participants discussed both Max Tegmark’s presentation and their concerns about artificial intelligence. Most were in consensus that AI needs to have a clear set of goals and ethics, and that the technology will likely affect jobs and democracy as we know them. Read the transcript to see the sort of topics AIWS discusses.

Among the participants at the meeting were:
- Governor Michael Dukakis, former Governor of Massachusetts and Chairman of BGF
- Tuan Nguyen, Co-Founder and Director of AIWS and CEO of Boston Global Forum
- President Toomas Hendrik Ilves, former President of Estonia and 2017 World Leader in Cybersecurity
- Max Tegmark, MIT Professor and co-founder of the Future of Life Institute
- Dr. David Silbersweig, Chairman of the Department of Psychiatry, Brigham and Women’s Hospital
- John Savage, Brown University Professor
- Patrick Winston, MIT Professor
- Henry Truong, CTO at Teletech
- Bill Ottman, CEO of Minds.com
- Ronald Sandler, Northeastern Professor and Director of the Ethics Institute
- Barry Nolan, U.S. Congressional Advisor
- Alan Cytryn, Principal, Risk Masters International
- Ian Goodfellow, Google researcher and 2017 MDI Leadership Fellow

Max Tegmark, author of the bestseller Life 3.0, gave a presentation at the AIWS Round Table on Cybersecurity Day 2017. In his talk, he shared his optimism for the future of artificial intelligence and the “wisdom race” between advancing the technology and setting standards for using it wisely and ethically. “I’m optimistic that we can create an inspiring future with tech, as long as we win this race to meet the growing power of the technology with the growing wisdom with which we manage it,” he said.

Mr. Tegmark also spoke about his own organization, The Future of Life Institute, and how it was founded. At a conference last year, experts met and created the 23 Asilomar A.I. Principles, which include ensuring that A.I. technology is used beneficially and distributed evenly. He believes that A.I. will advance considerably in coming decades, and gave the examples of AlphaGo and AlphaZero. Max Tegmark has partnered with AIWS and has contributed to our Ethical Framework for AI, which will be announced this coming April at our 2018 BGF-G7 Summit Conference.

(Max Tegmark’s presentation was given on December 12, 2017 following the 2017 Cybersecurity Day Conference)

December 12, 2017 at Harvard University

It’s a great honor and pleasure to get to be here. You’ve heard about possible negative things when you’ve spoken of cybersecurity earlier today, so I want to inject some more optimism in this. You’ll notice that this book is not called “Doom and Gloom 3.0.” I think it’s important, also, to remember all the wonderful upsides technology can bring if we do it right.

The second theme here is that, even though I spent the last week in California at the NIPS Conference on technical AI - which has 8,000 people now and keeps doubling basically every year – I think it’s very important to broaden this conversation beyond nerds like myself, because the question of how we use this beneficially for the future is a conversation everyone has to contribute in. It’s particularly important for people who have knowledge in policymaking. So, let’s start on an optimistic note with this example of technology.

And, of course, the guy who had the bold vision to make it happen, JFK, also has great Boston connections. So, this was not only a successful mission, but also an inspiring one, because it shows that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But, as my Estonian friend Jaan Tallinn likes to point out, rocketry is also a great metaphor for how we use all of technology, because it’s not enough to simply make technology powerful. You also have to steer it, and you have to figure out where you want to go. They didn’t first launch the rocket and only then ask: “Wait, how do we steer this? Now we have these great engines, where do we go, maybe to Mars?”

So, in this spirit, I want to talk just for a little bit about another journey empowered by something much more powerful than rocket engines, where the passengers aren’t just three astronauts but all of humanity. So, let’s talk about our collective journey into the future with artificial intelligence, the most powerful technology that we will ever have seen. In the grand scheme of things, if we take a step back, technology is incredibly empowering. You know, we’re still stuck on an imperceptibly small little speck here in a seemingly lifeless and dead universe, yet technology offers us the opportunity for life to flourish, not just for the next election cycle on our little planet but for billions of years. And there’s no shortage of resources, either, though we have to stop quibbling.

So, let’s, in the spirit of Jaan Tallinn, talk first about the growing power of AI, and then a bit about steering it, and then a bit about where we want to go with it. As you all know, the power of AI is improving dramatically. It’s an honor to have one of the early AI pioneers, Patrick Winston, here in the room – and he can be the first to tell you that in past accomplishments, like when we got overthrown in chess by IBM Deep Blue, the intelligence of the machine was largely programmed in by humans who knew how to play chess. It beat him simply because it could think fast and remember more. But, in contrast, AlphaGo and the new AlphaZero that was just announced a few days ago – they dethroned us in Go – took 3,000 years of human Go games, millions of them, along with Go books and poems and wisdom, and threw them all in the trash. It just learned from scratch in a day and was able to blow away everything.

The same software now also beats the world’s best – not just the world’s best chess players, but also all the world’s best computer chess programs. So AlphaZero has shown that in chess the big deal isn’t that it can blow away human players, but that it can blow away the human AI programmers who spent over thirty years making all this chess software. In just four hours, it got better than all of them and played against Stockfish, the world’s best chess program. It played 100 games, didn’t lose a single one, and even discovered some pretty profound things. Raise your hand if you’ve ever played chess. So, the Sicilian Opening? Forget about it. Turns out it’s no good. The English Defense – that’s actually the one it uses the most. So, that’s just one example to give a flavor of simple reinforcement learning at work.

You can just look at a computer learning to play a computer game. You can see it sucks badly here; it misses the ball almost all the time because it doesn’t know what a ball is, or what a paddle is, or what a game is. But, just by a very, very simple reinforcement learning algorithm loosely inspired by our brain, it gradually gets better and pretty soon gets to the point where it won’t miss the ball. It does better than I do, and then, if you just keep training it – this is now such a simple thing, I can train it up in my lab at MIT on a GPU very quickly – it discovers something that the people at Google DeepMind didn’t know about, which is that if you drill a hole into the side of the screen, you can just start raking up the points. After discovering this, it of course exploits it again and again and again. And it’s not just computer games. There’s been a lot of progress now at taking robots, or simulated robots, and seeing if they can learn to walk from scratch. They can, even if you don’t ever show them any videos of what walking looks like and there is no human intervention. You just make it into a game, again, where they get a point, basically, whenever they move one centimeter to the right. It works for bipedal life forms, quadrupeds, all sorts of things.
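The reward loop Tegmark describes, a point whenever the agent moves one centimeter to the right, can be sketched as tabular Q-learning on a toy one-dimensional task. Everything here (the state count, learning rate, and other parameters) is an illustrative assumption, not the deep-learning setup DeepMind actually used:

```python
import random

# Toy version of the "one point per centimeter to the right" game:
# states are positions 0..10, actions are move left / move right.
N_STATES = 11
ACTIONS = [-1, +1]  # index 0 = left, index 1 = right

def train(episodes=500, alpha=0.5, gamma=0.5, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        s = 0
        for _ in range(20):  # steps per episode
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 > s else 0.0  # reward only for moving right
            # standard Q-learning update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# The learned greedy policy for each non-terminal state.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy prefers "right" in every state, which is the one-dimensional analogue of the walking robots learning to locomote from a reward signal alone.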

This kind of rapid progress obviously begs the question: How far is this progress going to continue? I like to think about it in terms of a landscape of different tasks, where the elevation represents how difficult it is for machines to do them and the sea level represents how good machines are at doing them today. Chess, arithmetic, and so on have of course long been submerged. The sea level is rising, so we have a kind of global warming going on in this abstract task space. And the obvious question is: What’s going to happen eventually? In the short term, obviously, we should not advise our near and dear to take jobs right on the waterfront, because those will be the first to be automated away. But will eventually everything get flooded? Or will AI ultimately fail to live up to its original ambition to do everything? This is a controversy with top AI researchers on both sides of it, but most AI researchers in recent polls think that AI will succeed, maybe in a few decades. Opinions are very divided. As long as there’s the possibility, it’s very valuable to talk in the broader society about, well, what does that mean? And how can we make sure this becomes the best thing ever, rather than the worst thing ever?

Which brings us to the second part of Jaan Tallinn’s metaphor. How do we steer this technology in a good direction? How can we control it? So, in addition to my day job working with Patrick and others at MIT, I founded this little nonprofit, the Future of Life Institute, with my Estonian co-founder Jaan Tallinn, and we are very fortunate to have a lot of great people. Our mission has the word “steer” in it. We are optimistic that we can create an exciting future – an inspiring future – with technology, as long as we figure out the steering part. Specifically, I’m optimistic that we can create an inspiring future with tech as long as we win this race to meet the growing power of this technology with the growing wisdom with which we manage it. And here, I think, is a really big challenge for policymaking and for society as a whole because, to win this wisdom race, I feel we need a strategy change.

In the past, we always stayed ahead in the wisdom race by learning from mistakes. Fire – “oopsie” – we screwed up a bunch of times and then we invented the fire extinguisher. We invented the automobile, screwed up a bunch of times, and invented the seatbelt, the airbag, the traffic light – and that works pretty well, right? But as technology gets ever more powerful, at some point we reach a threshold where it’s so powerful that we don’t want to learn from mistakes any more – where that becomes a bad idea. We want to be proactive, plan ahead, and get things right the first time, because that might be the only time we have.

Sometimes people will tell me: “Oh, Max, shut up. Don’t talk like this. This is luddite scare-mongering.” I say to them: “No, this isn’t scare-mongering. This is what we over at MIT call ‘safety engineering.’” Think about the Apollo 11 moon launch we just watched. NASA systematically thought through everything that could possibly go wrong with this. Was that luddite scare-mongering? No, that was exactly the safety engineering that guaranteed success of the mission. So that’s what I’m advocating here, as well. We should have a ‘red team’ approach where we think through things that can go wrong with these technologies, precisely so that we as a society can make sure they go right instead.

With the Future of Life Institute, we’ve organized conferences where we’ve brought together world leaders in AI – from both industry and academia – to talk specifically about this. Not about how to make AI more powerful, which there are plenty of conferences about already, but about how to make it beneficial. And the outcome of the last one we held in Asilomar, California this year was the 23 Asilomar AI Principles – which you can find on the website – which have now been signed by AI researchers from around the world. It’s a sort of amazing list of people. You have the CEO of Google DeepMind, responsible for those videos I showed you here. Ilya Sutskever from OpenAI. Yann LeCun from Facebook. Microsoft, IBM, Apple, and so on. Google Brain and academics from around the world.

I want to spend the last couple of minutes just highlighting a few of these principles, which I hope can stimulate more discussion. So how do we win this wisdom race? One of these principles is about making sure that AI is primarily used for new ways of helping people, rather than ways of harming people. There is a good precedent here. Any science, of course, can be used for new ways of helping people or new ways of harming people. Science itself is completely morally neutral. It’s an amplifier of our human power.

Today, if you look at people who graduate from Harvard, for example, with biology and chemistry degrees, pretty much all of them will be going into biotech and other positive applications of their science, rather than building bioweapons. It didn't have to be that way. But biologists pushed very, very hard – it was, in fact, a Harvard biologist who persuaded Henry Kissinger, who persuaded Richard Nixon – for an international treaty limiting biological weapons, which created a huge stigma against bioweapons. The same thing has happened with chemistry. AI researchers are quite united in wanting to do the same thing with AI and lethal autonomous weapons, but this is very touch and go. There was a meeting at the United Nations in Geneva a few weeks ago, which was a bit of a flop. And this has nothing to do with superintelligence or human-level AI; it is something that can happen right now, just by integrating the technologies we already have and mass-producing them.

A second principle, on which there was huge consensus, is that the great wealth that can obviously be created if machines make ever more of our goods and services should be distributed widely, so that it actually makes everybody better off and people can look forward to a future more like this than like that.

A third principle is that we – and by "we" I also mean governments – should invest heavily in AI safety research. So, raise your hand if your computer has ever crashed. [Many hands raise.] And you spoke about cybersecurity this morning, so you're very well aware that if we can't get our act together on these issues, then all the wonderful technology we build can cause problems, either by malfunctioning or by actually getting hacked and turned against us. I feel we need significantly more investment here – not just in near-term things like cybersecurity, but also, as machines get ever more capable, in the question: "How can you make machines really understand human goals, really adopt human goals, and guarantee that they will retain those goals as they get ever more capable?" Right now, there's almost no funding for these sorts of questions from government agencies. We gave out around 37 grants with the help of Elon Musk to kickstart this. There were a number of sessions at the NIPS Conference where it was clear that researchers wanted to work on this, but they have to pay their grad students. There is a real opportunity to invest in this aspect of the wisdom race.

What I’m saying is, to win the wisdom race – to win any race, right – there are two strategies. If you want to win the Boston Marathon, you can either slow down the competition by serving them two-week-old shrimp the night before, or you can try to run faster yourself. I think the way to do this is not to try to slow down the development of AI - I think that’s both unrealistic and undesirable – but rather to invest in these complementary questions of how you can make sure it gets used wisely.

Last, but not least: when you launch a rocket, you think through in advance where you want to go with it. We are so focused on tomorrow, on the next election cycle and the next product cycle we can launch with AI, and we have a tendency to fall in love with technology just because it's cool. If we are on the cusp of creating something so powerful that maybe one day it can do all our jobs, maybe even be thought of as a new life form – and at the minimum utterly transform our society – then we should look a little bit farther than the next election cycle. We should ask, "What kind of future are we trying to create?"

I often get students walking into my office at MIT for career advice, and I always ask them, "Where do you want to be in ten years?" If all a student can say is, "Uh, maybe I'll be murdered, and maybe I'll get cancer," that's a terrible approach to career planning, right? But that is exactly what we're doing as a species when we think about the future of AI. Every time we go to the movies, there's some new doomsday scenario – oh, it's Terminator, oh, it's Blade Runner, this dystopia, that dystopia – which leaves us paralyzed with fear. It's crucial, I feel, that we form positive visions – shared positive visions – that we can aspire to. Because if we can, we're much more likely to achieve them. Thank you.

AIWS Roundtable in Tokyo Fostered
AI for Good of Society

On April 2, 2018, Michael Dukakis Institute (MDI) and Boston Global Forum (BGF) hosted a Roundtable in Tokyo with the participation of Japan’s top Artificial Intelligence (AI) researchers and thinkers. Among the many ideas covered during the program were the Artificial Intelligence World Society’s (AIWS) 7-layer model for AI, cyber security, the proliferation of fake news, and increased technological competition with China.

In the opening speech, Mr. Tuan Nguyen, CEO of Boston Global Forum and Co-founder of AIWS, explained that the AIWS 7-Layer Model is built to provide a solid foundation for the development of AI applied in all walks of life, as well as to mitigate the risks and damage that AI may cause to mankind. The seven layers cited at the Roundtable are: Charter and Principles; Ethical Frameworks; Standards; Laws and Legislation; International Policies, Conventions, and Norms; Public Services and Policymaking; and, finally, Business Applications for All of Society.

The AIWS 7-Layer Model was first presented at the Roundtable of the World Leadership Alliance – Club de Madrid (the world's largest forum of democratically elected former presidents and prime ministers) in San Francisco in February 2018, with the participation of former presidents and prime ministers, representatives from Facebook, Microsoft, and other technology groups, and professors from Stanford, Harvard, and other institutions of higher learning. The Model was then developed and applied by the Michael Dukakis Institute to build a new AI-based politics, with WLA President Vaira Vike-Freiberga and Governor Michael Dukakis co-chairing AIWS activities and conferences.

At the AIWS Roundtable in Tokyo, top Hitachi engineer Dr. Kazuo Yano gave the keynote address on how AI will change the entire society. Dr. Yano, Chief of Hitachi's R&D Group, is known as a pioneer of AI technology in Japan and has worked with AI systems for nearly 15 years.

During his speech, he called for a switch from standardization to experimentation, which will foster AI's ability to enhance the adaptability of systems and businesses. To underscore his point, he showed the group images of an AI system learning to swing, much as a human would on a swing set. While the arc starts off haphazardly, it quickly becomes steadier, faster, and higher as knowledge, experience, and daring increase. AI needs to "keep experimenting," said Dr. Yano, adding, "we must endlessly experiment and learn." Whereas many current models favor standardization and rigid rules, he believes that what's needed now are diversity and testing.

Instead of replacing labor, AI really threatens to replace rules, noted Dr. Yano. The key to successful AI policy and paradigms is an "outcome-oriented" approach instead of a "rule-driven" one, which he believes is embraced by Layer 3 of the AIWS Model – Standards for the Management of AI Resources and Development.

Also at the Roundtable, Ambassador Shunji Yanai, Judge of the International Tribunal for the Law of the Sea and former Japanese Ambassador to the United States, congratulated AIWS for introducing AI politics. He affirmed that if the AIWS model is completed, AI will contribute to the health of world politics, and he joined the consultants working to build an AI international court for AIWS.

Professor Koichi Hamada, special adviser to Prime Minister Shinzo Abe of Japan, who helped build the "Abenomics" doctrine, said Japan should be at the forefront of building an AI economy and told the participants that he is willing to work with AI innovators to create AI business strategies for Japan in the 21st century. Prof. Hamada is both an economist and a deeply perceptive creator of music for peace; he has said AI can help create special cultural and artistic works, and that Japan should encourage an AI art culture as well.

Waichi Sekiguchi, a Nikkei Shimbun journalist with 25 years' experience in IT, shared his concern over the development of AI in Japan. Compared with big players such as the United States and China, or even France, he believes Japan is five or six years behind in AI, and he suggested that the Japanese government should pay more attention to utilizing research and, at the same time, spend more money in this field.

CEOs from Japanese technology companies and start-ups attended the Roundtable, including Masahiro Fukuhara, CEO and founder of the Institution for a Global Society (IGS); Shunsuki Aoki, CEO of Yukai Engineering; Kei Yamamoto, CEO of D-Ocean, Inc.; and Satoshi Amagai, CEO of Mofiria Corporation.

Fukuhara described his work applying AI in education and human resources. As the innovator of GROW360, an AI and big-data-driven HR solution, he also expressed concern over the notion of AI politics and the establishment of an AI international court for AIWS.

The AIWS was created by the Michael Dukakis Institute for Leadership and Innovation on November 22, 2017 as a way to build a social model that will make Artificial Intelligence safe, trustworthy, transparent, and humanistic. AIWS, as defined in the Boston Global Forum – G7 Summit Report 2018, is a set of values, ideas, concepts, and protocols for standards and norms whose goal is to advance the peaceful development of AI to improve the quality of life for all humanity.

AIWS has made rapid progress since the first AIWS Roundtable in December of last year at Harvard University. The Tokyo Roundtable, attended by many innovators and thinkers on the development of AI in Japan, brought out the positive influences of AI as well as the dangers AI can present in modern life. Building on these valuable ideas and contributions, the Michael Dukakis Institute and Boston Global Forum have committed to building the AIWS 7-Layer Model to ensure the safe development and implementation of AI in society.

The conclusions of the Tokyo Roundtable are in concert with the BGF-G7 Summit Conference on April 25, 2018, during which the first two layers of AIWS – Charter and Principles (Layer 1) and Ethical Frameworks (Layer 2) – were discussed and presented to the Government of Canada, president of the G7 Summit in 2018, for inclusion in the forthcoming gathering of the world's largest democratic economies in Quebec Province.

Remarks of Governor Michael
Dukakis honors President Ilves

In his remarks, Governor Dukakis, Chairman of the Boston Global Forum, honored former Estonian President Toomas Hendrik Ilves as the 2017 recipient of the World Leader in Cybersecurity Award. Under his leadership, Estonia built a model system of e-governance and online security, earning the country two nicknames: "Digital Leader of Europe" and the "E-Republic." The award was presented to President Ilves by Governor Dukakis and Tuan Nguyen, Director of AIWS, on December 12, 2017 at Harvard University.

Remarks of Governor Michael Dukakis to honor President of Estonia Toomas Hendrik Ilves

I am pleased to announce the recipient of this year's World Leader in Cybersecurity Award: Toomas Hendrik Ilves, two-term President of Estonia.

The World Leader in Cybersecurity Award is given annually to an individual who has contributed significantly to the advancement of cybersecurity.

President Ilves has worked tirelessly to make cyber issues a priority not only in Estonia but also in Europe.

He was president of Estonia in 2007 when it was the target of one of the first massive cyberattacks. Banks, media outlets, and government agencies were inoperative, some for days. Since then he has led efforts in Estonia and Europe to protect against cyberattacks and expand digital capacity.

Through his leadership, Estonia now ranks among the top nations globally in the cyber arena. Forbes magazine recently declared Estonia "The Digital Leader of Europe."

President Ilves has chaired the EU Task Force on eHealth and the European Cloud Partnership Steering Board. He has also chaired the High-Level Panel on Global Internet Cooperation and Governance Mechanisms convened by ICANN. He co-chaired the advisory panel of the World Bank's World Development Report 2016, "Digital Dividends," and chaired the World Economic Forum's Global Agenda Council on Cybersecurity.

He currently co-chairs the World Economic Forum's Global Futures Council on Blockchain Technology.

These and other efforts mark Toomas Hendrik Ilves as a world leader in advancing the cause of cybersecurity. The Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation are honored to present him with the World Leader in Cybersecurity Award.

Keynote speech of Estonian
President Toomas Hendrik Ilves

Former Estonian President Toomas Hendrik Ilves was awarded the title of World Leader in Cybersecurity by the Boston Global Forum on December 12, 2017. In his acceptance speech he discussed how digital technology has drastically changed our world in the last quarter-century, but that laws and regulations have so far been unable to keep up with these advances. The internet has changed our financial systems, electoral processes, and infrastructure.

President Ilves described the digitization of Estonia during his tenure. It began with the creation of an online identification system, which established a "strong digital identity" that the rest of the system is based on. That system is called X-Road, a decentralized digital network that allows government and citizens to interact in a more secure, integrated way. Estonians can vote, file taxes, attend university, and more online. Foreign nationals can even apply for e-residency. Estonia's system of e-governance is being adopted by other states as well. Read his speech to learn more about the transformation of Estonia into the "E-Republic" from the man who led it.

(This speech was given on December 12, 2017 at Loeb House, Harvard University for the 2017 Cybersecurity Day Conference)


Anyway, let me get to my talk. I am going to start talking about security from the ground up, because security, at least in all post-Enlightenment democracies, is based on John Locke's model: the individual gives up his Hobbesian right to kill someone else to the state in return for security, be it from your local police, your national security agencies, or, internationally, the army. And what we have done in Estonia is to put the state at the center of security. At the same time, lest you think we are heavy-handed about it, European governments are probably far less intrusive in people's lives than in the United States. But more broadly, I think we have to rethink this.

Most aspects of our lives now play out in the digital age. Ever since William Gibson, in his dystopian novel "Neuromancer," took Norbert Wiener's term "cybernetics" and popularized the prefix "cyber," that prefix has proliferated into almost all spheres of human activity, which I think is an indication of how much the digital world has permeated our lives. So we have cyberpunk, cybercrime, cyber hygiene, cyberspace, cyber Pearl Harbor, cyberwar, cybersecurity and, of course, inevitably, cybersex. Rather than bemoan the ubiquitous use of the prefix, as some have, saying it is meaningless, I actually welcome its ubiquity: it emphasizes how profoundly our lives and our societies, our nations and indeed almost all human endeavors have come to depend upon digital communication.

So basically everything from the privacy of our emails to our electoral democracy, our infrastructure, ride and apartment sharing, the integrity of our financial system, banking, and the ads we see on social media during electoral campaigns – all of these are subject to manipulation and attack. All of these, with the exception of social media and the sharing economy, also existed before the digital era, but they have all been altered by the free movement of electrons and now exist in a completely different form, which requires us to rethink much of how we do things in all other aspects and realms of human activity.

And this is of course all due to the increasing power of the silicon chip, known as Moore's Law: computing power still doubles every year and a half, even if the pace is slowing a bit because we are pushing the limits of physics. The world is nonetheless completely different from the way it was 25 years ago.

While all things digital have changed beyond belief, governments' policies, laws, and regulations have failed to keep up. Of course we talk about what government can do on cybersecurity and cyber governance, and that is very good, but on the other hand we have not looked at all the rest of life.

We have events such as 145 million adults in the United States having all of their financial records stolen – more than half of the adult population. It was completely untouched by government regulation, except for the fact, probably under some old-style rules, that the management sold their stock before informing the public that the data had been stolen. We have to come to terms with the fact that this is a much broader issue.

And I guess most importantly, if we look at the core of our digital security – and I'm not talking about government, the NSA, and our electrical infrastructure, but about what all of us do online – it started out 35 years ago with a system that worked fine then, when there were about 3,500 academics using a network called BITNET, where security relied on an email address almost always ending with a top-level domain of .edu. These people generally did not pose a security or criminal threat. Yet today there are 4.2 billion people online, and we fear all of these things: cyberwar, cybercrime, doxxed emails. Since BITNET, we have had 22 or 23 iterations of Moore's Law, which means that today computers are 8.4 million times more powerful than they were when those 3,500 academics started using the system. We have also had an increase of roughly the same order of magnitude in users, from 3,500 people on BITNET to 3.5 to 4.2 billion people online, depending on whom you ask.
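The two ratios in that back-of-the-envelope comparison are easy to check; the sketch below (using the speaker's round numbers, not precise measurements) confirms they really are of the same order of magnitude.

```python
# The speaker's figures: 22-23 doublings of chip power (Moore's Law)
# since the BITNET era, and user growth from ~3,500 academics to
# ~3.5-4.2 billion people online.
doublings = 23
power_ratio = 2 ** doublings            # 8,388,608: the "8.4 million times"

users_then = 3_500
users_now = 4_200_000_000               # upper end of the 3.5-4.2 billion range
user_ratio = users_now // users_then    # 1,200,000

print(f"computing power grew {power_ratio:,}x")
print(f"online population grew {user_ratio:,}x")
```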

We've been very slow to realize this. Say, Joe Nye pointed out in an article 6 years ago, immediately after the Munich Security Conference without naming me, he quoted me, that this is the first time Munich security conference has ever dealt with the issue of cyber security. That was 2011. Up 2011 till the Munich Security Conference, the premier conference on security of the world, had not even a single panel on the issue of cyber security. Now, of course, the Munich Security Conference has an entire separate conference of cyber security. But that just shows how recently this was not considered an issue.

What I will try to do today is look at cybersecurity at three levels, beginning with the individual, then moving on to the state, and finally getting to the international level.

And again, to reiterate, my point of view is that security was the responsibility of the state before the digital era and remains so today, but in most places the state has in general failed to keep up. This remains a key aspect of John Locke's social contract, in which we give up certain rights in exchange for protection against Hobbes's war of all against all. We got there in the analog, physical world, but we have been very slow to get there in the digital world.

Ultimately I would argue that security is a political choice based on policies, laws, and the regulations deriving from those laws – just as in the physical, analog world we have civilian control of the military as a core concept in democracies, habeas corpus, and laws regulating the use of guns. Again, when we get to the digital world, we are fairly poor in this respect.

When we come to the cyber world, I argue, we are too focused on the technology rather than on the policies, laws, and regulations.

I would say, especially now, knowing the system we have created in Estonia, that the technology is actually not that advanced, but we are way ahead of everyone else when it comes to the use of digital technology. This is a function of the laws. I should mention that in this week's New Yorker you can read probably the best article that has ever come out on my country and digitization – and I think I have read every single English-language article on the subject. It appeared just yesterday and was written by Nathan Heller. It describes the way everything works very nicely, so I will not even get into that.

One thing I should add before I talk about what we do: there is a huge difference in this regard between what we and most countries do, because elsewhere the focus has always been on the gee-whiz aspects of technology. This became clear to me after 25 years of dealing with digitizing my country – aside from the fact that I was a geek once, it was always tough going politically. When I finally finished my term, my dream came true: I was invited to Stanford, the Mecca of innovation in IT. Of course, that is where everything is. Within a ten-mile radius of my office are the headquarters of Apple, Google, Facebook, Tesla... you can keep going on and on; I guess only Microsoft is really missing. And on top of that, three miles away from me is Sand Hill Road, which basically funds all of this enormous innovation.

When I went to register my daughter for school, I had to bring an electricity bill to prove that I live there. Then she had to take an E.S.L. exam, because she had been going to school in Estonia; she placed out of the catch-up course and had to get permission to enter a regular English class. So I had to sign two pieces of paper and deliver one to the school – a physically signed paper – and the other four miles away at the Municipal School District headquarters. When I got there, there was a line of about 20 people. I said, "I just have a piece of paper to drop off," and the last person in line said, "We all just have a paper to drop off, but they have to make a photocopy of it." Then it suddenly struck me that everything I had experienced in that process, except for the photocopying, was identical to the 1950s. Nothing had changed, except that in the 1960s Xerox machines arrived in the U.S. school system, so you could actually make a photocopy. I tell that story to illustrate where we are in most countries when it comes to digitization.

We took a different route. By the way, I also want to mention what it is like to register a car there: it usually takes one to two days, sometimes three – unless you buy a new car and the dealership does it for you, which is what I finally ended up doing.

But what we did in Estonia – just for background on why we did what we did – we emerged from the miasma of the Soviet Union in 1991, or rather re-emerged, because we had been independent before. In 1938, the last full year before World War II, Estonia and our linguistic cousins across the bay, or rather the Gulf, had the same GDP per capita. When we became independent again, the difference in GDP per capita between our two countries was thirteen-fold. We were still basically operating with no infrastructure except military infrastructure; all the roads built during the Soviet period were for military purposes. Looking at this awful situation, people came up with all kinds of plans. I proposed – since, in a real fluke and serendipitous event, I had learned to program at age 14 – why don't we teach kids how to use computers? We embarked on this in 1995-1996, so that by 1998-1999 we had all schools online.

Schools had computer labs, which we opened to the public after school hours so that other people could learn to use computers. Keep in mind that everyone was poor, so they could not buy computers, but they did have access to them. By this time we had come around to thinking that maybe digitization really was the way to go for the country. But we realized somewhere around the late 90s that we could do it differently, because ultimately we were worried even then about security and what it meant – and we do have a very big neighbor next to us that is probably very good at causing problems in the digital realm, as the US discovered later on.

So we thought long and hard about what we needed to do. One of the things we arrived at very quickly was that the fundamental issue of cybersecurity for the population is identity. Who are you? We all know the old New Yorker cartoon: "On the Internet, nobody knows you're a dog." The fundamental problem of cybersecurity is that you do not know who you are talking to – in fact, this is where it differs from the kinetic world of warfare, which I will talk about later. You do not even know whether the person you are talking to is in your own country.

So what we realized is that we must start with a strong digital identity. This, I would argue, is one of the key axioms for the future of digital security.

Of course, that sounds good theoretically. What it meant in policy terms was that in 2001 we offered everyone living in Estonia at that time – citizens and permanent residents – a unique chip-based digital identity card, with communication secured by two-factor authentication and end-to-end encryption.

We did this because we realized even then that the primary model of email address plus password was not going to last long. In fact, today there is no password in the email-plus-password paradigm that cannot be broken through brute-force hacking. If you do not have two-factor authentication, you might as well give up – and this already means that for most transactions you make in life, in most countries, you cannot be sure of anything.
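To make the brute-force point concrete, here is a rough sketch; the guess rate of 10 billion hashes per second is my own illustrative assumption for offline cracking of leaked password hashes, not a figure from the talk.

```python
# Assumed attacker speed: modern GPU rigs reach this order of
# magnitude against fast, unsalted hash functions.
GUESSES_PER_SECOND = 10_000_000_000

def exhaust_seconds(alphabet_size: int, length: int) -> float:
    """Worst-case seconds to try every password of the given shape."""
    return alphabet_size ** length / GUESSES_PER_SECOND

print(exhaust_seconds(26, 8))   # 8 lowercase letters: about 21 seconds
print(exhaust_seconds(95, 8))   # 8 printable-ASCII chars: about 7.7 days
```

A second factor defeats this line of attack entirely: even a correctly guessed password is useless without the one-time code.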

We did this with a chip card plus a code. Today we see two-factor authentication slowly coming in in many places: Apple uses it, as does Google, and at Stanford it became the norm because of a big hack several years ago. The problem is with the way two-factor authentication is done in most places. The SS7 protocol, which governs communication between mobile phones, has been hacked – it is hackable. In fact, the first big hack of this kind was the loss of 3 million euros from a German bank this spring, which did use two-factor authentication with a mobile phone as the second factor.

So that was how we started. We did this on a public-private-partnership basis, because every interaction has to be authenticated. The verification, or certification, of each transaction is done by a 50-50 public-private partnership, half paid for by the government and half by a consortium of banks.

The second step was combining two-factor authentication with a strongly encrypted public-key infrastructure. Encryption meant that we could offer everyone living in the country genuine security – or, starting from the premise that nothing is completely secure, at least far more security than most people enjoy in most places.

We had been using that system until we found out that Infineon had produced a flawed chip – the RSA-2048 one. We moved fast: unlike most companies in most countries, we actually announced that we had a problem with the chip, and we have now switched from RSA to elliptic-curve encryption. As I say, other countries that use the same chip unfortunately have not been as open about it as we were.

Going back to 2001: we took one more step, which is actually key to creating a functioning digital society and which, again, most places have not undertaken at all – we gave the identity legal efficacy. You can sign legal documents online with this system. That means hooking it up to a national registry. This causes howls of indignation from the Five Eyes countries, also known as the Anglosphere – the U.K., Canada, the United States, New Zealand, and Australia – who say, "We will never have a digital identity, let alone one with any kind of legal efficacy." I find this kind of odd, because the United States, the U.K., Canada, and the others all offer passports in which the state says you are you. All we are doing is having the state say you are you in order to enable legal transactions.

Digitally, that is, as opposed to in a physical passport. One behavioral-economics aspect of our system – I mean the card here – is that we make it mandatory to have a card. You never have to use it, but you must have one. Why do we do that? Because uptake rates of digital identities in most countries – and today in Europe, all countries must issue or offer a digital identity – are 15 to 25 percent.

The early adopters are the ones who take out a card. We decided to make it mandatory because otherwise no services would develop, either in the public sector, where different ministries should be developing things, or in the private sector, which would have an interest in this. They will not do it if they think that 85 percent of the population cannot even use the service. So we have things such as digital prescriptions, which are actually used today by 99 percent of the population. You never have a paper prescription: you call your doctor and he renews your prescription, or your doctor enters it when you go to see him. No one takes the effort to develop those kinds of systems unless both the private sector and the public sector are assured that basically everyone can use them.

So this is laying the groundwork for a digital society. And of course, what makes our bank transactions secure, unlike what I find here, is that everything is chip-based, be it on your mobile phone or your card. We do not have checks in Estonia. I read recently how one system works here: you have electronic banking, so you go online, you do something, and the bank prints a paper check and then mails it. That is not a digital society, I would argue.

Basically, the state guarantees identity, and this seems to be the main stumbling block in most countries on the way to a secure digital society. My argument is that in a democratic society, if the state is responsible for the security of its citizens, it simply must offer this. You may not want to go the full step that we did and make it mandatory, but then you should basically assume that digital services, at least on the part of the government, will not take off.

I read just last night a perfect example of why a democratic government that wants input from its citizens needs a digital identity: the ongoing debate on net neutrality. The FCC, like many federal agencies, asked for people's opinions and got a million fake or bizarre comments from nonexistent people against net neutrality – I do not know how many it got in favor of maintaining net neutrality. But unless you can log on and be you, a citizen of the United States commenting on impending regulations, what is the point of asking anyone? In fact, some four hundred thousand of the comments came from Russia. This is not how you run a democracy, or at least not how you do open government, soliciting opinions from your citizens. We have the same kind of system in our country, where on various issues we ask people's opinions – but you have to participate by saying who you are. If you do not say who you are, there is no point. I do not want to get into the issues of anonymity, how crucial it is or is not, and how it may ultimately be a victim of our lack of security in the cyber realm. Nonetheless, I would say that without a secure identity, the functioning of a democracy becomes, I would maintain, stymied.

The second thing we did, to talk about how we have put security into the system, was to design a very different architecture from what is usually used. Most big countries, most governments, have used centralized databases. Consider the OPM hack: the records of 15 or 23 million U.S. federal government employees, including CIA and NSA personnel, including their personal psychological profiles, were hacked, as you probably know, two years ago. Does it matter who did it? The fact is that all of this material was easily accessible and in clear text; it was not even encrypted. I find that, again, unconscionable, not to mention the kind of hack we saw with Equifax.

What we realized quickly was that we could not have a centralized database, for purely economic reasons. In the late 90s everyone was going after big central servers, but we were where we were: every ministry, every agency, every company had its own servers, often running different systems, and with a great degree of independence, or at least arrogance; they were little fiefdoms. So, in trying to figure this problem out, some mathematicians of ours came up with a distributed data exchange layer, which we call X-road, in which everything is connected to everything through the authentication of your identity. In the usual model, your identity gets you past the wall and the moat of a castle; once you breach the moat and the wall, you are in, and everything is open to you. In our system, if you breach the moat and the wall, you are still stuck in a room: one room, one person. You can get something for that one person, but you cannot get the rest of the citizens.
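The "one room, one person" idea can be sketched in a few lines. This is a toy illustration under invented names, not the real X-road API: each agency keeps its own records, every query is authenticated and logged, and a single compromised request yields at most one person's data.

```python
# Hypothetical sketch of "one room, one person": even with a stolen
# credential, one request returns at most one person's record from one
# agency -- there is no bulk-dump method, and every access is logged.
# All names here are illustrative, not the real X-road interface.

class DistributedRegistry:
    def __init__(self):
        self.agencies = {}   # agency name -> {person_id: record}; no master database
        self.audit_log = []  # every access leaves a trace

    def add_record(self, agency, person_id, record):
        self.agencies.setdefault(agency, {})[person_id] = record

    def query(self, requester_id, agency, person_id):
        # Access is scoped to exactly one person per authenticated request.
        self.audit_log.append((requester_id, agency, person_id))
        return self.agencies.get(agency, {}).get(person_id)

registry = DistributedRegistry()
registry.add_record("population", "alice", {"address": "Tallinn"})
registry.add_record("population", "bob", {"address": "Tartu"})

# A breached credential opens one room, not the whole castle; mass
# exfiltration would require one logged query per citizen, making it
# detectable in the audit trail.
print(registry.query("attacker-session", "population", "alice"))
```

The design choice this illustrates is that compartmentalization plus mandatory audit logging turns a single breach from a catastrophe into a contained, observable event.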

I would like to play a three-minute video just to give my throat a break and as a little commercial to show how our system works.

"Running a modern state is a data-centered endeavor. Ensuring the functioning of the state requires administering very large quantities of data. Estonia lacks a centralized or master database. Data is stored where it is created. Each agency administers its own data separately, and data is not duplicated. At the same time, state authorities and agencies need data outside their purviews in order to function. For example, the police constantly require information from the population register. Likewise, the unemployment insurance fund depends on information from the health information system. How can authorities securely exchange important data? First, the data must be easily accessible by the authorities that are authorized to use it. Second, the integrity of the data must be maintained: no third party should be able to make any changes to the data while it is in transit. Third, the data must remain confidential during its journey: it must be protected from the eyes of unauthorized parties.

The X-road is a data exchange platform that fulfills all three of these requirements. The X-road makes life simpler both for the state and for the citizens. For example, when a child is born, information about the birth is sent directly from the hospital to the population register. From there it is sent automatically to the health insurance fund so that the child will have health insurance and a family physician. This prevents the creation of excessive paperwork and saves time. The state functions in the background. The X-road helps authorities make work processes more convenient. Many activities can be automated, which frees employees to deal with matters that require human involvement. Authorities also do not have to worry about the authenticity of data: they can be confident that data received from the Tax Board definitely originated from the actual Tax Board. Additionally, the X-road can be used regardless of what technology an authority uses. For the state, the X-road above all makes it possible for authorities to efficiently exchange data among themselves. Sensitive information moves securely, and the system itself is so resilient that it cannot be easily brought down by those with malicious intentions.

Since the birth of the X-road in 2000, the system has operated continuously without interruption. The X-road helps the state see the big picture of how different authorities are connected to one another. In addition, the X-road makes it possible to exchange data not only within the country but also across national borders. That is, of course, if databases and information systems are working properly. The biggest beneficiaries of the X-road are, of course, the citizens. They enjoy the benefits of a better-functioning state and save all the time they would otherwise spend on submitting papers and forms. How much time? During the time it took you to watch this animation, the X-road saved around 240 working hours in Estonia."

Now what this does, among other things, in addition to giving you security, is change the nature of bureaucracy for the first time since bureaucracy was invented five thousand years ago, in either Mesopotamia or China.

Bureaucracy has always been a serial process. If you want permission to do something, you apply with a piece of paper, and the paper goes from one agency to another. Think about establishing a business: one agency has to check whether all the board members have paid their taxes, someone else checks whether they have paid their alimony, someone else has to check whether anyone has ever gone bankrupt. It just takes quite a long time. Our system makes bureaucratic processing parallel, which speeds things up: establishing a business in my country takes about fifty minutes, because all of those queries are answered simultaneously.
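The serial-versus-parallel point can be made concrete with a small sketch. This is a hypothetical illustration (the agency names and delays are invented): the same three registration checks, run one after another and then concurrently.

```python
# Toy sketch of serial vs. parallel bureaucracy: the same checks,
# run one office after another vs. all at once.
# Agency names and the 0.1 s delay are invented for illustration.
import concurrent.futures
import time

def check(agency, delay=0.1):
    time.sleep(delay)  # stand-in for one agency's processing time
    return (agency, "clear")

agencies = ["tax_board", "alimony_register", "bankruptcy_register"]

# Serial: total time is the sum of all the checks.
start = time.perf_counter()
serial = [check(a) for a in agencies]
serial_time = time.perf_counter() - start

# Parallel: total time is roughly the slowest single check.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    parallel = list(pool.map(check, agencies))
parallel_time = time.perf_counter() - start

assert serial == parallel  # same answers, much less waiting
print(f"serial {serial_time:.2f}s vs parallel {parallel_time:.2f}s")
```

With many independent agencies, the total waiting time collapses from the sum of all the checks to roughly the slowest one, which is the difference between weeks of paper routing and fifty minutes.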

This system also allows for greater transparency and a reduction in corruption, because decisions are basically made by checking boxes rather than by an official who uses his discretion to decide whether you get something you are entitled to. If I want permission to dig a hole, I have to apply to my municipality just to make sure there is no water main or electrical cable down there. In a lot of countries, if you apply, you know you should get the permission, but there is an official there saying, "Well, you will not get it for free."

That is, you have to pay in whatever currency.
These kinds of decisions are made automatically. The best result of all this, however, is that we have applied a once-only rule, which means that the government may not ask you for any information it already has. Once you are identified, you no longer have to write down your address, your telephone number, or any of that, because it is all done online.

And the system has now been adopted from us by a number of countries; we give it away as foreign aid, a kind of foreign aid on a thumb drive. Finland, probably most prominently, is now jointly developing the open-source, non-proprietary software with us. Mexico is adopting it; Panama is taking it up; Moldova has had it for a while; so have Georgia and Oman. Countries vary in how much they do with it. We gave it to the Palestinian Authority, but they never used it. So it really depends.

But again, from the point of view of the citizen, what this allows is to do things that traditionally have not happened at all. As of next year we will have cross-border interoperability of digital prescriptions, and Finns do come to Estonia: we get eight million Finnish visits a year. If a Finn runs out of his medicine, he can call or write his doctor north of the Arctic Circle, and the doctor will renew his prescription. He can then take his Finnish ID, plug it in at any pharmacy, enter his identifying numbers, and get his medicine. I proposed this to the Finnish president five years ago, and next year it will be six years since I proposed it. That is how long it takes: the technology would probably, as in most cases, take about three days; the political will, the policies, laws, and regulations have taken that long to get anywhere.

Further on digital security, before I move on to the big picture: the big issue in Europe, especially since Snowden, has been privacy. Privacy is, of course, very important, and I would argue this system allows far more privacy than the current one, but it does require a certain degree of trust, which is why we do not have backdoors. If you had backdoors, you would no longer have trust, and no one would use the system.

But the real issue, to my mind, is data integrity.

I may not like it if someone publishes my bank account or my blood type. But if someone changes my blood type, or the record of my blood type, or changes my bank account number or its contents, that is a disaster. So what we have done is to put all critical citizen data (health records, property records, court cases, because they are all digital now and you would not want those changed) on a blockchain.

It is, interestingly, not a public blockchain, because a public blockchain would take forever to work, as with Bitcoin. It is a private blockchain, administered by the government, which then means that you cannot change these data.
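The tamper-evidence property he describes can be sketched with a simple hash chain. This is an illustration of the principle only, not Estonia's actual blockchain implementation: each entry's hash covers the previous entry, so altering any stored record breaks every later link.

```python
# Minimal hash-chain sketch of tamper evidence: each entry's digest
# depends on the previous digest, so changing any record (say, a blood
# type) invalidates every subsequent link. Illustrative only -- not the
# production system described in the talk.
import hashlib

def build_chain(records):
    prev, links = "genesis", []
    for rec in records:
        digest = hashlib.sha256((prev + rec).encode()).hexdigest()
        links.append((rec, digest))
        prev = digest
    return links

def verify_chain(links):
    prev = "genesis"
    for rec, digest in links:
        if hashlib.sha256((prev + rec).encode()).hexdigest() != digest:
            return False  # record was altered after the chain was built
        prev = digest
    return True

ledger = build_chain(["alice:blood_type=A", "bob:blood_type=O"])
assert verify_chain(ledger)

# Tampering with a stored record is detected immediately:
tampered = [("alice:blood_type=B", ledger[0][1])] + ledger[1:]
assert not verify_chain(tampered)
```

The point is not secrecy but integrity: the data may even be readable, yet any silent modification becomes mathematically detectable.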

The other thing we have done for security, in addition to all of this, is that as a small nation that has been invaded about twenty times in the last thousand years, we do worry about our data. Based on the experience of Japan, which lost about five percent of its data in the Fukushima incident, we have now established a data embassy. Applying the Vienna Convention's extraterritoriality of diplomatic representations, we have given our big server diplomatic status. It is in Luxembourg, and there will be others, so that our data survives if anything bad happens. We will most likely not have any bad seismic events, but if I were Greece, not a happy place for seismic events, I would certainly do something similar; you want to keep your data elsewhere. It is not such an issue for the United States: the U.S. is huge and can keep its data in several different places. But smaller countries probably do need to think about these things.

And the final thing we do at the national level is that we have a prohibition on un-updated software. All you have to do is look at WannaCry, which took down the UK's entire National Health Service because the UK, being too cheap, did not update. Microsoft had stopped updating the version of Windows they were using in 2009. The UK and Microsoft then made a special deal to keep it updated until 2013, but even that lapsed, and then in spring 2017 the WannaCry ransomware shut down the medical system of a big European country.

We cannot allow that. This is, again, I think a fundamental issue that needs to be dealt with in both the private and the public sector. You cannot have legacy software. In other words, you must think of software as an operating cost, a running cost. Most companies and most countries think of software as a capital investment. It is not like a car; it is not as if, having bought a car two years ago, you do not need another one for three. You must always keep your software up to date. Or take the Equifax case: they identified a vulnerability in February and did not bother patching it until after they were breached. If companies are not going to observe that, and if governments do not observe it, you are going to have to legislate it.

Certainly in the case of Europe, the application of the new General Data Protection Regulation will force U.S. companies, at least in Europe, to worry about patching things and about what happens to citizens' data, because the fine is going to be four percent of a company's worldwide revenue, which is no small thing. People may complain and moan about the regulations of the European Union, but personally I think that, after Equifax, there is nothing you can say against that. I am more surprised that there has been so little citizen outcry about all of this. I am also surprised at how little attention has been paid to what happens to data in this country and in a number of European countries and how it is used: for example, Cambridge Analytica's use of data to create highly targeted, highly granular ads in the last election, and probably also in the UK's Brexit referendum. I think these are all issues that will need to be addressed. They are not yet political issues; they are not there yet.

I would like to move on quickly to the international part of this. While I agree with Joe on the need for conventions, there is only one convention that works at this point, and that is the Budapest Convention on Cybercrime, created within the Council of Europe and since acceded to by liberal democracies outside it: the U.S., Canada, Mexico, Japan, and Australia. They decided to call it the Budapest Convention because it was no longer just a Council of Europe thing.

The problem with that convention, which may also lead the way to future thinking, is that there is a whole host of countries that have not acceded to it, most prominently China, Russia, and Belarus. I think Ukraine is somewhere in between, because Ukraine, at least until the end of the Yanukovych regime, was also a primary source of all kinds of cybercrime. But let me rather direct attention to a fundamental conundrum of cybersecurity at the international level that we need to address: our thinking about security, ever since the first rock was thrown by one pre-human hominid to kill another, has been kinetic and distance-based. Force equals mass times acceleration, meters per second squared. But meters no longer matter in security these days; distance does not matter. All of our security thinking up to the present has been based on the concept of distance, and therefore geography. Think about the primary security organization that we are in: the North Atlantic Treaty Organization. Countries that share all of the values of the NATO countries, such as New Zealand, Australia, Japan, and Uruguay, are not in the North Atlantic Treaty Organization simply because they are not in the North Atlantic. All the work of NATO is based on things such as tank logistics, fighter range, bomber range, troop-movement logistics. It is all distance-based. Today, the threats have nothing to do with distance: borders are breached without being noticed. On top of that, consider just one threat actor, APT28, or Fancy Bear, which has hacked the Bundestag and the Italian foreign ministry and done all kinds of things to the Netherlands, Sweden, and Ukraine. Even the World Anti-Doping Agency has been hacked by this one group of probably GRU hackers. It did, of course, hack the DNC. I should point out here that, as David Langer at least told me, of the 126 people working at the DNC with access to the DNC server, 124 were using two-factor authentication; two were not. Guess how the DNC server got hacked!

Anyway, the point is that our ways of looking at these things in the digital era just have to change. We have to think about security not in terms of geography. We have to realize that the threats can hit anywhere, and that what is at risk is perhaps our forms of government, our ways of organizing society. Certainly that is what we have seen in the last year or so, not only with attempts to derail the U.S. elections but, we now know, with the Brexit campaign as well. We know that, in France, Emmanuel Macron's server was hacked. Having learned from the DNC hack, his team had actually loaded their email server with obvious fakes, so that when they were doxed, the published material was so obviously fake that it discredited virtually everything, even what was perhaps genuinely embarrassing. Nonetheless, I would say that we should learn from these individual actions, think about how to guarantee our security in the future, and think about working together a lot more.

Our own experience with this was not very good. In every history from now on, cyber warfare begins with the April-May 2007 attacks on Estonia. They were DDoS attacks, which meant our systems were never breached; they were just shut off from people. At the time, NATO was loath to admit that this had been going on. Slowly people came around and realized that this was a classic Clausewitzian event, a continuation of policy by other means. Ultimately we got what we had been asking for for years, a center of excellence in Tallinn, which produced the Tallinn Manual 1 and 2. It was established in my country, but even NATO took a while to get there. The traditional model, where someone breaches the border and then an Article Five decision is made, does not really hold, because in a cyber event you have problems with attribution and you do not know what the proper response is. We are just not ready for that, or have not been ready for that.

Nonetheless, we see that the security situation has deteriorated to such a level that even our democratic systems seem to be under threat, and that we have to start thinking in multilateral terms. As I mentioned, we do have the Budapest Convention on Cybercrime, which perhaps gives us an idea: like-minded nations have agreed that they will work against cybercrime and will give up criminals from their territory. It has been used to great effect in a number of cases where one country identifies a hacker in another country; according to the Budapest Convention, they are then extradited.

We see that other areas do not work so well. As Joe mentioned, the UN GGE failed this year. That is because, during the ITU discussions about five years ago, a set of like-minded countries, China, Belarus, and Russia, were already arguing for what would amount to censorship of the web, because their definition of security is information security: it is not limited to hacking other people's infrastructure, it includes freedom of speech, and that is clearly something liberal democracies are not willing to put up with. Another example of fairly successful cooperation that might also lead the way is the position of the NATO center in Tallinn, because while it was originally open only to NATO countries, it is now open to other like-minded nations. Finland, a non-NATO member, is with us. Japan has basically asked, "Could we join? Is that fine?" The decision-making process there is long, but if we are threatened, as we have seen, at the level of infrastructure, at the level of privacy, and at the level of our democratic processes, we will have to develop, at least among liberal democracies, some kind of defensive mechanism, some international cooperation. Until perhaps two weeks ago there had been no real cooperation even within NATO. NATO's idea of cybersecurity is to deal only with the security of the organization: not the members or the allies, just the organization.

Thinking is moving beyond that, but maybe it has not gone far enough. I do think we will have to face up to the reality that liberal democracies are under threat, that the mechanisms for attacking liberal democracies are no longer merely kinetic, and that we have to start working toward some kind of serious cybersecurity organization for liberal democracies, in which, since the attacks transcend geographical boundaries, countries from New Zealand and Australia to Finland and Estonia will share information. It is going to take a long time to share cyber information, even within NATO, as I said. It is still more a matter of following the espionage paradigm, where you do not share anything, as opposed to the interoperability paradigm of putting a U.S. missile under a French Mirage jet; interoperability in that sense. In fact, in one of our experiences, when we discovered some malware, we went to NATO and said, "Look what we found," and NATO said to an ally, "Oh, you too?" That is not how you do cybersecurity, frankly. So I would argue, in closing, that we do need to think about these things.

I will close with two small points. One of them is that everywhere we hear all this talk about how we need backdoors. We have seen the Prime Minister of Australia, the Commissioner for Justice of the European Union, the Minister of Home Affairs in the UK, and the U.S. Attorney General all argue for backdoors. I do not understand that, frankly. Why would you want to do that?

Maybe it comes from not understanding technology. As soon as you have a backdoor, it becomes the Holy Grail for attackers, because it is one-stop shopping. Why would you bother hacking anyone individually if there is a hackable key, a backdoor, somewhere? And we need not think only in terms of smart people hacking a key. We know the CIA and NSA have been hacked, but you do not even need that: the worst breaches have been insider threats. Scott Sagan just put out a whole collection on insider threats. Think about one of the worst cases, Snowden: no one breached the NSA; he was an insider threat. Reality Winner, that bizarrely named woman who just gave out an NSA document on Russian attempts to hack voting machines: an insider job. And I mean not only to criticize the United States. In the European Union, with 500 million people, the Commissioner for Justice gets a wish, and the wish is to have a backdoor key. Now, if I am Vladimir Putin or someone else, I would say: I do not have to hack anything; I just need the key, and I can get into everything. And instead of trying to get in by digital means, I would just find out who the key master is and say, "I will give you two billion euros." Eventually you find someone who will fall for that.

So let's stay away from backdoor keys. I should say in this regard that the ITU has listed Estonia as the most secure country in Europe in terms of cybersecurity. Russia is the most secure in Eurasia; China is the most secure in Asia. The difference is that Freedom House has also rated Estonia number one in the world in online freedom, which refutes the argument that you need to be repressive in order to have security in cyberspace.

Ultimately, everything boils down, to my mind, to a brilliant essay (the essay itself was not that brilliant, but the ideas in it were). It was written 58 years ago, in 1959, by C.P. Snow, and called "The Two Cultures"; I think it was not nearly as relevant when it was published as it is today. C.P. Snow was a physical chemist and a literary novelist who gave the world the term "the corridors of power" in one of his novels. In this essay he described being at the faculty dining club of his college in Cambridge, sitting with the physical chemists, the physicists, and the other chemists, discussing presumably quantum mechanics, and then getting up after dinner to go drink with the poets, the essayists, the novelists, and the Shakespeare scholars. He was the only one who could move between the two tables. The poets and essayists had no clue about physics, and the physicists and chemists could not care less about literature. He said this was a problem of the university; I would argue that today it is a problem of society. But back then technology did not impinge upon people's lives the way it does now.

Your phone did not tell anyone where you were; it was plugged into the wall. Your television could not look at you, so even though Orwell had been published ten years earlier, you did not need to put a little cover in front of your computer to keep it from looking at you or listening to you. The most you interacted with technology was perhaps to set the timing on your distributor cap, which most people under forty do not even know what it is. It was a different world. Today technology impinges upon us everywhere, yet people do not understand the problem this creates. Technologists, in many cases, do not understand the ethical, legal, moral, and philosophical basis of a liberal democracy, and the people responsible for the legal system do not have a clue about IT.

For example, right after the iPhone came out, one of the early apps let you find out where you had traveled. I downloaded the app and got a map of where I had been, all based on the SS7 protocol that records where a mobile phone has been: big fat lines where I traveled a lot and thinner gray ones where I did not. I showed it to my security detail, and they said, "Delete that immediately."

I said: what is the point? The data exists, so someone else can have it anyway.

And then, in the fall of 2014, I went to the European Parliament. They have a five-year term, and it was half a year after their most recent election. I gave a talk about digital matters, trying to tell them how important it is that you actually know something about them. As a kind of show-and-tell moment, I pulled out my mobile phone and said, "This thing here, you all have one." Everyone had one, of course. Thank you so much.

Talk of Joseph Nye at Global Cybersecurity Day

December 12, 2017 at Harvard University

In his 2017 Cybersecurity Day speech, Professor Joseph Nye of Harvard discussed the multiple levels through which international cyber cooperation is advancing. Instead of focusing merely on international treaties and regimes through IGOs like the U.N., he called for increased attention to bilateral and multilateral agreements on cybersecurity and suggested that lessons in cybersecurity can be learned by analyzing how nuclear weapons treaties came into effect. After two decades of nuclear build-up and crises, the U.S. and U.S.S.R. began to cooperate. Professor Nye argued that cybersecurity was reaching its own two-decade mark, with the two biggest parties now being the U.S. and China.

He believed that creating lasting international norms and rules in cyber would require different levels of formality. For an example of a multilateral agreement having wider effects, the Budapest Convention on Cybercrime led to increased cooperation within the EU, and then between Europol and Interpol. For another, Professor Nye cited recent agreements between the U.S. and China to curtail cyberespionage, which both parties have brought to the G-20. Just as with nuclear weapons laws and norms, the global cybersecurity regime is beginning today on multiple levels.

(This speech was given on December 12, 2017 at Loeb House, Harvard University for the 2017 Cybersecurity Day Conference)

Thank you, Mike, and thank you, Tuan, for your leadership of the Boston Global Forum. There are a variety of ways of approaching the topic this morning. I'm going to talk about the question of whether we can develop norms internationally to govern conflict in cyberspace. Basically, this morning I'm going to repeat, in (I think Tuan said) 20 minutes or less, the things that I said last week to the Chinese conference on the internet in Wuzhen. I just got back from Beijing at 2am yesterday, so if I fall asleep in the middle of my talk, please forgive me. I'm permitted to; you're not.

What I had talked about is normative restraints on cyberconflict. And I ask: “Where does the world stand in the development of norms to restrain conflict in cyberspace?” I don’t know if any of you have seen the new site that the Council on Foreign Relations and Adam Segal have put up, trying to catalog the number of inter-state attacks, and it’s quite fascinating to look at the shape of the curves. Since about 2004, it just has a steep upward slope if you go and consult that.

The question, I think, of whether we can develop international agreements or norms to restrain cyberattacks by states - remember ‘cyberattack’ is a very vague word to refer to a wide range of things – but I’m talking of the use of cyber for malicious purposes by state actors. I’ve tried to argue elsewhere, in an article in the Strategic Studies Quarterly, that states take a while to learn how to manage a new, disruptive technology, and while nuclear technology is totally different from cyber technology - as somebody put it, nuclear technology can end the world while cybertechnology can disrupt the world - there is some difference there. But that’s not the point I want to make. What I want to make is a larger historical point, which is how do states learn, when you have a very disruptive technology, to essentially encompass it in a set of ‘rules of the road’ or norms. And while the two technologies are vastly different, I think we can learn something about the process of how states learn to cope with highly disruptive technologies. And it’s interesting if you look at how long it took states to learn to develop some norms to restrict nuclear technology. It was about two decades after Hiroshima before you saw any progress.

Depending on how you count, we’re somewhere at about the two-decade mark in terms of cyber. You would say to me: “Woah, that’s terribly off, because the origins of the internet go all the way back to the late 60s and early 70s.” But if you think about cyber as a security problem, it really becomes a major security problem at the end of the 1990s when it becomes a basic substratum for economics and political relations. And you’ve all seen that famous hockey stick graph, where cyber uptake goes very slowly until you get the web and, after you get the web browsers, it really takes off around ‘96 ‘97 and this is the famous hockey stick that we see.

Now, with that enormous interdependence that comes with that interconnectedness, you find the interdependence produces huge benefits but also huge vulnerabilities. And with vulnerabilities come insecurity. So essentially, cyber, I would argue, is at about the same two-decade mark in terms of being a major security problem. If you look at the nuclear example, you’ll find that the first efforts to control this new nuclear technology were centered around major United Nations treaties and, incidentally, that’s what the Russians and Chinese want today: a big U.N. treaty. Our lead in the 1940s meant that we proposed the treaty, and we proposed the Baruch Plan where the U.N. would own and control nuclear technology, just as the Russians and Chinese want the international telecommunications union to have that kind of a role.

Of course, the Russians turned that down. They weren’t going to essentially give up the technology they were stealing from us through spies and they wanted their own nuclear weapon, and so these efforts in the first decade – the U.N.-centered efforts – were a failure. It really wasn’t until after the fright we induced in each other in the Cuban Missile Crisis in 1962 that you got the beginnings of real efforts to set norms to surround this new disruptive technology. The first was the Limited Test Ban Treaty - which you might note was a very limited thing - which was essentially an environmental treaty. It was a game against nature to prevent strontium-90 from poisoning us all. That was followed in 1968 with the Nonproliferation Treaty, which again was a game in which the U.S. and the Soviet Union cooperated against third parties - other countries developing weapons - rather than dealing directly with each other. So, it really wasn’t until the 70s - that was 30 years after Hiroshima - before you got strategic arms limitation talks, which is direct US-Soviet negotiation which produced something which began to set constraints on this new, threatening technology.

Now what's interesting in the parallel process is that China and other members of the Shanghai Cooperation Group have followed what Russia proposed in 1999, which is a full-fledged United Nations treaty, in their words, to ban electronic and information weapons. Electronic and information weapons. Notice how broad that is. That would also include propaganda. It would include banning what the Russians just did in the American elections, if such a treaty had existed. The trouble with such a treaty, as the Americans have pointed out since 1999, is that it's totally unverifiable. It's a set of words that makes us feel good but essentially has no effect on real behavior. Instead, what the U.S. and other countries proposed, and it was agreed upon, is that the UN Secretary General should set up a group of governmental experts, the UNGGE, and they first met in 2004. Initially, it had very meager results. It didn't accomplish very much. But the interesting thing is that by July of 2015, it was able to issue a report in which the 15 members, by then expanded to 20, were able to reach a consensus on a set of processes and procedures which would set some limits on cyberattacks by states, particularly against critical infrastructure.

What’s interesting is that the report – the 2015 report of the UNGGE - was taken to the Group of 20, the most powerful economies in the world, and was endorsed by the Group of 20. Now if you know anything about the way the U.N. works, the basement is full of rooms with groups of experts on this, that, and the other thing. And rarely does their product rise above the basement level. So, it’s quite remarkable that the UNGGE produced a report which was endorsed by the 20 most powerful economies in the world. But, before you say “ah, that’s good news,” they met again in 2017 – this past summer – and failed to reach consensus. They expanded the group as more states demanded to be a part of it, and with the expansion it was more difficult to have consensus. In addition to that, there were deep problems in the political relations between the United States and Russia in the aftermath of the Russian meddling in the 2016 election, and China began to backtrack, because China has the feeling that the internet should be subject to sovereign controls. And so China worried a little bit that, by signing on to the 2015 report, it was weakening its newly strengthened position. And this doubles it, or squares it if you want: the Xi Jinping regime is trying to have tighter and tighter Party control of everything in China, including the internet. So, in 2017, the UNGGE failed to continue the progress it had initially shown in its 2015 report.

Now, to understand the GGE, it helps to put it in the broader context of normative constraints on states. We often think about international law, and the Tallinn Manual is an effort by a group of international lawyers to write down what was agreed to be international law and how it applies to the internet. There’s still disagreement among lawyers about some of the clauses in the Tallinn Manual, but it would be a pity if, in efforts to have interstate control or set up rules of the road for the internet, we restricted our imagination to binding international law. As Martha Finnemore and Duncan Hollis have written in a recent paper called Constructing Norms for Global Cybersecurity, a norm is a collective expectation of proper behavior of actors with a given identity. Norms apply to multiple actors and they’re not legally binding. In that sense they’re different from international law.

So, if you think about the different types of normative constraints on states, you can imagine formal interstate agreements under international law, which take the form of treaties or of what has evolved over time into ‘customary international law.’ But there are also common practices, which become norms in the sense of expected collective behavior. And then there are also codes and rules of the road for conduct. So, there are different degrees of formalism in the normative constraints we can imagine. If you think of formal agreements, you would think of a treaty. If you think of common practices that make the internet work, it might be routing practices and exchanges, border gateway protocols and so forth. Or you can look at the domain name system. These are practices which make things work. And then you have efforts to develop codes or rules of the road, and that’s where the GGE came in – to try to create a code or norm which said you will not attack critical infrastructure, and you will not interfere with each other’s CERTs, and to try to understand attacks and problems.

Notice that what I’ve restricted myself to so far is different degrees of formalism in the normative constraints, but they’re all at the global level. What I’m arguing is that, in the aftermath of the failure of the GGE this past summer, we ought to think about different levels of agreement in terms of their scope. So think of a matrix: in addition to the degrees of formalism I’ve described, you can also think of different degrees of inclusiveness. You can have global, which I’ve described so far. You can also have plurilateral and regional - not all states, but some states. And below that you can also have pure bilateral agreements. If you put those together - formal agreements, common practices, and norms and codes as the degrees of formalism, with global, plurilateral, and bilateral as the degrees of inclusiveness - you obviously get a simple nine-cell matrix. And only one cell of that matrix is the UNGGE acting at the global level. And we should not let the failure of the GGE keep us from realizing that there are eight other venues in this matrix where we can be trying to develop rules of the road or normative restraints on conflict.
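For readers who want the structure at a glance, the matrix Nye describes can be laid out mechanically. The sketch below is an illustrative reading of the talk, not an official taxonomy; the example placements are this editor’s interpretation of where the instruments he mentions would sit.

```python
# A sketch of the nine-cell matrix from the talk: three degrees of
# formalism crossed with three degrees of inclusiveness. Cell labels
# and example placements are illustrative interpretations.
from itertools import product

formalism = ["formal agreements", "common practices", "norms and codes"]
inclusiveness = ["global", "plurilateral/regional", "bilateral"]

# One (initially empty) cell per (formalism, inclusiveness) pair.
matrix = {cell: [] for cell in product(formalism, inclusiveness)}

# Instruments mentioned in the talk, placed where they seem to fit:
matrix[("norms and codes", "global")].append("UNGGE 2015 report")
matrix[("formal agreements", "plurilateral/regional")].append("Budapest Convention")
matrix[("formal agreements", "bilateral")].append("2015 US-China accord on IP theft")

assert len(matrix) == 9  # the "simple nine-cell matrix"
```

The point of the exercise mirrors Nye’s: the UNGGE occupies only one of the nine cells, and the remaining eight are additional venues in which norms can be pursued.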

To give a few examples at the plurilateral and regional level: you have the Budapest Convention on Cybercrime, which is not universal, but it starts in Europe and has adherents among other like-minded states on its criminal aspects. Along with that practice, you have the cooperation that’s developing in Europol and Interpol on crime issues. On common practices, you might see something like codes of conduct among like-minded states. So, if you ask: “Could we get a declaration of internet freedom which is universal, you know, that there is a right to communicate universally?” Some people call it a universal right within the context of the U.N. Convention on Human Rights. The chances of China or Russia signing on to something like that are, if not zero, close to zero, meaning it’s not going to happen. But could you imagine Brazil and India, or Australia and Japan, joining Europe with us to say that we should make sure the internet stays open to the extent that we’re able to? Yes. And then you would face people saying “ah, but that’s fragmentation of the internet.” Well, I hate to tell you, but the internet is already fragmented. Anyone who thinks there’s a global universal internet hasn’t been to China lately – as I’ve just come back from there – and their internet is not universally open. But you could imagine a set of very significant states, representing a large portion of the world economy, taking a position of self-restraint on rules of the road to keep a degree of openness. When you come to norms and codes at the plurilateral and regional level, you can imagine organizations like ASEAN and other regional organizations trying to do something similar.

And, finally, at the bottom of my three levels of inclusiveness are bilateral agreements. And you can say: “What’s the value of bilateral agreements – just two states – how does that affect international norms and constraints?” Well, let’s think about the case of the U.S. and China for a minute. Remember I said the U.S. and China have very important differences on rules for the internet. I just cited one about freedom of speech or content control, but let me give you another example, which is that for years the United States complained bitterly to the Chinese that their use of cyberespionage to steal intellectual property from American companies and deliver it to Chinese companies was outrageous. And the Chinese reply was: “Look, you’re spying on us all the time by cyber means, so if we do something, first we deny we do it. And then the standard procedure is, having denied we did it, to say that you do it too.”

When Xi Jinping and Obama met at Sunnylands in California in 2013, that issue was at the top of the priority list for the United States at the meeting - and then the Snowden Affair occurred. The Snowden Affair let China off the hook. All the Chinese wanted to talk about were the evils of the United States as revealed by Snowden, so the issue was totally dropped. Then the United States basically said, “we’re serious”: we had the indictment of five PLA officers, and we had an executive order in the spring of 2015 saying that we were going to impose sanctions against companies that stole or took possession of stolen intellectual property. And the first reaction of the Chinese to this was to break the arrangements that had been made to have a relatively low-level cyber consultative group – Chris Painter chaired it on our side for the State Department – and the Chinese said: “We pull out of that. You know, you indicted our officers, even if they’re never going to go to jail here. It’s an affront to us and we’re breaking our cooperative relations in cyber.” And then the United States said: “You know, there’s a summit coming up in September 2015 and, if you want this summit to succeed, we’re going back to 2013 and what we put on the table. We are sick and tired of you using cyberespionage to steal intellectual property to transfer to your companies. It’s corrupting the level playing field, or the degree of fairness, that we expect in the international trading system.”

You know, it’s one thing to have espionage - we all do it all the time, all states - but the idea that you corrupt the trade system by this type of theft is unfair, and it’s different from the traditional form of stealing commercial proprietary secrets because it’s quick, it’s cheap, and it’s more plausibly deniable. So, you say, “what’s new?” What’s new is you can send an electron across the border. You don’t have to train the electron, or worry about your spy being caught with a briefcase full of documents and so forth. So, the American position was: if you want a successful summit, you have to agree to stop this. And lo and behold, when Xi Jinping and Obama met in September of 2015, the Chinese reversed their declaratory policy 180 degrees. And they agreed that they would indeed not use cyberespionage for theft of intellectual property. And as far as I’m able to discern from talking to both government officials and officials of companies like FireEye and so forth, that’s been largely observed by the Chinese.

There are some cases at the margins where people say, “oh, this is outside the bounds” and so forth, but there has been a discernible change in Chinese behavior.

Now, just to conclude this, what’s interesting is that this bilateral agreement didn’t stay in this little box of the nine-fold matrix called ‘bilateral.’ It was essentially taken by China and the U.S. to the Group of 20, and there it was endorsed by a much broader set of states. So it became the core, or the kernel, of a broader framework of agreement about norms and constraints in one dimension of cyber conflict, which is using cyberespionage for theft of intellectual property. Now, that doesn’t mean that the problem is solved or that cyberconflict is solved, but it does illustrate that when we look at the failure of the GGE last summer - which sits in the cell of the matrix for broad, global agreements - you can accomplish things in other areas, whether it be multilateral or plurilateral among like-minded states, or whether it be among states with very different views of the internet, such as the U.S. and China, at the bilateral level. Having reached a common interest, they can then begin to generalize it to broader sets of states. I suspect that’s the way progress is going to be made. It’s not going to be made by some large U.N. agreement. It’s not going to be made by reconvening the UNGGE at, say, forty states or seventy states. I think that would be a recipe for basically burying the efforts to develop meaningful cyber-norms. But finding ways in which states can negotiate their concrete interests, and then broadening that from a bilateral or small group to something which smaller states can sign onto, strikes me as a much more plausible approach that we should be following as we look toward creating normative restraints on states in the future.

Now to conclude, there is an interesting question, which is: “Could you imagine doing this with Russia on interference with each other’s election campaigns?” After the Russian interference of 2016, I should point out that information warfare and interfering in elections is not new. Americans did it in the Cold War, and the Russians have done it with other countries. What’s new, again, is how quick and easy and deniable it is. So, it’s a little bit like the point about intellectual property theft by cyber means: it’s not that it’s totally new, it’s that the technology has made the problem for relations grow exponentially. So, could you imagine an analogy to what I said about how we worked step-by-step in the nuclear area - from first an environmental treaty, the atmospheric test ban, to a nonproliferation treaty aimed at third parties - other states - and then to direct bilateral restraints among countries which were bitter antagonists, the United States and the Soviet Union? We were able to get the SALT agreements. And the question is, in the cyber area, is it plausible to imagine doing with the Russians, on interference with each other’s elections, something similar to what we did with the Chinese on theft of intellectual property? And before we conclude with “yes,” I should point out to you that it’s going to be much harder. The relationship between the United States and Russia is much more fraught than it is with China. After all, there is a massive economic interdependence, or entanglement, with China. And the other problem, of course, is: how would you verify this? The United States had a list of sixteen critical infrastructures, and the GGE rules were supposed to prevent interference with critical infrastructure. After the 2016 election, the United States added a 17th critical infrastructure, which was the electoral system.
Is it plausible that we and the Russians can sit down and get some sort of negotiation in that area? I think the odds are low, but I think it’s still worth a try. So that’s my effort to try to portray for you what I said in China last week, which is: despite the failure of the UNGGE this summer, efforts to create normative restraints on cyberconflict among states are not dead, and there are many other avenues where we should be trying to proceed besides the UNGGE. Thank you.

Talk of Nazli Choucri at Global
Cybersecurity Day

December 12, 2017 at Harvard University

In her presentation on Cybersecurity Day 2017, Professor Nazli Choucri of MIT explored the core components of the Tallinn Manual, breaking down its doctrine and looking at the overall network view of what it says. By using this analytical approach, one can see that a theme such as “sovereignty” is integrated throughout most of the 600-page manual, while others, like the laws of cyber conflict, are less integrated.

From there, she went on to explore U.S. cybersecurity policy. The biggest constraint was the obstacle of “seamlessness.” Data for different aspects of cybersecurity can be found in many locations and, currently, agencies and entities often have their own separate cyber strategies. Instead, what is most needed is a seamless, streamlined set of data and tools for government agencies and corporate entities to better evaluate their own cybersecurity. The other obstacle to streamlined cyber policy, Professor Choucri pointed out, is a lack of coordination within institutions and private enterprises. Overall, she argues, we already have most of the data that we require to create an improved cybersecurity strategy. We just so far lack the seamlessness and capacity to use it properly.

(This speech was given on December 12, 2017 at Loeb House, Harvard University for the 2017 Cybersecurity Day Conference)

There’s been a minor change in the focus of what I was assigned and, given the twenty minutes, I’m going to start with the minor change - short - and then what I was assigned - short. The deviation has to do with the fact that, given that we have a distinguished guest here, I thought it might be useful to share with you some of our efforts to understand the architecture of the Tallinn Manual. Alright, no deviation. Let me tell you verbally what I would have shown you. You know the Tallinn Manual is about 600 pages, give or take, and 154 rules - very, very, very dense.

So, what I wanted to do is share with you a couple of network views of the Tallinn Manual. Imagine that you’ve read it and you know what the subject is about, but you really want to know how the rules are connected to each other, why, and whether it makes any difference. And there may be other things that you may want to know. So I’ve taken the liberty of reminding you of what our visitor said, and that is that law provides something of a road map, and we might as well follow it.

You know that the manual is in four segments and, for each segment, this is the network view of the entire system. So, one page. And I’d like to simply highlight that this is a product of transforming the text into an instruction matrix, just for looking at the rules. So, the focus of this is the rules. You can do this with a chapter or whatever it is. And then the rules are weighted by how important they are, calculating the item values, and the only thing I want to draw your attention to is the bigger ones.

It’s no surprise that number four right there has something to do with sovereignty, and as you go down further, you will see that the sovereignty element goes through, and the connections to the others follow suit. Now, without knowing exactly what the rules are and having them by heart, it’s difficult to make sense of what this is. But imagine that you have this and you have the list of rules next to you, and just check. The alternative, of course, is to go through the 600 pages.

Now, the reason we’re interested in this is that not all rules are created equal - not from the point of view of law, but from the point of view of the way they’re addressed in the Tallinn Manual. And not all of them are connected to the rest of the little system, either. This is the specialized regimes: the specialized regimes, and the pieces of those regimes that exist in place, are taken out by the Tallinn Manual and put in there. But still, look at the dominance of four.

And this is international peace and security. And back there - notice that it’s back there, not integrated in the whole thing - is the law of cyber armed conflict. You could say that “this can’t possibly represent the Tallinn Manual,” and I will tell you that mathematically it does. It may not be the intent of the authors, but for us, it’s very interesting to see the big ones over here, and there’s only 92 over there, which is on armed conflict. So, my suggestion is that next time you look at the Tallinn Manual, take your time and go through the 600 pages, and then think of the visual representation with the list next to you. Finito.
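The kind of network view Choucri describes - rules as nodes, cross-references as weighted links, with a rule’s summed link weight marking how central it is - can be sketched in a few lines. The rule numbers and links below are invented toy data, not the Manual’s actual structure; they simply mirror her observation that the sovereignty rule dominates while the armed-conflict material sits at the periphery.

```python
# Toy sketch of a weighted rule network in the spirit of the analysis
# described in the talk. The edges are INVENTED illustrative data,
# not the Tallinn Manual's real cross-reference structure.
from collections import defaultdict

edges = [
    (4, 1, 3),    # rule 4 <-> rule 1, weight 3
    (4, 2, 2),
    (4, 11, 2),
    (1, 2, 1),
    (11, 92, 1),  # rule 92 (armed conflict) only weakly connected
]

# A rule's "size" in the network view is the sum of its link weights.
size = defaultdict(int)
for a, b, w in edges:
    size[a] += w
    size[b] += w

central = max(size, key=size.get)     # the dominant rule in this toy data
peripheral = min(size, key=size.get)  # the least-integrated rule
```

On this toy data the dominant node is rule 4 and the weakly linked node is rule 92, echoing the talk’s picture of a sovereignty rule woven through the text and an armed-conflict rule sitting apart from it.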

And now I’d like to turn to what was assigned to me. One of the notable presidential directives - notable meaning that there is ‘not very much there to object to’ - is the one that was issued in May of this year: a directive to try to improve the nation’s posture on cybersecurity, specifically focused on the actions that have to be taken. So, what we were concerned about is: alright, very nice - what is it that we think we can contribute, as an example, through a research project designed specifically to contribute to the presidential executive order? And it occurred to us that there are some gaps in how we think about cybersecurity and what we think we ought to be doing. So, what we decided to do is a test - a pretest, a proof of concept - around the hypothesis that there is a lot of data around, and lots and lots of tools to work with. The problem is the data are all over the place, and they’re disconnected, and it takes an awful lot of time and effort to sew them together: the data on the cyber threats, etc., for critical infrastructure. Then, there’s a lot of data and guidelines about what you should and should not do - again disconnected, etc. And one agency that puts it all together in some way in just one place - it doesn’t connect them, but it has them all together, in case you know where to go - is NTSC, as an example.

So we focused on seeing if we can develop a one-stop, full-package set of tools to help industries or agencies or entities steer through - figure out what data they need for identifying their cybersecurity problems, because this provides you reference models. If you’re this, this is your reference model; if you’re born as this, this would be helpful to you. Period. New paragraph. Another pile of data says that, well, if you have this problem, we have guidelines for you - but they’re over there. So solving the simple technical problem of seamlessness might help an analyst pull together what is relevant to them.

The interesting thing about that is what we’re emphasizing here: the number two is not really stated right - it’s not creating large data sets; we’ve got those. It is creating the tools for selectively linking what you need. And I’m not arguing for big data; I’m saying whatever we have in here is plenty to start with. And once you put together your database, then you really want to set up analytics to understand: what’s the problem, what’s the issue down on the floor that reflects the ground truth for my enterprise or your enterprise? All of these things we assume we do, but actually we do it descriptively, we don’t do it systematically. Having gone through the proof of concept, I know for a fact that we don’t have a full end-to-end, so to speak.

The next bit of this, after we establish the ground truth of the system, is: alright, enterprise responses. If this is our problem, if this is our dilemma, what do we do about it? Back again to this set of data on guidelines, directives, etc., as a starting point, to see whether there’s any kind of match there. Any undergraduate can do this quite easily. But the dilemma is, even if the undergraduate puts it together - matching guidelines to the problems that you have in your enterprise - there’s still the minor matter of the CEO up there, plus that top layer, who really isn’t quite understanding what’s going on. No one would believe what an undergraduate says, or even the boss of his boss, and the gap between what is relevant up there and what is relevant to the as-is system and the ground truth is rather big. So, part of the dilemma in the general area of national response to cybersecurity threats is organizational - as you know - and institutional - as you know - in finding ways of reducing the gap between them that direct and them that operate.

So, the end point is, we would recognize that we got somewhere if these and these consolidate and understand - agree, in principle - as to what might be the common response to a set of problems that they all know they have, which has been validated by the fact that this has characterized the reference model and provides a set of tools to deal with them. You will probably wonder what possible contribution one little project can really make, in the scheme of things, to improving our approach to the analysis of infrastructure threats and responses. My answer would be that we’ve always argued “more data, more data, more data,” and the fact of the matter is we don’t know how to use the data that we have well. And we’re certainly not connecting the response side - which is guidelines and directives, ‘thou shalt not do whatever it is’ - with what the nature of the diagnostics has been. So we thought that if we’re able to provide a supply chain of response patterns along the lines I’ve summarized, then the next step would be to see whether it’s scalable. And if it’s scalable, then we’re moving in the right direction. If it’s not scalable, too bad. That’s it. Thank you.

Cyber-Defense Strategies for a Nation

December 12, 2017, Derek Reveron and authors

This report was created by the Boston Global Forum for Cybersecurity Day 2017 and was presented by Prof. Derek Reveron at the AIWS event. It cites several major cyberattacks in 2016-17, including Russian election interference, the Equifax breach, and WannaCry as examples of how serious hacking and other attacks have become. Among the belligerents on the digital front are criminal networks, terrorist groups, and nation-states. Given these developments, cybersecurity, especially protecting cyber infrastructure, has become a national security imperative. Cyber-Defense Strategy for a Nation advises that states work toward more cohesive cyber-defense plans both within their own agencies and with private sector partnerships.

Principles for a Cyber Defense Strategy

Derek S. Reveron, Jacquelyn Schneider, Michael Miner, John Savage, Allan Cytryn, and Tuan Anh Nguyen December 12, 2017

Threat Landscape

The past two years were a watershed for cyber-attacks. From the Russian-led hacking campaigns in the European and American elections, to the spread of ransomware WannaCry and Petya, to the massive data breaches against credit agency Equifax—never before have cyber-attacks had such a significant effect on national security, economies and cultures. Although attacks in developed countries often occupy the headlines, developing countries are also suffering attacks.

In addition to the political and economic implications of cyber-attacks, major infrastructures — electric grids, dams, wastewater, and critical manufacturing — are vulnerable to physical damage from cyber-attack. The U.S. Department of Homeland Security’s Industrial Control Systems Cyber Emergency Response Team says it has never seen so many successful exploitation attempts on the control system layer of industrial systems. Hackers are increasingly infiltrating the networks of major industrial operations all the way down to the sensors and systems that manage our digitized worlds.

And the actors that conduct these attacks are just as prolific and varied as their targets:

• Transnational organized criminal groups harness the power of the internet to steal identities and conduct financial crimes. They have also been tied to nationalist and state hacker groups.
• Terrorist organizations use cyberspace to recruit fighters and promote physical acts of terror. They are increasingly prolific criminals as well, using financial attacks to buttress their funding.
• Nation states employ cyber tools for espionage to lay the groundwork for significant military operations in cyberspace and launch campaigns to steal intellectual property. North Korea— a major conventional adversary—has used cyberspace operations to steal money from banks, to threaten a private film company, and to disrupt South Korean news organizations.

The pace of cyber-attacks and the increasingly large-scale effects created by them suggest that these cyber actors are getting stronger and bolder and that the entry barriers to perpetrate cyber-attacks continue to lower. Even non-technical adversaries can hire “hacking as a service.”

If the trends continue, we can expect significant disruptions to critical infrastructure, severe financial impacts, and potential loss of life. With the proliferation of smart technology and the Internet of Things, the attack surface for cyber operations only expands. At risk from these cyber threats are not just individuals or military units, but the increasingly digitized critical infrastructure that undergirds modern states’ economies and societies. The private insurer Lloyd’s of London recently estimated that a major cyber-attack could cost over $50 billion.

Perhaps because of the diverse nature of threats and the constant barrage of cyber-attacks, governments have struggled to create the strategies and tools needed to successfully deter and defend against cyber operations. Governments have been taking steps either bilaterally, like the 2015 agreement between the PRC and US limiting intellectual property theft, or multilaterally, through instruments such as the 2001 Budapest Convention on Cybercrime and through the UN, G20, and G7, including the UN Group of Governmental Experts established to study information security.

Overall, international law and norms are developing slowly, but have proven to be insufficient to safeguard these necessities that citizens universally require. Developed countries are better positioned than developing countries. Cyber-insecurity cuts across many dimensions and simultaneously crosses from technology into political, economic, and social realms. More than ever, citizens, regardless of nationality, are exposed to risks created by cyber insecurity. Both public opinion polling and global intelligence agencies’ assessments place cyber security as a leading national security challenge and a pressing concern for citizens and policymakers alike.

Role of National Security Strategy

Cyber infrastructure is at the heart of essentially every aspect of modern life, including telecommunications, financial systems, energy, transportation, defense, and other critical sectors. The more open a society, the wider the attack surface with vulnerabilities that require defense. Information sharing centers and organizations have proven an effective means to bring stakeholders together, but national security challenges remain in cyberspace.

A December 2015 attack against the Ukrainian power grid made clear that weaponized code is no longer just a nightmare scenario. The attack left 225,000 people without electricity for a period of time. The Ukraine experience demonstrates that cyber-attack is a practical instrument that can wreak havoc on civilian populations. Governments should focus not only on preventing such attacks, but also on preparing contingency plans and developing resilient societies. Though such attacks currently have a low probability of occurrence in peace-time, the future is uncertain. High-impact events like these will likely emerge during war-time, and perhaps even before. Frighteningly, such an attack might be the surprise attack that launches a war.

As countries grapple with the challenges of cyber defense, they are guided by several interests. First, governments must work to prevent, deter and reduce the threat of a cyber-attack on critical infrastructure since the impact on its society and civilians would be significant. Second, given the wired nature of the global economic system, governments must ensure stability and resilience of major systems that know no borders and extend around the globe. Finally, governments must protect their citizens from external aggression, which can be preceded by a cyber-attack against civilian infrastructure from state or non-state actors.

How can governments build national cyber strategies that accomplish these objectives? Structural goals can support a viable cyber defense strategy. First and foremost, governments should streamline cyber operations by reducing bureaucratic complexities and duplicative responsibilities to maximize time and efficiency. There are many good examples in developed countries that can be emulated.

Among these are: overcoming the inherent insecurity in legacy systems, updating archaic and territorial bureaucratic mechanisms in order to improve information sharing, and aligning cyber capabilities with emerging threats.

Additionally, it is essential to induce public support for a cohesive national approach to cybersecurity. Most network vulnerabilities are exploited as a result of human error or negligence. Although cutting-edge hardware, smart programming (and grids), and artificial intelligence could mitigate vulnerability gaps in the future, they will never close them all. Strengthening digital literacy and education across national populations as a foundational element of national cyber defense can enhance whole-of-government efforts in combatting cyber-attacks from the individual to the systemic level.

Lastly, collaboration with the private sector is not only a preferable course of action, but a necessary one. Developing policies to harness the talent and cooperation of the private sector will be a decisive factor for a cyber defense aligned with the interests and values of the society they are entrusted to defend.

Principles for a Cyber Defense Strategy

By creating standards and promoting information sharing, governments are assisting industry to improve cybersecurity. Given the scope of critical infrastructure, there is no way that any government can create the capabilities or institutions to defend against all attacks. Additionally, each country has its own laws, cultures, and expectations of government that guide strategic development. Nevertheless, there are basic principles that all governments can follow:

Characterize thresholds for action. What do we care about? If states understand when actors view cyber-attacks as national security incidents, they can create more tailored deterrence strategies. At the same time, understanding adversaries’ thresholds for action allows states to combat threats with counter-cyber operations that stay under the threshold for escalation.

Resolve hack-back authority. Governments attempt to control subversive cyber behavior within their borders with prohibitions against hacking. But there are strong incentives (both technical and economic) for companies to pursue some level of hack-back against cyber-attacks. To avoid escalation and misinterpretation, governments must retain the monopoly on the legitimate use of force – both kinetic and cyber – preventing companies from conducting unilateral actions. But to ensure that all of a country’s resources are engaged to maximum effect without risk of unintended consequences, the respective roles of government and industry need to be clear and aligned. Israel, for example, has clearly delineated that cyber defense resides in the civil sector and is led at a senior governmental level, while cyber offense is left to the defense organizations.

Connect national and local governance. Local responders are generally the first (and quite often the only) government aid to remediate the effects of critical infrastructure attacks. This requires strong connections between national entities and local governments. Governments should work together to identify and remove barriers for information sharing in order to ensure that national and local responders have full access to the problems that they are defending against and responding to.

Collaborate across borders. International collaboration has proven effective in many realms, including regional and national security. Governments should lead efforts, built on these many successful precedents, that enable nations to work collectively to enhance their cybersecurity.

Facilitate cooperation across critical sectors. The US government has successfully sponsored information sharing centers and organizations within sectors, such as finance and electricity distribution. These models should be extended to address the complex interdependencies among sectors. Government and industry should continue to work together to determine how best to promote and enable working groups of executives across industry sectors, advisory boards, and routine gatherings between government officials and the private sector to address cyber vulnerabilities and dependencies.

Engage the civilian information technology sector. Given that cyberspace is a civilian space, it is important to engage vendors of cyberspace technology in the discussion of norms for responsible state behavior. Corporations such as Microsoft are promoting norms. States should take these nascent efforts seriously. More broadly, government is encouraged to bring technology experts to the table when formulating cyber defense. Government should also reflect on how to better fund cyber defense research and incubate software technologies that enable defense.

Empower digital literacy and education. The most frequent cyber threats occur at the day-to-day individual level. Larger systemic threats are less frequent, though they hold the potential for greater impact. Governments can harness education to mitigate individual threats and simultaneously harden attack vectors toward greater systemic threats.

Practice comprehensive resilience. A long-term cyber defense strategy requires ongoing short-term resilience planning. Efficient standard operating procedures, redundant systems, and competence building exercises can inject trust in the safety of our systems and enhance the public-private sector partnership. A resilient society can also better deter or respond to cyber-attacks. Governments can lead by encouraging resilience training in the information technology and related sectors.

Build partnerships among developed and developing countries. Developed countries are pursuing important standards and generating norms to improve information security. Developing countries, however, often lack the resources to do so. Nations should extend traditional alliances and partnerships designed to promote international peace and security to ones that promote information security.

Next Steps

The most concrete step a state can take for cyber defense is to articulate a comprehensive cyber defense strategy. This must recognize the unique nature of the cyber realm, where there are no natural barriers – borders, distance, or geography – to attack. Plagued by the constant pace of assaults, states have spent too much time responding to near-term threats without crafting long-term strategies to change the threat landscape.

Cyber defense strategies that identify vital country assets and policy shortfalls and that prioritize resources are vital to successful defense. These strategies must extend into the promotion of the cyber-IQ of the nation’s population, from personal awareness of safety through social and corporate responsibility. In turn, a well-crafted cyber defense strategy will lead to the development of appropriate institutions, authorities, and capabilities for the entire nation. This also requires sharing best practices for designing and maintaining computer systems. In turn, governments must invest the time and resources to develop effective regulation for critical data and sectors.

Internationally, cyber defense must be tackled beyond the state level. There are many important efforts by OECD, OSCE, ENISA, NATO, and SCO. Additionally, NGOs such as the Boston Global Forum, East-West Institute, the Bildt Commission, and the Global Commission on Cyberspace Stability should continue to promote the development, identification, sharing and adoption of best practices in the cybersecurity arena with particular focus on developing countries. Developing countries should make investments to secure their infrastructure; this is essential to security and preventing a widening gap in the capabilities of nations. These investments are essential to reducing costs resulting from cybercrime and espionage and to increasing the confidence and trust of businesses to operate in developing countries.

Acknowledgments
The authors would like to thank Michael Dukakis, Thomas Patterson, and all other members of the Boston Global Forum for their support and coordination of this endeavor. These are the views of the authors and do not reflect official policy.

BGF-G7 Summit Initiative 2017:
Taormina Plan

At the BGF-G7 Summit Conference on April 25, 2017, the Boston Global Forum announced the Taormina Plan, which was then presented to the G7 Summit in Taormina, Italy, in May. The Plan was drafted in response to the proliferation of cyber weapons and military cyber commands, as well as the spread of fake news online.

The Plan proposed increased interstate cooperation on cybersecurity to prevent cyber conflict and progress towards a formal international treaty on the issue. The Taormina Plan also escalated pressure on social media and other outlets to prevent the spread of fake news and disinformation campaigns.

Cyber Conflict and Fake News
Proposals for Consideration at G-7 Summit, Taormina, Italy, May 26-27, 2017

The Boston Global Forum herein submits policy proposals in two areas—cyber conflict and disinformation (fake news)—for consideration at the 2017 G-7 Summit in Taormina, Italy.

1. Taormina Plan to Prevent Cyber Conflict

At the 2016 G-7 Summit at Ise-Shima, Japan, countries affirmed their commitment to support an open, secure, and reliable cyberspace through the application of international law to state behavior in cyberspace, the acceptance of voluntary norms of responsible state behavior in peacetime, and close cooperation among states against malicious cyber activity. Absent from the formal communiqué were statements on cyber conflict, which is a large and growing threat.

Dozens of countries are building military cyber commands. Given the threat of cyber conflict to civilian populations, it is essential to develop ways to prevent the proliferation of cyber weaponry. Thus far, states have shown remarkable restraint in using overt cyber weapons, the exceptions being Stuxnet, used against Iran’s nuclear program; Shamoon, used against Saudi Arabia’s energy infrastructure; and the Sony attack against free speech. The international community should build upon this restraint and push toward norms that would make the use of cyber weaponry unacceptable.

Cyber weapons are new, not well understood, and, if not properly controlled, likely to lead to escalation, with serious unexpected consequences. Software can be used for espionage or to activate or disable a weapon. If a military cannot assess the intent of foreign malware found in its critical computer systems, it might assume the worst. For example, if such software were found in a nuclear missile facility, a commander, fearing that an enemy wants to disable its launch capability, might decide to “use it or lose it.”

Targets for cyber attacks could be a) a nation’s military command and control system, including military satellites and logistical systems; b) its economy, including its critical infrastructure such as power, water, banking or telecommunications; or c) its system of governance, including its major agencies and electoral systems.

Because national economies are tightly integrated today, cyber conflict, whether it escalates to kinetic warfare or not, could cause serious economic or political damage to a state and put civilians at risk. An attack against a military system is likely to spread to civilian systems, thereby violating collateral damage norms. The law of armed conflict does apply in cyberspace, but the boundary between war and peace has blurred.

International cooperation is essential to reduce the risk of cyber war by improving transparency across the major powers and enlisting their cooperation against non-state actors and non-conforming states. Risk reduction should begin with identification of critical assets and the risks to which they are exposed. States should then create a system to reduce risks. This will include cooperation with other states and acquiring the necessary expertise to reduce software vulnerabilities. This effort will take the form of information sharing, bilateral and multilateral agreements, articulation of norms of state behavior, and the creation of risk reduction centers that are equipped with “hot lines” to those of other states. At some point a formal international treaty could be advisable.

Implementing norms against unacceptable behaviors fosters collective action and strengthens restraint. There may be a time when the international community establishes an international center to monitor and combat cyber threats, conducts attribution analysis, coordinates actions to protect computer systems, and disrupts non-state actors. In the process, states may have to surrender some of their sovereignty.

Cyber risk reduction begins with adherence to the GGE Norms (UN A/70/174), the G7 Ise-Shima norms, and the G20 Norms. However, it goes beyond these and should include the following measures:

• Sharing of cybersecurity knowledge in depth, including established guidelines.
• Public identification of critical national asset classes.
• Banning the implantation of software in these classes during peacetime.
• Sharing of information designed to improve attribution.
• Creation and proper manning of national risk reduction centers.
• Establishment of regular security drills between national centers.

The Boston Global Forum calls upon the G7 countries to lead an effort to create a new international institution, to be called the Taormina Commission, whose purpose is 1) to collect and share among nations deep computer-system security knowledge to greatly improve the security of cyberspace, 2) to identify those categories of critical national assets that should not be targeted in peacetime, 3) to promote adoption of a norm banning the implantation of any software by one nation in a publicly identified critical national asset class of another nation, 4) to facilitate sharing of effective technical means of attribution in cyberspace, 5) to encourage the creation of national risk reduction centers, and 6) to facilitate international exercises between national risk reduction centers for the purpose of minimizing the risk of cyber conflict.

2. Taormina Plan to Combat Fake News

Disinformation, increasingly in the form of fake news, is a growing problem. Fake news consists of pseudo news stories fabricated to be believable. Producers of fake news are typically driven by one of two motives. One is the profit motive. Some social media sites, for example, are created to look like authoritative news sites but in fact publish false, sensational stories designed to attract visitors and generate advertising revenue through clickbait. The other motive is influence over public opinion. State and state-sponsored actors, as well as motivated individuals and organizations, are the source of most fake news of this type.

Although fake news is not new, the scale of the problem has increased because of technological change. Cyber capabilities and social media have made it possible to distribute disinformation at speeds and volumes not logistically possible in earlier times. And nearly anyone with access to social media can participate. Political instability and opportunity create incentives to engage in the practice, and technological innovations have made it easier to create a multiplier effect.

That fake news is ubiquitous is not in doubt. A study found, for example, that by the end of the 2016 U.S. presidential election campaign the number of Facebook shares, reactions, and comments in response to fake news stories exceeded the number in response to actual news stories. The study did not distinguish the initial sources of the fake news stories, but evidence indicates that Russia was one such source.

Nor is there any doubt that fake news is a threat to orderly society. It can disrupt elections; contribute to public misinformation; sully reputations—not only of individuals, but also of organizations, institutions, and states; and exacerbate ideological and group conflict. For example, Russia Today (RT) engaged in a highly sophisticated cyber disinformation campaign to inflame tensions in Russian-speaking minority regions of eastern Ukraine.

States should commit themselves to combating fake news. The European Union’s Eastern Strategic Communications division was created in 2015 to counter Russian disinformation and believes it has evidence of a widespread campaign targeting the European Union. The United Kingdom’s Culture, Media and Sport Committee opened an inquiry into fake news in early 2017. Other states should make a similar commitment.

Political leaders bear special responsibility. Partisan advantage can accrue to political leaders when opponents are the target of fake news. Research indicates, however, that one of the most effective counters to fake news is unified condemnation of such messages by politicians of opposing parties.

States must cooperate in identifying sources of fake news. Although there are literally thousands of such sources, research indicates that a relatively small number of websites generate most fake news, relying on bots and other tools to spread disinformation. A U.K.-based research team, for example, found that many fake Twitter bots are interconnected—the largest cluster included over 500,000 bot accounts.

Pressure should be exerted on social media platforms to detect, identify, and block such sites. Facebook, Twitter, and other platforms have recently taken an interest in combating fake news. Facebook, for example, has partnered with fact-checking organizations to place warnings on fake news items. Questions remain, however, about the potential for scaling up such warnings and how far platforms are willing to go even if other obstacles to scalability are overcome.

News organizations, too, must be encouraged to combat fake news. In many countries, news organizations are facing financial pressures resulting from audience decline, which has weakened their capacity for fact checking, intervening directly to refute false claims, and conducting the well-sourced reporting that can outperform disinformation on social media. Bolstering reliable news organizations is vital to states’ national interest.

Technology must also be mustered in the effort to combat fake news. Technology is a breeder of fake news but can also be employed to mitigate it. There is an urgent need to accelerate the development of software that can assist in the detection and disruption of fake news stories.

The effectiveness of fake news rests to a substantial degree on human tendencies, including our tendency to believe information that aligns with our partisan inclinations. The tendency is strong enough that research has found that efforts to combat fake news with counter messages can sometimes backfire, resulting in reinforcement rather than rejection of a false belief or perception. Nevertheless, counter messaging can work if conducted properly. Moreover, people can be instructed in safe practices on the Internet, such as not sharing messages from unfamiliar sources. Media literacy should be a staple of a twenty-first-century education. The Global Citizenship Education Network at UCLA can be a valuable resource, as can the Ethics Code of Conduct for Cyber Peace and Security (ECCC) of the Boston Global Forum.

BGF-G7 Summit Initiative 2016:
Ise-Shima Norms

December 12, 2016

In 2016, aiming to provide input to the agenda for the G7 Summit in Ise-Shima, Japan, the BGF proposed the Ise-Shima Norms. The Norms emphasize the importance of cybersecurity to the 2016 G7’s core themes of Global Economy & Trade, Quality Infrastructure Investment, and Development. The document also established the Ise-Shima Challenge, which encourages G7 states to lead the way in improving online security and confidence in computer networks.

Among the specific actions recommended in the document are the adoption of norms set forth by the G20, the UN Group of Governmental Experts, and the BGF’s Ethics Code of Conduct for Cybersecurity, the establishment of national and international cybersecurity centers, and the open sharing of best practices by cybersecurity experts.

(The Ise-Shima Norms were introduced at the 2016 BGF-G7 Summit Conference on May 9, 2016 at the Harvard University Faculty Club)

Governor Michael Dukakis

Professor Thomas Patterson

Nguyen Anh Tuan

Professor John Savage

Professor Derek Reveron

Allan Cytryn

Ryan Maness

Securing Cyberspace and the G7 Agenda*

The Boston Global Forum welcomes this opportunity to provide input to the agenda for the G7 Ise-Shima Summit. Global Economy and Trade, Development, and Quality Infrastructure Investment are three themes of this summit. Given the importance of the Internet in all three areas, we encourage you to address the following actions concerning cybersecurity at the summit. These actions have as their goal to raise the general level of security in cyberspace.

1. Encourage the global adoption of the 2015 G20 cybersecurity norms, which include the 2015 GGE norms by reference, as the Ise-Shima Norms.

2. Endorse private and public efforts to improve ethical Internet behavior. The UCLA Global Citizenship Education Program and the Boston Global Forum’s Ethical Code of Conduct for Cyber Peace and Security are two such examples.

3. Engage vendors of cyberspace technology in the discussion of norms for responsible state behavior.

4. Establish domestic and international centers and mechanisms designed to reduce the risk of cyber conflict.

5. Encourage national cybersecurity experts to voluntarily publicize their best security practices.

6. Recognize that formulation of policy concerning cyberspace technologies requires the participation, on an equal footing, of respected academics and industry experts on the technologies in question.

*The lead author on this document was John Savage (Brown University) with contributions from Michael Dukakis (Boston Global Forum), Nguyen Anh Tuan (Boston Global Forum), Allan Cytryn (Risk Masters International.), Ryan Maness (Northeastern University), Derek Reveron (Naval War College), and Thomas Patterson (Harvard University).

These proposals stem from several developments.

First, over the last five years, small groups of governments have formulated international norms of state behavior, particularly for peacetime use. Negotiations have been held at the UN and many other forums. Now that a set of reasonable

norms have been established, it is appropriate to reach out to nations that have not participated in these discussions and encourage them to endorse the norms as well. In many cases, this will require some capacity development, which is encouraged by UN Resolution 70/237. The G7 nations can help increase confidence in computers and network technology by leading this effort, which could be called the Ise-Shima Challenge.

Second, global citizenship education has an important role to play in building sustainable peace and security in cyberspace. We encourage a significant effort in this regard.

Third, we observe that the success of many computer vendors requires that their customers have confidence in their products, which is undermined by unreported cyber vulnerabilities and by state-launched weapons that result in mass events. Thus, some vendors, notably Microsoft, have begun to formulate and promulgate norms of state behavior that are important from their point of view. States should take these nascent efforts seriously and engage these firms in norms formulation.

Fourth, given the large number of states that are developing cyber weapons, the risk of accidental or intentional cyber conflict is rising. All states should recognize this risk and work to mitigate it. Centers designed to reduce the risk of cyber conflict are needed in every country with offensive cyber capability. Operators in these centers must come to know each other so that they can properly assess national intentions during a cyber crisis. This issue has been highlighted in the latest 2015 GGE report.

The fifth recommendation on best practices is illustrated by a public talk given in January 2016 by Rob Joyce, head of NSA’s Tailored Access Operations Department. He offered advice on cybersecurity measures to protect a computing facility from the type of penetration in which his department engages. This event was a remarkable example of the security services of a major nation, the US, offering constructive advice to others. Each G7 nation could assume the same responsibility for improving the security of cyberspace by offering such examples of best practices.

Finally, policy formulation concerning cyberspace can be very challenging. Unless technology experts are at the table with policymakers when such policy is formulated, errors are easily made that may lead to poorly formulated international norms or domestic legislation. Thus, it is essential that academic and technology experts be engaged and treated as co-equals with policymakers during this process.

The appendices that follow provide specific recommendations that have been developed by a variety of parties and are aligned with the above objectives.

Appendix A: The Ise-Shima Norms

The G7 nations should promote the development of social, legal and technological norms and agreements that will protect the information and communications infrastructures of the world’s nations and their people. In doing so, these norms will promote the abilities of these technologies to fulfill their promise to enhance the lives of all. These actions follow successful precedents in many areas where international, national and private efforts have worked together to enable the world to realize the benefits of new technologies in order to maximize their benefit to all and to mitigate differences between nations and peoples.

I. The G7 nations should encourage adoption of norms set forth by the G20, the United Nations’ Group of Government Experts (GGE), and the Boston Global Forum’s Ethics Code of Conduct for Cybersecurity (ECCC).

1. Key G20 norms

·Nation-state conduct in cyber space should conform to international law and the UN charter.

·No country should conduct or support cyber-enabled intellectual property theft for commercial purposes.

2. Key GGE norms

·No country should intentionally damage the critical infrastructure of another state or impair infrastructure that serves the public and would undermine the human rights guaranteed by the U.N. Declaration.

·No country should act to impede the response of Computer Security Incident Response Teams (CSIRTs) to cyber incidents, nor should CSIRTs be used to create cyber incidents.

·Countries should cooperate with requests from other nations to investigate cybercrimes and mitigate malicious activity emanating from their territory.

3. Key ECCC norms

·Countries should not establish or support policies or actions harmful to cyberspace.

·Countries should not engage in the unlawful taking of the assets or confidential information of private individuals or organizations.

·Nations should not use cyberspace to wrongly damage the reputation of other nations, organizations, or individuals.

II. The G7 nations should engage hardware and software vendors in developing cyber norms, following the six guidelines in the Microsoft report, “International Cyber Security Norms: Reducing Conflict in an Internet-Dependent World.”

1. Countries should not target information and communications technology (ICT) companies to insert vulnerabilities (backdoors) or take action that would undermine public trust in products and services.

2. Countries should have a clear principle-based policy for handling product and service vulnerabilities that reflects a strong mandate to report them to vendors rather than stockpiling, buying, or selling them.

3. Countries should exercise restraint in developing cyber weapons and should ensure that any which are developed are limited, precise, and not reusable.

4. Countries should commit to nonproliferation activities related to cyber weapons.

5. Countries should limit their engagement in cyber offensive operations to avoid creating a mass event.

6. Countries should assist private sector efforts to detect, contain, respond to, and recover from events in cyberspace.

III. The G7 nations should develop cyber risk reduction measures.

1. Create domestic threat reduction centers equipped with secure communications with other such national centers to mitigate risks before, during, and after cyber incidents.

2. Assess and improve the cyber security of national critical infrastructures.

3. Take steps to reduce the number of domestic compromised computers, particularly those that have been marshalled into botnets.

4. Improve domestic cybersecurity through advisory and legislative measures.

IV. The G7 nations should promote the development, identification, sharing and adoption of “best practices” in the cybersecurity area.

V. The G7 nations should support cyber security capacity building in developing countries.

1. Investments should be made in developing countries to secure their infrastructures as this is essential to securing the connected global infrastructure and preventing a widening gap in the capabilities of nations. In the interconnected world, these investments are essential to reducing costs resulting from cyber-crime and espionage and to increasing the confidence and trust of businesses to operate in developing countries.

2. Investments should be made and cooperation undertaken between developed and developing countries to re-envision methods of education and learning, utilizing the global information and telecommunication infrastructure to enhance the accessibility of suitable educational opportunities for people everywhere.

Appendix B
2015 GGE Norms
(Excerpt from UN A/70/174*)

The 2015 UN GGE committee consisted of experts from 20 countries: Belarus, Brazil, China, Colombia, Egypt, Estonia, France, Germany, Ghana, Israel, Japan, Kenya, Malaysia, Mexico, Pakistan, the Republic of Korea, the Russian Federation, Spain, the United Kingdom of Great Britain and Northern Ireland, and the United States of America. The two G7 countries not represented were Canada and Italy.

“13. … (T)he present Group offers the following recommendations for consideration by States for voluntary, non-binding norms, rules or principles of responsible behaviour of States aimed at promoting an open, secure, stable, accessible and peaceful ICT environment:

a) Consistent with the purposes of the United Nations, including to maintain international peace and security, States should cooperate in developing and applying measures to increase stability and security in the use of ICTs and to prevent ICT practices that are acknowledged to be harmful or that may pose threats to international peace and security;
b) In case of ICT incidents, States should consider all relevant information, including the larger context of the event, the challenges of attribution in the ICT environment and the nature and extent of the consequences;
c) States should not knowingly allow their territory to be used for internationally wrongful acts using ICTs;
d) States should consider how best to cooperate to exchange information, assist each other, prosecute terrorist and criminal use of ICTs and implement other cooperative measures to address such threats. States may need to consider whether new measures need to be developed in this respect;
e) States, in ensuring the secure use of ICTs, should respect Human Rights Council resolutions 20/8 and 26/13 on the promotion, protection and enjoyment of human rights on the Internet, as well as General Assembly resolutions 68/167 and 69/166 on the right to privacy in the digital age, to guarantee full respect for human rights, including the right to freedom of expression;
f) A State should not conduct or knowingly support ICT activity contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical infrastructure to provide services to the public;
g) States should take appropriate measures to protect their critical infrastructure from ICT threats, taking into account General Assembly resolution 58/199 on the creation of a global culture of cybersecurity and the protection of critical information infrastructures, and other relevant resolutions;
h) States should respond to appropriate requests for assistance by another State whose critical infrastructure is subject to malicious ICT acts. States should also respond to appropriate requests to mitigate malicious ICT activity aimed at the critical infrastructure of another State emanating from their territory, taking into account due regard for sovereignty;
i) States should take reasonable steps to ensure the integrity of the supply chain so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions;
j) States should encourage responsible reporting of ICT vulnerabilities and share associated information on available remedies to such vulnerabilities to limit and possibly eliminate potential threats to ICTs and ICT-dependent infrastructure;
k) States should not conduct or knowingly support activity to harm the information systems of the authorized emergency response teams (sometimes known as computer emergency response teams or cybersecurity incident response teams) of another State. A State should not use authorized emergency response teams to engage in malicious international activity.

14. The Group observed that, while such measures may be essential to promote an open, secure, stable, accessible and peaceful ICT environment, their implementation may not immediately be possible, in particular for developing countries, until they acquire adequate capacity.”

* Retrieved from http://www.un.org/ga/search/view_doc.asp?symbol=A/70/174 on May 7, 2016. The lead author on this document was John Savage (Brown University), with contributions from Michael Dukakis (Boston Global Forum), Nguyen Anh Tuan (Boston Global Forum), Allan Cytryn (Risk Masters International), Ryan Maness (Northeastern University), Derek Reveron (Naval War College), and Thomas Patterson (Harvard University).

In addition, the 2015 GGE encouraged states to implement confidence-building measures to include a) identification of domestic technical and policy points of contact “to address serious ICT incidents,” b) risk reduction measures, c) sharing of general threat information, known technological vulnerabilities, and best security practices, and d) identification of critical domestic infrastructures and the legal, technical and assessment steps that nations have taken to protect them. This GGE also encouraged states to exchange law enforcement and cybersecurity personnel as well as to facilitate exchanges between academic and research institutions. The creation of national computer emergency response teams is also encouraged along with exchanges of personnel between such groups.

Appendix C
G20 Cybersecurity Norms
Excerpt from the
G20 Leaders’ Communiqué
Antalya Summit, 15-16 November 2015*

“A26. We are living in an age of Internet economy that brings both opportunities and challenges to global growth. We acknowledge that threats to the security of and in the use of ICTs risk undermining our collective ability to use the Internet to bolster economic growth and development around the world.

1. We commit ourselves to bridge the digital divide. In the ICT environment, just as elsewhere, states have a special responsibility to promote security, stability, and economic ties with other nations.
2. In support of that objective, we affirm that no country should conduct or support ICT-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sectors.
3. All states, in ensuring the secure use of ICTs, should respect and protect the principles of freedom from unlawful and arbitrary interference of privacy, including in the context of digital communications. …
4. (W)e welcome the 2015 report of the UN Group of Governmental Experts in the Field of Information and Telecommunications in the Context of International Security, affirm that international law, and in particular the UN Charter, is applicable to state conduct in the use of ICTs. …
5. (We) commit ourselves to the view that all states should abide by norms of responsible state behaviour in the use of ICTs in accordance with UN resolution A/C.1/70/L.45. †
6. We are committed to help ensure an environment in which all actors are able to enjoy the benefits of secure use of ICTs.”

G20 Members: Argentina, Australia, Brazil, Canada, China, France, Germany, India, Indonesia, Italy, Japan, Korea, Mexico, Russia, Saudi Arabia, South Africa, Turkey, the United Kingdom, the United States, and the European Union. All G7 member states are members of the G20.

* Retrieved from http://www.gpfi.org/sites/default/files/documents/G20-Antalya-Leaders-SummitCommuniqu--.pdf on May 7, 2016. † UN resolution A/C.1/70/L.45 incorporates the GGE Norms by reference.

REFERENCES

Bloom, Les and John E. Savage. “On Cyber Peace.” The Atlantic Council, August 2011, Accessed 3/4/2016 at http://www.atlanticcouncil.org/images/files/publication_pdfs/403/080811_ACUS_OnCyberPeace.PDF

Boston Global Forum. “Ethics Code of Conduct for Cyber Peace and Security,” December 12, 2015. Accessed 3/14/2016 at http://bostonglobalforum.org/2015/11/the-ethics-code-of-conduct-for-cyber-peace-and-security-ecccversion-1-0/

Nicholas, Paul. “Six Proposed Norms to Reduce Conflict in Cyberspace.” 1/20/2015. Accessed 3/4/2016 at http://blogs.microsoft.com/cybertrust/2015/01/20/six-proposed-norms/

Painter, Christopher. “G20: Growing International Consensus on Stability in Cyberspace.” State.gov, 12/3/2015. Accessed 3/5/2016 at https://blogs.state.gov/stories/2015/12/03/g20-growing-international-consensus-stability-cyberspace

Valeriano, Brandon and Ryan C. Maness. “The Coming Cyberpeace: The Normative Argument against Cyberwarfare.” Foreign Affairs, 5/13/2015. Accessed 3/3/2016 at https://www.foreignaffairs.com/articles/2015-05-13/coming-cyberpeace

The 2015 GGE norms are stated in paragraph 13 of “Developments in the field of information and telecommunications in the context of international security,” UN Report A/70/174, July 22, 2015. Accessed 5/7/2016 at http://www.un.org/ga/search/view_doc.asp?symbol=A/70/174. The full set of GGE reports can be found at https://www.un.org/disarmament/topics/informationsecurity/

The 2015 G20 norms are stated in paragraph 26 of “G20 Leaders' Communiqué, Antalya Summit 2015”, November 15-16, 2015. Accessed 5/7/2016 at http://www.gpfi.org/publications/g20-leaders-communiquantalya-summit-2015.

“The Ethics Code of Conduct for Cyber Peace and Security (ECCC),” Boston Global Forum, 9/3/2015. Accessed 5/7/2016 at http://bostonglobalforum.org/2015/11/the-ethics-code-of-conduct-for-cyber-peace-and-security-ecccversion-1-0/

Preventing Cyber Conflict: A 21st Century Challenge

Global Cybersecurity Day December 12, 2016

Published for Cybersecurity Day 2016 on December 12, 2016, this Boston Global Forum report emphasized the danger of cyber weapons. With a few exceptions, states have to date shown remarkable restraint with cyber weapons given their potential. Even so, the threat they pose as the technology advances and proliferates is severe. In addition to highlighting the unique danger this type of technology presents, the report re-affirmed the need for global norms to prevent cyber conflict and urged states to adopt the Ise-Shima Norms.

Allan Cytryn, Nazli Choucri, Michael Dukakis, Ryan C. Maness, Tuan Nguyen, Derek S. Reveron, John E. Savage, and David Silbersweig

At the 2016 G-7 Summit at Ise-Shima, Japan, countries affirmed their commitment to support an open, secure, and reliable cyberspace through the application of international law to state behavior in cyberspace, voluntary norms of responsible state behavior in peacetime, and close cooperation against malicious cyber activity. Absent from the formal communiqué were statements on cyber conflict. While cyber-enabled criminal activity and espionage preoccupy cyber discussions today, dozens of countries are building military cyber commands. Given the potential devastation a cyber conflict with advanced cyber weaponry would bring to civilian populations, it is essential to develop ways to prevent the proliferation of cyber weaponry. Thus far, states have shown remarkable restraint in using overt cyber weapons, the exceptions being acts such as Stuxnet and Shamoon. It is important that the international community build upon this restraint and push toward norms that would make their use taboo.

Cyber weapons are new, not well understood, and, if not properly controlled, likely to lead to escalation, a process that can have serious unexpected consequences, including conventional war. Their development costs are minuscule relative to those of conventional military power, which has expanded the range of threats. Software designed for espionage is easily confused with a cyber weapon designed for sabotage, and that confusion can cause miscalculation. Thus, implantation of foreign software in an adversary’s military or critical infrastructure systems poses a serious threat of both harm and escalation. In a worst-case scenario, if the computer system in question is a state’s nuclear weapons command and control center, nuclear conflict may result, especially between states locked in unresolved conflict, such as India and Pakistan.

Under the UN Charter, an armed attack is a use of force against which states have the right of self-defense. We define a cyber-attack to be an action launched via computer and/or networking technology that either produces physical damage equivalent to the use of force or corrupts critical information sufficiently to cause damage to the national welfare akin to that produced by the use of force. We define cyber conflict to be a conflict that largely consists of cyber-attacks. Given the novelty of cyber conflict and the opportunities for miscalculation, it has the potential to lead to conventional conflict using both kinetic and cyberspace technologies. If countries think they may lose a capability to a cyber-attack, they could prematurely escalate a conflict through pre-emptive military strikes.

Targets of cyber-attacks could be

a) a nation’s military command and control system, which includes military satellites, its logistical systems, and one of its major wartime commands;
b) its economy, which includes its critical infrastructure such as power, water and banking; or
c) operation of its system of governance, including its major agencies and its national electoral system.

Whether the damage done by a cyber action rises to the level of force will need to be determined. However, loss of GPS during a period of heightened tensions could be considered a use of force, as could the disabling of a significant fraction of the electricity grid of a state under similar circumstances. Altering the outcome of the election of a national executive, an act tantamount to the forceful replacement of that executive, may also rise to a use of force.

Because national economies are much more tightly integrated today than at any previous time in human history, cyber conflict, whether it escalates to kinetic warfare or not, is likely to cause serious economic or political damage to many states. Given how widespread a cyber-attack can be, impacting telecommunications, banking, and power generation, civilians are at grave risk. Citizens, regardless of nationality, are exposed to risks created by cyber insecurity. International cooperation is essential, and countries must prioritize ways to reduce the risk of cyberwar.

Yet the use of cyber weapons that do physical harm remains rare, and we must promote their non-use further, while at the same time recognizing the proliferation of certain types of acts that continue to have real impact: espionage and disruptive cyber events. Chinese espionage targeting US intellectual property has had monetary impacts in the billions of dollars. Russian disruptive campaigns against electoral processes in the West have sown distrust of institutions among these populations. Preventing these types of attacks should be at the forefront, as their continued use could lead to retaliation with cyber and conventional weapons, and possibly major-power war. The battle over information is being fought now, and measures must be taken to stem its tide.

Progress has been made in this battle. As a result of a bilateral agreement between the United States and China struck in September 2015, the incidence of “theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or commercial sectors” has greatly subsided. (See http://www.nbcnews.com/news/us-news/russia-may-be-hacking-us-more-china-hacking-us-much-n664836.)

Risk reduction must begin with identification of critical assets and the risks to which they are exposed. States must then create a system to reduce risks. This will include acquiring the necessary expertise, whether available domestically or not, to reduce software vulnerabilities and cooperate with other nations to improve transparency. This cooperation can take the form of information sharing, bilateral and multilateral agreements, articulation of norms of state behavior, and the creation of risk reduction centers designed to control escalation and equipped with “hot lines” to other national risk reduction centers.

Implementing norms against unacceptable behaviors strengthens restraint and fosters a more mindful attitude toward the use of cyber systems. Fostering collective action, which is necessary to protect the cyber capabilities needed by individuals, groups, and societies, also enhances restraint. There may come a time when the international community establishes an international center to monitor and combat cyber threats, and to coordinate actions to protect computer systems and disrupt non-state actors operating in cyberspace. States may have to surrender some sovereignty to do this, but doing so would reflect the non-sovereign nature of the Internet.

Cyber risk reduction begins with adherence to the GGE Norms (UN A/70/174), the G7 Ise-Shima norms, and the G20 Norms. However, it goes beyond these and should include the following measures:

• Sharing in depth of best practices to secure computers and networks.
• Public identification of critical national infrastructure asset classes.
• National prioritization of assets by value.
• Reduction of the risk of compromise of high-priority assets.
• Creation and proper manning of risk reduction centers.
• Establishment of regular security drills both domestically and with other risk reduction centers.
• Banning of the implantation of software in another state’s high-value systems during peacetime.
• Applying the law of armed conflict in cyberspace.
• Improving attribution through forensics and context.

Cyber-attacks present a new danger to the security of states. Thus, states are urgently encouraged to begin discussion of mechanisms to address these issues.

ECCC Version 3.0

Ethics Code of Conduct for Cyber Peace and Security

The Ethics Code of Conduct for Cyber Peace and Security (ECCC) version 3.0 was created by the Boston Global Forum to advise net citizens, policymakers, IT engineers, business firms and leaders, educators, and influencers/institutions on best practices for maintaining the security, stability and integrity of cyberspace. These guidelines aim to educate on responsible and ethical computer use and to improve overall cybersecurity. Citizens and everyday Internet users are encouraged to avoid suspicious websites and to update security software regularly. The ECCC recommends that states adopt UN-proposed norms on cybersecurity, that businesses take responsibility for securing their sensitive data, and that educational institutions train citizens and professionals in best online practices. It is firmly believed that adopting the ECCC will make for a better and safer Internet.

Governor Michael Dukakis, Mr. Nguyen Anh Tuan, Mr. Allan Cytryn, Prof. Nazli Choucri, Prof. Thomas Patterson, Prof. Derek Reveron, Prof. John E. Savage, Prof. John Quelch, Prof. Carlos Torres.

The Boston Global Forum’s Ethics Code of Conduct for Cyber Peace and Security (ECCC) makes the following recommendations for maintaining the security, stability and integrity of cyberspace.

Net Citizens Should

Engage in responsible behavior on the Internet, e.g.

Conduct yourself online with the same thoughtfulness, consideration and respect for others that you expect from them, both online and offline
Do not visit suspicious websites
Do not share news or content from sources that are not trustworthy

Learn and apply security best practices, e.g.

Update software when notified by vendors.
Ensure your PC has virus protection software installed and running.
Use strong passwords, change them periodically, and do not share them.
Do not transmit personally identifiable information to unknown sites.
Maintain a healthy suspicion of email from unknown sources.
For web communication use HTTPS instead of HTTP when possible.
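The "strong passwords" practice above can be illustrated with a short sketch using Python's standard `secrets` module, which is designed for security-sensitive randomness (unlike `random`). The 16-character length and the requirement for all four character classes are illustrative assumptions, not ECCC prescriptions.

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter, one digit, and one punctuation mark."""
    if length < 4:
        raise ValueError("length must be at least 4 to cover all classes")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        # secrets.choice draws from a cryptographically strong source.
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        # Resample until every character class is represented.
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(c in string.punctuation for c in password)):
            return password


if __name__ == "__main__":
    print(generate_password())
```

In practice a password manager automates both generation and storage, but the sketch shows why machine-generated passwords resist the dictionary and reuse attacks that short, memorable passwords invite.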

Policy Makers Should

Endorse and implement recommendations made by the 2015 UN Group of Government Experts (GGE), the Group of Seven (G7) and the Group of Twenty (G20). Below we summarize the important norms concerning information and communication technologies (ICTs).

1. [GGE] International law, including the UN Charter, applies online.
2. [GGE] States should help limit harmful uses of ICTs, especially those that threaten international peace and security.
3. [GGE] States should recognize that good attribution in cyberspace is difficult to obtain, which means miscalculation in response to cyber incidents is possible.
4. [GGE] States should not knowingly allow their territory to be used for malicious ICT activity.
5. [GGE] States should assist other states victimized by an ICT attack.

6. [GGE] States, in managing ICT activities, should respect the Human Rights Council and UN General Assembly resolutions on privacy and freedom of expression.
7. [GGE] States should protect their critical infrastructure from ICT threats.
8. [GGE] A state should not conduct or permit ICT use that damages the critical infrastructure of another state or impairs its operations.
9. [GGE] States should work to ensure the integrity of the supply chain so as to maintain confidence in the security of ICT products.
10. [GGE] States should prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions.
11. [GGE] States should encourage reporting of ICT vulnerabilities and the sharing of remedies for them.
12. [GGE] States should not knowingly attempt to harm the operations of a computer emergency response team, nor use such a team for malicious international activity.
13. [G7] No state should conduct or support ICT-enabled theft of intellectual property, trade secrets or other confidential business information for commercial gain.
14. [G7] If ICT activity amounts to the use of force (an armed attack), states can invoke Article 51 of the UN Charter in response.
15. [G7] States should collaborate on research and development on security, privacy and resilience.
16. [G7] States are encouraged to join the Budapest Convention.

States should neither create nor tolerate the dissemination of fake news.

IT Engineers Should

Apply best practices in the design, implementation and testing of hardware and software products so as to

Avoid ICT vulnerabilities,
Protect user privacy and data

Make use of the NIST “Framework for Improving Critical Infrastructure Cybersecurity” as a guide for improving the security of critical applications.

Not create or use technology to create or disseminate fake news.

Business Firms and Business Leaders Should

Take responsibility for handling sensitive corporate data stored electronically.

Create employment criteria to ensure that employees are qualified to design and implement products and services that meet high security standards.

Ensure that IT engineers are kept abreast of the latest ICT security threats.

Implement effective Cyber Resilience in your business.

Engage in information sharing of ICT hazards, subject to reasonable safeguards, with other companies in similar businesses.

Educators, Influencers/Institutions Should

Teach the responsibilities of net citizens described above, including fostering good behavior and avoidance of malicious activity.

Help global citizens to acquire the critical thinking skills needed to identify and avoid fake news and discourage its dissemination.

Ensure that IT engineers are taught the skills necessary to produce safe, reliable and secure ICT products and services.

Educate and lead global citizens to support and implement the ECCC.

Create honors and awards to recognize outstanding individuals who contribute greatly to a secure and safe cyberspace.