OK, hello. My name's Alex Pentland. I'm a professor at MIT, and I run a group called Connection Science, which develops techniques and open-source software for helping countries and companies deal with AI in a way that is both effective and efficient, but also ethical.

The key idea is that in this new world we're entering, most of the data is in private hands, in the hands of companies. And yet you need that data to be able to run government, to be efficient about social systems and civic systems. How are you going to do that in a way that's trustworthy, unbiased, and fair, and so that people understand what's happening? So I've developed a method that I call open algorithms, which allows you to take data from private entities like companies and combine it in a safe way to produce data about entire countries or other large structures, such as large international companies, in a way that's safe, auditable, and understandable.

The drivers of this are things like security: if you put all the data in one spot, it's sure to get stolen, which is an enormous risk. There's inertia: different people own the data. And there are privacy concerns: individuals own the data in a certain sense; they should be able to know what's happening and have control over data about them. How do you put this together?

The big idea is that instead of putting data in one place, you keep data where it's originally collected. This is federated data rather than concentrated data. The people who have ownership rights over the data continue to hold on to it, but they agree to algorithmically answer questions. The computer the data is on is willing to answer questions for the public good, or for the joint group. Those questions are called algorithms, and in our system we call them open algorithms because they're legally agreed to beforehand. So instead of going to someone and saying, "Can you give me your data?", you say, "Here's an algorithm, and I'll show it to you. Can you run it on your data and give me the answer?" Legally and organizationally, that's a much easier thing to do. In fact, small countries like Estonia have been doing it for a long time, and big companies like AT&T do it as a matter of self-protection, to make sure they're not breaking the law.

While it sounds complicated to the human ear, from a computer's point of view, leaving the data where it's collected and answering questions there is in many cases more efficient than moving the data to one place and answering the questions there. The difference is that it's a lot safer to leave the data wherever it's collected. Military people figured this out a long time ago: they started building castles with moats, which in today's computer world are firewalls, and they work through defense in depth. That's what we do with data. When you do that, all of a sudden you can keep track of what questions are being asked of what data, and the people who collected the data have the ability to monitor whether the questions are ones they agreed to. So you can record what's happening, and the system stays orderly. If you have a question about bias or fairness, you can go and answer that question, because now you have a record of what was done with the data and who did it.

So for instance, in the country of Colombia we were able to look at poverty programs and discover that there were almost a million people who were getting benefits that shouldn't have been, and a million people who weren't getting benefits but should have been. And that comes with the ability to audit all the decisions, which to human ears sounds complicated, but from a computer's point of view it's just a dashboard that keeps track of what's happened.

So that's the big view: instead of having one centralized control, one centralized repository, you have a federation of different players and their interests that agree to answer certain questions for certain functions, and you audit them. As I said, there are some countries, like Estonia, that are already doing this. More recently, Europe agreed: Eurostat and the official data organizations of the EU countries adopted this sort of framework. And there are several other countries that we're working with, Israel, Australia, and others, that are also putting pilots in place to explore how they can get better insights about their country and make better policies by using this public and private data together in an open, honorable way.

That's the key thing. We're deploying these things in different parts of the world, and we'd be happy to help you if you're interested. People are typically interested in things like social programs being more efficient, being able to have greater income from tourism or from innovation, or new sorts of civic systems, better transportation, and public health. And we help build those things for people. We don't do turnkey solutions; what we do is build prototypes that are then specialized to be operational for your particular situation. If you want to know more about this, I refer you, for instance, to the keynote for the EU presidency, or other talks like that, which we will be making available to you. And we have a book called Trust::Data, which describes the techniques. It includes the piece that the Obama White House asked us to do for them, the piece for the UN Secretary-General, and a piece for the World Economic Forum, describing the policy, the legal, and the technical aspects. Interestingly, the Chinese central government just translated it into Chinese and published it through the Chinese central economic press, so you might be interested in that as well.

So that's the top-level story here. I hope you're interested; we'd be happy to talk to you and work with you. Thank you.
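To make the "send the algorithm to the data" idea concrete, here is a minimal sketch in Python of how a data holder might accept only pre-approved algorithms, run them locally, return only aggregate answers, and log every query for later audit. The names (`DataHolder`, `run_query`, `average_income`) are invented for illustration; this is not the actual OPAL software, just one way the pattern described above could look in code.

```python
import hashlib
import datetime


def average_income(records):
    """A pre-approved, aggregate-only algorithm: returns one number, never raw rows."""
    incomes = [r["income"] for r in records]
    return sum(incomes) / len(incomes) if incomes else 0.0


class DataHolder:
    """Hypothetical data custodian: the raw data never leaves this object."""

    def __init__(self, records):
        self._records = records      # stays behind the holder's own "firewall"
        self.approved = {}           # algorithm id -> function, agreed to beforehand
        self.audit_log = []          # record of who asked what, and when

    def approve(self, func):
        """Register an algorithm the holder has legally agreed to answer."""
        algorithm_id = hashlib.sha256(func.__code__.co_code).hexdigest()
        self.approved[algorithm_id] = func
        return algorithm_id

    def run_query(self, requester, algorithm_id):
        """Run an approved algorithm locally and return only the answer."""
        if algorithm_id not in self.approved:
            raise PermissionError("Algorithm not on the approved list")
        result = self.approved[algorithm_id](self._records)
        self.audit_log.append({
            "requester": requester,
            "algorithm": algorithm_id,
            "timestamp": datetime.datetime.utcnow().isoformat(),
        })
        return result


# Usage: a federation of holders answers the same question without pooling their data.
holders = [
    DataHolder([{"income": 30_000}, {"income": 42_000}]),
    DataHolder([{"income": 55_000}]),
]
answers = []
for holder in holders:
    alg_id = holder.approve(average_income)
    answers.append(holder.run_query("ministry_of_planning", alg_id))
print("Per-holder averages:", answers)
```

The point of the sketch is the shape of the protocol, not the specifics: the question travels, the answer travels back, the raw records stay put, and the audit log is what lets anyone later ask who ran what against which data.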
After the great success of the AIWS–G7 Summit Conference and the AIWS–G7 Summit Initiative, the Boston Global Forum is organizing the AI World Society (AIWS) Summit to engage government leaders, thought leaders, policymakers, scholars, civic societies, and non-governmental organizations in building a peaceful, safe, and new democracy for the world with deeply applied AI. Prominent figures are often very busy, so many cannot meet at the same time and in the same place; therefore, BGF has adopted a new format for the AIWS Summit, combining online and offline participation.
An alliance of civic societies, non-governmental organizations, and thought leaders for a safe, peaceful, and Next Generation Democracy.
Mission:
A high level international discussion about AI governance for a safe, peaceful, and Next Generation Democracy.
Organized by the Boston Global Forum and the World Leadership Alliance–Club de Madrid, and sponsored by the government of the Commonwealth of Massachusetts.
Outcome: recommendations, suggestions for initiatives, solutions, and policies to build a society and a world that are more peaceful, safer, and more democratic with AI; a new social and economic revolution with AI that will shape a better and brighter future with equality of opportunity in contribution, transparency, and openness, in which capital and wealth cannot corrupt democracy and citizens will be recognized, rewarded, and have a good life.
Format:
A combination of online and offline participation.
Moderators: Governor Michael Dukakis, and Nguyen Anh Tuan
Speakers: leaders of governments, political leaders, business leaders, prominent professors, and thought leaders. Governor Michael Dukakis will send invitation letters to speakers introducing the mission, topics, and expected outcomes of the AI World Society Summit 2019.
Speakers can send their talks as a video clip (maximum 30 minutes) or as text to the Content Team of the AI World Society Summit 2019. The Content Team will then post them to the AI World Society Summit section of the Boston Global Forum's website and deliver them to the other speakers and discussants, and the talks will be submitted to the G7 Summit 2019 as part of the AIWS-G7 Summit Initiative.
Time: from April 25, 2019, at the AI World Society – G7 Summit Conference, to August 5, 2019.
The first speaker is one of the Fathers of the Internet, Vint Cerf, Vice President and Chief Internet Evangelist of Google.
The second speaker is Professor Neil Gershenfeld, MIT.
Vint Cerf, one of the Fathers of the Internet, received the World Leader in AIWS Award at the BGF-G7 Summit Conference on April 25, 2019, at the Harvard University Faculty Club.
Professor Nazli Choucri, MIT, Board Member of the Boston Global Forum and the Michael Dukakis Institute for Leadership and Innovation, and a very active member of the AIWS Standards and Practice Committee, has launched a new book: "International Relations in the Cyber Age".
“The international system of sovereign states has evolved slowly since the seventeenth century, but the transnational global cyber system has grown rapidly in just a few decades. Two respected scholars – a computer scientist and a political scientist-have joined their complementary talents in a bold and important exploration of this crucial co-evolution.”
– Joseph S. Nye, Harvard Kennedy School and author of The Future of Power
“Many have observed that the explosive growth of the Internet and digital technology have reshaped longstanding global structures of governance and cooperation. International Relations in the Cyber Age astutely recasts that unilateral narrative into one of co-evolution, exploring the mutually transformational relationship between international relations and cyberspace.”
– Jonathan Zittrain, George Bemis Professor of International Law and Professor of Computer Science, Harvard University
“Cyber architecture is now a proxy for political power. A leading political scientist and pioneering Internet designer masterfully explain how ‘high politics’ intertwine with Internet control points that lack any natural correspondence to the State. This book is a wake-up call about the collision and now indistinguishability between two worlds.”
– Laura DeNardis, Professor, American University, and author of The Global War for Internet Governance
“This book uniquely combines the perspectives of an Internet pioneer (Clark) and a leading political scientist with expertise in cybersecurity (Choucri) to produce a very rich account of how cyberspace impacts international relations, and vice versa. It is a valuable contribution to our understanding of Internet governance.”
– Jack Goldsmith, Henry Shattuck Professor, Harvard Law School
About this book
A foundational analysis of the co-evolution of the internet and international relations, examining resultant challenges for individuals, organizations, firms, and states.
In our increasingly digital world, data flows define the international landscape as much as the flow of materials and people. How is cyberspace shaping international relations, and how are international relations shaping cyberspace? In this book, Nazli Choucri and David D. Clark offer a foundational analysis of the co-evolution of cyberspace (with the internet as its core) and international relations, examining resultant challenges for individuals, organizations, and states.
The authors examine the pervasiveness of power and politics in the digital realm, finding that the internet is evolving much faster than the tools for regulating it. This creates a “co-evolution dilemma”—a new reality in which digital interactions have enabled weaker actors to influence or threaten stronger actors, including the traditional state powers. Choucri and Clark develop new methods of analysis. One, “control point analysis,” examines control in the internet age, and they apply it to a variety of situations involving major actors in the international and digital realms: the United States, China, and Google. Another applies network analysis to international law for cyber operations. A third measures the propensity of states to expand their influence in the “real” world compared with expansion in the cyber domain. In doing so, they lay the groundwork for a new international relations theory that reflects the reality in which we live—one in which the international and digital realms are inextricably linked and evolving together.
Authors
Nazli Choucri
Nazli Choucri is Professor of Political Science at MIT, Faculty Affiliate at the MIT Institute for Data, Systems, and Society, Director of the Global System for Sustainable Development (GSSD), and the author of Cyberpolitics in International Relations (MIT Press).
David D. Clark
David D. Clark is a Senior Research Scientist at the MIT Computer Science and Artificial Intelligence Lab and a leader in the design of the Internet since the 1970s.
IN EARLY APRIL, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.”
One of the 52 experts who worked on the guidelines argues that foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons, and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI.
Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.
When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. “We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.”
The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit.
Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.”
Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and will retain a right to a royalty-free license to any intellectual property developed.
Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says.
Microsoft used some of its power when Washington state considered proposals to restrict facial recognition technology. The company’s cloud unit offers such technology, but it has also said that technology should be subject to new federal regulation.
In February, Microsoft loudly supported a privacy bill being considered in Washington’s state Senate that reflected its preferred rules, which included a requirement that vendors allow outsiders to test their technology for accuracy or biases. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.
By April, Microsoft found itself fighting against a House version of the bill it had supported, after the addition of firmer language on facial recognition. The House bill would have required that companies obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft’s director of government affairs, testified against that version of the bill, saying it “would effectively ban facial recognition technology [which] has many beneficial uses.” The House bill stalled. With lawmakers unable to reconcile differing visions for the legislation, Washington’s attempt to pass a new privacy law collapsed.
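As one illustration of what “worked equally well for all skin tones and genders” could mean in practice, the short Python sketch below compares a system’s accuracy across demographic groups on a labeled test set and flags gaps beyond a chosen threshold. The data, group labels, and 2% threshold are invented for illustration; the bill did not prescribe any particular metric or tolerance.

```python
from collections import defaultdict


def accuracy_by_group(examples, max_gap=0.02):
    """examples: list of (group, predicted_label, true_label) tuples.

    Returns per-group accuracy and whether the largest accuracy gap stays
    within max_gap. The 2% default is an arbitrary illustration, not a legal
    standard from the Washington bill.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in examples:
        total[group] += 1
        correct[group] += int(predicted == actual)

    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap <= max_gap


# Usage with toy results from a hypothetical face-matching test set.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]
per_group, within_threshold = accuracy_by_group(results)
print(per_group, "within threshold:", within_threshold)
```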
In a statement, a Microsoft spokesperson said that the company’s actions in Washington sprang from its belief in “strong regulation of facial recognition technology to ensure it is used responsibly.”
Shankar Narayan, director of the technology and liberty project of the ACLU’s Washington chapter, says the episode shows how tech companies are trying to steer legislators toward their favored, looser, rules for AI. But, Narayan says, they won’t always succeed. “My hope is that more policymakers will see these companies as entities that need to be regulated and stand up for consumers and communities,” he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.
Washington lawmakers—and Microsoft—hope to try again for new privacy and facial recognition legislation next year. By then, AI may also be a subject of debate in Washington, DC.
Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon) and Representative Yvette Clarke (D-New York) introduced bills dubbed the Algorithmic Accountability Act. The legislation includes a requirement that companies assess whether AI systems and their training data have built-in biases, or could harm consumers through discrimination.
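The bills do not spell out a method, but an assessment of the kind they describe might start with something as simple as checking how groups are represented in the training data and whether favorable outcomes differ across them. The sketch below, with invented column names and toy data, shows that first-pass idea only; it is not a procedure taken from the legislation.

```python
def assess_dataset_and_outcomes(rows, group_key="group", outcome_key="approved"):
    """rows: list of dicts describing training examples or model decisions.

    Reports each group's share of the data and its favorable-outcome rate,
    a rough first signal of built-in bias (illustrative only).
    """
    counts, favorable = {}, {}
    for row in rows:
        g = row[group_key]
        counts[g] = counts.get(g, 0) + 1
        favorable[g] = favorable.get(g, 0) + int(row[outcome_key])

    total = sum(counts.values())
    return {
        g: {
            "share_of_data": counts[g] / total,
            "favorable_rate": favorable[g] / counts[g],
        }
        for g in counts
    }


# Usage on a toy table of automated decisions.
sample = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
]
print(assess_dataset_and_outcomes(sample))
```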
Mutale Nkonde, a fellow at the Data and Society research institute, participated in discussions during the bill’s drafting. She is hopeful it will trigger discussion in DC about AI’s societal impacts, which she says is long overdue.
The tech industry will make itself a part of any such conversations. Nkonde says that when talking with lawmakers about topics such as racial disparities in face analysis algorithms, some have seemed surprised, and said they have been briefed by tech companies on how AI technology benefits society.
Google is one company that has briefed federal lawmakers about AI. Its parent Alphabet spent $22 million, more than any other company, on lobbying last year. In January, Google issued a white paper arguing that although the technology comes with hazards, existing rules and self-regulation will be sufficient “in the vast majority of instances.”
Metzinger, the German philosophy professor, believes the EU can still break free from industry influence over its AI policy. The expert group that produced the guidelines is now devising recommendations for how the European Commission should invest the billions of euros it plans to spend in coming years to strengthen Europe’s competitiveness.
Metzinger wants some of it to fund a new center to study the effects and ethics of AI, and similar work throughout Europe. That would create a new class of experts who could keep evolving the EU’s AI ethics guidelines in a less industry-centric direction, he says.