AIWS Distinguished Lecture at United Nations on UN Charter Day

On United Nations Charter Day, June 26, 2019, the AI World Society Distinguished Lecture will be co-organized by the United Nations Academic Impact and the Boston Global Forum as the United Nations Academic Impact Charter Day Lecture at the Headquarters of the United Nations in New York.

The Agenda of the Lecture can be found here.

Dr. Bray’s keynote address will explore how advances in the Internet, artificial intelligence, and data technologies are transforming communities and societies. By 2045 the United Nations will be 100 years old, and this distinguished lecture will consider what changes may have occurred in the world and in human societies by then.

A panel discussion will follow the keynote speech by Dr. David Bray:

Moderator: Maher Nasser, Director of the Outreach Division, United Nations Department of Global Communications.

  • Mariko Gakiya, Advisory Board Member, Sustainability and Health Initiative (SHINE), and Visiting Scientist, Environmental Health, Harvard T.H. Chan School of Public Health
  • Fabrizio Hochschild, United Nations Under Secretary-General and Special Adviser on the Preparations for the Commemoration of the Seventy-Fifth Anniversary of the United Nations
  • Nam Pham, Assistant Secretary of Business Development and International Trade, Government of Massachusetts
  • Atefeh Riazi, United Nations Assistant Secretary-General and Chief Information Technology Officer
  • David Silbersweig, Chairman of Psychiatry at Brigham and Women’s Hospital in Boston and co-director of its center for the neurosciences; Academic Dean and Stanley Cobb Professor of Psychiatry, Harvard Medical School; Board Member of the Boston Global Forum; Member of the AI World Society Standards and Practice Committee
The Boston Global Forum (BGF) Is the Strategic Alliance to host the Summit on AI Governance, Big Data and Ethics

The Boston Global Forum (BGF) is the strategic alliance partner hosting the Summit on AI Governance, Big Data and Ethics, a special program of the AI World Government Conference in Washington, DC, on June 24, 2019. The rapid acceleration of AI has led to a global call for AI governance and ethics. One of the core issues facing the adoption of AI is how to ensure that these advanced technologies are deployed in a fair and unbiased way that serves the betterment of humankind. Currently, hundreds of efforts are underway globally, working in silos, to standardize how businesses and governments can engage in ethical AI practices. The Summit on AI Governance, Big Data and Ethics brings together a key group of global thought leaders who will present the challenges, discuss solutions, and lead networking roundtables to help government and industry leaders better collaborate with each other.

At the summit, Professor Thomas Patterson, a Board Member of the Boston Global Forum, will represent the Group of Authors working on the AIWS-G7 Summit Initiative and present the initiative to the audience. Announced on April 25, 2019 at the AIWS-G7 Summit Conference at Harvard University, the initiative has three parts: AI-Government, AI-Citizen, and the AI-Government Index. The focus of Professor Patterson’s talk is a model for AI-Government. This model envisions a society where creativity, innovation, tolerance, democracy, the rule of law, and individual rights are recognized and promoted, where AI is used to assist and improve government decision-making, and where AI is a means of giving citizens a larger voice in governing.

A proposed concept within the AI-Government model is the Social Value Reward (SVR). SVR is an incentive mechanism that recognizes a citizen’s or an organization’s contribution to society. SVR rewards a citizen for doing “good” things, in the form of creativity, innovation, service, or willingness to share data, with points that can be redeemed at partnering organizations. SVR also incentivizes organizations to make the world better in a non-profit way. According to Mr. Nguyen Anh Tuan, CEO of the BGF, who delivered a talk about SVR at the Vietnam Internet Forum in Hanoi in March this year, SVR is fundamentally different from China’s Social Credit System (SCS). While China uses the SCS as a way to “judge” its people, SVR is a mechanism that gives global citizens more personal freedom and power and recognizes their contribution to society as a whole with real value that they own and decide when and where to use.
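To illustrate the mechanism, here is a minimal sketch of how such a point ledger could work. The class and method names (SVRLedger, award, redeem) and the point values are hypothetical illustrations, not part of the AIWS proposal.

```python
from collections import defaultdict

class SVRLedger:
    """Tracks points a citizen or organization earns for recognized contributions."""

    def __init__(self):
        self.balances = defaultdict(int)

    def award(self, member: str, points: int, reason: str) -> None:
        """Credit points for a contribution (creativity, service, data sharing, ...)."""
        self.balances[member] += points
        print(f"{member} earned {points} points for: {reason}")

    def redeem(self, member: str, points: int, partner: str) -> bool:
        """Spend points at a partnering organization; the member decides when and where."""
        if self.balances[member] < points:
            return False
        self.balances[member] -= points
        print(f"{member} redeemed {points} points at {partner}")
        return True

ledger = SVRLedger()
ledger.award("alice", 50, "open-source contribution")
ledger.redeem("alice", 20, "partner-museum")
```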

 

Shinzo Abe’s Latest Diplomatic Long Shot: Peacemaking in Iran

Shinzo Abe’s visit to Iran this week, the first to that country by a Japanese prime minister in more than 40 years, is the latest in a series of high-minded but long-shot efforts to lift Japan’s influence on the global stage.

Mr. Abe, who flew to Tehran on Wednesday, is putting himself in the middle of a confrontation between the United States and Iran that has raised fears of war.

The tensions, which began with President Trump’s decision to pull out of the 2015 nuclear accord and impose crippling sanctions, escalated recently as the Trump administration moved additional troops into the Persian Gulf after having accused Iran of plotting to attack American targets.

Mr. Abe’s effort to avoid friction was reflected in his remarks to reporters after meeting President Hassan Rouhani of Iran. Agence France-Presse quoted Mr. Abe as saying it was “essential that Iran plays a constructive role in building solid peace and stability in the Middle East.”

The Boston Global Forum honored Prime Minister Shinzo Abe with its World Leader for Peace and Cybersecurity Award on Global Cybersecurity Day, December 12, 2015, at the Harvard University Faculty Club.

Transcript of Mr. Paul Nemitz’s speech at AIWS-G7 Summit Initiative: Legal concepts for AI – Layer 4 of AI World Society

Mr. Paul Nemitz, Principal Adviser, Directorate-General for Justice and Consumers, European Commission

 

Mr. Paul Nemitz presented the principles for creating Artificial Intelligence Law, layer 4 of the 7-layer AI World Society model, at the AIWS-G7 Summit Conference on April 25, 2019.

Legal concepts for AI – Layer 4 of AI World Society

“I would say you know the Sun goes down all the line. Today, I have a very very long discussion but we don’t need such discussion on AI when you see it. You know at the same time, I think we evaluate so high on AI, we will see sunny to produce masada suit claimed that they use is what’s up percent of those who they think to do AI. Actually, it is not very related to right now. We say we work on AI, but actually, this is not so much. So why? AI, we are nevertheless having this high degree of activity proposed by companies also governments and academia. Because, on one hand there’s promise I say it’s useless promise that this technology will solve mental problems and on the other hand there are people describe apocalyptic risks coming from this technology and in the middle of all these and policymakers and we’ve heard already today about French vision and I think exactly the same who had the task to maximize the potential for society growth employment but also panic interest, we want those two areas and also want to minimize the risks. And I think that’s what I am playing focus on a little bit on maybe what can go off and Essex contribute people offense. Let me say that my friends and here just a word about the Commission of European have a strategy for AI says every member state should have strategy of AI and we are I think usually inspiring each other there is a very iterative process in Europe between as a member says and your opinion and so misrepresented strategy of AI, April of last year against we build chapters daily in France Industrial Policy Research aspects for Super Readers just research. Second the preparation the labor market we need people to acquire skills like non learning also the older generation supposed to develop but was to block and use AI and say not only as customer but as citizen who understands the risks batches when using this technology so there this is firing skills not only to elastic anticipating Commerce path and I think that’s because interest here the project of AI was excited because it work with this technology in the future processes of governance, the Public interest and democracy answer ethics and law for AI mean that it’s great disadvantage of Europe that we understand Essex debates and making laws while others develop the technology and we had. This is in fact the disclosure which I have been a witness of woman working in public service this has being said in relation to the internet because it’s related to other technologies, energies. It’s the reoccurring discourse. It has very deep discourse related to environmental protection was that in the nineties away invention by Ernst Heinrich, the primary laws sensible precaution with it was not as huge catastrophe autonomic development and the promises were made before AI. I said when AI was directly on the GDR a new commission then was confronted, for example, by study from American Chamber of Commerce who said if you’ve been and his law who top it means minus five percent of GDP (General Data Protection). So, I would say together with Islamic promises of technology go announcements of a growing interest that if you apply the physical precaution the huge negative consequences. That’s interest driven fiscals, empirical reality is completely different if we look back presented the environmental movement and its successes. 
We have seen that being environmentally sustainable as an enterprise means also to reduce oscillation and it has become a huge driver for profitability not only for your incomes our companies worldwide to become very logically sustainable and therefore resource efficient in term of using as little as possible natural resources. And in the end son were only convinced when he’s sober of the greening of GE (General Electric) went around the world. It’s still not for today that you can marry in a very productive way with climate change detection and with economic growth. So it’s clear that we will never convince oil but I will say that there’s a large division in Europe sending from the time of logic movements and atomic energy with also the peace movement at the time already end and start with the 70s was the great book of Anna’s principle of responsibility with in New York. Useless research that the principle of recoil as part of the invasion force is the secret of success and novelty because a client disciplines as part of it which is integral population ensures long term sustainability but also for economic and social sustainability of the technology for business. And the empirics in this, I could not develop further but the most striking the greening of industry now.

Let me turn before I come to what this means AI wanted an energy utterances of course historically speaking pure has completely different outlook on personal data and related right of individual information self-determination then America that better watch out. But because Europe has completed little history, big part of Europe has lived through communist dictatorships, Germany as it passes the pictures and in both of these dictatorships, the aim of government was to know everything about everybody and the historical experience raises the sensitivity about collection and is this historical background which led to the creation of the legal concept of information acidification as fundamental ideas to the liberty which at that time was not written in any constitution, by judges and courts in Europe. And when I say by judges and courts in Europe, you can really go through the different layers of human rights, Cosmo based computing convention you write the system when men essays of this convention which are the used magic plus few more controls from outside. By contrast, all comparison right through prostitution, …

Or together six years to get this regulation, this because to any mention the lobby trying to avoid this at all cost. I would say historically, were fortunate situation we had vice president which was from conservative you would say that Republicans, I agreed to a paternity was funded me, it’s a liberal democrat and then of course we had that what’s known as data quad which thoughtful for everybody what these technologies today allow and all together this was historical not a moment to get these instruments who I would say and this is not our speculation. But, I can tell you before the whole insemination process but really was …and so at this moment, when we are a very American contribution energy pathways that need branch out with this. They all what we have learned from America with learning our course in relation to AI because the discourse that America mainly because only a Nixon money and on the other hand to have China also develop the technology but not only for money but a successful communist dictatorship control the people. Good or dream of Communist Party since 1917 fulfilled communist ensure technology AI, had China so too soaring sister. All these things, I think this discourse has a few defect and here in Boston and how we considered which is. This of course truth to describe America is just one the commitment not represented by the stock market number one, two, three, four, five and six Apple, Google, Facebook, Amazon and so on. But there’s a lot very very good work and very very good thinking in America which we had profited from anywhere and in 2000, all property in the world on AI and today’s discussion of the Penn College has been produced here both with the focus on making AI useful for democracy and for public interest, my home, even talented advertisement, was the great money, right now is just one example, I made. I would like to quote another example, which is very very important which is right now the workers, FEC, I arrived in Boston and learned that Facebook makes traditional marketing many billion dollars for possible kind of the entities because Facebook did not follow up on its promises relating to price. I have to say I have a great extent of the work of ABC over the years network a lot together with some colleagues on where you see it from here and generally under President Obama they had 3pm – … when we negotiate – negotiator for the Commission on privacy shield very professional and of course, the great advantage of the FTC in comparison to your Kindle rotational service is that it has an experience of rigorous enforcement in competition against the nutritionist of the rules of protecting against the legal agreements between trust and this experience hopefully. Now this is horrible in California views Christ and I would love to see this running from a magician lock transiting to privacy taking place in Europe. So I think we’re all looking forward to the FTC and doing its job under the rule of law in America by American law maybe also making reference to say America – commitments for Europe peaceful undersigned these commitments relating to respecting privacy rules of Europe. in Asia, a privacy rules which provide equivalent protection will be produced when they process as the same as in America. 
And so you see it is potentially to show that this green is actually means something by amazing the spire also on this non respect of this Agreement such numbers say it’s family by the FTC but it also has the chance to bring America into the lead as it was recently speaking what would say on has its own perspective on young development of academic analysis of facts back into the little privacy by demonstrating that is not only portrait rules on books but also to fossils. So therefore I would say not necessary education opposition between Europe and America on these issues but we are struggling on both sides of the Atlantic with the same changes for public interests the same problems and we can learn a lot of culture and this line can also take place in relation to the future things of AI in relation to greater protection of privacy. I think first of all the interest today will approach because this is probably after the Internet, The second, big ware of technology, where in everything which is done we have to see in terms of society. Internet policy already is actually society policy some, some ponies, Balaji policy. The Internet reaches today by our mobile devices everywhere, it is a forum for public debate in democracy, it allows for people without centralized information, it’s much more than just something that should be left to technologists and to be debated under the heading emoji that is present. It is true as we hear the AI will be excreted pervasive as electricity, we have to look at AI in the same way as we look at rules which we made in legislation because AI will continue in these rules according to which decisions are made because it will take largely decisions, this technology, this program. And if it doesn’t take decisions, it will guide people as some would say not or manipulate in a certain direction. So suddenly, it will have huge effects both our individual freedoms and our actions but also collectively on how privacy protection and that is the first reason why there’s so much debate about AI. Actually, an efficiency economy and also innovation processes which may arise. The second reason why the debate is so intense as over all is because we are struggling with the new change of autonomy of technology. Who will be responsible for the new cause energies which may be caused by a program which lurks and therefore it was in the direction, it was not intended by the Creator who will be responsible for. But images which has a program creates economic, it is physical environmental damages but also under original without damages? So this question of liability had already led to European Commission to set up a special working group with experts on civil law liability thoughts and the question which is both obviously is true AI be a subject of strict liability. We have Europe like a United States very strict rules on product liability to pay we applied layout and here also on all the other issues in a very broad thinking process which is a process interactive learning on what interests them in debates philosophers, lawyers, scientists, sociologists, anthropologists and the technological world mostly academic capacity are not companies but also in an iterative learning process in three or four direction. One of those products was in the union between European member states and six governments for civil society and one the other hand of course the global process which was so empty described by colleagues absolute general content bench consul-general illness. 
So in this learning process the Commission has focused on the essays we assembled a group of 52 independent experts well aspire coil, business leader consciously also business representatives in this group, they have produced two papers which have been made public. The second paper being iteration of paper and the Commission has just dated this paper on the 8th of April in a communication made naturally the content of these principles of ethics. It’s all so that it’s now the communication of unit permission on the table or is it will be one building of AI and Trust. And the principles which we would like to see in this trustworthy AI from the outset integrated evolved about three billion dollars we want to be trustworthy because it acts lawfully. So, one could say these programs to never take action which a human commit them but media law. And that’s not small right in the business climate in the world we’re in business rules future managers are taught about disruptive innovation and innovation to disrupt the law. And it is probably not the concept which we can afford with AI. I’m not saying we’ll see a lot of disrupting the law in the Internet age, you wish or a natural pose equal three billion euros AI on Google for not all, we have to of course the decision on Apple to pay taxes and Ireland 11 billion euros taxes are paid. You are followed to the based on copyright of Google just coming in but they’re doing whatever they want. So there’s an issue in the digital world, in the Internet business of disruptive little, I would say that’s who want an extreme state and they say that is something where we have to be careful to apply this thinking also to AI. Yes, you want to be innovative what innovators describe but not in the thing you get so not wellness first seconds beyond the law, the development of the technology should be density based not all of this good and necessary society and equal each loss. The loss, the law is strongest Ward and needed to be able to enforce but there are other things which will be on anesthetics answers we want this technology to be reversed, they need to perform as intended and what we have said. That’s also not qualified the more complex the technology it’s more able to get. The more will be issues of homelessness. One of the greatest scientists in a mission and many here in Harvard, MIT when he says this again from the example of a lot of America when says the most important on AI will be continues control of the systems because we must follow them to see how they learn and how they mutate how they behave to be able to learn about what we have done wrong and how to correct. But as a principle of responsibility and one could say principle V Prakash. Putting a technology into the world is extremely powerful that we learn to mutate on web or the other we wanted to do this. That is the whole efficiency game but the enemy also to follow it. Before our greatness almost pure it must be taken off the market.

So there are three basic principles: lawfulness, robustness and ethic orientation. And then the group and now you will be permission has further develop these principles into seven more operational rules. The first is that AI should stand under human agency head, human forces so that the humanism and this is not only the term of the emergency channel but if I can come back to the GDR (General Data Rules) where we can article automated decision making you will dignity as human right requires people a possibility to say you know how to make a decision related to them and the solvation decision making touches everyone at alliance. Why? Because we follow all conception of human being as a out of the Bible as a human enlightenment , the Latin entity of our institutions cannot imagine human as subject to become objects of technology, object of technology basic ecology control them and decides all that paper it may be vision to do it, to control it with some technology but it is not the nature the human being who is responsible and who acts and who has freedom and participation democracy has decided that the state private entities decisions over individual in all this way is individual successful, the right people human decision rating and the right to appeal this. Second robustness of course include safety where you can do great harm, great efforts, and investments must be made. Then of course, privacy, they are fictional data governance. And I would say that as regards possible they identify human being it’s possible that human beings GDR contains already a high percentage of rules which we need to work well with AI will go as far as saying it’s very very difficult. By then, five additional rules, one would employ AI when it processes data beyond GDP are. Why? Because it contains this fight community decision-making, it contains human decision-making, the right of jail to order information, it contains very hard-working rights to information about the program which decides which they topped injured, what is the content of the algorithm, what is the purpose was significant of decision making. All this, you can ask to see under Julia at least in Europe and the protection authorities have job to make sure that this happens. Angelina also contains the very mod rights to delete data or rectified it, you can ask that your data has to be collected somewhere and use as no specific interface which says this data could be sold and this must be a parliamentary law which says this look at our Swedish you say I don’t like Facebook everyone, I’m not going to use the general social network and then Facebook must be the heat in the people of justice is a great professional friends about exactly this duty the French Data Protection Authority killed at the time a mature woman is a very high to attack. The world you have to delete information about people and that’s it. Who doesn’t know we don’t think we have to believe we always think we make it invisible in Europe because business would be extraterritorial we want to be able to continue information America addresses and clear signal division as lost as means thief and logic imposes. But if you have to eat a data and therefore, provide of the law you can’t show anywhere. Though this case went up through the French courts and it’s not before you people of justice and let’s see what you got justice inside of this. 
But I would say it’s as traumatic case as the case is not yet people what certainly will come there may be to see the young knew who the French head of AI Facebook famously ones. Twitter, we are making 300 trillions predictions today Facebook demonstrate the computing how Facebook has my answer to that has been Yes. And all these relations are master data on individuals and everybody should have the right to see the predictions about themselves and also the right to the least of these predictions and I’m sure this will also become part of litigation. But one thing is clear in the world of AI, we’re predicting of us would be much more important collecting and we’re influencing people will be what they predicted. If these information and tools on people, I’m not falling under the rights of Judea and therefore people have the right to see them and know all of them but also ask for deletion the right data protection in the world AI will be very active. So I’m very convince is that the costs we’ll take the right decisions here and when I talks to Americans, then many Americans were extremely worried about these potentials what people can be done to buy those who own data and make predictions what influence whether it’s the same of swimming or whether it’s big business smoothing and by the way again the learning from America, the best book right on the market comes from Harvard Business School professor on all this in the private sector not government and it’s called Surveillance Catches and other professors who can do both of how this is the fantastic book which I suggest for everyone to meet to understand into which power transformations. All of us as individuals we are living already and how we work tomorrow if we don’t build up the system of both serving democracy and the country of the rule of law economic rights in term of making our political system and I have to say not much an American system from technological systems but also system which protects us as individuals. So GDPR help in this path into the world of rule by technology but the question after the ethics, principles which I will trust compete and then I will discuss especially with you is can we leave it beyond GDPR and what do we need you lot. But first, what are other SS principles which were downright AI resolute. So we discussed human agency oversight, robustness and safety privacy data governance force transparency. And here, very important we all need to know are we dealing with you or are we meeting with the Machine? And this is something which I would say cannot be left to voluntary ethics. You know one company doesn’t like that, no, this is something where we need a lock which makes it here. But those who are brought is technology is when these frequently served I know who’s human beings. But human must be made aware “hello, this is machine” speaking and this is of course our thoughts. You only extremely bond when we come into the democratic environment and closer and closer to elections. And just imagine to start your telephone in the morning, you look at Facebook and Twitter and there are all messages in paper, one candidate that person what she produced that’s almost so I think we’re ready for that reason but also for reasons of transparency contracting and for reasons of human dignity, we need to make aware right our sheets and your message was the sheer speed. The velocity had non discrimination was mentioned is a huge debate in American values makers coming from New York, AI now cross America silicon so on. 
The scientists in New York University, a great technology scrutiny I think that America has huge advantage over Europe because both American technologists or you are closer to take you’re social scientists, your loyalty technology that was all of course also closer to the daily our technology pincer to the risks of technology and also to the abusive and sixth principle and this is where the paper falls very nicely, societal well-being, environmental well-being and how to village. But so on accountability just tightly back and a great friend of Julia England the great reformer was the jobless. Honestly when then, I just started, want to start a new school for the backup where she received a 20 million from the Craig Newmark and you know again, something, we only can dream of in Europe she developed this concept of protect accountability journalism which is absolutely necessary these days because let’s be honest, if we don’t have strong technical ability journalism, many of us including myself we will not be able to do our job is a problem in the technological world which goes beyond that say commercial interests or abuse of technology which is a symbol that the technology becomes complicated you need mediators to explain and understand in doing, in helping understand policymakers, in helping understand other parts of science to take a view on the technology. So Julia Angwin has achieved very very great things you remember when in the beginning of GDP a solicit six or eight years ago, the world phenomena started a series of articles or investigative journalism under the title what they know and they were Facebook, Google so on and Julia Angwin is in each of these series brought up stunning revelations of what they know, what they do with it? She presented I think if I remember correctly was the one who brought to the world that there are many websites which automatically raise Isssac you got it was evident Utah. Now, Windows computer the assumption of this error websites be well paid, one more money will x-ray surprises fantastic journalism very difficult to document. He was documented. You know, for me, I have seen these things because that’s great learning, the price of the father who starts to get these mails, baby diapers, baby nourishing, smooth the mailings, the commercial ratings from once of our Alice verse of pregnancy of the daughter because the daughter had been grown faster shinning shots looking at these things the maids come home and so you know, the companies profilers, the technological automate profile of the human pregnant dead end the day. So this is another stunning example of. I have seen all this and men Americans given to the world because when asked this serious fight through anger into Worcester Charlotte was very vigilant, I wish her well in the present and this comes on studying her visual difficulties she encounters in their comics. So, I wanted to discuss with you as the last contribution to today’s conference. The question, do we need new law? Or can we just meet with all the ethics and I would say, we need to be able to set it or have to search for the parameters for decision there is a parameter which exists in American law and also in many countries in Europe which I will call the parameter as in shape. So if something is dissention, it needs to be dealt with by Democratic legislature because that’s what democracy is about. Democracy is not the policies unexcited by the executor or by business and for the rest members of Parliament’s Congress Senate. 
You know, that cannot work. Democracy means rule by the people through their representatives, and the principle of essentiality says that everything which is important in society needs to be decided there. And what are the criteria for deciding what is important?

The criteria: first, whenever there is an intrusion into liberties or economic rights — think of the American sentence “no taxation without representation” — essentiality immediately raises a concern of the people. Second, whenever there are important considerations for the good functioning of the state or of democracy.

So if these are all parameters, what new law may we need on AI and of course ideally, agreed globally, around the world. I also say heavy beam in public law issues and primatologist in shipping in creepiest, strongest American coordinators. Of course, nobody in Europe, Zetterberg takes commitment and Nola nobody in America that we will only give ourselves rules in our democratic process only if they are already rigid we agreed internationally because that would mean we have to wait for 20 years.

As you can see shilling, tripping is probably the best example of a sector where the principle that rules must come from AI all as legs were hugely structural under regulation of sugar. So yes, we will go of course, the Commission is already starting this on AI. You know, there are many motor installs of diplomacy which are being put in place 200 starts to take us long and maybe one day we will arrive at some international law rules on AI. I don’t think that there is a precondition for doing what is necessary in democracy from time to time, they need to be solid. IBM decided of GDPR, of course, in parallel, we had worked on the convention one way into the house of Euro which goes beyond the international organization. For example, Canada has also signed up and we have a number of South American countries who looking at the convention of data protection pipes in the US has signed up to the protocol of the Council of Europe on data security, cyber crime in the same way in US a sign up to the convention of the US could also sign up to the convention 108 on data protection of the Council of Europe.

So because of Europe is a classic instrument or US go beyond US in union and create international consensus which have before so flawed and sometimes also happy. So yes, of course, we will make broad consensus and as he amount of finding and discussion phase. But I would say in the same way that we need a healthy criticism of  the same as the internet must be named undivided because if you go to carve his dispensable, it means no democracy can take any rule autonomously because that leads to fragmentation of these. That we have a conflict between those two principles and GDPR clearly was the step where decided the rules the same ways we have done this before. I don’t think that we were for the better others would be ready to precondition lawmaking when it is necessary a democratic process to the prior international law.

So, when we think about therefore applying the principle essentiality, it’s not so clear and not so easy to find new areas where is necessary because as I said already whenever AI treats posted a job our study think of the little resolution element protection. What about if AI doesn’t need us to date, should we extend P principles of transparency of data protection to AI working with small class today. I think that is a legitimate question which needs to be discussed about and I could imagine there are many who say yes, some of the principles should be extended and you need the law not only because of the principle in essentially we need more also to create level playing field and that is of course the very very important comes on economic development you want a level patriot where everybody plays by the same rules and regulation that’s not the case you don’t have level playing. So that would be my finally remark. Why? Back in his article of services of March washing paws sandy say we need laws. We need laws on privacy, we need laws on content regulation, we need laws portability by the way and you have a right to the possibilities of your data on one side to the other. I would say, first there is for some learning in people, in organizations and it’s good that Silicon Valley moves on from the very irresponsible, that’s just steward that’s great sense and that’s excuse beta-2, we take responsibility we are corporate citizens, we are working with democracy not against it and if you read the book or honor to work about democracy attitudes in Silicon Valley. Remember our business school professor has been very very boring. So, I would say this hopefully an element of learning that’s good. This may be also hazardous energy which pertains to let me change here because in the stock economy where technology moves very fast. If you realize that when the constant public scrutiny to be ethical to be good to do the right thing, you start thinking about never think why us always why not everybody and maybe this argument that the rule should apply to everybody. This is also important the argument before the increasingly for those who previously I continue that from my own experience. Bobby tells me that against…so where does this leave us at the end you want to never pay wrong foods to be in cost also against those who don’t want to pay ball and they are we learn from America all the time. Because when it comes to the enforcement of laws, America certainly a great example but pretty tough and we want democracy to live it that means also Parliament’s packed to do the job we want to see them acting, you want to see them taking responsibility and put good rules in place. We don’t want Parliament’s and shouldn’t ask in the public interest. I think we can be happy if Parliaments come up with laws and take responsibility and discourse which are the one ancestor. That’s an issue the democracy we have not enough people engaging. Democracy we have no people on street, the issue alone, we have all bind in the hold an autocratic tendencies and so on.

Yes, we have a problem with democracy, but if we have a problem with democracy, one way to revive democracy is also to say: let democracy do it. So my personal view here is very clear about the neoliberal discourse of “we don’t want laws; it can all be done in self-regulation, ethics codes and so on.”

This part of how the modern economy driven discourse has undermined the democracy, so if you want democracy, I think you have to work with democracy and make it useful and come up with good laws which provide the principle of ethics which requires the principle essentiality which comes in where we are convinced that we want the level playing field and where we are therefore convinced that if we want rules which are impossible. And I think the task before us on AI is to identify those fields where this is necessary. In Europe, I can say I don’t find much I gave you one example, where convinced that we need party or that is transparency principle maybe that the platforms have to take responsibility that one sees the bot ís the machine and also on the Internet those who employer and outside the platforms just directed as the dream we must make this transparent their life by the way any newspaper which takes paid advertisement must make here is this journalistic contribution or am I doing here PR. Yes, so we impose this burden or quote on our press which by the way come peace with the platforms and the technology often very unfair terms and we don’t have such a principle yet relating to the internet generally and opportunity AI. So I think now that’s why example in America probably a little bit more work is necessary because you don’t have rules like GDPR but for Europe I would say that to give frame to AI. It is possible and to define a few areas, the laws help innovation by GDPR.

Thank you very much for your attention!”

 

Transcript of Professor Neil Gershenfeld’s Talk at AI World Society Summit

May 15th, 2019

 Transcript:

[00:00:02] Hi, I’m Neil Gershenfeld, director of MIT’s Center for Bits and Atoms, and I chair the Fab Foundation, and I want to talk about.

[00:00:11] The future of A.I. and how the digital world relates to the physical world. So, as background, the Center for Bits and Atoms was created to look at the boundary between digital and physical. And we’ve done things like creating the first significant faster-than-classical quantum computations, or creating synthetic life — research right at that boundary. And as background, one of my students, Jason Taylor, built and runs all the computers at Facebook. One of my students, Raffi, built the computers for Twitter and then led the reboot of computing for the Democratic National Committee. One of my students, Ben Recht, won the test-of-time award at the big annual A.I. meeting, and all of that. I’m not really a computer scientist and they’re not computer scientists, but they could do that because they learned how, in a sense, to believe in physics, not computing, because the theory of computer science violates laws of physics. And once you understand how they relate, you can do the kind of work I’m describing. So with that background, it appears we’re now in an A.I. revolution. That’s what this meeting and a lot of the attention is about. But it’s important to be aware that we’re really in about the fifth boom-bust cycle: there have been these cycles of A.I. is going to solve all the problems, then it’s going to fail, then it’s going to solve them, then fail. And we’ve been through about five of those. What that boom and bust misses is the scaling. Year by year, processors have gotten faster, the memory you can process has increased, the data you can store has increased, the data you can collect over networks has increased. And when you add all of that up together, what it means is a brain does about 10 to the 17 operations per second, if you count the number of neurons and the rate they fire, and a supercomputer today does about 10 to the 17 operations per second. And so the computers have caught up to a brain in the number of operations they can perform, and we would be fundamentally derelict in our duty if at that point the computers weren’t beginning to do things comparable to the brain. The real thing that happened wasn’t a breakthrough; it was the steady scaling increases, so computers are matching the complexity of our brain. That’s what’s really leading to the results today: the steady scaling in processing speed, storage, networks. But A.I. has a mind-body problem, because it doesn’t have a body, essentially. And so what’s missed in those numbers is that the supercomputer doing 10 to the 17 operations a second is made out of about 10 to the 17 parts — a 10 followed by 17 zeros — if you count the transistors. A billion-dollar chip factory takes about a month to make that many transistors. To produce all the parts in a person — you, listening to this, are doing that every second. You’re full of manufacturing machines, molecular machines called ribosomes, and they’re placing 10 to the 17 parts every second. And so as you’re sitting there listening to this, every second you’re making the complexity of the supercomputer.
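As a rough back-of-the-envelope check of the arithmetic above, a short sketch; the neuron, synapse, and firing-rate figures are order-of-magnitude assumptions rather than numbers from the talk.

```python
# Order-of-magnitude estimate of brain "operations" per second versus a
# supercomputer. All constants are illustrative assumptions.
neurons = 1e11              # roughly 100 billion neurons
synapses_per_neuron = 1e4   # roughly 10,000 connections each
firing_rate_hz = 1e2        # on the order of 100 spikes per second

brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz
supercomputer_ops_per_sec = 1e17  # roughly a circa-2019 leadership-class machine

print(f"Brain:         ~{brain_ops_per_sec:.0e} synaptic events/second")
print(f"Supercomputer: ~{supercomputer_ops_per_sec:.0e} operations/second")
```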

 [00:03:57] At the heart of that is a process called morphogenesis, which is how genes give rise to physical form. Now that may sound a little remote from artificial intelligence, but consider two of the fathers of computing. One is Alan Turing. He gave us the modern models of computation. The last thing he studied was how genes give rise to form. And John von Neumann gave us modern computer architecture. The last thing he studied was self-reproducing machines — how a machine can communicate a computation for its own construction. And so this is the literal mother of all A.I. problems. It’s the evolution of A.I. itself: how did intelligence create intelligence? And that didn’t come from the brain thinking; it came from molecular intelligence. And the way morphogenesis works is that in one of the oldest parts of the genome there’s a program, in the sense you think of as a computer program, but it’s a computer program that gives rise to form. And so your genome doesn’t store things like “you have five fingers.” It stores a program that produces five fingers, and that may sound like a detail but it’s profound. One reason is a billion bases in your genome can specify a trillion cells. But the deeper reason is that almost anything you did to the genome would either be fatal or inconsequential, while changes to these developmental programs are interesting. You can go from five to six fingers, or fingers to webs, or walking to flapping. And this is exactly the heart of A.I. So what A.I. does is find representations. How you search for data hasn’t really changed; what these A.I. algorithms do is represent where it’s an interesting place to search. And so in the same sense, evolution searches over programs that create life by finding a beautiful representation for this evolutionary search. And so this was the breakthrough of the year last year in science, according to Science magazine. And it really is artificial intelligence, but it’s natural intelligence. It’s embodied in your molecules, and it’s what creates life. It’s what creates our intelligence. So now the connection to this meeting, and the heart of what I want to talk about, is to look ahead at the scaling. We’re really living through a third digital revolution that unites the first two. The first one was in communication. We used to send analog waves down the wire and they degraded with distance. Claude Shannon showed that if you communicate digitally, and the error is below a threshold, the noise reduces exponentially. And what that means is unreliable devices can communicate reliably. That observation gave us the Internet. John von Neumann built on that to show you can have an unreliable computing device but it can do a reliable computation, again by detecting and correcting errors. So the first two digital revolutions were digital communication and computation, which at heart means reliable operation with unreliable devices. That was kind of enshrined in what came to be called Moore’s Law. Gordon Moore, one of the founders of Intel, in 1965 plotted five data points for the transistors on a chip. And he showed that if you take the logarithm they line up on a straight line, and a straight line on a logarithmic plot means something is doubling — going from one to two to four to eight. And so he saw it looked like this was doubling, and he projected: what if that happens for 10 years? And in fact his projection was wrong. It went for 50 years. And so that came to be known as Moore’s Law.
And the scaling of Moore’s Law is what’s led to the digital revolution that is transforming the world. And it led to the scaling I described, with the largest computers reaching the complexity of a human brain.
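A small illustration of the doubling arithmetic just described; the starting count and doubling period are illustrative assumptions, not Moore’s actual 1965 data.

```python
import math

# Exponential growth: the count doubles every `doubling_period_years`,
# so its log2 grows linearly with time (a straight line on a log plot).
def transistors(year, base_year=1965, base_count=64, doubling_period_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_period_years)

for year in (1965, 1975, 1995, 2015):
    count = transistors(year)
    print(year, f"count ~ {count:.3g}", f"log2(count) = {math.log2(count):.1f}")
```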

 [00:08:34] So the heart of what I want to talk about is that the same thing is now happening for going from digital to physical, and this has come to be called Lass’s Law, after Sherry Lassiter, somebody I work with. And we actually have more data than Gordon Moore had to see the same sort of exponential scaling, but now not in digital computing or communication but in digital fabrication — crossing the boundary from digital to physical. To look at the history of that: M.I.T. invented computerized manufacturing in 1952, and that led up to the state-of-the-art manufacturing of today, the most advanced things like 3D printing. But those are all analog: the computer is digital, but there’s no information in the materials. Life, four billion years ago, evolved what I described — this process of genes giving rise to form. That’s truly digital in the sense of digital computing and communication, and so you can program that directly. My lab was part of a collaboration creating fully synthetic living organisms, but we’re learning how to do that in inorganic systems, in engineered materials. And so to trace what that history looks like: around the same time that M.I.T. made that first computerized manufacturing, it made a computer called the Whirlwind, a few blocks from where we are sitting right now. The Whirlwind filled a building.

 [00:10:08] It was two floors of a building, and it was the first significant real-time computer — a computer that could respond not to a batch of operations but in real time.

 [00:10:19] And you could trace how really all modern computer operating systems grew out of that project. It’s very important historically, and there was just one of those, filling a building — serving, in a sense, the scale of the planet — and in the same sense the first computerized manufacturing was one of those multimillion-dollar installations. The Whirlwind got transistorized at M.I.T. and was commercialized as the PDP. That was a minicomputer, and minicomputers historically went from filling a building to filling a room: it would fill about a room of this size, cost maybe one hundred thousand dollars, and weigh a few tons. And so that was much too big for an individual, but it was smaller than a whole organization. And it was on minicomputers that the Internet, e-mail, video games, word processing — all the things you do with modern computing — happened, when computers reached the level of a workgroup. The analog for that, for this digital fabrication revolution, is something called fab labs, and in a moment I’ll tell you much more about that. So to continue tracing: the minicomputer that cost one hundred thousand dollars — there were thousands of them, roughly one per city. What came after that were hobbyist computers that weren’t useful, but they were personal, and Microsoft and Apple and all of those companies grew out of these first hobbyist computers. And so there are millions of those, roughly one per town, and the analog to those isn’t just using a machine to make things but actually machines making machines — fab labs making fab labs. Then the computers became truly personal, in the era of PCs and smartphones, and there are now billions of those, as many as people on Earth; and the research we’re doing for those is going from printing processes or cutting machines to assemblers that can make almost anything — one process that can make things like integrated circuits and all the rest of the technology. And today we’ve reached what’s called the Internet of Things stage, where we might have trillions of smart devices, like a thermostat, and the thermostat has the computing power of the minicomputer, but now one person might have thousands of those. And for those we’re working up to things called self-assemblers. And the reason I just skim through that is that computing numerically went from one to a thousand to a million to a billion to a trillion over a 50-year period, transforming every aspect of how we live. In the same way, what I just described — going from digital to physical — is going through that same scaling. Each of those stages — a thousand, a million, a billion, a trillion — exists today in some form in the laboratory, but it’s going to take between now and 50 years in the future for them to emerge. But that is the implication for 50 years in the future. So in the same way that the Internet and all of that was created on minicomputers, as an outreach project initially for the National Science Foundation we began setting up these fab labs, and they fall in between the millions of dollars of tools we run in the lab at M.I.T. and the self-replicating systems far in the future. The fab lab today costs about one hundred thousand dollars, it fills a room like this, it weighs about two tons, and it contains 10 different machines that together read computerized data and do manufacturing. It includes printers and lasers and precision milling and cutting, and things like embedding and programming electronics. And once you have those tools you can make technology that grows food.
You can make consumer electronics, you can make furniture, you can make houses, boats, bicycles, clothing — just about everything you buy as a commercial product today you can make when you have access to these digital fabrication tools. There are inputs you need — you can’t yet make the integrated circuits; you need the research I described — but with those you can really make just about anything. And so they’re not yet personal — that’s coming down in cost — but they’re at the level of the community group, and the dramatic thing that happened is we set up a few as an outreach project and then they’ve been doubling for the last decade: every year and a half the number of these labs has doubled. There are about fifteen hundred now, and they range from as far north as the top of Norway to the bottom of Africa. They’re in rural India. They’re at the bottom tip of South America. They’re in favelas. They’re in just about every sort of setting. And so in the same way that Africa got to skip landline phones and largely go to mobile phones, a significant part of the world is skipping the industrial revolution and going into this distributed manufacturing. And that in turn has a number of profound implications. One is for education. So behind me is M.I.T.’s campus; the businesses spun off from here were added up a few years ago, and they amount to the world’s 10th economy — it falls between the economic output of India and Russia, from just these few square blocks. It’s not because the people here are unusually smart; it’s that this is a productive place for them to flourish. And what we’re finding is these fab labs all over the world are attracting exactly that profile of bright, inventive outliers — exactly the ones we see here, but now in rural African villages or in Arctic towns. And so we started a program called the Fab Academy, where instead of traveling a distance to a central campus like this, or instead of looking online at a screen, students have peers in workgroups with mentors and machines locally, and we connect them with video and content sharing to make an educational network — a distributed educational network that’s really growing — to tap the brainpower of the planet. So that was one unexpected thing, and another unexpected thing is the implication for economies and for cities and countries. Barcelona, for example, has a great design sense but over 50 percent youth unemployment; a whole generation can’t leave home and work. So my counterpart there, a fascinating guy, Vicente Guallart, became the city architect, the planner of Barcelona, and he started setting up fab labs in districts around the city. So in the same way you expect the city to provide clean water or electricity, the city is now providing the means to make — the means to produce — as part of urban infrastructure. And that launched a Fab City initiative of many cities around the world — Detroit or Oakland or Mexico City and so on — signing up to turn their consumers into creators with the means to produce. On the scale of a country, Bhutan for example is based not on gross domestic product but on gross national happiness, which doesn’t mean they’re happy, but it means they measure well-being as the output of the economy. So it’s a profound initiative on the scale of a country, but they were limited physically by buying crap trucked in from India and China. And so we’re working closely with Bhutan to deploy these labs throughout the country, to take gross national happiness and make it physical.
And so among the most sensitive issues right now in the world are diverging incomes, income inequality, tariffs, economic races to the bottom — all of that package of news. And if you think about this connection, once you can go from digital to physical, it’s fundamentally an end run around it. So digital fabrication is not separate; it completes the first two revolutions: digital computing is the means to think, digital communication is the means to message, and digital fabrication lets bits become atoms and atoms become bits. And so if you go into a fab lab and you produce the sort of things you see around me in this office and in the lab, it fundamentally changes a series of assumptions. At the heart of these battles over income inequality and tariffs and taxes and all of that is the assumption that you need a job — you need a business to have a job, to have work, to get money, to then be able to purchase something. If you can go into the lab and make something, it fundamentally changes the equation of all of those things, and really, in a way, it does an end run around it. Now, it’s not utopia, but if you think about the democratization that’s happened in computing and communication, it does the same thing. So you could make something for yourself, you could make it for your community, you could make it for your town. A wonderful group of fab labs in Detroit, run by Blair Evans, called Incite Focus, has an explicit model of how time in the lab is spent: a third is for traditional economic activity, for money; a third is for a post-salary economy that involves barter and exchange and community infrastructure; and a third, like Bhutan, is economic activity but not for money — for enrichment, transformation, for improving yourself and your community — really revisiting these very basic assumptions about what is an economy, what is work, what is money, how do you meet people’s needs. In a way it’s a very old idea to break global supply chains and consumption, but it rests on the ability to think globally — to be part of these global networks — but fabricate locally. So in the first two digital revolutions it took us decades — you could take Gordon Moore’s plot in 1965, but it took us decades — to catch up to spam, fake news, viruses, differences in access to computing. We don’t need to wait a few decades now. We have a moment right now where we can shape how this revolution is going to unfold, and there are very interesting data points for that: in the U.S. Congress, in the House, Representatives Foster and Massie, and in the Senate, Senators Van Hollen and Murkowski, are introducing a bill to do in the U.S. what Barcelona did, which is universal access to digital fabrication — a new notion of a national laboratory made out of connected local labs. And so in a world where you do that, a lot of what a government does today you don’t need. If you make a product the way it’s done today, it comes in, say, on a ship to a port; you need a port, you need somebody to build the port, you need to figure out how you tariff the thing coming in; then it needs to go, say, to a train, then to a highway, then to a building, and then you need a cash register and you need to account for the sale. If you just make it for yourself in the lab, you eliminate much of that global supply chain and much of the function of what a government does.
On the other hand, if you have the ability to do it yourself, you need a whole bunch of new functions that government doesn't perform today, around empowering and enabling this, making it efficient, effective, safe, and all of that. But in a world where this is so distributed, you can't do it by command and control, you can't legislate it; you have to opt in and add value to the networks. And so merging fabrication with communication and computation fundamentally challenges how an economy works, and it fundamentally challenges what functions a government performs, but it gives a real, hopeful opportunity not to keep fighting the same battles we've been fighting for many years, but to step around them and empower anybody to make almost anything anywhere.

So now, to step back and conclude: I started by talking about the foundations of computing and how they've led to what seems to be a breakthrough in AI. I then connected that with what, for me, is the most profound part of AI: not how we think but how we evolved the ability to think, the creation of thinking. So not just AI, but creating the AI that creates AI. That involves intelligence, but it's not simply software; it's a molecular intelligence that goes from the digital to the physical. We're now learning to build that technology ourselves, it's growing exponentially, and it's one of the most significant and disruptive things I know of for the future of the planet. In doing that, we were initially frustrated by working with schools for education and non-profits for aid and governments for governance and businesses for the economy, and we kept tripping over that, because if anybody can make anything anywhere, it breaks all of those boundaries. And so probably the hardest part of this whole story, but the most interesting part, is that we had to build a whole new set of organizations around anybody being able to make anything anywhere: how you live, learn, work, and play. There's a really interesting group of organizational innovators helping lead that story, and I'll share some references afterwards on that. One of them is a book I recently wrote, Designing Reality, with my younger brother Alan, who ran one of the biggest videogame studios, and my older brother Joel, who led a national labor relations organization, tracing out how the technical roadmap relates to the social roadmap for its impact.

 [00:25:05] And so that's what it means for AI to get a body, to become physical. And I invite you to join us in building that new world.

 [00:25:14] Thank you.

 – End of Transcript –

Professor Neil Gershenfeld speaks at AI World Society – G7 Summit Conference at Loeb House, Harvard University, April 25, 2019.

A summary of this speech can be found here.

 

Vint Cerf talks about Human Thinking vs. Machine Thinking

Vint Cerf talks about Human Thinking vs. Machine Thinking

Cerf, universally regarded as one of the “co-fathers” of the Internet, presented “What IF Machines Thought Like Humans?” as part of Purdue University’s Ideas Festival, the centerpiece of the University’s Giant Leaps Sesquicentennial Campaign. Cerf’s talk aligned with the festival’s theme of AI, Algorithms and Automation: Balancing Humanity and Technology. The April 5 event was sponsored by the College of Engineering.

Mr. Nguyen Anh Tuan, CEO of The Boston Global Forum, presents the plaque of World Leader in AI World Society Award to the Father of the Internet Vint Cerf.

“The topic, in my brief introduction here, is about artificial intelligence. I confess to you I always thought artificial intelligence might be best described as ‘Artificial Idiocy,’” Cerf said. “Machine learning and multi-layer neural networking are not necessarily the same as general artificial intelligence. General artificial intelligence has to do with the ability of a system to take a lot of input and formulate a real-world model of a conceptual idea in order to reason about the model.”

Cerf asked the audience to think of a simple table to further explain the difference between human thinking and a machine’s way of thinking.

He explained that we do not usually think of a table as a “flat surface that is perpendicular to the gravitational field” but that is what it is. Instead, we think of a table as an object to sit things upon. After humans understand the properties of this table, then they can easily identify other objects that can be used as tables. This is how humans can generalize abstract concepts of things quickly compared to a machine where abstract concepts are harder to learn from a simple input.

“As humans, we take real-world objects, abstract these models, reason about them and then apply what we have learned. I find that way of thinking to be missing in most artificial intelligence projects,” Cerf said.

More details of his speech can be found here.

The Boston Global Forum honored Vint Cerf with the World Leader in AI World Society Award at the AIWS-G7 Summit Conference on April 25, 2019 at Loeb House, Harvard University. His acceptance speech can be found here.

Allan Cytryn’s discussion with Professor Neil Gershenfeld

Allan Cytryn’s discussion with Professor Neil Gershenfeld

Allan Cytryn, a member of the AI World Society Standards and Practice Committee, discussed the following questions with Professor Neil Gershenfeld:

  1. The historical barrier between computation and production has been a de facto constraint, or control, on what AI might do. But once AI can link up with manufacturing, that control is eliminated. What are the implications and what actions should be taken?
  2. In discussing the corollaries between computation and genetics, the question then arises, “What is life”? And will the linking of bits and atoms therefore allow AI to create life? Can we then interfere with the creation of life by AI, even if the life is unknown and unrecognizable to us?
  3. The discussion begins with a review of the scaling of computer power to brain power, but other researchers have said that computers do not possess cognitive skills, they lack the structures in the brain that produce cognitive behavior. Is this distinction material? If not, why not? If so, what are the implications?
  4. A key issue on AI is “transparency” and there are legitimate efforts to pursue increased transparency. But many people are now viewing this as potentially limited, since it assumes an anthropomorphic notion of intelligence and communication, which may not be relevant to non-humans. If we can’t understand what the machines are doing, how do we know what they might be building and whether that is good or bad?
  5. If the achievement of bits and atoms devolves all existing social structures, doesn’t that return mankind to a Hobbesian state? Consider: if the state provides the universal fab infrastructure, but that same infrastructure destroys the state, won’t there be a survival-of-the-fittest, Mad Max-style competition to lawlessly (if the state has devolved, there is no law) seize control of the means of production for power and advantage?

Professor Gershenfeld’s full speech at AIWS Summit 2019 can be found here.

Professor Alex ‘Sandy’ Pentland’s talk at the AIWS Summit

Professor Alex ‘Sandy’ Pentland’s talk at the AIWS Summit

Former Governor Michael Dukakis wrote in his letter calling for contributions to the AI World Society (AIWS) Summit, “The real world applications of AI will bring revolutionary changes and will have profound effects on the future of humanity. The changes will bring challenges to societal norms and economic models that we have relied on for decades. And we would be wise to prepare for all that will mean…” But, “our national governments have been slow to act. And international bodies such as the United Nations have yet to effectively address the problem.”

The AIWS Summit is filling this void, serving as a place where the brightest minds on the planet can work together to find the innovative solutions that will help us build a brighter future. This week, we are pleased to present a talk by MIT Professor Alex ‘Sandy’ Pentland for the AIWS Summit.

Professor Alex ‘Sandy’ Pentland speaks with Mr. Nguyen Anh Tuan, Director of Michael Dukakis Institute for Leadership and Innovation.

Professor Pentland directs the Connection Science and Human Dynamics labs at MIT. He is one of the most-cited scientists in the world, and Forbes recently declared him one of the “7 most powerful data scientists in the world” along with Google founders and the Chief Technical Officer of the United States. He co-led the World Economic Forum discussion in Davos that led to the EU privacy regulation GDPR, and was central in forging the transparency and accountability mechanisms in the UN’s Sustainable Development Goals.  He has received numerous awards and prizes such as the McKinsey Award from Harvard Business Review, the 40th Anniversary of the Internet from DARPA, and the Brandeis Award for work in privacy.

In his talk for the AIWS Summit, Professor Pentland introduced a key project of his research group: techniques and open-source software for helping countries and companies deal with AI in a way that is effective and efficient, but also ethical. In today’s world, data is everywhere, most of it in the hands of private companies; he raised the question of how this data can be used by governments and by social and civic systems in a way that is trustworthy and unbiased, and in which people understand what is happening.

He talked about his method of Open Algorithms as a way to address this question. He advocates leaving data where it is collected and having open algorithms answer inquiries about the data, instead of transferring all the data into one single pool, which raises concerns about security, ownership, and privacy.

In his proposed framework, a decentralized federation of different players and interests agrees to answer certain questions for certain functions in a way that everyone can audit. We can keep track of what questions are being asked about what data, and the people who collected or own the data can monitor the entire process. Decisions made by a country can be audited, and questions about fairness or bias can be answered, because there is now a record of what was done with the data and who did it.
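The pattern he describes can be sketched in a few lines of code. The example below is a minimal, hypothetical illustration (the names, data, and query catalogue are invented for this sketch and are not code from Professor Pentland's project): each data holder keeps its raw records locally, runs only pre-approved queries against them, returns aggregate answers, and logs every question so the whole process can be audited.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Callable, Dict, List


@dataclass
class DataHolder:
    """A party (e.g. a telecom or a transit agency) that keeps its raw data locally."""
    name: str
    records: List[dict]
    audit_log: List[str] = field(default_factory=list)

    def answer(self, query_name: str, query: Callable[[List[dict]], float]) -> float:
        # The approved query runs where the data lives; only an aggregate result leaves.
        self.audit_log.append(f"{self.name} answered '{query_name}'")
        return query(self.records)


# A small catalogue of pre-approved, openly published queries.
APPROVED_QUERIES: Dict[str, Callable[[List[dict]], float]] = {
    "average_daily_trips": lambda rows: mean(r["trips"] for r in rows),
}


def federated_ask(holders: List[DataHolder], query_name: str) -> float:
    """Ask every federation member the same approved question and combine the
    aggregate answers, without ever pooling the raw records."""
    query = APPROVED_QUERIES[query_name]
    return mean(holder.answer(query_name, query) for holder in holders)


if __name__ == "__main__":
    telco = DataHolder("telco", [{"trips": 3}, {"trips": 5}])
    transit = DataHolder("transit-agency", [{"trips": 2}, {"trips": 4}])

    print(federated_ask([telco, transit], "average_daily_trips"))  # 3.5
    print(telco.audit_log)  # record of what was asked, for later auditing
```

Because only vetted, aggregate answers ever leave a holder, the federation can answer policy questions while the owners of the data keep custody of it and retain a full audit trail.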

Several countries, including Estonia, Israel, and Australia, have adopted this framework and conducted pilot projects to explore how to draw better insights from public-private data partnerships and come up with better policies to serve their people.

The full video of Professor Pentland’s talk can be seen here.

THE FUTURE OF AI AND HOW THE DIGITAL WORLD RELATES TO THE PHYSICAL WORLD – PROF. GERSHENFELD’S TALK AT AIWS SUMMIT 2019

THE FUTURE OF AI AND HOW THE DIGITAL WORLD RELATES TO THE PHYSICAL WORLD – PROF. GERSHENFELD’S TALK AT AIWS SUMMIT 2019

The field of AI research was founded more than 50 years ago. In June of 1956, a few dozen scientists from all around the country gathered for a meeting on the campus of Dartmouth College. What they were talking about was how to build a machine that could think.

Many years later, in 2009, some of the pioneers of the field, joined by later generations of thinkers, were gearing up for a massive “do-over” of the whole idea. The new project was called the Mind Machine Project (MMP). Prof. Neil Gershenfeld, Director of MIT’s Center for Bits and Atoms, is one of the leaders of MMP. One of the project’s goals was to create intelligent machines — “whatever that means,” he recalled.

On May 15, 2019, at MIT’s Center for Bits and Atoms, Prof. Gershenfeld gave a keynote talk at the AI World Society Summit 2019 about the future of AI and how the digital world relates to the physical world – the boundary between them.

“It appears that we are in an AI revolution, but it is really important to be aware that we’re now in its fifth boom-and-bust cycle,” said Gershenfeld. The boom-and-bust cycle refers to alternating phases of economic growth and decline. What he meant is that “there are cycles where AI is going to solve all the problems and where AI is going to fail, and we have been through five of those”. What is different today, he explained, is that, thanks to advances in computing technology, computers have caught up to the brain in terms of the number of operations they can perform.
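To give a rough sense of what “caught up” means here, a common back-of-envelope estimate (the round numbers below are an illustrative assumption, not figures from the talk) puts the brain’s synaptic event rate on the same order of magnitude as the operation rate of today’s largest machines:

```python
# Back-of-envelope comparison with illustrative round numbers (not from the talk).
neurons = 1e11              # ~100 billion neurons
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
events_per_second = 1e2     # ~100 synaptic events per second (a generous upper bound)

brain_events_per_s = neurons * synapses_per_neuron * events_per_second
exascale_flops = 1e18       # order of magnitude of an exascale supercomputer

print(f"brain   ~ {brain_events_per_s:.0e} synaptic events/s")  # ~1e17
print(f"machine ~ {exascale_flops:.0e} floating-point ops/s")   # ~1e18
```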

Gershenfeld talked about two of the fathers of computing, Alan Turing and John von Neumann, emphasizing that Turing’s final study was about how genes give rise to form and von Neumann’s final study was about self-replicating machines, how a machine can communicate its computation for its own construction. “Literally, the mother of all AI problems is the evolution of AI itself, how intelligence creates intelligence,” said Gershenfeld.
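Von Neumann’s picture of a machine that carries and communicates the description needed for its own construction has a familiar software analogue, the quine: a program whose output is exactly its own source code. The two lines below are a generic illustration of that idea, not an example from the talk (the comment is not part of the reproduced output):

```python
# A quine: running these two lines prints these two lines exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```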

He considers finding representations to be the heart of AI. “How to search data has not really changed. What AI algorithms do is to represent where is an interesting place to search. In the same sense, evolution searches over programs that create lives by finding the beautiful representation for the evolutionary search.”

He then turned to what lies ahead on the scaling curve of AI. “We are really living through the third digital revolution.” The first two were digital computing and digital communication; in a nutshell, digitization lets us perform reliable operations using imperfect devices.
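That last point, getting reliable operation out of imperfect parts, can be illustrated with the textbook redundancy argument (a generic sketch, not code from the talk): if one copy of a bit is corrupted with probability p, a three-way majority vote fails only when at least two copies are corrupted, roughly 3p^2 for small p.

```python
# Generic illustration of reliability from redundancy: digitize, copy, and
# majority-vote, so that errors on individual copies rarely flip the result.
import random

def noisy_copy(bit: int, p_flip: float) -> int:
    """One copy of a bit passed through a channel that flips it with probability p_flip."""
    return bit ^ (random.random() < p_flip)

def majority_vote(bit: int, p_flip: float, copies: int = 3) -> int:
    """Send several noisy copies and take a majority vote."""
    votes = [noisy_copy(bit, p_flip) for _ in range(copies)]
    return int(sum(votes) > copies / 2)

if __name__ == "__main__":
    random.seed(0)
    p, trials = 0.1, 100_000
    single = sum(noisy_copy(1, p) != 1 for _ in range(trials)) / trials
    voted = sum(majority_vote(1, p) != 1 for _ in range(trials)) / trials
    print(f"single-copy error rate    ~ {single:.3f}")  # ~0.100
    print(f"3-way majority error rate ~ {voted:.3f}")   # ~0.028 = 3*p^2*(1-p) + p^3
```

Applied to both computation and communication, this is the principle that lets digital systems scale despite noisy components; digital fabrication aims to extend it to materials.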

The third digital revolution extends this insight into fabrication. He proposed that, with digital fabrication, we can digitalize not just the description of a design but also the materials that it is made from, in the same way that living systems are assembled from a small set of amino acids. A problem with today’s AI, he said, is that AI does not have a “body”, and with digital fabrication, we are getting closer to real AI.

Digital fabrication is challenging fundamental assumptions about the nature of work, money and government. It is a significant breakthrough and will have a big impact on shaping the future of AI. The full video of Professor Gershenfeld’s talk can be found here.