Max Tegmark Looks at Artificial Intelligence and the Need for Wisdom-Driven Technology

Feb 19, 2018 | News

A profile of Max Tegmark, author of Life 3.0, who spoke recently before the Boston Global Forum-Michael Dukakis Institute on Cybersecurity Day 2017 (December 12, 2017)

Max Tegmark, author of Life 3.0, addressing AIWS Round Table

Q. You are the author of Life 3.0. It’s about the future and focuses on Artificial Intelligence. Should we fear Artificial Intelligence? Will it take our jobs or create robots that can harm us?

A. You’ll notice that my book is not called “Doom and Gloom 3.0.” I think it’s important, also, to remember all the wonderful upsides technology can bring if we do it right. Even though I spent the last week in California at a conference on technical AI – one that now draws 8,000 people and keeps roughly doubling every year – I think it’s very important to broaden this conversation beyond nerds like myself, because the question of how we use this technology beneficially is one everyone has to contribute to. It’s particularly important for people who have knowledge of policymaking. So, let’s start on an optimistic note: the Apollo 11 moon mission.

This was not only a successful mission, but also an inspiring one, because it shows that when we humans use technology wisely, we can accomplish things our ancestors could only dream of. But, as my Estonian friend Jaan Tallinn likes to point out, rocketry is also a great metaphor for how we use all of technology, because it’s not enough to simply make technology powerful. You also have to steer it, and you have to figure out where you want to go. NASA didn’t launch the rocket first and only then ask: “Wait, how do we steer this? Now that we have these great engines, where should we go? Maybe Mars?”

Q. That was decades ago. The power of those huge banks of computers at mission control can now fit in my smartphone. Are we getting ahead of ourselves? AI isn’t just a small step; it’s a huge leap for mankind.

A. Let’s talk just for a moment about another journey, empowered by something much more powerful than rocket engines, where the passengers aren’t just three astronauts but all of humanity. Let’s talk about our collective journey into the future with artificial intelligence, the most powerful technology we have ever seen. In the grand scheme of things, if we take a step back, technology is incredibly empowering. You know, we’re still stuck on an imperceptibly small speck in a seemingly lifeless universe, yet technology offers life the opportunity to flourish, not just for the next election cycle on our little planet but for billions of years. And there’s no shortage of resources, either, though we have to stop quibbling over the ones we have.

Q. So what’s the plus side?

A. Let’s talk about the growing power of AI, then a bit about steering it, and then a bit about where we want to go with it. The power of AI is improving dramatically. In past accomplishments, like when we were overthrown in chess by IBM’s Deep Blue, the intelligence of the machine was largely programmed in by humans who knew how to play chess. It won simply because it could think faster and remember more. In contrast, AlphaGo – and the new AlphaZero that was just announced a few days ago – took 3,000 years of human Go knowledge, millions of games and all the books and accumulated wisdom, and threw it all in the trash. It just learned from scratch in a day and was able to blow away everything.

Q. But these are games. What about real life?

A. The same software now also beats the world’s best – not just the world’s best chess players, but all the world’s best computer chess programs. So AlphaZero has shown in chess that the big deal isn’t that it can blow away human players, but that it can blow away the human AI programmers who spent over thirty years building chess software. In just four hours, it got better than all of them. It played 100 games against Stockfish, the world’s best chess program, didn’t lose a single one, and even discovered some pretty profound things.

You can watch a computer learning to play a video game. At first it plays terribly; it misses the ball almost all the time because it doesn’t know what a ball is, or what a paddle is, or what a game is. But with a very, very simple reinforcement learning algorithm, loosely inspired by our brain, it gradually gets better and pretty soon reaches the point where it won’t miss the ball. It does better than I do. And if you just keep training it – this is now such a simple thing that I can train it up in my lab at MIT on a GPU very quickly – it discovers something that the people at Google DeepMind didn’t know about, which is that if you –
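What does “a very, very simple reinforcement learning algorithm” look like in practice? Here is a minimal sketch in Python: not DeepMind’s deep Q-network, but toy tabular Q-learning on a hypothetical catch-the-ball game invented here purely to illustrate the same trial-and-error principle Tegmark describes.

```python
import random

# Toy "catch the ball" game: a ball falls down a 5x5 grid and a paddle
# at the bottom can move left, stay, or move right. The agent earns
# +1 for catching the ball and -1 for missing it. (Hypothetical game,
# not DeepMind's Atari setup.)
WIDTH, HEIGHT = 5, 5
ACTIONS = [-1, 0, 1]  # move paddle left, stay, move right

def play_episode(q_table, epsilon, alpha=0.1, gamma=0.9):
    ball_x, ball_y = random.randrange(WIDTH), 0
    paddle_x = WIDTH // 2
    while ball_y < HEIGHT - 1:
        state = (ball_x, ball_y, paddle_x)
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))
        paddle_x = min(WIDTH - 1, max(0, paddle_x + action))
        ball_y += 1
        reward = 0.0
        if ball_y == HEIGHT - 1:  # ball reached the paddle's row
            reward = 1.0 if paddle_x == ball_x else -1.0
        # Q-learning update: nudge the value estimate toward
        # reward + discounted value of the best next action.
        next_state = (ball_x, ball_y, paddle_x)
        best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
        old = q_table.get((state, action), 0.0)
        q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return reward

q = {}
for episode in range(5000):
    play_episode(q, epsilon=max(0.05, 1.0 - episode / 2000))

# After training, the greedy policy catches the ball almost every time.
wins = sum(play_episode(q, epsilon=0.0) > 0 for _ in range(100))
print(f"caught {wins}/100 balls after training")
```

The only feedback the program ever receives is the reward at the bottom of the screen; everything else, including what a “paddle” is for, emerges from trial and error.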

Q. Go on….

A. It’s not just computer games. There’s been a lot of progress in taking robots, or simulated robots, and seeing if they can learn to walk from scratch. They can, even if you never show them any videos of what walking looks like and there is no human intervention. You just turn it into a game again, where they get a point, basically, whenever they move one centimeter to the right. It works for bipedal life forms, quadrupeds, all sorts of things.
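The reward scheme Tegmark describes – a point per centimeter of rightward progress – can be sketched in a few lines. In the sketch below, simulate_walk is a hypothetical stand-in for a real physics simulator (something like MuJoCo or PyBullet in practice), with fake dynamics chosen purely for illustration; the point is the training loop, which improves a gait using nothing but the distance reward.

```python
import random

# Hypothetical stand-in for a physics simulator: steps a walker with the
# given controller parameters and returns how far it moved to the right,
# in centimeters. The dynamics here are fake and purely illustrative.
def simulate_walk(params, steps=200):
    # Toy model: forward progress depends on how well two "gait"
    # parameters are coordinated, plus noise.
    quality = -((params[0] - 0.7) ** 2 + (params[1] - 0.3) ** 2)
    return sum(max(0.0, quality + 1.0 + random.gauss(0, 0.1)) for _ in range(steps))

def reward(distance_cm):
    # The whole "game": one point per centimeter of rightward progress.
    return distance_cm

# Simple random-search training loop: keep whatever parameter tweak
# walks farther. No videos of walking, no human demonstrations.
best = [random.random(), random.random()]
best_score = reward(simulate_walk(best))
for _ in range(500):
    candidate = [p + random.gauss(0, 0.05) for p in best]
    score = reward(simulate_walk(candidate))
    if score > best_score:
        best, best_score = candidate, score

print(f"learned gait walks {best_score:.0f} cm")
```

Real locomotion work uses far more sophisticated optimizers than this random search, but the design choice is the same: a scalar reward tied to forward progress is enough to shape behavior.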

Q. How far is this progress going to continue? What’s going to happen eventually?

A. I like to think about it in terms of a landscape of different tasks, where the elevation represents how difficult each task is for machines and the sea level represents what machines can do today. Chess, arithmetic, and so on have of course long been submerged. The sea level is rising, so in the short term, obviously, we should not advise our near and dear to take jobs right on the waterfront, because those will be the first to be automated away.

Q. Eventually won’t everything get flooded? Or will AI ultimately fail to live up to its original ambition to do everything?

A. This is a controversy where you have top AI researchers on both sides, but in recent polls most AI researchers think that AI will succeed at this, perhaps within a few decades. Opinions vary widely. As long as it’s a possibility, it’s very valuable for the broader society to talk about what that means.

Q. How can we make sure this becomes the best thing ever, rather than the worst thing ever?

A. Yes, how do we steer this technology in a good direction? How can we control it? In addition to my day job at MIT, I founded a little nonprofit, the Future of Life Institute, with my Estonian cofounder Jaan Tallinn, and we are very fortunate to have a lot of great people involved. Our mission statement has the word “steer” in it. We are optimistic that we can create an exciting, inspiring future with technology, as long as we figure out the steering part – as long as we meet the growing power of this technology with the growing wisdom with which we manage it. And here, I think, is a really big challenge for policymaking and for society as a whole, because to win this wisdom race, I feel we need a strategy change.

Q. But technology developers don’t like regulation and constraints, right?

A. In the past, we always stayed ahead in the wisdom race by learning from mistakes. We discovered fire – “oops” – screwed up a bunch of times, and then invented the fire extinguisher. We invented the automobile, screwed up a bunch of times, invented the seatbelt, the airbag, the traffic light – and it works pretty well, right? But as technology gets ever more powerful, at some point we reach a threshold where it’s so powerful that we no longer want to learn from mistakes – where that becomes a bad idea. We want to be proactive and plan ahead, to get things right the first time, because that might be the only time we have.

Q. Some folks would say you are scare-mongering?

A. No, this isn’t scare-mongering. This is what we over at MIT call “safety engineering.” Think about the Apollo 11 moon launch. NASA systematically thought through everything that could possibly go wrong with it. Was that scare-mongering? No, that was exactly the safety engineering that guaranteed the success of the mission. That’s what I’m advocating here as well. We should take a “red team” approach, where we think through the things that can go wrong with these technologies, precisely so that we as a society can make sure they go right instead.

With the Future of Life Institute, we’ve organized conferences that bring together world leaders in AI – from both industry and academia – to talk specifically about this. Not about how to make AI more powerful, for which there are already plenty of conferences, but about how to make it beneficial.

Q. What was the upshot? Did the scientists support you?

A. The outcome of the most recent meeting we held in Asilomar, California, was the 23 Asilomar AI Principles (https://futureoflife.org/ai-principles/), which have been signed by AI researchers from around the world. It’s an amazing list of people: the CEO of Google DeepMind, the company responsible for the results I just described; Ilya Sutskever from OpenAI; Yann LeCun from Facebook; researchers from Microsoft, IBM, Apple, and Google Brain; and academics from around the world.

Q. Twenty-three guiding principles – how about some examples?

A. One of these principles is about making sure that AI is used primarily for new ways of helping people, rather than new ways of harming people. There is a good precedent here. Any science, of course, can be used for new ways of helping people or new ways of harming people. Science itself is completely morally neutral; it’s an amplifier of our human power.

Today, if you look at people who graduate from Harvard with biology and chemistry degrees, for example, pretty much all of them go into biotech and other positive applications of their field, rather than building bioweapons. It didn’t have to be that way: biologists pushed very, very hard for an international treaty limiting biological weapons – it was, in fact, a Harvard biologist who persuaded Henry Kissinger, who persuaded Richard Nixon – and that treaty created a huge stigma against bioweapons.

Q. Go on…

A. The same thing has happened with chemistry, and AI researchers are quite united in wanting to do the same with AI – in this case, drawing a line against lethal autonomous weapons. It’s very touch and go. There was a meeting at the United Nations in Geneva a few weeks ago, and it was a bit of a flop. This has nothing to do with superintelligence or human-level AI; it’s something that could happen right now, just by integrating the technologies we already have and mass-producing them.

A second principle, where there was huge consensus, is that the great wealth that can obviously be created if machines make ever more of our goods and services should be distributed broadly, so that it actually makes everybody better off and we get a future people can look forward to.

Q. Does government have a role?

A. A third principle is that we – and by “we” I mean governments – should invest heavily in AI safety research. So, raise your hand if your computer has ever crashed. [Many hands raise.] You spoke about cybersecurity this morning, so you’re well aware that if we can’t get our act together on these issues, then all the wonderful technology we build can cause problems, by either malfunctioning or getting hacked and turned against us. I feel we need significantly more investment here – not just in near-term things like cybersecurity, but also, as machines get ever more capable, in the question: “How can you make machines really understand human goals, really adopt those goals, and guarantee that they will retain them as they get ever more capable?” Right now, there’s almost no funding for these sorts of questions from government agencies. We gave out around 37 grants with the help of Elon Musk to sort of kickstart this. There were a number of sessions at the NIPS Conference (https://nips.cc/Conferences/2017) where it was clear that researchers want to work on this, but they have to pay their grad students. There’s a real opportunity to invest in this aspect of the wisdom race.

Q. So it’s not a technology race, but a wisdom race.

A. Exactly. And to win the wisdom race – to win any race, right? – there are two strategies. If you want to win the Boston Marathon, you can either slow down the competition by serving them two-week-old shrimp the night before, or you can try to run faster yourself. I think the way to do this is not to slow down the development of AI – that’s both unrealistic and undesirable – but rather to invest in the complementary questions of how to make sure it gets used wisely.

Last, but not least, when you launch a rocket, you think through in advance where you want to go with it. We are so focused on tomorrow, the next election cycle, and the next product we can launch with AI, and we have a tendency to fall in love with technology just because it’s cool. If we are on the cusp of creating something so powerful that it may one day do all our jobs, and maybe even be thought of as a new life form – and that will, at a minimum, utterly transform our society – then we should look a little farther ahead than the next election cycle. We should ask, “What kind of future are we trying to create?”

Q. What would you tell someone graduating from high school today?

A. I often get students walking into my office at MIT for career advice, and I always ask them, “Where do you want to be in ten years?” If all a student can say is, “Uh, maybe I’ll be murdered, and maybe I’ll get cancer,” that’s a terrible approach to career planning, right? But that is exactly what we’re doing as a species when we think about the future of AI. Every time we go to the movies, there’s some new doomsday scenario – oh, it’s Terminator, oh, it’s Blade Runner, this dystopia, that dystopia – and it leaves us paralyzed with fear. It’s crucial, I feel, that we form positive visions – shared positive visions – that we can aspire to. Because if we can, we’re much more likely to achieve them.

Former Estonian President Toomas Hendrik Ilves speaking at AIWS Round Table in a discussion with Max Tegmark