SpaceX Launches First Broadband Internet Satellites

SpaceX just launched two test satellites, Tintin A and Tintin B, which the company and its CEO Elon Musk hope will be the beginning of a space-based broadband network dubbed Starlink. Both satellites launched on a SpaceX Falcon 9 rocket and were successfully placed in orbit. “First two Starlink demo satellites, called Tintin A & B, deployed and communicating to Earth stations,” Musk said in a tweet. Starlink is still far from operational, but these tests suggest that SpaceX can field the satellite constellation required to provide global internet coverage. During the launch, SpaceX also attempted, unsuccessfully, to recover the rocket’s nose cone with a net-equipped boat, missing the catch just barely.

Projects like Musk’s Starlink are connecting the world more each day, often for the better. At the same time, as technology advances, so too do the risks that it will be misused. To that end, the Boston Global Forum – Michael Dukakis Institute has published the Ethics Code of Conduct for Cyber Peace and Security (ECCC). BGF, MDI, and AIWS are dedicated to engaging technology and policy leaders to create responsible concepts and guidelines that minimize risks and ensure the Digital Age benefits everyone.

Seven Layers of AI Ethics

At our April BGF-G7 Summit, AIWS will announce its Ethical Framework for AI, which outlines beneficial and safe uses for the technology. The framework comprises seven layers, listed below, covering public policy, technology, ethics, and more:

Artificial Intelligence World Society (AIWS) Model

Layer 7 (Application Layer): Applications and Services

Layer 6 (Public Service Layer): Public Services, including Transportation, Healthcare, Education, and the Justice System

Layer 5 (Policy Layer): Policy, Regulation, Conventions, Norms

Layer 4 (Legislation Layer): Law and Legislation

Layer 3 (Tech Layer): Technical Management, Data Governance, Algorithm Accountability, Standards, IT Expert Management

Layer 2 (Ethics Layer): Ethical Frameworks, Criteria, Principles

Layer 1 (Charter Layer): Charter, Concepts

Rising Concerns Over Malicious Uses of AI

AI technology is advancing rapidly and, with it, the risks of it being hijacked for malicious purposes. Twenty-five experts recently published the Malicious AI Report, which explores threats to digital, political, and physical security that AI technology might bring on within the next five years. Among these are AI’s ability to automate cyberattacks and to generate false images, audio, and even entire personas to manipulate public opinion. One such example was last year’s “deepfake” incident, in which celebrity faces were realistically (but deceitfully) edited into pornographic videos. This technology is already being misused today, making it crucial to create frameworks and ethical guidelines to govern its use. To that end, AIWS will announce a seven-layer Ethical Framework for AI this April, at our BGF-G7 Summit conference at Harvard University.

Club de Madrid Hosts Round Table on the Future of Democracy

This weekend, the World Leadership Alliance – Club de Madrid is hosting the Next Generation Democracy (NGD) Round Table for North America in San Francisco. At the event’s core is a discussion of the current state and future of democracy, including trust and distrust in institutions, exclusionary nationalism, and the impact of the tech sector on governance. The World Leadership Alliance – Club de Madrid is led by former Latvian President Vaira Vike-Freiberga, a member of both the BGF and AIWS Boards of Thinkers. AIWS is proud to work with President Vike-Freiberga, who was recently presented an award by the Boston Global Forum and Michael Dukakis Institute. AIWS considers discussions such as the NGD Round Table essential for a positive AI future, especially given its implications for governance and public policy.

Columbia University Researchers Create Program to ‘Retrain’ AI

Researchers from Columbia University have created DeepXplore, a program that reverse engineers how AI systems learn in order to find bugs. One application is self-driving cars, which depend on visual data to train the neural networks that let them “learn.” Columbia News cited one example of a self-driving car that collided with a truck after mistaking it for a cloud, killing the passenger. The researchers tested DeepXplore on 15 state-of-the-art deep learning systems and found thousands of bugs that, once corrected, substantially improved accuracy. With this technology, researchers can not only “retrain” AI systems to recognize and correct the bugs affecting them, but also identify malware hiding from anti-virus software, and more.
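
One way to build intuition for this approach is differential testing: run the same input through several independently trained models and flag any input on which their predictions diverge, since a disagreement usually means at least one model is wrong. The toy models and random search below are illustrative assumptions only, not DeepXplore’s actual code, which additionally guides the search with neuron-coverage metrics and gradient-based input perturbation.

```python
# Minimal differential-testing sketch (illustrative only, not DeepXplore's API):
# several toy "models" classify the same input; any disagreement is flagged
# as a potential bug worth investigating.
import numpy as np

rng = np.random.default_rng(0)

def make_model():
    """Return a toy 'model': a random linear classifier over 2 classes."""
    W = rng.normal(size=(4, 2))
    return lambda x: int((x @ W).argmax())

models = [make_model() for _ in range(3)]

def find_disagreement(n_trials=1000):
    """Randomly sample inputs until the models disagree on a label."""
    for _ in range(n_trials):
        x = rng.normal(size=4)            # candidate test input
        labels = {m(x) for m in models}   # one prediction per model
        if len(labels) > 1:               # disagreement = potential bug
            return x, labels
    return None, None

x, labels = find_disagreement()
if x is not None:
    print("Input triggering disagreement:", x, "labels:", labels)
```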

Understanding why AI makes the decisions it does will go a long way toward making people comfortable with self-driving cars and other safety-critical systems (that is, machines that can kill you). AI innovation is imperative, but steps must be taken to ensure that it is responsible innovation. This was the topic of our talk at the AIWS Round Table with Max Tegmark, author of Life 3.0. AI will change our society rapidly, and both technologists and policymakers must ensure that it changes society for the better.

Angela Merkel’s Speech at the World Economic Forum

At the World Economic Forum in Davos last month, German Chancellor Angela Merkel gave a speech on the challenges facing her country and Europe, and what can be done to meet them. Opening by invoking the centennial of the end of World War I, she reminded the audience of the challenges and conflicts Europe suffered in the 20th century. To solve the crises of this century and ensure the safety and well-being of her people, Chancellor Merkel stressed the importance of digital technology.

Among the best applications of this technology, Merkel said, are digital education and means for citizens of “communicating with their state.” She also touched on the dilemma of who should have access to private data, which she called “the raw material of the 21st century.” Chancellor Merkel then went on to discuss the importance of maintaining a unified Europe in the face of both the Eurozone and Refugee Crises.

In 2015, the Boston Global Forum presented Chancellor Merkel with the World Leader in Peace, Security, and Development Award. In her Davos speech she also specifically cited the example of Estonia, whose former president Toomas Hendrik Ilves was awarded BGF – MDI’s World Leader in Cybersecurity Award this past December. We at BGF, MDI, and AIWS are proud to honor leaders such as these for their ongoing contributions to world peace and stability.

Elon Musk Leaves OpenAI

Elon Musk recently stepped down from the board of OpenAI, a research group on ethical applications of AI that he co-founded in 2015. OpenAI cited a desire to avoid any conflict of interest for Musk between the group and his car company Tesla, which has begun its own research into AI technology. Musk, who previously described AI as “humanity’s biggest existential threat,” will continue to advise and donate to OpenAI after his departure. OpenAI also recently contributed to the Malicious Use of Artificial Intelligence report, which outlines threat scenarios involving rogue or hijacked AI.

AIWS shares OpenAI’s mission to ensure that AI becomes safe and beneficial for humanity. To that end, in April, we will announce a new Ethical Framework for AI at our annual BGF-G7 Summit, which will outline responsible and benevolent uses of Artificial Intelligence.

Max Tegmark Looks at Artificial Intelligence and the Need for Wisdom-Driven Technology

A profile of Max Tegmark, author of Life 3.0, who spoke recently before the Boston Global Forum – Michael Dukakis Institute on Cybersecurity Day 2017 (December 12, 2017).

Max Tegmark, author of Life 3.0, addressing AIWS Round Table

Q. You are the author of Life 3.0. It’s about the future and focuses on Artificial Intelligence. Should we fear Artificial Intelligence? Will it take our jobs or create robots that can harm us?

A. You’ll notice that my book is not called “Doom and Gloom 3.0.” I think it’s important, also, to remember all the wonderful upsides technology can bring if we do it right. Even though I spent the last week in California at a conference on technical AI – which has 8,000 people now and basically keeps doubling every year – I think it’s very important to broaden this conversation beyond nerds like myself, because the question of how we use this beneficially for the future is a conversation everyone has to contribute to. It’s particularly important for people who have knowledge of policymaking. So, let’s start on an optimistic note: the Apollo 11 moon mission.

This was not only a successful mission, but also an inspiring one, because it shows that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But, as my Estonian friend Jaan Tallinn likes to point out, rocketry is also a great metaphor for how we use all of technology, because it’s not enough to simply make technology powerful. You also have to steer it, and you have to figure out where you want to go. NASA didn’t launch the rocket first and only then ask: “Wait, how do we steer this? Now that we have these great engines, where do we go? Maybe to Mars?”

Q. That was decades ago. The power of those huge banks of computers at mission control can now fit in my smartphone. Are we getting ahead of ourselves? AI isn’t just a small step, it’s a huge leap for mankind.

A. Let’s talk just for a moment about another journey, empowered by something much more powerful than rocket engines, where the passengers aren’t just three astronauts but all of humanity. Let’s talk about our collective journey into the future with artificial intelligence, the most powerful technology we will ever have seen. In the grand scheme of things, if we take a step back, technology is incredibly empowering. You know, we’re still stuck on an imperceptibly small speck in a seemingly lifeless universe, yet technology offers us the opportunity for life to flourish, not just for the next election cycle on our little planet but for billions of years. And there’s no shortage of resources, either, though we have to stop quibbling over them.

Q. So what’s the plus side?

A. Let’s talk about the growing power of AI, then a bit about steering it, and then a bit about where we want to go with it. The power of AI is improving dramatically. In past accomplishments, like when we were overthrown in chess by IBM’s Deep Blue, the intelligence of the machine was largely programmed in by humans who knew how to play chess. It won simply because it could think faster and remember more. In contrast, AlphaGo and the new AlphaZero, which was announced just a few days ago, took 3,000 years of human Go games – millions of them – plus Go books and wisdom, and threw them all in the trash. It simply learned from scratch and, within a day, was able to blow away everything.

Q. But these are games. What about real life?

A. The same software now also beats the world’s best – not just the world’s best chess players, but all the world’s best computer chess programs. AlphaZero has shown in chess that the big deal isn’t that it can blow away human players, but that it can blow away the human AI programmers who spent over thirty years building all this chess software. In just four hours, it got better than all of them. It played 100 games against Stockfish, the world’s best chess program, didn’t lose a single one, and even discovered some pretty profound things.

You can just watch a computer learning to play a computer game. At first it plays terribly, missing the ball almost all the time, because it doesn’t know what a ball is, or what a paddle is, or what a game is. But with a very, very simple reinforcement learning algorithm, loosely inspired by our brain, it gradually gets better, and pretty soon it reaches the point where it won’t miss the ball. It does better than I do. And if you just keep training it – this is now such a simple thing that I can train it up in my lab at MIT on a GPU very quickly – it discovers something that even the people at Google DeepMind didn’t know about, which is that if you –

Q. Go on….

A. It’s not just computer games. There’s been a lot of progress in taking robots, or simulated robots, and seeing if they can learn to walk from scratch. They can, even if you never show them any videos of what walking looks like and there is no human intervention. You just make it into a game, again, where they get a point, basically, whenever they move one centimeter to the right. It works for bipedal life forms, quadrupeds, all sorts of things.
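
To make this concrete, here is a minimal sketch of the kind of reward-driven learning loop Tegmark describes: a toy tabular Q-learning agent that earns a point each time it moves one step to the right, with no demonstrations. The environment, rewards, and parameters are illustrative assumptions, not DeepMind’s actual system, which uses deep networks in place of a lookup table.

```python
# Toy Q-learning sketch of "learning from reward alone" (illustrative only).
# The agent starts at position 0 on a line and earns +1 for each move right;
# it discovers the walk-right policy purely from that reward signal.
import random

N_STATES = 10        # positions 0..9; reaching position 9 ends an episode
ACTIONS = [-1, +1]   # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s < N_STATES - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if a == +1 else 0.0       # the "point per step right" reward
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])  # Q-update
        s = s_next

# After training, the learned policy marches right from every position.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])
```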

Q. How far is this progress going to continue? What’s going to happen eventually?

A. I like to think about it in terms of this landscape of different tasks, where the elevation represents how difficult it is for machines to do them. The sea level represents how good machines are at doing them today. So, chess, arithmetic, and so on have of course long been submerged. The sea level is rising, so in the short-term, obviously, we should not advise our near and dear to take jobs right on the waterfront, because they’re going to be the first to be automated away.

Q. Eventually won’t everything get flooded? Or will AI ultimately fail to live up to its original ambition to do everything?

A. This is a controversy where you have top AI researchers on both sides, but most AI researchers in recent polls think that AI will succeed at this, maybe within a few decades, though opinions vary widely. As long as there’s the possibility, it’s very valuable for the broader society to talk about what that means.

Q. How can we make sure this becomes the best thing ever, rather than the worst thing ever?

A. Yes, how do we steer this technology in a good direction? How can we control it? In addition to my day job at MIT, I founded a little nonprofit, the Future of Life Institute, with my Estonian cofounder Jaan Tallinn, and we are very fortunate to have a lot of great people involved. Our mission statement has the word “steer” in it. We are optimistic that we can create an exciting, inspiring future with technology, as long as we figure out the steering part – as long as we meet the growing power of this technology with the growing wisdom with which we manage it. And here, I think, is a really big challenge for policymaking and for society as a whole, because to win this wisdom race, I feel we need a strategy change.

Q. But technology developers don’t like regulation, constraints, right?

A. In the past, we always stayed ahead in the wisdom race by learning from mistakes. Fire – “oops,” we screwed up a bunch of times and then we invented the fire extinguisher. We invented the automobile, screwed up a bunch of times, invented the seatbelt, the airbag, the traffic light – and it works pretty well, right? But as the technology gets ever more powerful, at some point we reach a threshold where it’s so powerful that we don’t want to learn from mistakes any more – when that becomes a bad idea. We want to be proactive, plan ahead, to get things right the first time because that might be the only time we have.

Q. Some folks would say you are scare-mongering?

A. No, this isn’t scare-mongering. This is what we over at MIT call “safety engineering.” Think about the Apollo 11 moon launch. NASA systematically thought through everything that could possibly go wrong with it. Was that scare-mongering? No, that was exactly the safety engineering that guaranteed the success of the mission. That’s what I’m advocating here as well. We should have a “red team” approach where we think through the things that can go wrong with these technologies, precisely so that we as a society can make sure they go right instead.

With the Future of Life Institute, we’ve organized conferences that bring together world leaders in AI – from both industry and academia – to talk specifically about this. Not about how to make AI more powerful, which plenty of conferences already cover, but about how to make it beneficial.

Q. What was the upshot? Did the scientists support you?

A. The outcome of the most recent meeting we held in Asilomar, California, was the 23 Asilomar AI Principles (https://futureoflife.org/ai-principles/), which have been signed by AI researchers from around the world. It’s a sort of amazing list of people. You have the CEO of Google DeepMind, responsible for those videos I showed you here. Ilya Sutskever from OpenAI. Yann LeCun from Facebook. Microsoft, IBM, Apple, and so on. Google Brain and academics from around the world.

Q. Twenty-three Guiding Principles, how about some examples?

A. One of these principles is about making sure that AI is primarily used for new ways of helping people, rather than ways of harming people. There is a good precedent here. Any science, of course, can be used for new ways of helping people or new ways of harming people. Science itself is completely morally neutral. It’s an amplifier of our human power.

Today, if you look at people who graduate from Harvard, for example, with biology and chemistry degrees, pretty much all of them will go into biotech and other positive applications, rather than building bioweapons. It didn’t have to be that way, but biologists pushed very, very hard – it was, in fact, a Harvard biologist who persuaded Henry Kissinger, who persuaded Richard Nixon, to push for an international treaty limiting biological weapons – and that created a huge stigma against bioweapons.

Q. Go on…

A. The same thing happened with chemistry. AI researchers are quite united in wanting to do the same thing with AI, but it’s very touch and go. There was a meeting at the United Nations in Geneva a few weeks ago on autonomous weapons, which was a bit of a flop. This has nothing to do with superintelligence or human-level AI; it’s something that can happen right now, just by integrating and mass-producing the technologies we already have.

A second principle with huge consensus is that the great wealth that can obviously be created as machines make ever more of our goods and services should be distributed widely, so that it actually makes everybody better off and we get a future people can look forward to.

Q. Does government have a role?

A. A third principle is that we – and by “we” I mean governments – should invest heavily in AI safety research. So, raise your hand if your computer has ever crashed. [Many hands raise.] And you spoke about cybersecurity this morning, so you’re very well aware that if we can’t get our act together on these issues, then all the wonderful technology we build can cause problems, either by malfunctioning or by actually getting hacked and turned against us. I feel we need significantly more investment here – not just in near-term things like cybersecurity, but also, as machines get ever more capable, in the question: “How can you make machines really understand human goals, really adopt human goals, and guarantee that they will retain those goals as they get ever more capable?” Right now, there’s almost no funding for these sorts of questions from government agencies. We gave out around 37 grants with the help of Elon Musk to sort of kickstart this. There were a number of sessions at the NIPS Conference (https://nips.cc/Conferences/2017) where it was clear that researchers want to work on this, but they have to pay their grad students. There is a real opportunity to invest in this aspect of the wisdom race.

Q. So it’s not a technology race, but a wisdom race.

A. Exactly. And to win the wisdom race – to win any race, right? – there are two strategies. If you want to win the Boston Marathon, you can either slow down the competition by serving them two-week-old shrimp the night before, or you can try to run faster yourself. I think the way to do this is not to try to slow down the development of AI – that’s both unrealistic and undesirable – but rather to invest in these complementary questions of how to make sure it gets used wisely.

Last, but not least, when you launch a rocket, you think through in advance where you want to go with it. We are so focused on tomorrow, the next election cycle, and the next product cycle we can launch with AI, and we have a tendency to fall in love with technology just because it’s cool. If we are on the cusp of creating something so powerful that maybe one day it can do all our jobs, and maybe even be thought of as a new life form – at minimum, utterly transform our society – then we should look a little bit farther ahead than the next election cycle. We should ask, “What kind of future are we trying to create?”

Q. What would you tell someone graduating from high school today?

A. I often get students walking into my office at MIT for career advice, and I always ask them, “Where do you want to be in ten years?” If all a student can say is, “Uh, maybe I’ll be murdered, and maybe I’ll get cancer,” that’s a terrible approach to career planning, right? But that is exactly what we’re doing as a species when we think about the future of AI. Every time we go to the movies, there’s some new doomsday scenario – oh, it’s Terminator, oh, it’s Blade Runner, this dystopia, that dystopia – which leaves us paralyzed with fear. It’s crucial, I feel, that we form positive visions – shared positive visions – that we can aspire to. Because if we can, we’re much more likely to achieve them.

Former Estonian President Toomas Hendrik Ilves speaking at AIWS Round Table in a discussion with Max Tegmark

A New Race in AI?

China is launching a new multi-billion-dollar initiative to advance its AI technologies, with private corporations partnering with government agencies on research and projects. At the same time, the United States is cutting back its research funding in the field. Some researchers and policy thinkers believe this is the start of a new race – one the U.S. hasn’t started running yet. Meanwhile, China is pouring money into the field, including into private U.S. companies researching AI technology.

AIWS was established in order to consider the implications of AI research, including implications for government and public policy. Establishing strong international law in cybersecurity and AI was the theme of our Cybersecurity Day conference on December 12, 2017 at Harvard University and will be an ongoing discussion within AIWS.

Read more: http://www.innovationtoronto.com/2017/05/the-artificial-intelligence-race/