Nguyen Anh Tuan

Director of The Michael Dukakis Institute for Leadership and Innovation

Co-Founder and Chief Executive Officer of The Boston Global Forum

Mr. Nguyen Anh Tuan is co-founder and Director of The Michael Dukakis Institute for Leadership and Innovation (MDI), and co-founder and CEO of The Boston Global Forum (BGF).

Tuan is recognized globally for his pivotal role as a Vietnamese government reformist who successfully fostered freedom of expression, vigorous open debate, and private enterprise in a nation that has become a leader in commerce, culture, and innovation, as well as a close ally of the West.

For his AI World Society Initiative and the concepts of AI-Government he developed, Vietnam National Television (VTV) named him Person of the Year 2018.

He is the Founder and Chairman of the VietNamNet Media Group and the Founder and Editor-in-Chief of VietNamNet, Vietnam’s preeminent online newspaper. Additionally, Tuan was the Founder and CEO of VASC Software and Media Company and VietNet, the first Internet service provider in Vietnam.

In recognition of his contributions to his native country, the Government of Vietnam named Tuan one of the nation’s 10 most outstanding young talents in 1996.

Under Tuan’s leadership, VietNamNet has raised significant political issues resulting in greater Vietnamese Government transparency and freedoms. He pioneered an interactive live format called the VietNamNet Online Roundtable that allowed online Vietnamese citizens to participate in interviews with leading political, social and cultural figures as well as foreign dignitaries. In 2009, Tuan conceived of an annual global initiative making September 9th World Compassion and Reconciliation Day. Additionally, he founded and organized the Vietnam National Concert to be held annually on September 2nd, Vietnam’s National Day holiday.

In 2011, he became a Pacific Leadership Fellow at the School of International Relations and Pacific Studies at the University of California San Diego. That year he addressed the prestigious Club de Madrid Conference, a gathering of former prime ministers and presidents, in a speech titled Democracy and Digital Technology.

From February 2011 to July 2014 Tuan was an Associate of the Shorenstein Center on Media, Politics and Public Policy, John F. Kennedy School of Government, Harvard University.

He later became a Visiting Scholar at the College of Communication, Boston University for the academic years 2014-2015, and 2015-2016.

As a Shorenstein Fellow at Harvard Kennedy School in 2007, Tuan researched major trends in the development of electronic media in Vietnam.

Tuan served on the Harvard Business School Global Advisory Board from 2008 to 2016. He also serves on the Board of Trustees of the Free-for-All Concert Fund in Boston. From July 2015 to November 2017, he served as Chair of the International Advisory Committee of the UCLA-UNESCO Chair on Global Learning and Global Citizenship Education at the University of California, Los Angeles.

Tuan is co-founder and Chief Executive Officer of the Global Citizenship Education Network (GCEN), a collaboration between the Boston Global Forum and the UNESCO-UCLA Chair on Global Learning and Global Citizenship Education, as well as co-founder and former Associate Editor of UCLA’s Global Commons Review.

In an effort to enhance cybersecurity worldwide, Tuan created Global Cybersecurity Day, produced the recent BGF-G7 Summit Initiative, and coauthored the Ethics Code of Conduct for Cyber Peace and Security (ECCC).

In November 2017, Tuan and Governor Michael Dukakis founded the AI World Society Initiative, and on June 25, 2018, Tuan, Governor Dukakis, Professor Thomas Patterson, and Professor Nazli Choucri announced the Concepts of AI-Government. In 2018, Tuan created the World Leader in AI World Society Award and the AI World Society Distinguished Lecture, and co-authored the AI World Society Ethics and Practices Index.

Merkel ‘highly qualified’ for EU post: Juncker

Angela Merkel will bid farewell to the chancellor’s office in Berlin in 2021. The outgoing president of the European Commission thinks she is “a complete and endearing work of art” who would do well in Brussels.

European Commission President Jean-Claude Juncker told Germany’s Funke Media Group on Saturday that German Chancellor Angela Merkel is “highly qualified” for a top European Union job.

Asked whether he could imagine her assuming an EU office after her term as chancellor ends in 2021, Juncker said he “could not imagine” Merkel “disappearing into thin air.”

“She is not only a person of respect, but also a complete and endearing work of art,” Juncker said.

Merkel steered the bloc through a period of economic crisis and political turbulence after becoming chancellor in 2005, earning her the reputation of being Europe’s most powerful leader.

Upon announcing her intention to step down as leader of Germany’s Christian Democratic Union (CDU), Merkel said she would not seek any other political offices after 2021. Her longtime ally Annegret Kramp-Karrenbauer has succeeded her as party leader and is widely seen as a contender for the chancellorship.

Juncker’s historic hope

Juncker will step down as the head of the EU’s executive branch on October 31 after a single term in office.

The former prime minister of Luxembourg was appointed in 2014, after the European People’s Party (EPP), the European Parliament grouping that includes Merkel’s conservatives (CDU/CSU), nominated him for the post and won the largest share of the vote in parliamentary elections.

Asked about what he would like historians to write about his presidency, Juncker said: “He tried his best … Perhaps it would be nice to add that he put some things in order.”

Some of the smartest people in technology pondered how to make AI trustworthy.

The New York Times recently reported on the New York Summit/Leading in the Age of AI conference in Half Moon Bay, California, where some of the field’s top minds shared their outlooks on AI and its applications. The summit ended with plenty of room for debate about what we should do about AI ethics.

Is government regulation the answer? That, at least, is what Amazon and Microsoft suggest. “Law is needed,” said Brad Smith, Microsoft’s president and chief legal officer.

Many employees of technology companies, however, think differently. They argue that the immediate responsibility rests with the companies themselves. “Regulation slows progress, and our species needs progress to survive the many threats that face us today,” according to employees of Clarifai, a tech company that develops AI-powered products for the Pentagon.

Other activists and researchers, like Meredith Whittaker, co-founder of the AI Now Institute, call for both ethical action on the company side and regulation from the government. The need for the latter stems from the forces of capitalism that continue to drive tech companies toward greater profits.

The debate about AI ethics has become divided even between company leaders and their employees. Many technology companies have already created corporate principles and set up ethics officers or review boards to ensure their systems are designed and deployed ethically. Still, many employees have left their companies, disappointed by the lack of concrete action. “You functionally have situations where the foxes are guarding the henhouse,” said Liz Fong-Jones, a former Google employee who left the company late last year over the issue.

So, is ethical AI even possible?

This question remains open, but we believe that ethical AI will be impossible if we ignore the fact that the governments of large countries have significant influence over the development of the world. We therefore need a framework for cooperation among major governments, given the uncertainty and complexity of the AI ecosystem. For this reason, AIWS proposed the Government AIWS Ethics and Practices Index, a model that examines the strategies, activities, and progress of major governments (including the G7 countries and other influential countries such as Russia, China, and India) in the field of AI. We hope this effort helps move us toward a feasible solution for ethical AI.

The out-there AI ideas designed to keep the US ahead of China

China is developing artificial intelligence on an unparalleled scale, but the US aims to beat it by inventing the next big ideas.

This week, the Defense Advanced Research Projects Agency (DARPA) showcased projects that are part of a new five-year, $2 billion plan to foster the next round of out-there concepts that will bring about new advances in AI. These include efforts to give machines common sense; to have them learn faster, using less data; and to create chips that reconfigure themselves to unlock new AI capabilities.

Speaking at the event, Michael Kratsios, deputy assistant to the president for technology policy at the White House, said the agency’s efforts are a key part of the government’s plan to stay ahead in AI. “This administration supports DARPA’s commitment, and shares its intense interest in developing and applying artificial intelligence,” Kratsios said.

President Trump signed an executive order last month to launch the US government’s AI strategy, called the American AI Initiative. Kratsios, who is also the government’s deputy chief technology officer, has been the driving force behind White House strategy on AI. The American AI Initiative calls for more funding and will make data and computing resources available to AI researchers. “DARPA has a long history of making early investments in fundamental research that has had amazing benefits,” Kratsios said. “[It] is building on this success in artificial-intelligence research.”

Since DARPA’s inception in 1958, it has had something of a mixed track record, with many projects failing to deliver big breakthroughs. But the agency has had some notable successes. In the ’60s, it developed a networking technology that eventually evolved into the internet. More recently, it funded a personal-assistant project that led to Siri, the AI helper acquired by Apple in 2010.

But many of the algorithms now considered AI were developed many years ago, and they are fundamentally limited. “We are harvesting the intellectual fruit that was planted decades ago,” says John Everett, deputy director of DARPA’s Information Innovation Office. “That’s why we’re looking at far-forward challenges—challenges that might not come to fruition for a decade.”

Through its AI Next program, DARPA has launched nine major research projects meant to tackle those limitations. They include a major effort to teach AI programs common sense, a weakness that often causes today’s systems to fail. Giving AI a broader understanding of the world—something that humans take for granted—could eventually make personal assistants more helpful and easier to chat with, and it could help robots navigate unfamiliar environments.

Another DARPA project will seek to develop AI programs that learn using less data. Training data is the lifeblood of machine learning, and algorithms that can ingest more of it can leap ahead of the competition. An innovation in this area could knock out a key advantage of tech companies operating in China, for example, which thrive on their access to an abundance of data. Other projects being funded focus on designing more efficient AI chips; exploring ways to explain the decision-making of opaque machine-learning tools; and making AI programs more secure.

To some degree, though, the AI Next initiative shows how tricky it is to gauge progress and prowess in AI. Much has been made of China’s efforts, and its government has declared an ambitious plan to “dominate” the technology. Other countries have also announced AI plans, and are pouring billions into them. But the US still spends more than any other nation on technology research and development.

Total investment matters, of course—but it’s only one part of the equation. The US has long been focused on funding emerging research through academia and agencies like DARPA. And that, in turn, has shaped the technological landscape in ways that weren’t always evident at first.

Take self-driving cars, for example. A decade ago, DARPA organized a series of driverless-vehicle contests in desert and urban settings. The competitions triggered a wave of excitement about the potential for automated driving, and a huge wave of investment followed. Many researchers who took part went on to start Google’s driverless-car effort. It’s still unclear how automated driving will change transportation, but some cars, such as those sold by Tesla, already offer limited forms of automation.

“Without DARPA coming in, [the self-driving-car boom] probably wouldn’t have happened at that scale at that time,” says Peter Stone, a professor at the University of Texas who took part in the car contest. He believes it’s vital for the US government to identify an unsolved AI problem and tackle it. “It may not happen, but if it works it will have huge implications,” he says.

An AI Beat the Top Humans at a Modern Video Game Thanks to the Power of Teamwork

Startup OpenAI spent years developing an AI that could play the complex 5v5 game Dota 2.

In a remarkable breakthrough for artificial intelligence, a team of AI agents has defeated the world champions of the competitive video game Dota 2. While this victory over humans isn’t the first for a game-playing AI, given the success of software at playing Go and poker, the Dota-playing AI had to master the art of teamwork, working alongside both other AIs and human players.

Collaborative Intelligence

When DeepMind’s Go-playing AI, AlphaGo, defeated the world Go champion, that victory was remarkable because of the sheer number of possible moves and combinations in the game. Go is so complex that even a supercomputer can’t calculate good moves by brute force. Instead, AlphaGo had to rely on intuition—or at least the machine-learning equivalent: learning the game from scratch and then inventing moves humans never would have considered. But Dota 2 is a different kind of challenge for AI, which tends to struggle with concepts such as abstract reasoning and teamwork, qualities the game has in spades.

In Dota 2, ten players form two teams of five that fight to take objectives on the map. Neither team has full vision of everything going on at all times, and players must work together to be victorious. The qualities that make Dota 2 a challenging game for many humans also make it an ideal testing ground for next-gen AI.

Startup OpenAI has been developing a Dota-playing AI for a few years now. This weekend, its team of AIs faced the ultimate test: playing a five-on-five match against OG, the reigning Dota 2 world champions, in a best-of-three series. In an exhibition match in San Francisco on Saturday, OpenAI claimed victory with a clean sweep.

Perfect Strangers

Although both games were close, the human players were eventually outmaneuvered. In the humans’ defense, however, OpenAI limited the complexity of the game somewhat, banning several strategies, including some that the members of OG like to use. In that sense, OG came into the match with a handicap.

Still, this competition isn’t really about who won and who lost. OpenAI’s goal is to build an artificial intelligence that can make judgment calls based on incomplete data and cooperate with strangers, the kinds of things humans do all day but that are terribly difficult to teach a machine. Each of OpenAI’s five bots worked independently, so they might as well have been playing with perfect strangers.

In fact, in some of the matches they were. During another match at the exhibition, OpenAI changed up the sides, allowing two teams, each made up of two humans and three AI agents, to battle. OpenAI says even its own researchers were surprised at how well the bots worked with humans they’d never met.

OpenAI is done with Dota 2 for now, but the lessons learned could be used to design collaborative AI for all sorts of applications. The AI systems the company builds will likely have to work alongside humans someday, and they’ll be well equipped to do so.