The new benchmark quantum computers must beat to achieve quantum supremacy

Physicists are confident that a quantum computer will soon outperform the world’s most powerful supercomputer. To prove it, they have developed a test that will pit one against the other.

Twice a year, the TOP500 project publishes a ranking of the world’s most powerful computers. The list is eagerly awaited and hugely influential. Global superpowers compete to dominate the rankings, and at the time of writing China looms largest, with 229 devices on the list.

The US has just 121, but this includes the world’s most powerful: the Summit supercomputer at Oak Ridge National Laboratory in Tennessee, which was clocked at 143 petaflops (143 thousand million million floating point operations per second).

The ranking is determined by a benchmarking program called Linpack, which is a collection of Fortran subroutines that solve a range of linear equations. The time taken to solve the equations is a measure of the computer’s speed.

There is no shortage of controversy over this choice of benchmark. Computer architectures are usually optimized to solve specific problems, and many of these are very different from the Linpack challenge. Quantum computers, for example, are entirely unsuited to solving these kinds of problems.

And that raises an important question. Quantum computers are on the verge of outperforming the most powerful supercomputers for certain kinds of problems, but exactly how powerful are they? At issue is the question of how to measure their performance and compare it with that of classical computers.

Today we get an answer thanks to the work of Benjamin Villalonga at the Quantum Artificial Intelligence Lab at NASA Ames Research Center in Mountain View, California, and a group of colleagues who have developed a benchmarking test that works on both classical and quantum devices. In this way, it is possible to compare their performance.

What’s more, the team has used the new test to put the Summit, the world’s most powerful supercomputer, through its paces running at 281 petaflops. The result is the benchmark that quantum computers must beat to finally establish their supremacy in the rankings.

Finding a good measure of quantum computing power is no easy task. For a start, computer scientists have long known that quantum computers can outperform their classical counterparts in only a limited number of highly specialized tasks. And even then, no quantum computer is currently powerful enough to perform any of them particularly well because, for example, they are incapable of error correction.

So Villalonga and co looked for a much more fundamental test of quantum computing power that would work equally well for today’s primitive devices and tomorrow’s more advanced quantum machines, and could also be simulated on classical machines.

Their chosen problem is to simulate the evolution of quantum chaos using random quantum circuits. Simple quantum computers can do this because the process does not require powerful error correction, and it is relatively straightforward to filter out results that have been overwhelmed by noise.

Classical machines can also simulate this quantum chaos, but the computing power required to do so rises exponentially with the number of qubits involved.
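To get a sense of that exponential growth, consider the naive approach of storing the full quantum state (a deliberately simple sketch, not the tensor-contraction method qFlex actually uses): an n-qubit state has 2^n complex amplitudes, so the memory alone quickly becomes unmanageable.

```python
# Illustrative only: memory needed to hold the full state vector of n qubits,
# with each of the 2**n complex amplitudes stored in single precision (8 bytes).
# qFlex itself sidesteps this blow-up by contracting tensor networks instead.

def state_vector_gib(n_qubits: int, bytes_per_amplitude: int = 8) -> float:
    """GiB required for a brute-force state-vector simulation."""
    return (2 ** n_qubits) * bytes_per_amplitude / 2 ** 30

for n in (30, 40, 50):
    print(f"{n} qubits: {state_vector_gib(n):,.0f} GiB")
# 30 qubits: 8 GiB
# 40 qubits: 8,192 GiB
# 50 qubits: 8,388,608 GiB (roughly 8 million GiB)
```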

Two years ago, physicists determined that quantum computers with at least 50 qubits should achieve quantum supremacy over a classical supercomputer at that time.

But the goalposts are constantly moving as supercomputers are upgraded. For example, Summit is capable of significantly more petaflops now than in the last ranking in November, when it tipped the scales at 143 petaflops. Indeed, Oak Ridge National Labs this week unveiled plans to build a 1.5-exaflop machine by 2021. So being able to continually benchmark these machines against the emerging quantum computers is increasingly important.

Researchers at NASA and Google have created an algorithm called qFlex that simulates random quantum circuits on a classical machine. Last year, they showed that qFlex could simulate and benchmark the performance of a Google quantum computer called Bristlecone, which has 72 qubits. To do this, they used a supercomputer at NASA Ames with 20 petaflops of number-crunching power.

Now they’ve shown that the Summit supercomputer can simulate the performance of a much larger quantum device. “On Summit, we were able to achieve a sustained performance of 281 Pflop/s (single precision) over the entire supercomputer, simulating circuits of 49 and 121 qubits,” they say.

A 121-qubit simulation is beyond the capability of any existing quantum computer. So classical computers remain a hair’s breadth ahead in the rankings.

But this is a race they are destined to lose. Plans are already afoot to build quantum computers with 100+ qubits within the next few years. And as quantum capabilities accelerate, the challenge of building ever more powerful classical machines is already coming up against the buffers.

The limiting factor for new machines is no longer the hardware but the power available to keep them humming. The Summit machine already requires a 14-megawatt power supply. That’s enough to light up a medium-sized town. “To scale such a system by 10x would require 140 MW of power, which would be prohibitively expensive,” say Villalonga and co.

By contrast, quantum computers are frugal. Their main power requirement is the cooling for superconducting components. So a 72-qubit computer like Google’s Bristlecone, for example, requires about 14 kW. “Even as qubit systems scale up, this amount is unlikely to significantly grow,” say Villalonga and co.

So in the efficiency rankings, quantum computers are destined to wipe the floor with their classical counterparts sooner rather than later.

One way or another, quantum supremacy is coming. If this work is anything to go by, the benchmark that will prove it is likely to be qFlex.

Ref: arxiv.org/abs/1905.00444 : Establishing the Quantum Supremacy Frontier with a 281 Pflop/s Simulation

Jeff Bezos has unveiled Blue Origin’s lunar lander

He revealed details about the company’s new rocket, engine, and lunar lander at a private event in Washington, DC, on May 9.

Behind curtain number one … Jeff Bezos unveiled Blue Moon, the company’s lunar lander, which has been in the works for the past three years. It will be able to land a 6.5-metric-ton payload on the moon’s surface. Blue Origin also released a video rendering of what Blue Moon’s lunar landing could look like.

Who’s hitching a ride? The company announced a number of customers that will fly on Blue Moon, including Airbus, MIT, Johns Hopkins, and Arizona State University.

How are they getting there? New Glenn. This is by far the biggest rocket Blue Origin has ever built. The size will allow the massive Blue Moon lander to fit inside. It will have fewer weather constraints and will be rated to carry humans from the start. As with New Shepard, the company’s suborbital rocket, as well as SpaceX’s rockets, the first stage will be reusable, landing again after completing its mission. The first launch is targeted for 2021.

The new engine: Blue Origin also announced that the new BE-7 engine, which packs 10,000 pounds of thrust, will undergo its first hot fire test this year. This engine will propel Blue Moon.

People are calling for Zuckerberg’s resignation. Here are just five of the reasons why

Facebook has been beset by scandals over the last year and many believe that nothing will change until its founder and CEO is gone.

A petition has been launched with one simple objective: to force Mark Zuckerberg to resign as CEO of Facebook.

The campaign group behind it, Fight for the Future, says that although there’s no “silver bullet” to “fix” Facebook, the company cannot address its underlying problems while Zuckerberg remains in charge.

The petition is highly unlikely to succeed, of course. It’s hard to imagine Zuckerberg stepping down voluntarily. And there’s not much Facebook’s board can do either, even if they wanted to. Zuckerberg controls about 60% of all voting shares in Facebook. He’s pretty much untouchable, both as CEO and as board chairman. Despite near-weekly scandals, the company is still growing, and it’s one of the most profitable business ventures in human history.

(Another potential solution, as described in a piece in the New York Times written by one of Facebook’s cofounders, is to break the company up and implement new data privacy regulations in the US.)

Need a reminder as to why everyone is so angry with Facebook and Mark Zuckerberg anyway? Here’s a handy cut-out-and-keep list of just some of the most significant scandals involving the tech giant over the last year or so. (Not to mention all the wider problems of fake news or echo chambers or the decimation of the media. Or dodgy PR practices.)

The high-impact one

Back in March 2018, a whistleblower revealed that political consultancy Cambridge Analytica had collected private information from more than 87 million Facebook profiles without the users’ consent. Facebook let third parties scrape data from applications: in Cambridge Analytica’s case, a personality quiz developed by a Cambridge University academic, Aleksandr Kogan. Mark Zuckerberg responded by admitting “we made mistakes” and promising to restrict data sharing with third-party apps in the future.

What made it particularly explosive were claims that the data-mining operations might have affected Trump’s election and the Brexit vote.

The many data mishaps

In September 2018, Facebook admitted that 50 million users had had their personal information exposed by a hack on its systems. The number was later revised down to 30 million, which still makes it the biggest breach in Facebook’s history.

In March 2019 it turned out Facebook had been storing up to 600 million users’ passwords insecurely since 2012. Just days later, we learned that half a billion Facebook records had been left exposed on the public internet.  

The discriminatory advertising practices

Facebook’s ad-serving algorithm automatically discriminates by gender and race, even when no one tells it to. Advertisers can also explicitly discriminate against certain areas when showing housing ads on Facebook, even though it’s illegal. Facebook has known about this problem since 2016. It still hasn’t fixed it.

The dodgy data deals

Facebook gave over 150 companies more intrusive access to users’ data than previously revealed, via special partnerships. We learned a bit more about this, and other dodgy data practices, in a cache of documents seized by the UK Parliament in November 2018. Facebook expects to be fined up to $5 billion for this and other instances of malpractice.

The vehicle for hate speech

The Christchurch, New Zealand, shooter used Facebook to live-stream his murder of 50 people. The broadcast was up for 20 minutes before any action was taken. We’re still waiting to hear what, if anything, Facebook will do about this issue (for example, it could choose to end its “Facebook Live” feature). It’s well established now that Facebook can help to fuel violence in the real world. But any response from Facebook has been piecemeal. It’s also a reminder of just how much power we’ve given Facebook (and its low-paid moderators) to decide what is and isn’t acceptable.

A new way to build tiny neural networks could create powerful AI on your phone

We’ve been wasting our processing power to train neural networks that are ten times too big.

Neural networks are the core software of deep learning. Even though they’re so widespread, they’re really poorly understood. Researchers have observed their emergent properties without actually understanding why they work the way they do.

Now a new paper out of MIT has taken a major step toward answering this question. And in the process the researchers have made a simple but dramatic discovery: we’ve been using neural networks far bigger than we actually need. In some cases they’re 10—even 100—times bigger, so training them costs us orders of magnitude more time and computational power than necessary.

Put another way, within every neural network exists a far smaller one that can be trained to achieve the same performance as its oversize parent. This isn’t just exciting news for AI researchers. The finding has the potential to unlock new applications—some of which we can’t yet fathom—that could improve our day-to-day lives. More on that later.

But first, let’s dive into how neural networks work to understand why this is possible.

A diagram of a neural network learning to recognize a lion. (Jeff Clune / screenshot)

How neural networks work

You may have seen neural networks depicted in diagrams like the one above: they’re composed of stacked layers of simple computational nodes that are connected in order to compute patterns in data.

The connections are what’s important. Before a neural network is trained, these connections are assigned random values between 0 and 1 that represent their intensity. (This is called the “initialization” process.) During training, as the network is fed a series of, say, animal photos, it tweaks and tunes those intensities—sort of like the way your brain strengthens or weakens different neuron connections as you accumulate experience and knowledge. After training, the final connection intensities are then used in perpetuity to recognize animals in new photos.
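As a minimal sketch of that initialize-then-tune loop (a toy single-layer network on made-up data, purely illustrative; real networks stack many layers and use fancier optimizers):

```python
# Toy illustration of the training loop described above, using NumPy.
import numpy as np

rng = np.random.default_rng(0)
W = rng.uniform(0, 1, size=(2, 1))              # "initialization": random connection strengths

X = rng.normal(size=(100, 2))                    # toy inputs (stand-ins for animal photos)
y = (X @ np.array([[1.5], [-2.0]]) > 0) * 1.0    # toy labels

for step in range(500):                          # "training": tweak and tune the strengths
    pred = 1 / (1 + np.exp(-(X @ W)))            # network output (sigmoid)
    grad = X.T @ (pred - y) / len(X)             # how each connection should change
    W -= 0.5 * grad                              # strengthen or weaken connections

# After training, W is frozen and reused to classify new inputs.
```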

While the mechanics of neural networks are well understood, the reason they work the way they do has remained a mystery. Through lots of experimentation, however, researchers have observed two properties of neural networks that have proved useful.

Observation #1. When a network is initialized before the training process, there’s always some likelihood that the randomly assigned connection strengths end up in an untrainable configuration. In other words, no matter how many animal photos you feed the neural network, it won’t achieve a decent performance, and you just have to reinitialize it to a new configuration. The larger the network (the more layers and nodes it has), the less likely that is. Whereas a tiny neural network may be trainable in only one of every five initializations, a larger network may be trainable in four of every five. Again, why this happens had been a mystery, but that’s why researchers typically use very large networks for their deep-learning tasks. They want to increase their chances of achieving a successful model.

Observation #2. The consequence is that a neural network usually starts off bigger than it needs to be. Once it’s done training, typically only a fraction of its connections remain strong, while the others end up pretty weak—so weak that you can actually delete, or “prune,” them without affecting the network’s performance.

For many years now, researchers have exploited this second observation to shrink their networks after training to lower the time and computational costs involved in running them. But no one thought it was possible to shrink their networks before training. It was assumed that you had to start with an oversize network and the training process had to run its course in order to separate the relevant connections from the irrelevant ones.
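One common way to exploit that second observation is magnitude pruning, sketched below (illustrative; the exact pruning criterion used in the paper may differ): after training, keep only the strongest fraction of connections and delete the rest.

```python
# Illustrative post-training magnitude pruning with NumPy.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Return a 0/1 mask that keeps the largest-magnitude weights."""
    k = int(np.ceil(keep_fraction * weights.size))
    threshold = np.sort(np.abs(weights), axis=None)[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

trained = np.random.default_rng(1).normal(size=(256, 128))   # stand-in for trained weights
mask = prune_by_magnitude(trained, keep_fraction=0.2)         # keep the strongest 20%
pruned = trained * mask                                        # the much smaller network
```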

Jonathan Frankle, the MIT PhD student who coauthored the paper, questioned that assumption. “If you need way fewer connections than what you started with,” he says, “why can’t we just train the smaller network without the extra connections?” Turns out you can.

Michael Carbin (left) and Jonathan Frankle (right), the authors of the paper. (Jason Dorfman, MIT CSAIL)

The lottery ticket hypothesis

The discovery hinges on the reality that the random connection strengths assigned during initialization aren’t, in fact, random in their consequences: they predispose different parts of the network to fail or succeed before training even happens. Put another way, the initial configuration influences which final configuration the network will arrive at.

By focusing on this idea, the researchers found that if you prune an oversize network after training, you can actually reuse the resultant smaller network to train on new data and preserve high performance—as long as you reset each connection within this downsized network back to its initial strength.
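In rough pseudocode, the experiment looks like this (the `train` function and the keep fraction are hypothetical placeholders); the crucial step is rewinding the surviving connections to their original initial values instead of keeping their trained values:

```python
# Sketch of the prune-then-rewind procedure described above.
import numpy as np

def find_winning_ticket(init_weights, train, keep_fraction=0.2):
    """`train` is a placeholder: it takes weights in and returns trained weights."""
    trained = train(init_weights.copy())                 # 1. train the full, oversize network
    k = int(keep_fraction * trained.size)
    threshold = np.sort(np.abs(trained), axis=None)[-k]
    mask = np.abs(trained) >= threshold                  # 2. prune the weakest connections
    ticket = init_weights * mask                         # 3. rewind survivors to their INITIAL strengths
    return ticket, mask                                  # 4. retrain only this small sub-network
```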

From this finding, Frankle and his coauthor Michael Carbin, an assistant professor at MIT, propose what they call the “lottery ticket hypothesis.” When you randomly initialize a neural network’s connection strengths, it’s almost like buying a bag of lottery tickets. Within your bag, you hope, is a winning ticket—i.e., an initial configuration that will be easy to train and result in a successful model.

This also explains why observation #1 holds true. Starting with a larger network is like buying more lottery tickets. You’re not increasing the amount of power that you’re throwing at your deep-learning problem; you’re simply increasing the likelihood that you will have a winning configuration. Once you find the winning configuration, you should be able to reuse it again and again, rather than continue to replay the lottery.

Next steps

This raises a lot of questions. First, how do you find the winning ticket? In their paper, Frankle and Carbin took a brute-force approach of training and pruning an oversize network with one data set to extract the winning ticket for another data set. In theory, there should be much more efficient ways of finding—or even designing—a winning configuration from the start.

Second, what are the training limits of a winning configuration? Presumably, different kinds of data and different deep-learning tasks would require different configurations.

Third, what is the smallest possible neural network that you can get away with while still achieving high performance? Frankle found that through an iterative training and pruning process, he was able to consistently reduce the starting network to between 10% and 20% of its original size. But he thinks there’s a chance for it to be even smaller.

Already, many research teams within the AI community have begun to conduct follow-up work. A researcher at Princeton recently teased the results of a forthcoming paper addressing the second question. A team at Uber also published a new paper on several experiments investigating the nature of the metaphorical lottery tickets. Most surprisingly, they found that a winning configuration already achieves significantly better performance than the original untrained oversize network, before any training whatsoever. In other words, the act of pruning a network to extract a winning configuration is itself an important method of training.

Neural network nirvana

Frankle imagines a future where the research community will have an open-source database of all the different configurations they’ve found, with descriptions for what tasks they’re good for. He jokingly calls this “neural network nirvana.” He believes it would dramatically accelerate and democratize AI research by lowering the cost and time of training, and by allowing people without giant data servers to do this work directly on small laptops or even mobile phones.

It could also change the nature of AI applications. If you can train a neural network locally on a device instead of in the cloud, you can improve the speed of the training process and the security of the data. Imagine a machine-learning-based medical device, for example, that could improve itself through use without needing to send patient data to Google’s or Amazon’s servers.

“We’re constantly bumping up against the edge of what we can train,” says Jason Yosinski, a founding member of Uber AI Labs who coauthored the follow-up Uber paper, “meaning the biggest networks you can fit on a GPU or the longest we can tolerate waiting before we get a result back.” If researchers could figure out how to identify winning configurations from the get-go, it would reduce the size of neural networks by a factor of 10, even 100. The ceiling of possibility would dramatically increase, opening a new world of potential uses.

Who to Sue When a Robot Loses Your Fortune

The first known case of humans going to court over investment losses triggered by autonomous machines will test the limits of liability.

Robots are getting more humanoid every day, but they still can’t be sued.

So a Hong Kong tycoon is doing the next best thing. He’s going after the salesman who persuaded him to entrust a chunk of his fortune to the supercomputer whose trades cost him more than $20 million.

The case pits Samathur Li Kin-kan, whose father is a major investor in Shaftesbury Plc, which owns much of London’s Chinatown, Covent Garden and Carnaby Street, against Raffaele Costa, who has spent much of his career selling investment funds for the likes of Man Group Plc and GLG Partners Inc. It’s the first-known instance of humans going to court over investment losses triggered by autonomous machines and throws the spotlight on the “black box” problem: If people don’t know how the computer is making decisions, who’s responsible when things go wrong?

“People tend to assume that algorithms are faster and better decision-makers than human traders,” said Mark Lemley, a law professor at Stanford University who directs the university’s Law, Science and Technology program. “That may often be true, but when it’s not, or when they quickly go astray, investors want someone to blame.”

Raffaele Costa (Photographer: Andreas Rentz/Getty Images)

The timeline leading up to the legal battle was drawn from filings to the commercial court in London, where the trial is scheduled to begin next April. It all started over lunch at a Dubai restaurant on March 19, 2017. It was the first time 45-year-old Li met Costa, the 49-year-old Italian who’s often known by peers in the industry as “Captain Magic.” During their meal, Costa described a robot hedge fund that his company, London-based Tyndaris Investments, would soon offer to manage money entirely using AI, or artificial intelligence.

Developed by Austria-based AI company 42.cx, the supercomputer named K1 would comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.

The idea of a fully automated money manager inspired Li instantly. He met Costa for dinner three days later, saying in an e-mail beforehand that the AI fund “is exactly my kind of thing.”

Over the following months, Costa shared simulations with Li showing K1 making double-digit returns, although the two now dispute the thoroughness of the back-testing. Li eventually let K1 manage $2.5 billion—$250 million of his own cash and the rest leverage from Citigroup Inc. The plan was to double that over time.

But Li’s affection for K1 waned almost as soon as the computer started trading in late 2017. By February 2018, it was regularly losing money, including over $20 million in a single day—Feb. 14—due to a stop-loss order Li’s lawyers argue wouldn’t have been triggered if K1 was as sophisticated as Costa led him to believe.

Li is now suing Tyndaris for about $23 million for allegedly exaggerating what the supercomputer could do. Lawyers for Tyndaris, which is suing Li for $3 million in unpaid fees, deny that Costa overplayed K1’s capabilities. They say he was never guaranteed the AI strategy would make money.

Sarah McAtominey, a lawyer representing Li’s investment company that is suing Tyndaris, declined to comment on his behalf. Rob White, a spokesman for Tyndaris, declined to make Costa available for interview.

The legal battle is a sign of what’s in store as AI is incorporated into all facets of life, from self-driving cars to virtual assistants. When the technology misfires, where the blame lies is open to interpretation. In March, U.S. criminal prosecutors let Uber Technologies Inc. off the hook for the death of a 49-year-old pedestrian killed by one of its autonomous cars.

Chart: Robot Investors. AI hedge fund managers are beating human peers, but not stock benchmarks. 2019 gains through March for every $100 invested in 2014; S&P 500 returns are with dividends reinvested; *HFRI Fund Weighted Composite Index. Source: Eurekahedge, Hedge Fund Research, Inc., Bloomberg

In the hedge fund world, pursuing AI has become a matter of necessity after years of underperformance by human managers. Quantitative investors—computers designed to identify and execute trades—are already popular. More rare are pure AI funds that automatically learn and improve from experience rather than being explicitly programmed. Once an AI develops a mind of its own, even its creators won’t understand why it makes the decisions it makes.

“You might be in a position where you just can’t explain why you are holding a position,” said Anthony Todd, the co-founder of London-based Aspect Capital, which is experimenting with AI strategies before letting them invest clients’ cash. “One of our concerns about the application of machine-learning-type techniques is that you are losing any explicit hypothesis about market behavior.”

Li’s lawyers argue Costa won his trust by hyping up the qualifications of the technicians building K1’s algorithm, saying, for instance, they were involved in Deep Blue, the chess-playing computer designed by IBM Corp. that signaled the dawn of the AI era when it beat the world champion in 1997. Tyndaris declined to answer Bloomberg questions on this claim, which was made in one of Li’s more-recent filings.

Garry Kasparov plays against IBM’s Deep Blue computer in 1997. (Photographer: Stan Honda/AFP via Getty Images)

Speaking to Bloomberg, 42.cx founder Daniel Mattes said none of the computer scientists advising him were involved with Deep Blue, but one, Vladimir Arlazarov, developed a 1960s chess program in the Soviet Union known as Kaissa. He acknowledged that experience may not be entirely relevant to investing. Algorithms have gotten really good at beating humans in games because there are clear rules that can be simulated, something stock markets decidedly lack. Arlazarov told Bloomberg that he did give Mattes general advice but didn’t work on K1 specifically.

Inspired by a 2015 European Central Bank study measuring investor sentiment on Twitter, 42.cx created software that could generate sentiment signals, said Mattes, who recently agreed to pay $17 million to the U.S. Securities and Exchange Commission to settle charges of defrauding investors at his mobile-payments company, Jumio Inc., earlier this decade. Whether and how to act on those signals was up to Tyndaris, he said.

“It’s a beautiful piece of software that was written,” Mattes said by phone. “The signals we have been provided have a strong scientific foundation. I think we did a pretty decent job. I know I can detect sentiment. I’m not a trader.”

There’s a lot of back and forth in court papers over whether Li was misled about K1’s capacities. For instance, the machine generated a single trade in the morning if it deciphered a clear sentiment signal, whereas Li claims he was under the impression it would make trades at optimal times during the day. In rebuttal, Costa’s lawyers say he told Li that buying or selling futures based on multiple trading signals was an eventual ambition, but wouldn’t happen right away.

For days, K1 made no trades at all because it didn’t identify a strong enough trend. In one message to Costa, Li complained that K1 sat back while taking adverse movements “on the chin, hoping that it won’t strike stop loss.” A stop loss is a pre-set level at which a broker will sell to limit the damage when prices suddenly fall.

That’s what happened on Valentine’s Day 2018. In the morning, K1 placed an order with its broker, Goldman Sachs Group Inc., for $1.5 billion of S&P 500 futures, predicting the index would gain. It went in the opposite direction when data showed U.S. inflation had risen more quickly than expected, triggering K1’s 1.4 percent stop-loss and leaving the fund $20.5 million poorer. But the S&P rebounded within hours, something Li’s lawyers argue shows K1’s stop-loss threshold for the day was “crude and inappropriate.”
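A rough sanity check of those figures (illustrative only; it ignores margining and exactly how the futures were priced intraday):

```python
position = 1.5e9      # $1.5 billion of S&P 500 futures exposure
stop_loss = 0.014     # the 1.4 percent stop-loss level

max_loss = position * stop_loss
print(f"${max_loss / 1e6:.0f} million")   # ~$21 million, in line with the $20.5 million loss reported
```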

Li claims he was told K1 would use its own “deep-learning capability” daily to determine an appropriate stop loss based on market factors like volatility. Costa denies saying this and claims he told Li the level would be set by humans.

In his interview, Mattes said K1 wasn’t designed to decide on stop losses at all—only to generate two types of sentiment signals: a general one that Tyndaris could have used to enter a position and a dynamic one that it could have used to exit or change a position. While Tyndaris also marketed a K1-driven fund to other investors, a spokesman declined to comment on whether the fund had ever managed money. Any reference to the supercomputer was removed from its website last month.

Investors like Marcus Storr say they’re wary when AI fund marketers come knocking, especially considering funds incorporating AI into their core strategy made less than half the returns of the S&P 500 in the three years to 2018, according to Eurekahedge AI Hedge Fund Index data.

“We can’t judge the codes,” said Storr, who decides on hedge fund investments for Bad Homburg, Germany-based Feri Trust GmbH. “For us it then comes down to judging the setups and research capacity.”

But what happens when autonomous chatbots are used by companies to sell products to customers? Even suing the salesperson may not be possible, added Karishma Paroha, a London-based lawyer at Kennedys who specializes in product liability.

“Misrepresentation is about what a person said to you,” she said. “What happens when we’re not being sold to by a human?”

Facebook will open its data up to academics to see how it impacts elections

More than 60 researchers from 30 institutions will get access to Facebook user data to study its impact on elections and democracy, and how it’s used by advertisers and publishers.

A vast trove: Facebook will let academics see which websites its users linked to from January 2017 to February 2019. Notably, that means they won’t be able to look at the platform’s impact on the US presidential election in 2016, or on the Brexit referendum in the UK in the same year.

Despite this slightly glaring omission, it’s still hard to wrap your head around the scale of the data that will be shared, given that Facebook is used by 1.6 billion people every day. That’s more people than live in all of China, the most populous country on Earth. It will be one of the largest data sets on human behavior online to ever be released.

The process: Facebook didn’t pick the researchers. They were chosen by the Social Science Research Council, a US nonprofit. Facebook has been working on this project for over a year, as it tries to balance research interests against user privacy and confidentiality.

Privacy: In a blog post, Facebook said it will use a number of statistical techniques to make sure the data set can’t be used to identify individuals. Researchers will be able to access it only via a secure portal that uses a VPN and two-factor authentication, and there will be limits on the number of queries they can each run.

The context: Facebook is keen to improve its reputation after months of scandals over data privacy, security, and its role in elections and democracy. If it opens up its data as promised, it could introduce some much-needed light into what’s often a very heated debate.

The industry shaping the future of AI?

Industry has mobilized to shape the science, morality and laws of artificial intelligence. On 10 May, letters of intent are due to the US National Science Foundation (NSF) for a new funding programme for projects on Fairness in Artificial Intelligence, in collaboration with Amazon. In April, after the European Commission released the Ethics Guidelines for Trustworthy AI, an academic member of the expert group that produced them described their creation as industry-dominated “ethics washing”. In March, Google formed an AI ethics board, which was dissolved a week later amid controversy. In January, Facebook invested US$7.5 million in a centre on ethics and AI at the Technical University of Munich, Germany.

Companies’ input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems impact society or on how we evaluate the effect morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data for it to be kept accountable.

Algorithmic-decision systems touch every corner of our lives: medical treatments and insurance; mortgages and transportation; policing, bail and parole; newsfeeds and political and commercial advertising. Because algorithms are trained on existing data that reflect social inequalities, they risk perpetuating systemic injustice unless people consciously design countervailing measures. For example, AI systems to predict recidivism might incorporate differential policing of black and white communities, or those to rate the likely success of job candidates might build on a history of gender-biased promotions.

Inside an algorithmic black box, societal biases are rendered invisible and unaccountable. When designed for profit-making alone, algorithms necessarily diverge from the public interest — information asymmetries, bargaining power and externalities pervade these markets. For example, Facebook and YouTube profit from people staying on their sites and by offering advertisers technology to deliver precisely targeted messages. That could turn out to be illegal or dangerous. The US Department of Housing and Urban Development has charged Facebook with enabling discrimination in housing adverts (correlates of race and religion could be used to affect who sees a listing). YouTube’s recommendation algorithm has been implicated in stoking anti-vaccine conspiracies. I see these sorts of service as the emissions of high-tech industry: they bring profits, but the costs are borne by society. (The companies have stated that they work to ensure their products are socially responsible.)

From mobile phones to medical care, governments, academics and civil-society organizations endeavour to study how technologies affect society and to provide a check on market-driven organizations. Industry players intervene strategically in those efforts.

When the NSF lends Amazon the legitimacy of its process for a $7.6-million programme (0.03% of Amazon’s 2018 research and development spending), it undermines the role of public research as a counterweight to industry-funded research. A university abdicates its central role when it accepts funding from a firm to study the moral, political and legal implications of practices that are core to the business model of that firm. So too do governments that delegate policy frameworks to industry-dominated panels. Yes, institutions have erected some safeguards. NSF will award research grants through its normal peer-review process, without Amazon’s input, but Amazon retains the contractual, technical and organizational means to promote the projects that suit its goals. The Technical University of Munich reports that the funds from Facebook come without obligations or conditions, and that the company will not have a place on the centre’s advisory board. In my opinion, the risk and perception of undue influence is still too great, given the magnitude of this sole-source gift and how it bears directly on the donor’s interests.

Today’s leading technology companies were born at a time of high faith in market-based mechanisms. In the 1990s, regulation was restricted, and public facilities such as railways and utilities were privatized. Initially hailed for bringing democracy and growth, pre-eminent tech companies came under suspicion after the Great Recession of the late 2000s. Germany, Australia and the United Kingdom have all passed or are planning laws to impose large fines on firms or personal liability on executives for the ills for which the companies are now blamed.

This new-found regulatory zeal might be an overreaction. (Tech anxiety without reliable research will be no better as a guide to policy than was tech utopianism.) Still, it creates incentives for industry to cooperate.

Governments should use that leverage to demand that companies share data in properly protected databases, with access granted to appropriately insulated, publicly funded researchers. Industry participation in policy panels should be strictly limited.

Industry has the data and expertise necessary to design fairness into AI systems. It cannot be excluded from the processes by which we investigate which worries are real and which safeguards work, but it must not be allowed to direct them. Organizations working to ensure that AI is fair and beneficial must be publicly funded, subject to peer review and transparent to civil society. And society must demand increased public investment in independent research rather than hoping that industry funding will fill the gap without corrupting the process.

This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI

Recently, on a dazzling morning in Palm Springs, California, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.

She knew the subject matter inside-out. She was to tell the audience about the chips, being developed in her lab at MIT, that promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.

Vivienne Sze (Photo: Tony Luong)

The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.

“It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.

Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.

New capabilities

Newly designed chips, like the ones being developed in Sze’s lab, may be crucial to future progress in AI—including stuff like the drones and robots found at MARS. Until now, AI software has largely run on graphics chips (GPUs), but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.

Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.

The microchips are designed to squeeze more out of the “deep-learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore that predicted that the number of transistors on a chip would double roughly every 18 months—leading to a commensurate boost in computing power.
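Taken at face value, that doubling rule compounds dramatically (a quick illustrative calculation):

```python
# Back-of-the-envelope arithmetic on the "doubling every ~18 months" rule quoted above.
years = 10
doublings = years * 12 / 18           # one doubling roughly every 18 months
growth = 2 ** doublings
print(f"~{growth:.0f}x more transistors after {years} years")   # roughly 100x
```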

A chip-controlled car (Photo: Tony Luong)

This law is now running into the physical limits that come with engineering components at an atomic scale. And it is spurring new interest in alternative architectures and approaches to computing.

The high stakes attached to investing in next-generation AI chips—and maintaining America’s dominance in chipmaking overall—aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see “The out-there AI ideas designed to keep the US ahead of China”).

But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.

The new chip race

Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video-game graphics chips that perform parallel computations for rendering 3-D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.

The potential for new chip architectures to improve AI has stirred up a level of entrepreneurial activity that the chip industry hasn’t seen in decades (see “The Race to Power AI’s Silicon Brains” and “China has never had a real chip industry. Making AI chips could change that”).

AI chips (Photo: Tony Luong)

Big tech companies hoping to harness and commercialize AI—including Google, Microsoft, and (yes) Amazon—are all working on their own deep-learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group, an analyst firm. “I’m not joking that we learn about a new one nearly every week.”

The real opportunity, says Sze, isn’t building the most powerful deep-learning chips possible. Power efficiency is important because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on “the edge.”

“AI will be everywhere—and figuring out ways to make things more energy-efficient will be extremely important,” says Naveen Rao, vice president of the artificial intelligence products group at Intel. 

For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.

For a fast-moving field like deep learning, the challenge for those working on AI chips is making sure they are flexible enough to be adapted to work for any application. It is easy to design a super-efficient chip capable of doing just one thing, but such a product will quickly become obsolete.

Sze’s chip is called Eyeriss. Developed in collaboration with Joel Emer, a research scientist at Nvidia and a professor at MIT, it was tested alongside a number of standard processors to see how it handles a range of different deep-learning algorithms. By balancing efficiency with flexibility, the new chip is 10 or even 1,000 times more efficient than existing hardware, according to a paper posted online last year.

MIT’s Sertac Karaman and Vivienne Sze developed the new chip. (Photo: Tony Luong)

Simpler AI chips are already having a major impact. High-end smartphones already include chips optimized for running deep-learning algorithms for image and voice recognition. More-efficient chips could let these devices run more-powerful AI code with better capabilities. Self-driving cars, too, need powerful AI computer chips, as most prototypes currently rely on a trunk-load of computers.

Rao says the MIT chips are promising, but many factors will determine whether a new hardware architecture succeeds. One of the most important factors, he says, is developing software that lets programmers run code on it. “Making something usable from a compiler standpoint is probably the single biggest obstacle to adoption,” he says.

Sze’s lab is, in fact, also exploring ways of designing software so that it better exploits the properties of existing computer chips. And this work extends beyond just deep learning.

Together with Sertac Karaman, from MIT’s Department of Aeronautics and Astronautics, Sze developed a low-power chip called Navion that performs 3-D mapping and navigation incredibly efficiently, for use on a tiny drone. Crucial to this effort was crafting the chip to exploit the behavior of navigation-focused algorithms—and designing the algorithm to make the most of a custom chip. Together with the work on deep learning, Navion reflects the way AI software and hardware are now starting to evolve in symbiosis.

Sze’s chips might not be as attention-grabbing as a flapping drone, but the fact that they were showcased at MARS offers some sense of how important her technology—and innovation in silicon more generally—will be for the future of AI. After her presentation, Sze says, some of the other MARS speakers expressed an interest in finding out more. “People found a lot of important use cases,” she says.

In other words, expect the eye-catching robots and drones at the next MARS conference to come with something rather special hidden inside.

Japan’s Abe signals shift on North Korea, says will meet Kim without conditions: media

TOKYO (Reuters) – Japanese Prime Minister Shinzo Abe has said he is ready to meet North Korean leader Kim Jong Un without conditions to end long-running mistrust between their countries, the Sankei newspaper reported on Friday.

Abe’s remarks come days after he met U.S. President Donald Trump in Washington and thanked Trump for raising with Kim, at a February summit, the topic of Japanese people abducted by North Korea.

Resolving the issue of Japanese people abducted by North Korean agents decades ago to train the North’s spies has for years been a Japanese condition for improving diplomatic and economic ties with North Korea.

Japan, like the United States, is also seeking an end to North Korea’s nuclear and missile programs.

Abe signaled a shift in Japan’s position in an interview with the newspaper on Wednesday, saying the only way to “break the current mutual distrust” was for him to hold unconditional talks with Kim.

“That’s why I would like to meet him without setting preconditions and hold frank discussions. I hope he’s a leader who can determine flexibly and strategically what is best for his country,” Abe was quoted as saying.

In 2002, North Korea said that it had kidnapped 13 Japanese in the 1970s and 1980s.

Japan believes 17 of its citizens were abducted, five of whom were repatriated. Eight were said by North Korea to have died, while four were said to have never entered the country.

Abe’s shift on North Korea comes after more than a year of efforts by the North to improve its foreign relations.

Kim has met Trump twice since June last year and has held three summits with South Korean President Moon Jae-in.

Kim also met Russian President Vladimir Putin last month.

That leaves Japan as the only regional power involved in the North Korea nuclear crisis yet to have a summit with the North’s leader.

The last meeting between the leaders of Japan and North Korea was in 2004, when the Japanese prime minister, Junichiro Koizumi, met Kim’s late father, Kim Jong-il.

They pledged to work together to resolve the abductee issue.

Reporting by Leika Kihara in Tokyo and Jack Kim in Seoul; Editing by Robert Birsel