Who to Sue When a Robot Loses Your Fortune

The first known case of humans going to court over investment losses triggered by autonomous machines will test the limits of liability.

Robots are getting more humanoid every day, but they still can’t be sued.

So a Hong Kong tycoon is doing the next best thing. He’s going after the salesman who persuaded him to entrust a chunk of his fortune to the supercomputer whose trades cost him more than $20 million.

The case pits Samathur Li Kin-kan, whose father is a major investor in Shaftesbury Plc, which owns much of London’s Chinatown, Covent Garden and Carnaby Street, against Raffaele Costa, who has spent much of his career selling investment funds for the likes of Man Group Plc and GLG Partners Inc. It’s the first-known instance of humans going to court over investment losses triggered by autonomous machines and throws the spotlight on the “black box” problem: If people don’t know how the computer is making decisions, who’s responsible when things go wrong?

“People tend to assume that algorithms are faster and better decision-makers than human traders,” said Mark Lemley, a law professor at Stanford University who directs the university’s Law, Science and Technology program. “That may often be true, but when it’s not, or when they quickly go astray, investors want someone to blame.”

Raffaele Costa
Photographer: Andreas Rentz/Getty Images

The timeline leading up to the legal battle was drawn from filings to the commercial court in London, where the trial is scheduled to begin next April. It all started over lunch at a Dubai restaurant on March 19, 2017. It was the first time 45-year-old Li met Costa, the 49-year-old Italian known to industry peers as “Captain Magic.” During their meal, Costa described a robot hedge fund that his company, London-based Tyndaris Investments, would soon offer, managing money entirely using AI, or artificial intelligence.

Developed by Austria-based AI company 42.cx, the supercomputer named K1 would comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.
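The article doesn’t disclose how K1’s signals actually mapped to orders, but the described pipeline can be sketched in a few lines: aggregate sentiment from many sources, then emit an instruction for a broker. Everything below (the function name, the 0.3 threshold, the score scale) is invented for illustration, not K1’s actual code:

```python
# Hypothetical sketch of a sentiment-driven trading signal.
# Scores are assumed to be per-headline sentiment values in [-1, 1].

def sentiment_signal(headline_scores):
    """Aggregate sentiment scores into a single trading instruction."""
    if not headline_scores:
        return "HOLD"  # no data, no trade
    avg = sum(headline_scores) / len(headline_scores)
    if avg > 0.3:       # strongly positive sentiment: expect the index to rise
        return "BUY"
    if avg < -0.3:      # strongly negative sentiment: expect it to fall
        return "SELL"
    return "HOLD"       # no clear trend, so sit out the day

print(sentiment_signal([0.6, 0.4, 0.5]))    # clear positive trend
print(sentiment_signal([0.1, -0.2, 0.05]))  # mixed signals
```

The "HOLD" branch mirrors a detail reported later in the piece: on days without a strong enough trend, K1 made no trades at all.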

The idea of a fully automated money manager instantly appealed to Li. He met Costa for dinner three days later, saying in an e-mail beforehand that the AI fund “is exactly my kind of thing.”

Over the following months, Costa shared simulations with Li showing K1 making double-digit returns, although the two now dispute the thoroughness of the back-testing. Li eventually let K1 manage $2.5 billion—$250 million of his own cash and the rest leverage from Citigroup Inc. The plan was to double that over time.

But Li’s affection for K1 waned almost as soon as the computer started trading in late 2017. By February 2018, it was regularly losing money, including over $20 million in a single day—Feb. 14—due to a stop-loss order that Li’s lawyers argue wouldn’t have been triggered if K1 were as sophisticated as Costa led him to believe.

Li is now suing Tyndaris for about $23 million for allegedly exaggerating what the supercomputer could do. Lawyers for Tyndaris, which is suing Li for $3 million in unpaid fees, deny that Costa overplayed K1’s capabilities. They say he was never guaranteed the AI strategy would make money.

Sarah McAtominey, a lawyer representing Li’s investment company that is suing Tyndaris, declined to comment on his behalf. Rob White, a spokesman for Tyndaris, declined to make Costa available for interview.

The legal battle is a sign of what’s in store as AI is incorporated into all facets of life, from self-driving cars to virtual assistants. When the technology misfires, where the blame lies is open to interpretation. In March, U.S. criminal prosecutors let Uber Technologies Inc. off the hook for the death of a 49-year-old pedestrian killed by one of its autonomous cars.

[Chart: Robot Investors—AI hedge fund managers are beating human peers, but not stock benchmarks. 2019 gains through March for every $100 invested in 2014; S&P 500 returns are with dividends reinvested; *HFRI Fund Weighted Composite Index. Source: Eurekahedge, Hedge Fund Research, Inc., Bloomberg]

In the hedge fund world, pursuing AI has become a matter of necessity after years of underperformance by human managers. Quantitative investing—in which computers are programmed to identify and execute trades—is already popular. Rarer are pure AI funds that automatically learn and improve from experience rather than being explicitly programmed. Once an AI develops a mind of its own, even its creators won’t understand why it makes the decisions it makes.

“You might be in a position where you just can’t explain why you are holding a position,” said Anthony Todd, the co-founder of London-based Aspect Capital, which is experimenting with AI strategies before letting them invest clients’ cash. “One of our concerns about the application of machine-learning-type techniques is that you are losing any explicit hypothesis about market behavior.”

Li’s lawyers argue Costa won his trust by hyping up the qualifications of the technicians building K1’s algorithm, saying, for instance, that they were involved in Deep Blue, the chess-playing computer designed by IBM Corp. that signaled the dawn of the AI era when it beat the world champion in 1997. Tyndaris declined to answer Bloomberg’s questions on this claim, which was made in one of Li’s more recent filings.

Garry Kasparov plays against IBM’s Deep Blue computer in 1997.
Photographer: Stan Honda/AFP via Getty Images

Speaking to Bloomberg, 42.cx founder Daniel Mattes said none of the computer scientists advising him were involved with Deep Blue, but one, Vladimir Arlazarov, developed a 1960s chess program in the Soviet Union known as Kaissa. He acknowledged that experience may not be entirely relevant to investing. Algorithms have gotten really good at beating humans in games because there are clear rules that can be simulated, something stock markets decidedly lack. Arlazarov told Bloomberg that he did give Mattes general advice but didn’t work on K1 specifically.

Inspired by a 2015 European Central Bank study measuring investor sentiment on Twitter, 42.cx created software that could generate sentiment signals, said Mattes, who recently agreed to pay $17 million to the U.S. Securities and Exchange Commission to settle charges of defrauding investors at his mobile-payments company, Jumio Inc., earlier this decade. Whether and how to act on those signals was up to Tyndaris, he said.

“It’s a beautiful piece of software that was written,” Mattes said by phone. “The signals we have been provided have a strong scientific foundation. I think we did a pretty decent job. I know I can detect sentiment. I’m not a trader.”

There’s a lot of back and forth in court papers over whether Li was misled about K1’s capabilities. For instance, the machine generated a single trade in the morning only if it deciphered a clear sentiment signal, whereas Li claims he was under the impression it would make trades at optimal times throughout the day. In rebuttal, Costa’s lawyers say he told Li that buying or selling futures based on multiple trading signals was an eventual ambition, but wouldn’t happen right away.

For days, K1 made no trades at all because it didn’t identify a strong enough trend. In one message to Costa, Li complained that K1 sat back while taking adverse movements “on the chin, hoping that it won’t strike stop loss.” A stop loss is a pre-set level at which a broker will sell to limit the damage when prices suddenly fall.

That’s what happened on Valentine’s Day 2018. In the morning, K1 placed an order with its broker, Goldman Sachs Group Inc., for $1.5 billion of S&P 500 futures, predicting the index would gain. It went in the opposite direction when data showed U.S. inflation had risen more quickly than expected, triggering K1’s 1.4 percent stop-loss and leaving the fund $20.5 million poorer. But the S&P rebounded within hours, something Li’s lawyers argue shows K1’s stop-loss threshold for the day was “crude and inappropriate.”
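The figures reported in the filings are easy to sanity-check: a 1.4 percent stop-loss on a $1.5 billion futures position caps the loss at roughly $21 million, in line with the $20.5 million reported. A minimal sketch of the arithmetic; the trigger function and its parameters are illustrative, not K1’s actual logic:

```python
# Back-of-the-envelope check of the numbers cited in court papers.
position = 1_500_000_000   # notional S&P 500 futures exposure, USD
stop_loss_pct = 0.014      # the 1.4% threshold cited in the filings

max_loss = position * stop_loss_pct
print(f"Loss at stop-loss trigger: ${max_loss:,.0f}")

def stop_triggered(entry_price, current_price, stop_pct):
    """Return True if a long position has fallen past its stop level."""
    return (entry_price - current_price) / entry_price >= stop_pct

print(stop_triggered(100.0, 98.5, 0.014))  # a 1.5% drop breaches a 1.4% stop
print(stop_triggered(100.0, 99.0, 0.014))  # a 1.0% drop does not
```

Because the stop fires on the intraday drawdown, the fund locked in the loss even though the S&P rebounded within hours—which is the crux of the “crude and inappropriate” argument.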

Li claims he was told K1 would use its own “deep-learning capability” daily to determine an appropriate stop loss based on market factors like volatility. Costa denies saying this and claims he told Li the level would be set by humans.

In his interview, Mattes said K1 wasn’t designed to decide on stop losses at all—only to generate two types of sentiment signals: a general one that Tyndaris could have used to enter a position and a dynamic one that it could have used to exit or change a position. While Tyndaris also marketed a K1-driven fund to other investors, a spokesman declined to comment on whether the fund had ever managed money. Any reference to the supercomputer was removed from its website last month.

Investors like Marcus Storr say they’re wary when AI fund marketers come knocking, especially considering funds incorporating AI into their core strategy made less than half the returns of the S&P 500 in the three years to 2018, according to Eurekahedge AI Hedge Fund Index data.

“We can’t judge the codes,” said Storr, who decides on hedge fund investments for Bad Homburg, Germany-based Feri Trust GmbH. “For us it then comes down to judging the setups and research capacity.”

But what happens when autonomous chatbots are used by companies to sell products to customers? Even suing the salesperson may not be possible, added Karishma Paroha, a London-based lawyer at Kennedys who specializes in product liability.

“Misrepresentation is about what a person said to you,” she said. “What happens when we’re not being sold to by a human?”

Facebook will open its data up to academics to see how it impacts elections

More than 60 researchers from 30 institutions will get access to Facebook user data to study its impact on elections and democracy, and how it’s used by advertisers and publishers.

A vast trove: Facebook will let academics see which websites its users linked to from January 2017 to February 2019. Notably, that means they won’t be able to look at the platform’s impact on the US presidential election in 2016, or on the Brexit referendum in the UK in the same year.

Despite this slightly glaring omission, it’s still hard to wrap your head around the scale of the data that will be shared, given that Facebook is used by 1.6 billion people every day. That’s more people than live in all of China, the most populous country on Earth. It will be one of the largest data sets on human behavior online to ever be released.

The process: Facebook didn’t pick the researchers. They were chosen by the Social Science Research Council, a US nonprofit. Facebook has been working on this project for over a year, as it tries to balance research interests against user privacy and confidentiality.

Privacy: In a blog post, Facebook said it will use a number of statistical techniques to make sure the data set can’t be used to identify individuals. Researchers will be able to access it only via a secure portal that uses a VPN and two-factor authentication, and there will be limits on the number of queries they can each run.

The context: Facebook is keen to improve its reputation after months of scandals over data privacy, security, and its role in elections and democracy. If it opens up its data as promised, it could introduce some much-needed light into what’s often a very heated debate.

The industry shaping the future of AI?

Industry has mobilized to shape the science, morality and laws of artificial intelligence. On 10 May, letters of intent are due to the US National Science Foundation (NSF) for a new funding programme for projects on Fairness in Artificial Intelligence, in collaboration with Amazon. In April, after the European Commission released the Ethics Guidelines for Trustworthy AI, an academic member of the expert group that produced them described their creation as industry-dominated “ethics washing”. In March, Google formed an AI ethics board, which was dissolved a week later amid controversy. In January, Facebook invested US$7.5 million in a centre on ethics and AI at the Technical University of Munich, Germany.

Companies’ input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems impact society or on how we evaluate the effect morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data for it to be kept accountable.

Algorithmic-decision systems touch every corner of our lives: medical treatments and insurance; mortgages and transportation; policing, bail and parole; newsfeeds and political and commercial advertising. Because algorithms are trained on existing data that reflect social inequalities, they risk perpetuating systemic injustice unless people consciously design countervailing measures. For example, AI systems to predict recidivism might incorporate differential policing of black and white communities, or those to rate the likely success of job candidates might build on a history of gender-biased promotions.

Inside an algorithmic black box, societal biases are rendered invisible and unaccountable. When designed for profit-making alone, algorithms necessarily diverge from the public interest — information asymmetries, bargaining power and externalities pervade these markets. For example, Facebook and YouTube profit from people staying on their sites and by offering advertisers technology to deliver precisely targeted messages. That could turn out to be illegal or dangerous. The US Department of Housing and Urban Development has charged Facebook with enabling discrimination in housing adverts (correlates of race and religion could be used to affect who sees a listing). YouTube’s recommendation algorithm has been implicated in stoking anti-vaccine conspiracies. I see these sorts of service as the emissions of high-tech industry: they bring profits, but the costs are borne by society. (The companies have stated that they work to ensure their products are socially responsible.)

From mobile phones to medical care, governments, academics and civil-society organizations endeavour to study how technologies affect society and to provide a check on market-driven organizations. Industry players intervene strategically in those efforts.

When the NSF lends Amazon the legitimacy of its process for a $7.6-million programme (0.03% of Amazon’s 2018 research and development spending), it undermines the role of public research as a counterweight to industry-funded research. A university abdicates its central role when it accepts funding from a firm to study the moral, political and legal implications of practices that are core to the business model of that firm. So too do governments that delegate policy frameworks to industry-dominated panels. Yes, institutions have erected some safeguards. NSF will award research grants through its normal peer-review process, without Amazon’s input, but Amazon retains the contractual, technical and organizational means to promote the projects that suit its goals. The Technical University of Munich reports that the funds from Facebook come without obligations or conditions, and that the company will not have a place on the centre’s advisory board. In my opinion, the risk and perception of undue influence is still too great, given the magnitude of this sole-source gift and how it bears directly on the donor’s interests.

Today’s leading technology companies were born at a time of high faith in market-based mechanisms. In the 1990s, regulation was restricted, and public facilities such as railways and utilities were privatized. Initially hailed for bringing democracy and growth, pre-eminent tech companies came under suspicion after the Great Recession of the late 2000s. Germany, Australia and the United Kingdom have all passed or are planning laws to impose large fines on firms, or personal liability on executives, for the ills for which the companies are now blamed.

This new-found regulatory zeal might be an overreaction. (Tech anxiety without reliable research will be no better as a guide to policy than was tech utopianism.) Still, it creates incentives for industry to cooperate.

Governments should use that leverage to demand that companies share data in properly protected databases, with access granted to appropriately insulated, publicly funded researchers. Industry participation in policy panels should be strictly limited.

Industry has the data and expertise necessary to design fairness into AI systems. It cannot be excluded from the processes by which we investigate which worries are real and which safeguards work, but it must not be allowed to direct them. Organizations working to ensure that AI is fair and beneficial must be publicly funded, subject to peer review and transparent to civil society. And society must demand increased public investment in independent research rather than hoping that industry funding will fill the gap without corrupting the process.

This chip was demoed at Jeff Bezos’s secretive tech conference. It could be key to the future of AI

Recently, on a dazzling morning in Palm Springs, California, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.

She knew the subject matter inside out. She was to tell the audience about the chips being developed in her lab at MIT, which promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.

A photo of Vivienne Sze
TONY LUONG

The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.

“It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.

Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.

New capabilities

Newly designed chips, like the ones being developed in Sze’s lab, may be crucial to future progress in AI—including stuff like the drones and robots found at MARS. Until now, AI software has largely run on graphical chips, but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.

Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.

The microchips are designed to squeeze more out of the “deep-learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore that predicted that the number of transistors on a chip will double roughly every 18 months—leading to a commensurate performance boost in computer power.
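The cadence Sze cites compounds quickly, which is why its slowdown is felt so sharply. A doubling every 18 months works out to a fourfold increase every three years and roughly a hundredfold increase over a decade. A quick sketch of that arithmetic (the function is just for illustration):

```python
# Growth factor implied by a fixed doubling period, as in the 18-month
# formulation of Moore's law quoted above.

def moore_factor(years, doubling_months=18):
    """Return the multiplicative growth after `years` of steady doubling."""
    return 2 ** (years * 12 / doubling_months)

print(round(moore_factor(3)))   # 3 years of doublings -> 4x
print(round(moore_factor(9)))   # 9 years -> 64x
print(round(moore_factor(10)))  # a decade -> roughly 100x
```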

An image of a chip-controlled car
TONY LUONG

This law is now running into the physical limits that come with engineering components at an atomic scale, and that is spurring new interest in alternative architectures and approaches to computing.

The high stakes attached to investing in next-generation AI chips—and maintaining America’s dominance in chipmaking overall—aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see “The out-there AI ideas designed to keep the US ahead of China”).

But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.
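The “tweaked so that it produces the desired result” loop described above can be shown at toy scale: a single-weight model learns the rule y = 2x by gradient descent. Real deep-learning systems do the same thing with millions of weights and far richer data; this sketch is only meant to illustrate the mechanism:

```python
# Minimal gradient-descent illustration: one weight learns y = 2x.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # training pairs (x, desired y)
w = 0.0    # the single "network" weight, initially untrained
lr = 0.01  # learning rate: how hard each tweak pushes

for _ in range(500):                     # repeat over the training data
    for x, target in data:
        pred = w * x                     # forward pass: make a prediction
        grad = 2 * (pred - target) * x   # gradient of squared error w.r.t. w
        w -= lr * grad                   # tweak the weight to reduce the error

print(round(w, 3))  # the weight converges close to 2.0
```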

The new chip race

Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video-game graphics chips that perform parallel computations for rendering 3-D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.

The potential for new chip architectures to improve AI has stirred up a level of entrepreneurial activity that the chip industry hasn’t seen in decades (see “The Race to Power AI’s Silicon Brains” and “China has never had a real chip industry. Making AI chips could change that”).

An image of AI chips
TONY LUONG

Big tech companies hoping to harness and commercialize AI—including Google, Microsoft, and (yes) Amazon—are all working on their own deep-learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group, an analyst firm. “I’m not joking that we learn about a new one nearly every week.”

The real opportunity, says Sze, isn’t building the most powerful deep-learning chips possible. Power efficiency matters because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on “the edge.”

“AI will be everywhere—and figuring out ways to make things more energy-efficient will be extremely important,” says Naveen Rao, vice president of the artificial intelligence products group at Intel. 

For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.

For a fast-moving field like deep learning, the challenge for those working on AI chips is making sure they are flexible enough to be adapted to work for any application. It is easy to design a super-efficient chip capable of doing just one thing, but such a product will quickly become obsolete.

Sze’s chip is called Eyeriss. Developed in collaboration with Joel Emer, a research scientist at Nvidia and a professor at MIT, it was tested alongside a number of standard processors to see how it handles a range of different deep-learning algorithms. By balancing efficiency with flexibility, the new chip is 10 or even 1,000 times more efficient than existing hardware, according to a paper posted online last year.

Sertac Karaman and Vivienne Sze
MIT’s Sertac Karaman and Vivienne Sze developed the new chip

TONY LUONG

Simpler AI chips are already having a major impact. High-end smartphones already include chips optimized for running deep-learning algorithms for image and voice recognition. More-efficient chips could let these devices run more-powerful AI code with better capabilities. Self-driving cars, too, need powerful AI computer chips, as most prototypes currently rely on a trunk-load of computers.

Rao says the MIT chips are promising, but many factors will determine whether a new hardware architecture succeeds. One of the most important factors, he says, is developing software that lets programmers run code on it. “Making something usable from a compiler standpoint is probably the single biggest obstacle to adoption,” he says.

Sze’s lab is, in fact, also exploring ways of designing software so that it better exploits the properties of existing computer chips. And this work extends beyond just deep learning.

Together with Sertac Karaman, from MIT’s Department of Aeronautics and Astronautics, Sze developed a low-power chip called Navion that performs 3-D mapping and navigation incredibly efficiently, for use on a tiny drone. Crucial to this effort was crafting the chip to exploit the behavior of navigation-focused algorithms—and designing the algorithm to make the most of a custom chip. Together with the work on deep learning, Navion reflects the way AI software and hardware are now starting to evolve in symbiosis.

Sze’s chips might not be as attention-grabbing as a flapping drone, but the fact that they were showcased at MARS offers some sense of how important her technology—and innovation in silicon more generally—will be for the future of AI. After her presentation, Sze says, some of the other MARS speakers expressed an interest in finding out more. “People found a lot of important use cases,” she says.

In other words, expect the eye-catching robots and drones at the next MARS conference to come with something rather special hidden inside.

Japan’s Abe signals shift on North Korea, says will meet Kim without conditions: media

TOKYO (Reuters) – Japanese Prime Minister Shinzo Abe has said he is ready to meet North Korean leader Kim Jong Un without conditions to end long-running mistrust between their countries, the Sankei newspaper reported on Friday.

Abe’s remarks come days after he met U.S. President Donald Trump in Washington and thanked Trump for raising with Kim, at a February summit, the topic of Japanese people abducted by North Korea.

Resolving the issue of Japanese people abducted by North Korean agents decades ago to train the North’s spies has for years been a Japanese condition for improving diplomatic and economic ties with North Korea.

Japan, like the United States, is also seeking an end to North Korea’s nuclear and missile programs.

Abe signaled a shift in Japan’s position in an interview with the newspaper on Wednesday, saying the only way to “break the current mutual distrust” was for him to hold unconditional talks with Kim.

“That’s why I would like to meet him without setting preconditions and hold frank discussions. I hope he’s a leader who can determine flexibly and strategically what is best for his country,” Abe was quoted as saying.

In 2002, North Korea said that it had kidnapped 13 Japanese in the 1970s and 1980s.

Japan believes 17 of its citizens were abducted, five of whom were repatriated. Eight were said by North Korea to have died, while four were said to have never entered the country.

Abe’s shift on North Korea comes after more than a year of efforts by Pyongyang to improve its foreign relations.

Kim has met Trump twice since June last year and has held three summits with South Korean President Moon Jae-in.

Kim also met Russian President Vladimir Putin last month.

That leaves Japan as the only regional power involved in the North Korea nuclear crisis yet to have a summit with the North’s leader.

The last meeting between the leaders of Japan and North Korea was in 2004, when the Japanese prime minister, Junichiro Koizumi, met Kim’s late father, Kim Jong-il.

They pledged to work together to resolve the abductee issue.

Reporting by Leika Kihara in Tokyo and Jack Kim in Seoul; Editing by Robert Birsel