Recently, on a dazzling morning in Palm Springs, California, Vivienne Sze took to a small stage to deliver perhaps the most nerve-racking presentation of her career.
She knew the subject matter inside out. She was to tell the audience about the chips being developed in her lab at MIT, which promise to bring powerful artificial intelligence to a multitude of devices where power is limited, beyond the reach of the vast data centers where most AI computations take place. However, the event—and the audience—gave Sze pause.
The setting was MARS, an elite, invite-only conference where robots stroll (or fly) through a luxury resort, mingling with famous scientists and sci-fi authors. Just a few researchers are invited to give technical talks, and the sessions are meant to be both awe-inspiring and enlightening. The crowd, meanwhile, consisted of about 100 of the world’s most important researchers, CEOs, and entrepreneurs. MARS is hosted by none other than Amazon’s founder and chairman, Jeff Bezos, who sat in the front row.
“It was, I guess you’d say, a pretty high-caliber audience,” Sze recalls with a laugh.
Other MARS speakers would introduce a karate-chopping robot, drones that flap like large, eerily silent insects, and even optimistic blueprints for Martian colonies. Sze’s chips might seem more modest; to the naked eye, they’re indistinguishable from the chips you’d find inside any electronic device. But they are arguably a lot more important than anything else on show at the event.
New capabilities
Newly designed chips, like the ones being developed in Sze's lab, may be crucial to future progress in AI—including things like the drones and robots found at MARS. Until now, AI software has largely run on graphics chips, but new hardware could make AI algorithms more powerful, which would unlock new applications. New AI chips could make warehouse robots more common or let smartphones create photo-realistic augmented-reality scenery.
Sze’s chips are both extremely efficient and flexible in their design, something that is crucial for a field that’s evolving incredibly quickly.
The microchips are designed to squeeze more out of the “deep-learning” AI algorithms that have already turned the world upside down. And in the process, they may inspire those algorithms themselves to evolve. “We need new hardware because Moore’s law has slowed down,” Sze says, referring to the axiom coined by Intel cofounder Gordon Moore, which predicted that the number of transistors on a chip would double roughly every 18 months, leading to a commensurate boost in computing power.
This law is now running into the physical limits that come with engineering components at an atomic scale, and that slowdown is spurring new interest in alternative architectures and approaches to computing.
The high stakes attached to investing in next-generation AI chips—and maintaining America’s dominance in chipmaking overall—aren’t lost on the US government. Sze’s microchips are being developed with funding from a Defense Advanced Research Projects Agency (DARPA) program meant to help develop new AI chip designs (see “The out-there AI ideas designed to keep the US ahead of China”).
But innovation in chipmaking has been spurred mostly by the emergence of deep learning, a very powerful way for machines to learn to perform useful tasks. Instead of giving a computer a set of rules to follow, a machine basically programs itself. Training data is fed into a large, simulated artificial neural network, which is then tweaked so that it produces the desired result. With enough training, a deep-learning system can find subtle and abstract patterns in data. The technique is applied to an ever-growing array of practical tasks, from face recognition on smartphones to predicting disease from medical images.
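To make the idea of a network being "tweaked" concrete, here is a minimal sketch in Python using only NumPy. The toy task (learning the XOR pattern), the network size, and the learning rate are illustrative choices and have nothing to do with any specific system described in this article.

# A minimal sketch of the "feed data in, tweak the weights" idea behind deep learning,
# using NumPy only. The task and all sizes are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)                # desired outputs (XOR)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)                 # small hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: run the data through the simulated network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error (the "tweaking").
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches the desired 0, 1, 1, 0 after training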
The new chip race
Deep learning is not so reliant on Moore’s law. Neural nets run many mathematical computations in parallel, so they run far more effectively on the specialized video-game graphics chips that perform parallel computations for rendering 3-D imagery. But microchips designed specifically for the computations that underpin deep learning should be even more powerful.
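The parallelism is easy to see in code. In the sketch below (shapes arbitrary, NumPy only), the same dense-layer arithmetic is computed twice: once as sequential nested loops and once as a single matrix multiply, which is the kind of operation a graphics chip spreads across many processing units at once.

# Why graphics chips help: the core of a neural-net layer is a matrix multiply,
# and every output element below can be computed independently, i.e. in parallel.
import numpy as np

batch, in_dim, out_dim = 64, 256, 128
x = np.random.rand(batch, in_dim)      # a batch of inputs
W = np.random.rand(in_dim, out_dim)    # layer weights

# Naive, sequential version: three nested loops.
y_loop = np.zeros((batch, out_dim))
for i in range(batch):
    for j in range(out_dim):
        for k in range(in_dim):
            y_loop[i, j] += x[i, k] * W[k, j]

# Vectorized version: the same arithmetic, expressed as one parallelizable operation.
y_fast = x @ W
assert np.allclose(y_loop, y_fast)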
Big tech companies hoping to harness and commercialize AI—including Google, Microsoft, and (yes) Amazon—are all working on their own deep-learning chips. Many smaller companies are developing new chips, too. “It’s impossible to keep track of all the companies jumping into the AI-chip space,” says Mike Delmer, a microchip analyst at the Linley Group, an analyst firm. “I’m not joking that we learn about a new one nearly every week.”
The real opportunity, says Sze, isn't building the most powerful deep-learning chips possible. Power efficiency matters because AI also needs to run beyond the reach of large data centers, relying only on the power available on the device itself. This is known as operating on “the edge.”
“AI will be everywhere—and figuring out ways to make things more energy-efficient will be extremely important,” says Naveen Rao, vice president of the artificial intelligence products group at Intel.
For example, Sze’s hardware is more efficient partly because it physically reduces the bottleneck between where data is stored and where it’s analyzed, but also because it uses clever schemes for reusing data. Before joining MIT, Sze pioneered this approach for improving the efficiency of video compression while at Texas Instruments.
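As a loose software analogy for that reuse idea (and not a description of Eyeriss's actual dataflow), the sketch below fetches a small block of weights once and reuses it against an entire batch of inputs before moving on, which reduces how often the same values have to travel between memory and the arithmetic.

# A rough software analogy for data reuse: load a small tile of the weights once,
# then reuse it against many inputs before fetching the next tile.
import numpy as np

def tiled_matmul(x, W, tile=32):
    batch, in_dim = x.shape
    out_dim = W.shape[1]
    y = np.zeros((batch, out_dim))
    fetches = 0
    for k0 in range(0, in_dim, tile):
        for j0 in range(0, out_dim, tile):
            w_tile = W[k0:k0 + tile, j0:j0 + tile]   # "fetched" once...
            fetches += 1
            # ...and reused across the whole batch of inputs.
            y[:, j0:j0 + tile] += x[:, k0:k0 + tile] @ w_tile
    return y, fetches

x = np.random.rand(64, 256)
W = np.random.rand(256, 128)
y, fetches = tiled_matmul(x, W)
assert np.allclose(y, x @ W)
print(f"weight tiles fetched: {fetches}")   # each tile reused for all 64 inputs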
For a fast-moving field like deep learning, the challenge for those working on AI chips is making sure they are flexible enough to be adapted to work for any application. It is easy to design a super-efficient chip capable of doing just one thing, but such a product will quickly become obsolete.
Sze’s chip is called Eyeriss. Developed in collaboration with Joel Emer, a research scientist at Nvidia and a professor at MIT, it was tested alongside a number of standard processors to see how it handles a range of different deep-learning algorithms. By balancing efficiency with flexibility, the new chip proved 10 or even 1,000 times more efficient than existing hardware, according to a paper posted online last year.
Simpler AI chips are already having a major impact. High-end smartphones now include chips optimized for running deep-learning algorithms for image and voice recognition. More efficient chips would let these devices run more powerful AI code with better capabilities. Self-driving cars, too, need powerful AI chips, as most prototypes currently rely on a trunk-load of computers.
Rao says the MIT chips are promising, but many factors will determine whether a new hardware architecture succeeds. One of the most important factors, he says, is developing software that lets programmers run code on it. “Making something usable from a compiler standpoint is probably the single biggest obstacle to adoption,” he says.
Sze’s lab is, in fact, also exploring ways of designing software so that it better exploits the properties of existing computer chips. And this work extends beyond just deep learning.
Together with Sertac Karaman, from MIT’s Department of Aeronautics and Astronautics, Sze developed a low-power chip called Navion that performs 3-D mapping and navigation incredibly efficiently, for use on a tiny drone. Crucial to this effort was crafting the chip to exploit the behavior of navigation-focused algorithms—and designing the algorithm to make the most of a custom chip. Together with the work on deep learning, Navion reflects the way AI software and hardware are now starting to evolve in symbiosis.
Sze’s chips might not be as attention-grabbing as a flapping drone, but the fact that they were showcased at MARS offers some sense of how important her technology—and innovation in silicon more generally—will be for the future of AI. After her presentation, Sze says, some of the other MARS speakers expressed an interest in finding out more. “People found a lot of important use cases,” she says.
In other words, expect the eye-catching robots and drones at the next MARS conference to come with something rather special hidden inside.
TOKYO (Reuters) – Japanese Prime Minister Shinzo Abe has said he is ready to meet North Korean leader Kim Jong Un without conditions to end long-running mistrust between their countries, the Sankei newspaper reported on Friday.
Abe’s remarks come days after he met U.S. President Donald Trump in Washington and thanked Trump for raising with Kim, at a February summit, the topic of Japanese people abducted by North Korea.
Resolving the issue of Japanese people abducted by North Korean agents decades ago to train the North’s spies has for years been a Japanese condition for improving diplomatic and economic ties with North Korea.
Japan, like the United States, is also seeking an end to North Korea’s nuclear and missile programs.
Abe signaled a shift in Japan’s position in an interview with the newspaper on Wednesday, saying the only way to “break the current mutual distrust” was for him to hold unconditional talks with Kim.
“That’s why I would like to meet him without setting preconditions and hold frank discussions. I hope he’s a leader who can determine flexibly and strategically what is best for his country,” Abe was quoted as saying.
In 2002, North Korea said that it had kidnapped 13 Japanese in the 1970s and 1980s.
Japan believes 17 of its citizens were abducted, five of whom were repatriated. Eight were said by North Korea to have died, while four were said to have never entered the country.
Abe’s shift on North Korea comes after more than a year of efforts by Pyongyang to improve its foreign relations.
Kim has met Trump twice since June last year and has held three summits with South Korean President Moon Jae-in.
Kim also met Russian President Vladimir Putin last month.
That leaves Japan as the only regional power involved in the North Korea nuclear crisis yet to have a summit with the North’s leader.
The last meeting between the leaders of Japan and North Korea was in 2004, when the Japanese prime minister, Junichiro Koizumi, met Kim’s late father, Kim Jong-il.
They pledged to work together to resolve the abductee issue.
Reporting by Leika Kihara in Tokyo and Jack Kim in Seoul; Editing by Robert Birsel
NATIONAL REPORT—Food waste is enormously costly for hotel owners and managers across the U.S. Financially, the average kitchen spends 5-15% of its food costs on food that is never eaten. Operationally, kitchen staff lose countless hours preparing food that goes uneaten. The environmental toll is also huge: if food waste were a country, it would contribute more to global warming than any nation except China and the U.S.
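As a back-of-the-envelope illustration of what that 5-15% range means in dollars, the snippet below applies it to a hypothetical annual food spend; the spend figure is invented for the example.

# Illustrative arithmetic only: apply the 5-15% waste range to a made-up food budget.
annual_food_spend = 500_000          # hypothetical annual food purchasing, in dollars
low, high = 0.05, 0.15               # share of food cost wasted, per the figures above
print(f"Estimated cost of wasted food: ${annual_food_spend * low:,.0f} "
      f"to ${annual_food_spend * high:,.0f} per year")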
The technology behind Winnow Vision has been tested in the field alongside Middle East hotel chain Emaar Hospitality Group and retail giant IKEA, and it is now ready to scale to hospitality companies across the Americas.
How can AI reduce food waste?
Using a form of AI called computer vision, Winnow’s new tool – Winnow Vision – automatically tracks wasted food via an intelligent camera that sits above the bin. As food is discarded over the course of the day, the data is collected in the cloud and shared with the kitchen team to help them cut food costs by an average of 3-8%.
Previously, simply collecting the data to understand what was wasted in the kitchen was a time-consuming and inaccurate process. Now, with the application of AI to automate data capture, Winnow Vision has surpassed human levels of accuracy in identifying food waste.
This major milestone ensures that kitchen teams receive pinpoint data to reduce waste without manually entering the data. The captured image data provides an extra layer of validation, giving confidence to the chef and hotel management team that the data is accurate.
For hotel chains with multiple properties across the country, reports on waste volume and value can highlight the top performing locations, and the locations which need further support to increase profitability.
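A report of that kind is essentially an aggregation over waste records. The sketch below shows the general shape of such a report using pandas; the site names, columns, and figures are invented for illustration and do not reflect Winnow's actual data model.

# A toy multi-property waste report built from hypothetical records.
import pandas as pd

records = pd.DataFrame({
    "site":  ["Downtown", "Airport", "Resort", "Downtown", "Airport", "Resort"],
    "item":  ["bread", "rice", "salmon", "salad", "bread", "rice"],
    "kg":    [12.0, 8.5, 3.2, 6.0, 9.5, 11.0],
    "value": [18.0, 10.2, 41.6, 9.0, 14.3, 13.2],   # dollars
})

report = (records.groupby("site")
                 .agg(total_kg=("kg", "sum"), total_value=("value", "sum"))
                 .sort_values("total_value"))
print(report)   # lowest-waste sites at the top; the bottom rows need further support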
Emaar hotels cut food waste by 72%
Emaar Hospitality Group deployed Winnow Vision at 12 locations across the Middle East to cut their costs, but also because they recognize that food waste is an issue that customers care about.
As in the U.S., there is rising demand at the government level to reduce food waste. For the hospitality sector, alignment with customer and government pressure can be blended with pragmatic business sense.
“We know that food is a big expense, but nonetheless the savings we made exceeded our expectations,” says Olivier Harnisch, CEO of Emaar Hospitality Group. “Across our 12 properties we reduced food waste by 72% in a short period of time—around $350,000.”
Before launching Winnow Vision this year, Winnow had already been working with an international group of hotels and casinos, including some of the biggest names in the sector such as Accor, IHG and Hilton, saving clients $30 million annually. By 2025, the aim is to save clients $1 billion every year.
Hotel chains across the U.S. have the opportunity to be the first to roll out AI in their kitchens nationwide. The opportunity is both financial and competitive—hotel chains can save thousands of dollars each year through the use of AI, and attract new customers who share the belief that food is too valuable to waste.
If you want to learn how your hotel kitchen could run more profitably and sustainably by cutting food waste,
AI applications in healthcare are becoming more common for white-collar automation and diagnostics. Medical robotics, however, remains comparatively underdeveloped, likely because of regulations concerning automated surgery.
In this article, we cover how AI software is finding its way into medical robotics now, and how it might in the future with more investment and a greater density of AI talent at medical robotics companies. Specifically, we explore:
AI for Medical Robotics – What’s Possible, and What’s Being Used by healthcare clients right now. We found few if any case studies showing a health network or hospital’s success with AI-based medical robotics.
The State of AI at Medical Robotics Vendors, including the AI talent at medical robotics companies and a discussion of how to vet a vendor on whether or not its software is truly leveraging AI.
We begin our exploration of AI-based medical robots with an overview of how they’re being used now.
AI for Medical Robotics – What’s Possible and What’s Being Used
Theoretically, multiple approaches to developing AI software could work for automating medical robotics. For example, one could use machine vision to guide the robot to problem areas and make it aware of mistakes or patient bodily reactions.
Currently, the medical robotics sector does not have many visible use cases in terms of fully automated surgery or other medical procedures. This is because regulations dictate that a recognized professional administer these procedures. Issues such as liability are harder to resolve with AI because it is usually unclear exactly how an AI application came to its conclusion.
Most medical robots are used for precision operations during minimally invasive surgery. This use case nearly prohibits full automation with AI, as few would likely want to “let loose” AI software on the human body. Additionally, a machine learning model built to operate a medical robot with dozens of moving arms and tools would need to be extensively trained on labeled videos of surgeries, requiring thousands of digitally labeled surgical videos before implementation.
A healthcare company may take months to acquire enough data to properly train a machine learning model to perform robotic surgery well enough that it would not be considered a liability. Even if a company did collect all that data, regulations may still need to change before the software can be used to fully automate surgeries.
That said, there are still medical robots for automating other healthcare processes such as diagnostics. For example, Indian software company Sigtuple purportedly created an AI-based telepathology system that automates its smart microscopes to take pictures and send them to the cloud.
Sigtuple’s software is called Shonit, and it consists of smart microscopes, or microscopes fitted to a movable robotic base which are connected to a smartphone camera. The software runs from an app on the smartphone, which also connects it to the cloud. The microscope slides around on its robotic base, which allows the lens to hover over an area of a sample dish and take multiple pictures.
Those pictures are then saved to the smartphone and sent to the cloud to be labeled. The cloud satellite that receives these pictures uses machine vision to label them according to blood cell count and any anomalies within the blood. Then, the pictures are sent to a remote pathologist who can diagnose based on these pre-labeled high-resolution images. Healthcare company workers using the software would then only need to wait for the pathologist to send a response with their diagnosis.
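To give a rough sense of what labeling by blood cell count could involve, here is a toy machine-vision sketch in Python with OpenCV. It counts dark, cell-like blobs in a synthetic image; it illustrates the general technique only and is not Sigtuple's actual pipeline.

# Toy illustration of counting cell-like blobs with machine vision.
import cv2
import numpy as np

# Draw a synthetic "smear": dark circles on a light background.
img = np.full((200, 200), 255, dtype=np.uint8)
for cx, cy in [(40, 50), (120, 80), (160, 150), (60, 160)]:
    cv2.circle(img, (cx, cy), 12, 60, thickness=-1)

# Threshold and count connected dark regions as "cells".
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)
num_labels, _ = cv2.connectedComponents(mask)
print(f"cells counted: {num_labels - 1}")   # minus 1 for the background label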
In a 3-minute video, Sigtuple explains how the Shonit software can scan blood smears, send them to the cloud for analysis, and then pass the results to a pathologist, who can diagnose any illnesses found.
The software likely uses machine vision to cover all of these processes. First, the smartphone camera with the Shonit app installed would likely use the presence of blood cells as a trigger to take a picture. Additionally, a higher concentration of cells or bodily structures at the edge of the lens may prompt the software to move the microscope farther in that direction. The cloud-based portion would of course also use machine vision to count blood cells and recognize anomalies or illnesses.
Although this may seem like a novel use of the different capabilities of machine vision, Sigtuple does not list any results showing success with their software. This is because Shonit is still undergoing a partner-exclusive beta launch. Additionally, Sigtuple does not seem to employ long-standing AI talent at their company.
This lack of evidence is present with most medical robotics companies that purport to use AI, as we will discuss in the next section of this article.
The State of AI at Medical Robotics Vendors
A Google search for the top medical robotics companies using AI and machine learning to automate their robotics solutions turns up various company names and articles describing each robot, including leading names in medical robotics such as Intuitive Surgical and Medrobotics Corporation.
These companies all offer surgical robots that facilitate delicate or minimally invasive surgeries. They are made to hold tissue in place and make incisions at the same time using multiple robotic arms. Machine vision software can also be used in robotic camera arms that provide a clearer view of body structures during surgery.
Most surgical robots provide helpful information and recommendations during surgery. This can range from monitoring heart rate and blood loss to recommending where to start cutting to remove a foreign body.
Intuitive’s robot also includes an arm with a camera attached to allow for a closer view of the operation. While Intuitive Surgical and Medrobotics claim to use AI, business leaders may not know the trust indicators that would show it.
These companies exhibit some issues regarding their purported use of AI. Medrobotics does not have a robust host of dedicated AI talent, and this is especially true for PhD level staff. Additionally, each company lacks documentation of a client’s success with any kind of software or robotics solution that shows how the AI software solved a business problem.
Though these companies do not provide evidence of a healthcare company’s success with their software, their advances in robotic cameras and PhD-level staff indicate that they may truly be using AI.
Intuitive has published a demonstration video showing its da Vinci robot stitching a grape back together.
Medrobotics does not show as many trust indicators as Intuitive Surgical. Some articles may inadvertently mislead a reader into assuming companies such as these use real AI because of carefully selected marketing language such as “augmented intelligence” or “advanced intelligence.”
The criteria we look for in AI software companies are listed below:
Talented AI staff with a significant academic background in machine learning, AI, or cognitive science. If there are too few PhDs working on AI at the company, this is a bad sign.
Case studies, customer stories, or detailed press releases that provide evidence of a client company’s success with the software. If the company cannot provide so much as a press release with one statistic about their client’s success, it is not likely the software uses AI or is developed enough to go above and beyond for the customer.
A value proposition for the software solution that clearly indicates system requirements and inputs, as well as what the system outputs or provides to the user. If one can identify these from a company website, one should also be able to determine whether the software is actually based on machine learning.
Information on a company’s staff and possible AI talent within that staff can be found on LinkedIn. Talented AI staff will likely have “data scientist,” “AI,” or “machine learning” in their title and hold a PhD in machine learning, cognitive science, or another statistical field. Good signs for AI talent include an AI-specific C-level role in the company and multiple PhD holders across levels of seniority within the AI staff.
In order to find case studies that show a client’s success with the software, one may need to search through a company’s website for extra resources or videos. Some companies do not have any case studies, but still list multiple press releases about their clients’ experiences. Press releases are acceptable in cases where they provide detailed accounts of a client’s use of the software and at least one or two statistics that illustrate success with it.
If a company cannot provide any evidence for the legitimacy of their software, it may be best to direct attention elsewhere even if they have considerable AI talent.
A healthcare robotics company could have multiple reasons for claiming to use AI before they have actually implemented it for any of their solutions. One is that it could help the company find new clients that are eager to implement AI at their companies.
Another reason may be that stretching the truth in this way leads to good press for the company, and this good press and new clientele could lead to acquiring more AI staff who can help build the company’s AI applications to better reflect public perception of the company. Once a company like this has a dedicated AI staff, it is only a matter of time before they begin to test machine learning models for automating medical robots.
A company’s value proposition for its software can also illuminate how it is made and what it is used for. We focus on what the system requires to run properly and what the software does with those resources to determine whether it is likely to be AI. Machine learning-based software requires large amounts of training data, which is then used to determine when and how to take the next step in a procedure. If a company never states anything about needing to train the software on a corpus of related data, it is less likely that the software is genuinely based on machine learning.
AI developers face challenges in terms of the legality and logistics of installing a robotic surgical assistant. As previously stated, a big challenge for the medical robotics field is the concern surrounding fully automated surgical procedures and the resulting healthcare regulations that may prohibit it.
This challenge will likely be overcome with time as the technology becomes more reliable and the public is more comfortable with allowing a robot to operate on them without human assistance.
Additionally, data scientists and machine learning experts may still be developing the method for training a machine learning model to learn surgical procedures. This could be possible with the right surgical footage labeled according to all present body structures and accurate movement or pulsation of those structures. This would also include visible mechanical structures such as the robot’s arms or a surgical implant.
Labeling a surgical video with that amount of information is surely a challenge, and finding a way to do so efficiently and within a reasonable time frame will likely be how these companies overcome it. Healthcare companies trying to make this a reality may benefit from a series of reviews and approvals by the experienced AI staff at their business.
We spoke to Yufeng Deng, Chief Scientist of Infervision, a machine vision company for medical diagnostics, about how data can be most efficiently collected and used for data science purposes in healthcare. In our interview, Deng spoke about the possibilities of machine vision technology in healthcare and, more specifically, chest diagnostics.
When asked about how his business goes about gathering data from his relevant staff who do not have time to spend on preparing and labeling data, Deng said:
The quality control [of data] is, I will say the most important piece for a good AI model. So we have this four-step quality control process, where each image is at least labeled by two radiologists respectively and independently, and the third step is we’ll have a more experienced radiologist to look at the previous two annotations, two labelings, and make a final decision if these two labelings don’t agree with each other. On top of that, we have a judge who is usually a more experienced person, and on top of that, we have a fourth step, which is a random check process on every day.
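Schematically, that process amounts to independent double-labeling, senior adjudication of any disagreement, and a daily random audit. The short sketch below captures that logic; the function names and example labels are hypothetical and are not Infervision's actual tooling.

# Schematic sketch of the multi-step label quality control Deng describes.
import random

def resolve_label(label_a, label_b, senior_review):
    """Two independent labels; a senior radiologist adjudicates any disagreement."""
    if label_a == label_b:
        return label_a
    return senior_review()          # step 3: experienced reviewer makes the call

def daily_random_check(final_labels, sample_size=2):
    """Step 4: spot-check a random sample of the day's accepted labels."""
    return random.sample(final_labels, min(sample_size, len(final_labels)))

# Example: one image where the first two annotators disagree.
final = resolve_label("nodule", "normal", senior_review=lambda: "nodule")
print(final)                         # "nodule", decided by the senior reviewer
print(daily_random_check([final, "normal", "effusion"]))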
Small errors in creating automated tools such as this could endanger a patient’s life, and no healthcare company would want to risk that from a software vendor. So it follows that working software offerings would be few until these challenges are addressed.
In general, AI advances are good for our society. In particular cases, they can be bad. Take Amazon’s Rekognition AI service: there is evidence that the service exhibited much higher error rates on images of darker-skinned women than on lighter-skinned men. Our AIWS Weekly Newsletter last month discussed the controversies over Rekognition AI, Amazon’s facial recognition software. Notably, in March, a group of 26 prominent research scientists, including Dr. Yoshua Bengio, this year’s winner of the Turing Award (the equivalent of the Nobel Prize in computing), called for the company to stop selling Rekognition AI to police departments.
The Washington Post just published a detailed article on this matter. It started in late 2017, when the Washington County Sheriff’s Office in Oregon became the first law enforcement agency to test Rekognition. Almost overnight, deputies saw their investigative powers supercharged. But what if Rekognition gets it wrong? Earlier, after inquiries from the Post, Amazon updated its guidelines for law enforcement to advise officers to manually review all matches before detaining a suspect.
According to the Post, Amazon executives say they support national facial-recognition legislation but argue that “new technology should not be banned or condemned because of its potential misuse.” On the other side, “people love to always say, ‘Hey, if it’s catching bad people, great, who cares,’” said Joshua Crowther, a chief deputy defender in Oregon, “until they’re on the other end.”
The question of whether we should use Rekognition for its value, knowing it is not perfect, is ultimately a moral one. This example of an AI service with potential bias highlights the importance of an ethical framework in the development and use of AI. That was exactly the topic of a roundtable hosted by the Artificial Intelligence World Society (AIWS) in Tokyo in March 2019. We believe that regulation at the government level is needed to prevent the broad release of AI software that may be biased against any population.
Theme: AI World Society Standards
Time: 4:30pm – 6:30pm, March 26, 2019
Venue: Hitachi Central Laboratory in Kokubunji, Tokyo
AI World Society Distinguished Lecturer: Dr. Kazuo Yano, Ph.D., Fellow, Corporate Officer, Hitachi, Ltd., Member of AI World Society Standards and Practice Committee
Agenda:
4:30 pm: Introduction, Ms. Nobue Mita, Representative of the Boston Global Forum in Japan; Opening Remarks, Mr. Nguyen Anh Tuan, Co-Founder and CEO of the Boston Global Forum, Director of the Michael Dukakis Institute for Leadership and Innovation
4:40 pm: AI World Society Standards, Dr. Kazuo Yano, Ph.D., Fellow, Corporate Officer, Hitachi, Ltd., Member of AI World Society Standards and Practice Committee
5:40 pm: Discussion: Dr. Kazuo Yano, Ph.D., Fellow, Corporate Officer, Hitachi; Mr. Yuichi Iwata, Senior Researcher, Nakasone Peace Institute; Mr. Kei Yamamoto, President of D-Ocean; and Mr. Yuji Ukai, President of FFRI
6:25 pm: Presentation of the Certificate of AI World Society Distinguished Lecturer to Dr. Kazuo Yano by Mr. Nguyen Anh Tuan
On January 14th-19th, 2019, the first regional leadership seminar in the framework of the flagship “Young Mediterranean Voices” programme will be launched, with the participation of the programme’s top national leaders. The aim is to give attendees an engaging way to take part in discussions of the common social challenges encountered in the Mediterranean.
The seminar will give participants a chance to act as leaders or ambassadors and then to present their approaches to advocacy and communication. In addition, the programme’s trainees will be given direct exposure to world leaders and policymakers who are members of the Club de Madrid.