Reducing Food Waste in Hotels with Artificial Intelligence

NATIONAL REPORT—Food waste is enormously costly for hotel owners and managers across the U.S. Financially, the average kitchen spends 5-15% of its food budget on food that is never eaten. Operationally, kitchen staff lose countless hours preparing food that goes uneaten. The environmental toll is also huge: if food waste were a country, it would rank third in the world for its impact on global warming, behind only China and the U.S.

What if artificial intelligence (AI) could help hotels to cut food waste in half? A breakthrough food waste solution has launched to tackle the problem, using AI to help hotels and other hospitality businesses to reduce food waste.

The technology has been field-tested alongside Middle East hotel chain Emaar Hospitality Group and retail giant IKEA, and is now ready to scale to hospitality companies across the Americas.

How can AI reduce food waste?

Using a form of AI called computer vision, Winnow’s new tool – Winnow Vision – automatically captures discarded food via an intelligent camera that sits above the bin. As food is thrown away over the course of the day, the data is collected in the cloud and shared with the kitchen team, helping them cut food costs by an average of 3-8%.

Previously, simply collecting the data to understand what is wasted in the kitchen was a time-consuming and inaccurate process. Now, by applying AI to automate data capture, Winnow Vision has surpassed human levels of accuracy in identifying food waste.

This major milestone ensures that kitchen teams receive pinpoint data to reduce waste without manually entering the data. The captured image data provides an extra layer of validation, giving confidence to the chef and hotel management team that the data is accurate.

For hotel chains with multiple properties across the country, reports on waste volume and value can highlight the top performing locations, and the locations which need further support to increase profitability.
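A multi-property report like the one described can be sketched in a few lines of Python. This is a hypothetical illustration of aggregating discard events into a per-location waste ranking, not Winnow's actual implementation; the event format and all names here are assumptions.

```python
from collections import defaultdict

def rank_locations_by_waste(events):
    """Aggregate discard events into total waste value per location,
    ranked from highest to lowest waste.

    Each event is a (location, food_item, cost_usd) tuple, as a
    camera-over-the-bin system might log to the cloud (assumed format)."""
    totals = defaultdict(float)
    for location, _item, cost in events:
        totals[location] += cost
    # Highest-waste locations first: these need the most support.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical events from three properties
events = [
    ("Downtown", "bread", 4.50),
    ("Airport", "salmon", 12.00),
    ("Downtown", "salad", 3.25),
    ("Resort", "steak", 18.75),
]
report = rank_locations_by_waste(events)
```

A chain's management could read such a ranking from the top (locations needing support) or the bottom (top performers), as the article suggests.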

Emaar hotels cut food waste by 72%

Emaar Hospitality Group deployed Winnow Vision at 12 locations across the Middle East to cut their costs, but also because they recognize that food waste is an issue that customers care about.

Like in the U.S., there is rising demand at the government level to reduce food waste. For the hospitality sector, an alignment with customer and government pressure can be blended with pragmatic business sense.

“We know that food is a big expense, but nonetheless the savings we made exceeded our expectations,” says Olivier Harnisch, CEO of Emaar Hospitality Group. “Across our 12 properties we reduced food waste by 72% in a short period of time, saving around $350,000.”

Before launching Winnow Vision this year, Winnow had already been working with an international group of hotels and casinos, including some of the biggest names in the hotel sector such as Accor, IHG and Hilton, saving clients $30 million annually. By 2025, the aim is to save clients $1 billion in food costs every year.

Hotel chains across the U.S. have the opportunity to be the first to roll out AI in their kitchens nationwide. The opportunity is both financial and competitive—hotel chains can save thousands of dollars each year through the use of AI, and attract new customers who share the belief that food is too valuable to waste.

If you want your hotel kitchen to run more profitably and sustainably, cutting food waste is the place to start.

Artificial Intelligence in Medical Robotics – Current Applications and Possibilities

AI applications in healthcare are becoming more common for white-collar automation and diagnostics. Medical robotics, however, remains a relatively underdeveloped area, likely because of regulations concerning automated surgery.

In this article, we cover how AI software is finding its way into medical robotics now, and how it might expand in the future as investment grows and the density of AI talent at medical robotics companies increases. Specifically, we explore:

  • AI for Medical Robotics – What’s Possible, and What’s Being Used by healthcare clients right now. We found few, if any, case studies showing a health network or hospital’s success with AI-based medical robotics.
  • The State of AI at Medical Robotics Vendors, including the AI talent at medical robotics companies and a discussion of how to vet a vendor on whether or not its software is truly leveraging AI.

We begin our exploration of AI-based medical robots with an overview of how they’re being used now.

AI for Medical Robotics – What’s Possible and What’s Being Used

Theoretically, multiple approaches to developing AI software could work for automating medical robotics. For example, one could use machine vision to guide the robot to problem areas and make it aware of mistakes or patient bodily reactions.

Currently, the medical robotics sector does not have many visible use cases in terms of fully automated surgery or other medical procedures. This is because regulations dictate that a recognized professional administer these procedures. Issues such as liability are harder to resolve with AI because it is usually unclear exactly how an AI application came to its conclusion.

Most medical robots are used for precision operations during minimally invasive surgery. This use case all but prohibits full automation with AI, as no one is likely to want to “let loose” AI software on the human body. Additionally, a machine learning model built to operate a medical robot with dozens of moving arms and tools would need to be extensively trained on labeled videos of surgeries, requiring thousands of digitally labeled surgical videos before implementation.

A healthcare company may take months to acquire enough data to properly train a machine learning model to perform robotic surgery well enough that it would not be considered a liability. Even if a company did collect all that data, regulations may still need to change before the software can be used to fully automate surgeries.

That said, there are still medical robots for automating other healthcare processes, such as diagnostics. For example, Indian software company Sigtuple purportedly created an AI-based telepathology system that automates its smart microscopes to take pictures and send them to the cloud.

Sigtuple’s software is called Shonit, and it consists of smart microscopes: microscopes fitted to a movable robotic base and connected to a smartphone camera. The software runs from an app on the smartphone, which also connects it to the cloud. The microscope slides around on its robotic base, allowing the lens to hover over an area of a sample dish and take multiple pictures.

Those pictures are then saved to the smartphone and sent to the cloud to be labeled. The cloud service that receives these pictures uses machine vision to label them according to blood cell count and any anomalies within the blood. The pictures are then sent to a remote pathologist, who can diagnose based on these pre-labeled, high-resolution images. Healthcare workers using the software then only need to wait for the pathologist to respond with a diagnosis.
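The capture, upload, auto-label, and review workflow described above can be sketched as a simple pipeline. Every function name and data shape here is a hypothetical illustration of the described steps, not SigTuple's actual API; the machine-vision labeling step is stubbed out.

```python
def capture_images(microscope_positions):
    """Stub for the smartphone camera: one image per slide position
    as the robotic base moves the lens over the sample."""
    return [f"image_{x}_{y}.jpg" for x, y in microscope_positions]

def label_in_cloud(image):
    """Stub for the cloud machine-vision step: attach a blood cell
    count and an anomaly list to each image (placeholder values)."""
    return {"image": image, "cell_count": 0, "anomalies": []}

def telepathology_pipeline(positions):
    """Capture -> upload -> auto-label -> queue for a remote pathologist."""
    labeled = [label_in_cloud(img) for img in capture_images(positions)]
    # The pathologist reviews the pre-labeled images and sends back a
    # diagnosis; the clinic staff only wait for that response.
    return {"queued_for_pathologist": labeled}

result = telepathology_pipeline([(0, 0), (0, 1)])
```

The point of the structure is that the only human in the loop is the remote pathologist at the end; everything before that is automated.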

The 3-minute video below explains how the Shonit software can scan blood smears, send them to the cloud for analysis, and then to a pathologist so they may diagnose any illnesses found:

The software likely uses machine vision to cover all of these processes. First, the smartphone camera with the Shonit app installed would use the presence of blood cells as the cue to take a picture. Additionally, a higher concentration of cells or bodily structures at the edge of the lens may prompt the software to move the microscope further in the direction of the sample. The cloud-based portion would, of course, also use machine vision to count blood cells and recognize anomalies or illnesses.

Although this may seem like a novel use of the different capabilities of machine vision, Sigtuple does not list any results showing success with its software. This is because Shonit is still undergoing a partner-exclusive beta launch. Additionally, Sigtuple does not seem to employ long-standing AI talent.

This lack of evidence is present with most medical robotics companies that purport to use AI, as we will discuss in the next section of this article.

The State of AI at Medical Robotics Vendors

A Google search for the top medical robotics companies that are using AI and ML to automate their robotics solutions provides various company names and articles describing each robot. One may see the leading names in medical robotics, such as Intuitive Surgical and Medrobotics Corporation.

These companies all offer surgical robots that facilitate delicate or minimally invasive surgeries. They are made to hold tissue in place while making incisions, using multiple robotic arms. Machine vision software can also be used in robotic camera arms that provide a clearer view of body structures during surgery.

Most surgical robots provide helpful information and recommendations during surgery. This can range from monitoring heart rate and blood loss to recommending where to start cutting to remove a foreign body.

Intuitive’s robot also includes an arm with a camera attached to allow for a closer view of the operation. While Intuitive Surgical and Medrobotics claim to use AI, business leaders may not know the trust indicators that would show it.

These companies exhibit some issues regarding their purported use of AI. Medrobotics does not have a robust roster of dedicated AI talent, especially at the PhD level. Additionally, each company lacks documentation of a client’s success with any software or robotics solution showing how the AI solved a business problem.

Though Intuitive Surgical does not provide evidence of a healthcare company’s success with its software, its advances in robotic cameras and its PhD-level staff indicate that it may truly be using AI.

Below is a demonstration video showing Intuitive’s Da Vinci robot stitching a grape back together:

Medrobotics does not show as many trust indicators as Intuitive Surgical. Some articles may inadvertently mislead readers into assuming companies such as these use real AI because of carefully selected marketing language such as “augmented intelligence” or “advanced intelligence.”

The criteria we look for in AI software companies are listed below:

  • Talented AI staff with a significant academic background in machine learning, AI, or cognitive science. If too few PhDs are working on AI at the company, this is a bad sign.
  • Case studies, customer stories, or detailed press releases that provide evidence of a client company’s success with the software. If the company cannot provide so much as a press release with one statistic about their client’s success, it is not likely the software uses AI or is developed enough to go above and beyond for the customer.
  • A value proposition for the software solution that clearly indicates system requirements and inputs, and what the system outputs or provides to the user. If one can identify these from a company website, one should also be able to determine whether the software is actually based on machine learning.

Information on a company’s staff, and possible AI talent within that staff, can be found on LinkedIn. Talented AI staff will likely have “data scientist,” “AI,” or “machine learning” in their titles and hold a PhD in machine learning, cognitive science, or another statistical field. Good signs for AI talent include an AI-specific C-level role at the company and multiple PhD holders across levels of seniority within the AI staff.

In order to find case studies that show a client’s success with the software, one may need to search through a company’s website for extra resources or videos. Some companies do not have any case studies, but still list multiple press releases about their clients’ experiences. Press releases are acceptable in cases where they provide detailed accounts of a client’s use of the software and at least one or two statistics that illustrate success with it.

If a company cannot provide any evidence for the legitimacy of their software, it may be best to direct attention elsewhere even if they have considerable AI talent.

A healthcare robotics company could have multiple reasons for claiming to use AI before actually implementing it in any of its solutions. One is that the claim could help attract new clients eager to implement AI at their own companies.

Another reason may be that stretching the truth in this way leads to good press for the company, and this good press and new clientele could lead to acquiring more AI staff who can help build the company’s AI applications to better reflect public perception of the company. Once a company like this has a dedicated AI staff, it is only a matter of time before they begin to test machine learning models for automating medical robots.

A company’s value proposition for its software can also illuminate how the software is made and what it is used for. We focus on the system requirements the software needs to run properly and what it does with those resources to determine whether it is likely to be AI. Machine learning-based software requires large amounts of training data, which is then used to determine when and how to take the next step in a procedure. If a company never mentions needing to train the software on a corpus of related data, it is likely that the software is not actually based on machine learning.

AI developers face challenges in terms of the legality and logistics of installing a robotic surgical assistant. As previously stated, a big challenge for the medical robotics field is the concern surrounding fully automated surgical procedures and the resulting healthcare regulations that may prohibit it.

This challenge will likely be overcome with time, as the technology becomes more reliable and the public becomes more comfortable with allowing a robot to operate on them without human assistance.

Additionally, data scientists and machine learning experts may still be developing the method for training a machine learning model to learn surgical procedures. This could be possible with the right surgical footage labeled according to all present body structures and accurate movement or pulsation of those structures. This would also include visible mechanical structures such as the robot’s arms or a surgical implant.

Labeling a surgical video with that amount of information is surely a challenge, and finding a way to do so efficiently and within a reasonable time frame will likely be how these companies overcome it. Healthcare companies trying to make this a reality may benefit from a series of reviews and approvals by the experienced AI staff at their business.

We spoke to Yufeng Deng, Chief Scientist of Infervision, a machine vision company for medical diagnostics, about how data can be most efficiently collected and used for data science purposes in healthcare. In our interview, Deng spoke about the possibilities of machine vision technology in healthcare, and more specifically in chest diagnostics.

When asked how his business gathers data from clinical staff who do not have time to spend preparing and labeling it, Deng said:

The quality control [of data] is, I will say the most important piece for a good AI model. So we have this four-step quality control process, where each image is at least labeled by two radiologists respectively and independently, and the third step is we’ll have a more experienced radiologist to look at the previous two annotations, two labelings, and make a final decision if these two labelings don’t agree with each other. On top of that, we have a judge who is usually a more experienced person, and on top of that, we have a fourth step, which is a random check process on every day.
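Deng's four-step process (two independent labels, adjudication on disagreement by a more experienced radiologist, a senior judge, and a daily random check) maps naturally onto a small consensus routine. The sketch below illustrates the workflow as described, under assumed data types; it is not Infervision's actual system.

```python
import random

def consensus_label(label_a, label_b, adjudicate):
    """Steps 1-3 of the described process: two radiologists label
    independently; on disagreement, a more experienced radiologist
    (the `adjudicate` callable) makes the final decision."""
    if label_a == label_b:
        return label_a
    return adjudicate(label_a, label_b)

def daily_random_check(records, fraction=0.05, seed=None):
    """Step 4: sample a fraction of the day's finalized records
    for re-review (the daily random check Deng mentions)."""
    rng = random.Random(seed)
    k = max(1, int(len(records) * fraction))
    return rng.sample(records, k)

# Hypothetical adjudicator that sides with the first reading
final = consensus_label("nodule", "normal", lambda a, b: a)
```

The key property of this design is that no single annotator's error reaches the training set unchecked, which matches Deng's point that data quality control is the most important piece of a good AI model.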

Small errors in creating automated tools such as this could endanger a patient’s life, and no healthcare company would want to risk that from a software vendor. So it follows that working software offerings would be few until these challenges are addressed.

 

Header Image Credit: LoHud

What if Amazon’s Rekognition AI Gets It Wrong?

In general, AI advances are good for our society. In particular cases, they can be bad. Take Amazon’s Rekognition AI service: there is evidence that the service exhibited much higher error rates on images of darker-skinned women than on lighter-skinned men. Our AIWS Weekly Newsletter last month discussed the controversies over Rekognition AI, Amazon’s facial recognition software. Notably, in March, a group of 26 prominent research scientists, including Dr. Yoshua Bengio, a winner of this year’s Turing Award (the Nobel Prize equivalent in the field of computing), called for the company to stop selling Rekognition AI to police departments.

The Washington Post just published a detailed report on this matter. It started in late 2017, when the Washington County Sheriff’s Office in Oregon became the first law enforcement agency to test Rekognition. Almost overnight, deputies saw their investigative powers supercharged. But what if Rekognition gets it wrong? Earlier, after inquiries from the Post, Amazon updated its guidelines for law enforcement, advising officers to manually review all matches before detaining a suspect.

According to the Post, Amazon executives say they support national facial-recognition legislation, while arguing that “new technology should not be banned or condemned because of its potential misuse.” On the other side, “people love to always say, ‘Hey, if it’s catching bad people, great, who cares,’” said Joshua Crowther, a chief deputy defender in Oregon, “until they’re on the other end.”

The question of whether we should use Rekognition for its value, knowing it is not perfect, has a moral dimension. This example of an AI service with potential bias highlights the importance of an ethical framework in the development and use of AI. This was exactly the topic of a roundtable hosted by the Artificial Intelligence World Society (AIWS) in Tokyo in March 2019. We believe that government-level regulation is needed to prevent the broad release of AI software that may be biased against any population.

Agenda of AI World Society – G7 Summit Conference

The forthcoming AI World Society – G7 Summit Initiative will focus on the AI-Government Model for democracy in the age of Artificial Intelligence. This is a new and evolutionary political development.

AI-Standards and Government Concepts

  • Time: 8:30 am – 12:00 pm, April 25, 2019
  • Venue: Loeb House, Harvard University, 17 Quincy Street, Cambridge, Massachusetts 02138

 

AGENDA of AIWS-G7 Summit Conference

  • Governor Michael Dukakis: Opening Remarks
  • Nam Pham, Assistant Secretary of Business Development and International Trade, Government of Massachusetts: Congratulatory Remarks
  • Arnaud Mentré, Consul General of France in Boston: The French Perspective on Artificial Intelligence and the G7 Summit
  • Professor Thomas Patterson: AI World Society – G7 Summit Initiative
  • Governor Michael Dukakis: Presents the AI World Society – G7 Summit Initiative to the Government of France
  • Vint Cerf, the Father of the Internet: Honored as World Leader in AI World Society (AIWS)
  • Vint Cerf: Artificial Intelligence and the Future of the Internet
  • Paul Nemitz: Legal Concepts for AI – Layer 4 of AI World Society
  • Conference Delegates: Open Discussion
  • Governor Michael Dukakis: Closing Remarks

Download Agenda AIWS-G7 Summit 2019

Agenda of AI World Society Distinguished Lecture

Agenda of AI World Society Distinguished Lecture

Theme: AI World Society Standards
Time: 4:30pm – 6:30pm, March 26, 2019
Venue: Hitachi Central Laboratory in Kokubunji, Tokyo

AI World Society Distinguished Lecturer:
Dr.Kazuo Yano, Ph.D., Fellow, Corporate Officer, Hitachi, Ltd., Member of AI World
Society Standards and Practice Committee

Agenda:

  • 4:30 pm: Introduction, Ms. Nobue Mita, Representative of the Boston Global Forum in Japan
  • Opening Remarks, Mr. Nguyen Anh Tuan, Co-Founder and CEO of the Boston Global Forum, Director of the Michael Dukakis Institute for Leadership and Innovation
  • 4:40 pm: AI World Society Standards, Dr. Kazuo Yano, Fellow, Corporate Officer, Hitachi, Ltd., Member of the AI World Society Standards and Practice Committee
  • 5:40 pm: Discussion: Dr. Kazuo Yano, Fellow, Corporate Officer, Hitachi, Ltd.; Mr. Yuichi Iwata, Senior Researcher, Nakasone Peace Institute; Mr. Kei Yamamoto, President of D-Ocean; and Mr. Yuji Ukai, President of FFRI
  • 6:25 pm: Presentation of the Certificate of AI World Society Distinguished Lecturer to Dr. Kazuo Yano by Mr. Nguyen Anh Tuan
  • 6:30 pm: Closing Remarks

For the full agenda, speakers, and discussants, please download the Agenda of the AIWS Distinguished Lecture.

The first regional leadership seminar in the framework of its flagship “Young Mediterranean Voices” programme

On January 14th-19th, 2019, the first regional leadership seminar in the framework of the flagship “Young Mediterranean Voices” programme will be launched, with the participation of the programme’s top national leaders. The aim is to give attendees an engaging way to take part in discussions of the common social challenges faced in the Mediterranean.

The seminar will give participants a chance to act as leaders or ambassadors and then present their approaches to advocacy and communication. In addition, the programme’s trainees will be given direct exposure to world leaders and policy makers who are members of the Club de Madrid.

Japanese Prime Minister called for enhancing Japan’s security capabilities in cyberspace and outer space

Prior to the government meeting, the Defense Ministry’s annual paper addressed the threat of North Korea’s nuclear and missile programs. In the information era, new technologies emerge every day. “It is impossible to protect our country from every threat if our thinking is limited to conventional categories of land, sea and air,” said Prime Minister Abe.

Abe intends to adjust the National Defense Program Guidelines, last approved in 2013, to provide a higher defense capability. In addition, he asked Defense Minister Itsunori Onodera to review the guidelines in light of North Korea’s nuclear weapons development and China’s growing military force.

Apparently, cybersecurity remains one of his major interests. In 2015, the Prime Minister was named the World Leader in Cybersecurity by BGF-MDI.

Tesla Founder sounds the alarm about AI being out of control

As AI grows at an unprecedented speed and becomes more and more influential in human life, many experts and leaders have expressed their points of view on its development. Elon Musk is one of those who fear for the future of humanity.

Regulating AI could take a long time, even as AI develops rapidly and is already starting to change the world; by the time AI is regulated, it might be too late. Musk’s view of AI is not all negative, however. In the interview, he also mentioned a company he co-founded to create a human-AI interface, in the hope of effectively merging with AI and making AI serve humans well.

“I tried to convince people to slow down […] to regulate the AI. This was futile. I tried for years. Nobody listened,” Musk told Alphr. He worries about a future in which AI becomes more dangerous.

At a time like this, AI development needs adjustment so that it does not get out of hand; rules and principles need to be followed. This is the focus of BGF-MDI’s development of the AIWS initiative: to come up with a set of moral values and norms for AI so it can become more transparent.