Last Sunday, a self-driving vehicle struck and killed a pedestrian in Tempe, Arizona – the first fatal crash involving a fully autonomous vehicle. The vehicle, an SUV, belonged to Uber, which was testing a fleet of self-driving vehicles in several cities across the U.S. and Canada. At the time of the accident, the car was in fully autonomous mode, meaning it was driving without any human intervention, although a test driver was behind the wheel as a safeguard. Uber has since suspended its self-driving car program and has stated that it is cooperating fully with the investigation.
Our hearts go out to the victim’s family. We’re fully cooperating with @TempePolice and local authorities as they investigate this incident.
— Uber Comms (@Uber_Comms) March 19, 2018
Among the agencies investigating is the National Highway Traffic Safety Administration (NHTSA), which has been in the process of revising its own rules and regulations about letting self-driving cars on the road. Several other companies, including Tesla and Google, are working on their own self-driving car initiatives, and some estimates project that they could be road-ready within the next few years.
AIWS mourns this tragedy and loss of life. We are working closely with policymakers, business leaders, and technologists on the safety and ethical issues surrounding artificial intelligence, including self-driving cars. It is crucial to engineer this new technology safely in order to prevent misuse or malfunction, and to ensure it benefits everyone without causing harm.