Researchers from Columbia University have created a new program called DeepXplore, which reverse engineers how AI systems learn in order to find bugs. One application is self-driving cars, which depend on visual data fed to neural networks to “learn.” Columbia News cited the example of a self-driving car that collided with a truck after mistaking it for a cloud, killing the passenger. The researchers tested DeepXplore on 15 state-of-the-art deep learning systems and found thousands of bugs that, once corrected, substantially improved accuracy. With this technology, researchers can not only “retrain” AI systems to recognize and correct the bugs affecting them, but also identify malware hiding from anti-virus software, and more.
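At its core, DeepXplore relies on differential testing: run the same input through several independently trained models and flag any input on which they disagree, since a disagreement means at least one model is wrong. The sketch below is purely illustrative, using toy hand-written classifiers rather than real neural networks, and the input features and labels are invented for the example.

```python
# Illustrative differential testing: feed the same inputs to several
# "models" and collect the inputs where they disagree. The classifiers
# here are toy stand-ins, not DeepXplore's actual networks.

def model_a(x):
    # Toy classifier: calls an input "truck" only if it is large AND dark.
    return "truck" if x["size"] > 5 and x["brightness"] < 0.5 else "sky"

def model_b(x):
    # Second toy classifier with a slightly different decision rule.
    return "truck" if x["size"] > 5 else "sky"

def differential_test(inputs, models):
    """Return (input, labels) pairs on which the models disagree --
    each one exposes a potential bug in at least one model."""
    disagreements = []
    for x in inputs:
        labels = {m(x) for m in models}
        if len(labels) > 1:
            disagreements.append((x, sorted(labels)))
    return disagreements

inputs = [
    {"size": 8, "brightness": 0.2},  # dark truck: both models agree
    {"size": 8, "brightness": 0.9},  # bright truck: the models disagree
    {"size": 2, "brightness": 0.9},  # bright sky: both models agree
]
bugs = differential_test(inputs, [model_a, model_b])
```

Here the bright truck is the interesting case: one toy model mislabels it as sky, echoing the truck-versus-cloud failure above. DeepXplore additionally searches for such disagreement-inducing inputs automatically, guided by a coverage metric over the networks' neurons.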
Understanding why AI makes the decisions it does will go a long way toward making people comfortable with self-driving cars and other safety-critical systems (aka, machines that can kill you). AI innovation is imperative, but steps must be taken to ensure that it is responsible innovation. This was the topic of our talk at the AIWS Roundtable with Max Tegmark, author of Life 3.0. AI will change our society rapidly, and both technologists and policymakers must ensure that it changes society for the better.