When considering artificial intelligence and police, most people may think of RoboCop. AI may be coming to a police department near you, but it will likely be a bit more subtle. Companies are developing facial recognition software for police body cameras that could allow officers to identify suspects in real time. Police would provide the AI with the suspect's image, obtained from a mugshot, surveillance footage, or witness description. If the AI-equipped body camera spots someone with matching features, it would then alert officers to the suspect's presence.
Similar technologies are already used by a number of U.S. law enforcement agencies, including the New York Police Department. Some departments are also using the technology for a broader application: predictive policing. By analyzing criminal records, geographic data, and other factors such as weather and time of day, some companies are touting software that could (in theory) predict crime before it happens. This, too, has drawn fierce criticism from civil rights activists, who are concerned that the software is inaccurate or even racially biased.
This technology is already widely adopted in China, whose largest AI company, SenseTime, has developed software that allows authorities to recognize citizens and track their movements in real time. This has drawn criticism from human rights activists who believe the government is using the technology to target its critics. Meanwhile, here in the United States, privacy advocates are raising their own concerns. While this technology could be useful in fighting (or preventing) crime, it is essential that it be developed and used ethically, to prevent its misuse.