Police Departments Explore AI for Incident Reporting
Several police departments across the United States are exploring the use of artificial intelligence (AI) chatbots to generate initial drafts of their incident reports. One such tool, powered by a generative AI model similar to ChatGPT, uses audio captured by police body cameras to produce a detailed report in as little as eight seconds. Matt Gilmore, a police sergeant with the Oklahoma City Police, praised the results: “It was a better report than I could have ever written, and it was 100% accurate. It flowed better.” The tool could become part of an expanding suite of AI technologies that law enforcement agencies already employ, including algorithms that read license plates, recognize suspects’ faces, and detect gunfire.
Guidelines for AI-Generated Reports Still Lacking
Rick Smith, the CEO and founder of Axon, the company behind the AI product known as Draft One, expressed optimism about the potential of AI to reduce the paperwork burden on police officers, allowing them to focus more on community engagement and crime prevention. However, he also acknowledged significant concerns regarding the technology. Many of these concerns arise from district attorneys who emphasize the importance of officers being fully aware of the contents of their reports, especially if they are called to testify in court about their observations at a crime scene. “They never want to get an officer on the stand who says, well, ‘The AI wrote that, I didn’t,’” Smith added.
The deployment of AI-generated police reports is so recent that there are few, if any, established protocols guiding their use. In Oklahoma City, local prosecutors were shown the tool and advised caution before allowing its application in serious criminal cases. In contrast, some cities in the U.S. permit officers to utilize this technology at their discretion, regardless of the case’s severity.
Addressing Concerns Over AI Bias
Legal expert Andrew Ferguson advocates for broader public discussion of the advantages and risks of this technology before it becomes widespread. One significant concern is that the large language models underpinning AI chatbots are prone to generating false information, a phenomenon known as hallucination, which could introduce misleading and difficult-to-detect inaccuracies into police reports.
“I am concerned that the convenience of this technology might lead police officers to be less meticulous in their report writing,” Ferguson stated. As a law professor at American University, he is currently working on what is anticipated to be one of the first law review articles examining this emerging technology. He emphasized that police reports play a crucial role in determining whether an officer’s suspicion justifies an individual’s loss of liberty, noting that these reports can sometimes represent the only evidence a judge considers, particularly in misdemeanor cases.
Aurelius Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City, expressed his apprehensions regarding the implications of such technology on marginalized communities. He remarked, “While this may streamline the officers’ work, it complicates the lives of Black and brown people.” Francisco highlighted that while human-generated police reports are not without flaws, the introduction of AI could exacerbate existing societal biases and prejudices, potentially leading to increased harassment and surveillance of community members.
In summary, while the integration of AI into police reporting holds promise for efficiency, it raises significant ethical and practical concerns that warrant careful consideration and discussion.