AI-Generated Police Reports:
Are Hallucinating Bots Writing the Law?
Rising Concerns Over AI in Law Enforcement
“The open question is how reliance on AI-generative suspicion will distort the foundation of a legal system dependent on the humble police report.”
Experts are raising alarms about the increasing use of AI-written police reports in the United States. Police technology company Axon unveiled its AI tool “Draft One” in April 2024. Built on OpenAI’s GPT-4 large language model, the software drafts police reports from the audio captured by officers’ body-worn cameras. Axon promotes it as a productivity booster that reduces the time officers spend on paperwork.
Promise of Efficiency
“If an officer spends half their day reporting, and we can cut that in half,” Axon CEO Rick Smith told Forbes, “we have an opportunity to potentially free up 25 percent of an officer’s time to be back out policing.”
While the idea of saving time sounds appealing, the implications are more serious. Police reports are foundational legal documents: they shape charging decisions, plea bargains, and trials. Any error, whether an honest mistake or an outright fabrication, can have serious consequences. Generative AI is known for producing “hallucinations,” confidently stated falsehoods, which raises the risk of misinformation entering critical legal records.
Pilot Programs and Expanding Use
Despite these concerns, several U.S. police departments in states including Colorado, Indiana, and Oklahoma are already testing the tool. Some even allow officers to use it for all types of cases, not just minor incidents. This raises obvious questions about the ethical and legal consequences of outsourcing such essential work to AI.
Andrew Ferguson, a law professor at American University and author of the first law review article on AI-generated police reports, told the Associated Press that officers may become less attentive to detail if they rely on automation.
Ethical and Legal Hurdles
Axon, however, defends its system. According to its AI product manager, Noah Spitzer-Williams, Draft One is built with safeguards designed to limit hallucinations. Because Axon works with the underlying GPT-4 model rather than the consumer version of ChatGPT, it has access to more configuration controls, including the ability to turn down GPT-4’s “creativity dial.”
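Axon has not published Draft One’s configuration, but the “creativity dial” Spitzer-Williams describes most likely corresponds to the temperature sampling parameter exposed by GPT-4-style APIs: at temperature 0 the model picks its most probable words, making output near-deterministic and less prone to embellishment. A minimal sketch of the idea, using the standard OpenAI chat-completions payload shape; the function name and prompt wording are illustrative assumptions, not Axon’s actual code:

```python
# Hypothetical sketch only: Draft One's real setup is not public.
# The "creativity dial" maps to the `temperature` sampling parameter
# in GPT-4-style chat-completions requests; 0 means near-deterministic output.

def build_report_request(transcript: str) -> dict:
    """Build a chat-completions payload with the creativity dial turned off."""
    return {
        "model": "gpt-4",
        "temperature": 0,  # dial fully down: stick to the most probable wording
        "messages": [
            {
                "role": "system",
                "content": (
                    "Draft a factual incident report strictly from the "
                    "transcript below. Do not add details that are not "
                    "present in the transcript."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    }

payload = build_report_request("Body-camera audio transcript goes here.")
print(payload["temperature"])
```

Even with temperature at 0, a language model can still state falsehoods confidently; the setting narrows variability, it does not guarantee accuracy.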
Still, legal and ethical concerns remain. AI systems are fallible, and they inherit human biases from the data they are trained on. When such tools begin to stand in for human judgment in policing, we risk losing the nuanced, empathetic approach that community-based law enforcement requires.
The Human Cost of Automation
The consequences could be severe. Many lives have already been negatively impacted by premature reliance on flawed AI in policing. As Ferguson warns, “reliance on AI-generative suspicion will distort the foundation of a legal system based on the humble police report.”
Public discourse is essential. Before fully integrating AI into police reporting, we must evaluate what we gain and what we risk losing.