AI (Artificial Intelligence) is everywhere, and none of it has been particularly good for you or me. From telling you to put glue on your pizza, to claiming motorcycle gear doesn't make you safer, to offering up election misinformation, to churning out Nazi content, and more, AI has shown that the technology just isn't ready for the general public.

But that obviously hasn't stopped every company known to man from employing AI in some capacity. It's the hip, new buzzword! You can't release anything these days without it. And now, AI has come to policing. Uh oh.

Body cameras were introduced to be a definitive, unbiased record of what occurs during a stop, search, or arrest. They were meant to do away with the lone police report built from an officer's memory, something that's malleable and prone to bias. But that hasn't been the case, as police departments can manipulate footage, and officers can simply turn off their cameras.

So, obviously, we're going back to police reports. But this time, with AI! 

According to Futurism, Axon, the police contractor that makes the Taser, has debuted its latest technology: "Draft One," an AI-powered police report generator that works off body camera audio.

Running off of OpenAI's GPT-4 large language model (LLM), which isn't really AI so much as a fancier version of your phone's predictive text, Draft One takes body camera audio and puts together the police report that Axon says most police call "burdensome." You know, the paperwork that could put you or me away, stating what we did or did not do. That paperwork.

I mean, what could go wrong?


Imagine you're stopped while riding your bike or hitting the waves on a new jet ski, and the officer is wearing a camera. Maybe they stop you for speeding and you get a ticket. But then, a few weeks later, you're in court for slapping the officer! That might sound outlandish, but not only does AI still get basic facts wrong, it also has a propensity to hallucinate. Yep, it sees things that totally aren't there. But don't worry, Axon and OpenAI are totally on it.

Speaking with Forbes on the subject, Axon's principal AI product manager Noah Spitzer-Williams said, "The simplest way to think about it is that we have turned off the creativity. That dramatically reduces the number of hallucinations and mistakes... Everything that it's produced is just based on that transcript and that transcript alone."

I'm sorry, but "reduces"? We're talking about police reports that can see fines levied on people, incidents that cost them their income, or land them in jail. We're talking about details that jeopardize people's lives here, not spelling errors.

These will be official records. 

Axon also states that officers will be required to review and sign off on the reports, thereby reducing any such falsehoods or issues with the AI-generated report. But again, Axon claims most police call this work "burdensome." If they already don't like doing the paperwork, what's to say they'll be OK with looking it over after the fact? 

Draft One is also limited to "minor incidents and charges" according to the company. But we all know how this goes, right? It's only a matter of time before both police departments and Axon believe Draft One is good enough for everything. And the stakes grow exponentially once you graduate from petty crime and traffic infractions to assault and more. 

I'll be honest, I've been pulled over a few times in my driving and riding career. And I've had different interactions with law enforcement. And that's not to mention what friends and colleagues have gone through with local PDs. Despite that, I'd personally rather not have an unproven technology be the one writing something up after I get pulled over for speeding or whatever.

Especially one like ChatGPT. 
