Last week, a judge handed down a 223-page opinion lambasting the Department of Homeland Security for how it has carried out raids targeting undocumented immigrants in Chicago. Buried in a footnote were two sentences revealing that at least one member of law enforcement used ChatGPT to write a report meant to document how the officer used force against an individual.
The ruling, written by US District Judge Sara Ellis, took issue with how members of Immigration and Customs Enforcement and other agencies comported themselves while carrying out the so-called “Operation Midway Blitz,” an operation that saw more than 3,300 people arrested and more than 600 held in ICE custody, and that included repeated violent confrontations with protesters and citizens. The agencies were supposed to document those incidents in use-of-force reports, but Judge Ellis noted frequent inconsistencies between what appeared in footage from the officers’ body-worn cameras and what ended up in the written record, leading her to deem the reports unreliable.
More than that, though, she said at least one report was not even written by an officer. Instead, per her footnote, body camera footage revealed that an agent “asked ChatGPT to compile a narrative for a report based off of a brief sentence about an encounter and several images.” The officer reportedly submitted ChatGPT’s output as the report, even though the chatbot was given extremely limited information and likely filled in the rest with assumptions.
“To the extent that agents use ChatGPT to create their use of force reports, this further undermines their credibility and may explain the inaccuracy of these reports when viewed in light of the [body-worn camera] footage,” Ellis wrote in the footnote.
Per the Associated Press, it is unknown whether the Department of Homeland Security has a clear policy on using generative AI tools to create reports. One would assume that, at the very least, it is far from best practice, considering that generative AI is known to fill gaps with plausible-sounding but entirely fabricated details when it is given too little information to work from.
DHS does maintain a dedicated page about its use of AI at the agency, and it has deployed its own chatbot, built after test runs with commercially available chatbots including ChatGPT, to help agents complete “day-to-day activities.” The footnote doesn’t indicate that the officer used the agency’s internal tool, however; it suggests the person filling out the report went directly to ChatGPT and uploaded the information there.
No wonder one expert told the Associated Press this is the “worst case scenario” for AI use by law enforcement.
Source: Gizmodo