tech
February 3, 2026
The rise of Moltbook suggests viral AI prompts may be the next big security threat
We don’t need self-replicating AI models to have problems, just self-replicating prompts.

TL;DR
- The Morris worm in 1988 infected 10% of connected computers by exploiting Unix security flaws.
- A new threat, "prompt worms," could spread self-replicating instructions among networks of AI agents.
- OpenClaw, an open-source AI personal assistant application, has created a large ecosystem of AI agents.
- OpenClaw agents can communicate through major messaging platforms and simulated social networks like Moltbook.
- Security vulnerabilities in OpenClaw have been identified, including hidden prompt-injection attacks and data exfiltration.
- The platform's architecture, combined with unmoderated skill extensions, creates conditions for prompt worm outbreaks.
- While current AI models are not fully autonomous, the potential for rapid spread of instructions is a serious concern.
- API providers like OpenAI and Anthropic have the ability to intervene but face a closing window of opportunity.
- The development of locally runnable AI models could eliminate centralized control and increase risks.
- The situation is compared to the Morris worm, highlighting the need for proactive measures against future AI-related crises.