
January 28, 2026

Users flock to open source Moltbot for always-on AI, despite major risks

The open source “Jarvis” chats via WhatsApp but requires access to your files and accounts.

TL;DR

  • Moltbot, formerly Clawdbot, is a rapidly growing open-source AI assistant with over 69,000 GitHub stars.
  • It allows users to run a personal AI assistant controlled through messaging apps like WhatsApp, Telegram, and Slack.
  • The tool aims to proactively assist users with reminders, alerts, and daily briefings.
  • Running Moltbot requires subscription access to a model such as Anthropic's Claude Opus 4.5 or one of OpenAI's models, or the use of API keys.
  • Setting up Moltbot involves complex configuration (server setup, authentication, and sandboxing) because the tool demands access to so much of a user's digital life.
  • Heavy usage can lead to substantial API costs due to frequent background calls by the agentic system.
  • Significant security risks include requiring access to messaging accounts, API keys, and potentially shell commands, expanding the user's attack surface.
  • Moltbot is described as “Claude with hands” for its ability to connect an LLM with real-world actions like browser control and file management.
  • The bot is designed for persistent, local operation, retaining long-term memory and executing commands on the user's system, unlike web-based chatbots.
  • Memory is stored locally as Markdown files and an SQLite database, with auto-generated notes and vector search for context retrieval.
  • Moltbot runs 24/7, maintaining memory indefinitely and recalling past conversations weeks later, unlike session-based models.
  • The project was recently renamed from Clawdbot to Moltbot due to trademark concerns from Anthropic.
  • The rebranding transition led to bad actors hijacking old social media and GitHub handles, promoting crypto scams using the project's name.
  • Security researchers have identified vulnerabilities in misconfigured public deployments, exposing configuration data, API keys, and chat histories.
  • As an LLM with local machine access, Moltbot is susceptible to prompt injection attacks that could expose personal data.
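The memory design described above (local Markdown notes indexed in SQLite, retrieved by vector similarity) can be sketched in a few dozen lines. This is an illustrative toy, not Moltbot's actual schema or code: the table layout, file naming, and the bag-of-words "embedding" stand in for whatever the project really uses.

```python
# Hypothetical sketch of a local agent memory store: notes persisted as
# Markdown files, indexed in SQLite, and recalled by similarity search.
import math
import sqlite3
from collections import Counter
from pathlib import Path

def embed(text: str) -> Counter:
    # Toy bag-of-words "vector"; a real agent would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    def __init__(self, root: Path):
        self.root = root
        root.mkdir(parents=True, exist_ok=True)
        self.db = sqlite3.connect(root / "memory.db")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS notes "
            "(id INTEGER PRIMARY KEY, path TEXT, body TEXT)"
        )

    def add_note(self, name: str, body: str) -> None:
        # Persist the note as a Markdown file and index it for retrieval.
        path = self.root / f"{name}.md"
        path.write_text(body)
        self.db.execute(
            "INSERT INTO notes (path, body) VALUES (?, ?)", (str(path), body)
        )
        self.db.commit()

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Rank all stored notes by similarity to the query, best first.
        q = embed(query)
        rows = self.db.execute("SELECT body FROM notes").fetchall()
        ranked = sorted(
            ((body, cosine(q, embed(body))) for (body,) in rows),
            key=lambda t: t[1],
            reverse=True,
        )
        return [body for body, _ in ranked[:k]]
```

Because both the Markdown files and the database live on disk, memory survives restarts indefinitely, which is exactly why a weeks-old conversation remains recallable, and also why a compromised or prompt-injected agent has so much persistent personal data within reach.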

