
Everything You Need to Know About Clawdbot (Now Moltbot)

Before You Install It

Hey friend,

I spent 3 days researching Clawdbot.

Read the docs. Studied the architecture. Dug through security reports. Analyzed what users are saying. Talked to people running it.

What I found will either make you install it tonight.

Or make you wait 6 months until the dust settles.

Either way, after reading this, you'll understand:

  • What it actually is (not the hype version)

  • How it works under the hood

  • Why 60,000+ developers are losing their minds over it

  • The security problems nobody's talking about

  • Whether you should actually use it

This is the guide I wish existed when I first heard about it.

Let's go.

What Clawdbot Actually Is

Clawdbot is an open-source AI assistant that runs on your own computer.

That's the simple version.

Here's the real version:

It's a 24/7 autonomous agent with full access to your system that lives inside your messaging apps.

You message it on WhatsApp. Or Telegram. Or Discord. Or Slack. Or Signal. Or iMessage.

It messages back.

But it doesn't just talk.

It DOES things.

Send emails. Manage your calendar. Check you in for flights. Run code. Control your browser. Move files around. Monitor things in the background. Message you FIRST when something matters.

This is not ChatGPT in a different wrapper.

This is something fundamentally different.

Why Everyone Lost Their Minds

The project launched in late 2025.

It hit 9,000 GitHub stars in 24 hours.

By January 2026, it crossed 60,000 stars.

That makes it one of the fastest-growing open-source projects in GitHub history.

Andrej Karpathy praised it.

David Sacks tweeted about it.

MacStories called it "the future of personal AI assistants."

Mac Mini sales surged. People were literally buying dedicated hardware just to run this thing.

Cloudflare's stock jumped 14% because investors connected the dots between Moltbot's viral spread and infrastructure needs.

But here's what's interesting.

The hype isn't about better AI responses.

It's about a completely different relationship with AI.

The Three Things That Make It Different

1. Persistent Memory

ChatGPT forgets everything the moment you close the tab.

Clawdbot remembers. Forever.

Tell it you like oat milk lattes on Tuesday mornings. It remembers on Friday. And next month. And six months from now.

It builds a permanent model of YOU.

Your preferences. Your patterns. Your decisions. Your projects. Your contacts.

Every conversation adds to its understanding.

This is not session-based memory.

This is cumulative memory that compounds over time.

2. Proactive Notifications

Every other AI waits for you to ask.

Clawdbot reaches out FIRST.

"Meeting in 20 minutes."

"Traffic's bad, leave early."

"You have 3 urgent emails."

"Weather's terrible tomorrow, reschedule your run?"

It's not waiting for prompts.

It's actively monitoring, checking, and alerting based on what it knows you care about.

3. Actual Computer Control

ChatGPT can write an email for you.

Clawdbot can SEND the email for you.

It doesn't just answer questions about files. It moves them.

It doesn't just suggest code. It runs it.

It doesn't just tell you about websites. It controls your browser.

One user rebuilt their entire website from bed. Just texted commands to Clawdbot. Never opened a laptop.

The Creator's Story

Peter Steinberger is an Austrian developer who founded PSPDFKit, a successful B2B software company. He sold it to Insight Partners.

Then he burned out.

"I felt empty and barely touched my computer for three years."

That's what he wrote on his blog.

Then AI happened.

He started playing with the idea of a "life assistant" in April 2024. By November, he realized big companies weren't building what he wanted.

So he built it himself.

From idea to prototype in one hour.

"I just played it into existence," he said.

It was never meant to be a business. Just a personal project to rediscover his creative spark.

Now it has 60,000+ stars and people are buying Mac Minis specifically to run it.

The Anthropic Drama

Originally, it was called "Clawdbot."

The name was a play on "Claude" — Anthropic's AI model that Steinberger recommends using with it.

The mascot was a space lobster named "Clawd."

On January 27, 2026, Anthropic sent a trademark request. The name was too similar to "Claude."

Steinberger complied within hours. Renamed it "Moltbot."

Why "Molt"?

Because lobsters molt their shells to grow.

"Same lobster soul, new shell."

But here's where it gets wild.

During the rename, Steinberger tried to change the GitHub and Twitter handles simultaneously.

In the 10-second gap between releasing the old name and claiming the new one...

Crypto scammers snatched both accounts.

The attackers had been monitoring. Waiting for exactly this opportunity.

The original @clawdbot Twitter and GitHub were hijacked to pump crypto scams to tens of thousands of followers who didn't know about the rebrand.

Steinberger had to warn everyone: "Any project that lists me as coin owner is a SCAM."

The legitimate handle is now @moltbot.

How It Actually Works (Technical)

Let me break down the architecture.

Clawdbot is not an AI model. It's an orchestration layer.

It connects Large Language Models (Claude, GPT, Gemini, local models) to your operating system.

Four core components:

1. The Gateway

A background service running 24/7 on your machine.

It manages connections to messaging platforms via WebSocket on port 18789.

Handles authentication. Orchestrates tools. Coordinates events.

When you close your laptop or disconnect SSH, the Gateway keeps running.

Your agent is always available.
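Because the Gateway listens on a fixed port, you can check whether it's alive with a plain TCP connect. A minimal sketch in Python, assuming the default port 18789 from the article; the actual WebSocket handshake and protocol are not shown:

```python
import socket

def gateway_port_open(host: str = "127.0.0.1", port: int = 18789,
                      timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on the Gateway port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    status = "up" if gateway_port_open() else "down"
    print(f"Gateway on 18789: {status}")
```

Handy as a cron or monitoring probe: if the port stops answering, your always-on agent isn't.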

2. The Agent (Pi Agent)

The reasoning engine.

It connects to your chosen LLM via API. Interprets your messages. Decides what actions to take. Coordinates tool execution.

3. Skills

Modular instruction files that teach Clawdbot how to use specific tools or APIs.

50+ bundled skills for browser automation, file system, calendar, email, smart home, etc.

Each skill is just a Markdown file (SKILL.md) explaining how to use a tool.

You can write your own. The community is building more.
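The article doesn't show a skill file, but since each one is just Markdown instructions, a hypothetical SKILL.md for a weather tool might look roughly like this. The structure, headings, and command are illustrative, not the real schema:

```markdown
# Skill: weather

Fetch the current forecast so the agent can answer weather
questions or send proactive alerts.

## When to use
- The user asks about weather, rain, or temperature.
- A scheduled morning briefing needs a forecast.

## How to use
Run: `curl -s "https://wttr.in/<city>?format=3"`
Return the one-line summary to the user.
```

The appeal of this design: no plugin SDK, no compilation. If you can write instructions a model can follow, you can write a skill.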

4. Memory

Persistent storage using local Markdown files.

This is the part that makes it feel like magic.

Let me explain how it actually works.

How Memory Works (The Deep Dive)

This is the part most people don't understand.

There's a crucial distinction:

Context = everything the model sees for a single request.

It's ephemeral. Bounded by token limits. Expensive.

Memory = what's stored on disk.

It's persistent. Unbounded. Cheap. Searchable.

The memory system has two layers:

Layer 1: Daily Logs

Files named like memory/2026-01-26.md

Append-only notes written throughout the day.

Timestamped entries for decisions, events, preferences.

Layer 2: Long-term Memory

A single file called MEMORY.md

Curated, persistent knowledge.

User preferences. Important decisions. Key contacts. Project details.

This is the "soul" of your assistant.

How It Searches Memory

When you ask something, two search strategies run in parallel:

Vector search (semantic) — finds content that MEANS the same thing.

BM25 search (keyword) — finds content with exact tokens.

They combine with weighted scoring:

finalScore = (0.7 × vectorScore) + (0.3 × textScore)

Semantic similarity is primary. But keyword matching catches exact terms vectors might miss — names, IDs, dates.
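The blend is easy to sketch. A minimal Python version of the weighted scoring above; the 0.7/0.3 weights come from the article, while the example scores are illustrative:

```python
def hybrid_score(vector_score: float, text_score: float,
                 w_vector: float = 0.7, w_text: float = 0.3) -> float:
    """Combine semantic and keyword relevance with weighted scoring."""
    return w_vector * vector_score + w_text * text_score

# A chunk that is semantically close but lacks the exact keyword...
semantic_hit = hybrid_score(vector_score=0.9, text_score=0.1)

# ...vs. one that matches an exact token (a name, an ID) but is
# semantically weaker.
keyword_hit = hybrid_score(vector_score=0.4, text_score=1.0)

print(round(semantic_hit, 2), round(keyword_hit, 2))  # 0.66 0.58
```

Note how the exact-match chunk stays competitive despite its weak vector score. That's the whole point of keeping BM25 in the mix.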

The indexing pipeline:

  1. File watcher detects changes

  2. Content splits into ~400-token chunks with 80-token overlap

  3. Each chunk gets embedded via OpenAI/Gemini/local model

  4. Stored in SQLite with vector search and full-text search

All running locally on your machine.
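Step 2 of that pipeline can be sketched in a few lines. Tokenization is simplified here to a pre-split list; a real indexer would count tokens with the embedding model's tokenizer. Chunk size and overlap match the article; everything else is illustrative:

```python
def chunk_tokens(tokens: list[str], chunk_size: int = 400,
                 overlap: int = 80) -> list[list[str]]:
    """Split a token list into fixed-size chunks with overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break
    return chunks

tokens = [f"tok{i}" for i in range(1000)]
chunks = chunk_tokens(tokens)
print(len(chunks), len(chunks[0]))  # 3 400
```

The 80-token overlap is the detail that matters: a sentence straddling a chunk boundary still lands intact in at least one chunk, so the embedding doesn't lose it.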

The Memory Problem Nobody Talks About

There's a known limitation.

The memory architecture relies on TOOLS.

The model has to "decide" to search memory. It calls memory_search as a tool.

But models aren't trained to use tools all the time.

Sometimes the agent doesn't know it needs to search until it's too late.

One user described it:

"It almost feels like it never wants to utilize its memory to answer questions."

This is a real issue.

Third-party solutions like Supermemory address it by automatically injecting relevant memories into every request — not relying on tool calls.

Worth knowing if memory recall feels inconsistent.
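The injection approach the article attributes to tools like Supermemory is a pattern you can sketch directly: retrieve scored memories and prepend the best ones to every request, so recall never depends on the model deciding to call a tool. Scores, thresholds, and prompt shape below are all illustrative assumptions:

```python
def build_prompt(user_message: str, memories: list[tuple[float, str]],
                 top_k: int = 3, min_score: float = 0.5) -> str:
    """Unconditionally inject the highest-scoring memories into the prompt,
    instead of waiting for the model to call a memory_search tool."""
    best = sorted(memories, reverse=True)[:top_k]
    lines = [text for score, text in best if score >= min_score]
    context = "\n".join(f"- {line}" for line in lines)
    return f"Relevant memories:\n{context}\n\nUser: {user_message}"

memories = [
    (0.91, "User prefers oat milk lattes on Tuesday mornings."),
    (0.42, "User once asked about Raspberry Pi pricing."),
    (0.77, "User's website project runs on a Mac Mini."),
]
print(build_prompt("What should I order?", memories))
```

The tradeoff: injected memories consume context tokens on every single request, relevant or not. Tool-based recall is cheaper; injection is more reliable.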

Setup Options

You have four ways to run this:

Option 1: Mac Mini (~$599 + API costs)

The most popular choice.

Native macOS support. Important for iMessage integration.

Apple Silicon handles AI tasks efficiently.

Runs 24/7 at low power. Can sit quietly on a shelf.

Best for: Power users who want the full experience, especially iMessage.

Option 2: VPS/Cloud Server (~$5-20/month + API costs)

DigitalOcean. Hetzner. Railway.

Always online. Accessible from anywhere.

No hardware to maintain.

But: No iMessage (requires macOS). Data is technically on someone else's server.

Best for: Linux-comfortable developers who don't need iMessage.

Option 3: Old Computer/Raspberry Pi (Free + API costs)

Recycle existing hardware.

Full local control. No hosting costs.

But: Performance depends on hardware. Pi can struggle with heavy workloads.

Best for: Tinkerers. Budget-conscious users.

Option 4: Docker

Cross-platform consistency. Easy to deploy/update.

Good isolation.

But: Some integrations harder to configure. Docker overhead.

The API Cost Reality

On top of hosting, you pay for the AI model.

Claude Pro: $20/month subscription

Anthropic API: Pay-as-you-go, typically $20-100/month based on usage

OpenAI/GPT: Works but less recommended for agentic tasks

Local models via Ollama: Free but requires beefy hardware

The creator strongly recommends Claude Opus 4.5.

"Better prompt-injection resistance and long-context strength."

Avoid cheaper models (Sonnet, Haiku) for agents with tool access. They're more susceptible to misuse.

Total monthly cost: $25-150 depending on setup and usage.

Installation (Quick Version)

Prerequisites:

  • Node.js 22+

  • macOS, Linux, or Windows via WSL2

  • API key for your LLM

  • Messaging account to connect

```bash
# Install
npm install -g moltbot@latest

# Run onboarding wizard
moltbot onboard --install-daemon

# For WhatsApp, scan the QR code from your phone:
# Settings → Linked Devices → Link a Device

# The Gateway runs as a background service automatically
```

That's it.

Message your connected platform. Clawdbot responds.

Now Let's Talk Security

This is where I need to be direct.

Security experts are worried.

Including Google Cloud's VP of Security Engineering.

Let me explain why.

What You're Actually Installing

Clawdbot isn't a chatbot.

It's an autonomous agent with:

  • Full shell access to your machine

  • Browser control with your logged-in sessions

  • File system read/write

  • Access to email, calendar, whatever you connect

  • Persistent memory across sessions

  • Ability to message you proactively

"Actually doing things" means "can execute arbitrary commands on your computer."

Those are the same sentence.

The Bottom Line

Clawdbot represents a genuine breakthrough.

The technology is impressive.

The possibilities are real.

The community is active.

But this is early, experimental software requiring technical sophistication to use safely.

The hype cycle is intense.

If you're a developer who understands the tradeoffs and has appropriate infrastructure, it can be transformative.

If you're not, waiting for the ecosystem to mature might be the smarter play.

For more depth, read this article I posted on X

If this was useful, forward it to someone who's still making customers wait on hold.

And if you're not subscribed yet, fix that. I break down AI and automation stuff every week - practical, no hype, occasionally funny.

See you next time.
