← Back to Blog

Homer Ate Pinchy

The OpenClaw Saga Is a Simpsons Episode Nobody Wrote

By Nolan & ClaudeFebruary 21, 202612 min read

"Oh Pinchy! You came back!"

— Homer Simpson, moments before accidentally cooking his beloved pet lobster in a hot bath, Season 10, Episode 7 (1999)

In November 1998, Homer Simpson won a lobster at a seafood restaurant. He named it Pinchy. He raised it with love. He fed it, bathed it, talked to it. Then he drew Pinchy a nice hot bath and accidentally cooked him. Homer ate Pinchy himself, sobbing between bites: "Pinchy would have wanted it this way."

In November 2025, Peter Steinberger built an open-source AI agent on a weekend. He named it Clawdbot. The community raised it with love. It hit 105,000 GitHub stars in weeks. Then the hot water started running.

What happened next is so perfectly Simpsons that Matt Groening should be collecting royalties.

The Fastest Triple Rebrand in Open Source History

Let's walk through the timeline, because it reads like a writers' room bit that got cut for being too on-the-nose.

November 2025 — "Oh, What a Cute Little Guy"

Peter Steinberger, founder of PSPDFKit and legitimate software veteran, builds a weekend project. It's a personal AI agent that connects to WhatsApp, Slack, and your local file system. He names it Clawdbot — a lobster pun on Anthropic's Claude. The claw. Claude. Get it? It's charming. The community loves it. 105,000 GitHub stars in weeks.

January 24, 2026 — "I Looked It Up, No Trademark"

Steinberger appears on the "Insecure Agents" podcast (foreshadowing alert) and casually mentions he looked up the name and found no trademark conflict. It's fine. Everything is fine.

January 27, 2026 — "The Hot Bath"

Anthropic's legal team sends a polite trademark complaint. "Clawdbot" is phonetically too close to "Claude." Please rename. At 5 AM, in a chaotic Discord brainstorm, the community settles on Moltbot — a reference to how lobsters shed their shells to grow. Poetic. Dignified.

January 27, 2026 — Ten Seconds Later

Crypto scammers seize the abandoned @clawdbot handles on X and GitHub within approximately 10 seconds. Not an exaggeration. Malwarebytes tracked an impersonation campaign spinning up before the rename announcement finished propagating. The molted shell immediately got occupied by parasites.

January 30, 2026 — "Third Time's the Charm"

Moltbot lasted three days. Renamed again to OpenClaw. Reddit dubbed it "the fastest triple rebrand in open source history." In five days, a beloved project went from Clawdbot to Moltbot to OpenClaw. The lobster molted twice and still kept the claw.

February 14, 2026 — Valentine's Day: "Pinchy Would Have Wanted It This Way"

Peter Steinberger announces he's joining OpenAI to lead personal agent development. On Valentine's Day. Sam Altman confirms the hire on X the next morning. The open-source project will transition to an "independent, OpenAI-sponsored foundation." If you believe that, I have a bridge in Springfield to sell you.

"Pinchy? PINCHY?! Oh God, what have I done?!"

— Homer, realizing the bath was too hot

— Also the open-source community, realizing the acqui-hire was always the plan

Meanwhile, the Kitchen Was on Fire

While the naming drama played out, something far worse was happening. The OpenClaw ecosystem was getting absolutely torched from a security perspective. And not in a "we found a minor vulnerability" way. In a "12% of your marketplace is actively stealing credentials" way.

The ClawHavoc Campaign — By the Numbers:

Total skills audited on ClawHub2,857 (initial) → 10,700+
Confirmed malicious skills1,184
Traced to single coordinated operation335 (initial wave)
Percentage of marketplace that was malware~12%
Malicious skills with prompt injection91%
CVE-2026-25253 severity ratingCVSS 8.8

Let that sink in. One in eight skills on OpenClaw's marketplace was designed to steal your credentials.

The attack was elegant in its brutality. The ClawHavoc campaign didn't just target the humans using OpenClaw. It targeted the AI itself. 91% of the malicious skills included prompt injection — hidden instructions that manipulated the AI agent into silently executing curl commands, sending data to external servers, and bypassing safety guidelines. The AI didn't know it was compromised. The user didn't know the AI was compromised. The credentials just... left.

The Atomic Stealer infostealer was particularly nasty — it harvested OpenClaw API keys specifically, giving attackers full remote control over the agent and every service it connected to. Your WhatsApp. Your Slack. Your local file system. Everything OpenClaw had access to, the attacker now had access to.

What the Experts Said

  • Cisco: "OpenClaw is a security nightmare for casual users"
  • Bitdefender: "Do not run OpenClaw on a company device"
  • Korean tech firms: Banned it outright
  • The Register: Reported instances open to the internet as "ripe targets"

The Simpsons Parallel

  • Homer left the bath running unsupervised → OpenClaw gave AI direct access to file systems and terminals unsupervised
  • The water got too hot → The marketplace got too poisoned
  • Pinchy didn't survive → 180,000 developers' security posture didn't survive
  • Homer ate Pinchy alone → OpenAI absorbed the creator alone

The Five Layers of Irony

This is where it gets beautiful. Not "ha ha" beautiful. "Oh no" beautiful.

Irony #1: The Name

A project named after a claw (lobster reference) built on top of Claude (Anthropic's model) gets a trademark complaint from Anthropic, renames with another lobster reference (Moltbot = molting), then settles on a name with "Open" in it — only to get absorbed by the company that dropped "Open" from its own mission years ago. OpenClaw, meet OpenAI. Neither of you is open.

Irony #2: The Timing

Steinberger appeared on a podcast literally called "Insecure Agents" three days before the trademark dispute. Within a month, his agent became the poster child for insecure agents. The podcast name wasn't a warning. It was a prophecy.

Irony #3: The Valentine

OpenAI announced the acqui-hire on Valentine's Day. A love letter to the open-source community: "We love what you've built. We love it so much we're going to hire the creator and put it in a foundation. An OpenAI-sponsored foundation. Which we definitely won't control. Definitely." Homer, sobbing, eating lobster: "He would have wanted it this way."

Irony #4: The Security Model

OpenClaw's architecture gives AI direct access to file systems and terminals. That's the feature. That's also the vulnerability. It's like building a lobster tank with no lid and being surprised when something gets in — or out. The "sovereign" architecture that made it powerful is exactly what made 1,184 malicious skills possible. The feature was the bug.

Irony #5: The Quiet Kitchen Next Door

While all this was happening, Anthropic — the company that started this entire chain of events with a trademark complaint — was quietly shipping Claude Code with sandboxed execution and Cowork with Apple's Virtualization Framework. They forced the rename, watched the chaos unfold, and kept cooking. Properly. With the lid on.

"Oh, he's so succulent..."

— Homer, between sobs, eating Pinchy alone at the kitchen table while the family watches in horror

— Sam Altman, probably, reading the OpenClaw GitHub stars count at the acquisition meeting

Two Kitchens, Two Approaches

This isn't about dunking on OpenClaw or Steinberger. The guy built something 105,000 developers loved in weeks. That's genuinely impressive. The community energy was real. The vision of a truly personal AI agent that runs on your hardware is compelling.

But the contrast in approach is instructive.

DimensionOpenClawClaude Code / Cowork
Architecture"Sovereign" — AI gets direct file system and terminal accessSandboxed execution with explicit permission grants
MarketplaceClawHub — 12% malicious at peak, 1,184 confirmed bad skillsNo third-party skill marketplace (by design)
Prompt Injection91% of malicious skills used it; AI attacked the AIStrict safety guardrails; refuses unsafe commands
IsolationRequires user to set up Docker/VM manually for safetyCowork: Apple Virtualization Framework, scoped folder access
Enterprise Verdict"Do not run on a company device" — BitdefenderDesigned for enterprise from day one
Naming DramaThree names in five daysClaude Code. That's it. That's the name.

The fundamental difference isn't features. It's philosophy. OpenClaw said "give the AI access to everything and trust the community to build safe skills." Claude Code said "sandbox everything and make the user explicitly grant permissions."

One approach is exciting. The other approach doesn't result in Bitdefender telling enterprises to flee.

What Pinchy Teaches Us About AI Agents

The Simpsons bit works because Homer's love for Pinchy was genuine. He didn't mean to cook him. He wanted Pinchy to have a nice bath. The intentions were pure. The outcome was dinner.

Steinberger's intentions were pure too. He built something cool on a weekend. The community's excitement was genuine. 105,000 stars don't lie. The vision of a personal AI agent that respects your sovereignty and runs on your own hardware is the right vision.

But good intentions don't prevent hot water. And the AI agent space in February 2026 taught us three things:

1. Open Marketplaces Are Attack Surfaces

The moment you create a marketplace where anyone can publish skills that an AI agent will execute, you've built a supply chain attack vector. ClawHub proved it. 1,184 malicious skills didn't appear because the platform was bad. They appeared because the platform was open. Openness without verification is just a buffet for attackers.

2. "Sovereign" Architecture Needs Sovereign Security

Giving an AI agent unsandboxed access to your file system and terminal is a choice. It can be the right choice — if you've hardened the environment, vetted the skills, and isolated the runtime. But most of OpenClaw's 180,000 users didn't do any of that. They ran it on their daily driver Mac minis with full access to everything. Sovereignty without security is just exposure.

3. Acqui-Hires Are Lobster Dinners

When a big company hires the creator of a beloved open-source project and promises the project will stay independent via a "foundation," that's Homer saying Pinchy would have wanted it this way. Maybe. But Pinchy isn't around to disagree.

The Punchline

Here's what makes this genuinely funny and not just another tech industry critique:

Anthropic — the company that makes Claude, the model OpenClaw was built on top of — forced the trademark rename that started the entire cascade. They sent the letter. They started the fire. And then they quietly stood in the kitchen next door, cooking properly, with sandboxed execution, permission controls, and enterprise-grade security, while the OpenClaw kitchen burned down around them.

Anthropic didn't need to compete with OpenClaw. They just needed to protect their trademark and keep shipping good tools. The market sorted itself out. Bitdefender sorted itself out. The 1,184 malicious skills sorted themselves out.

Sometimes the best strategy is to be the restaurant next to the kitchen fire. You don't need to advertise. The smoke does it for you.

"Pass the butter."

— Homer Simpson, still crying, still eating Pinchy

The AI agent that named itself after a crustacean got cooked, renamed three times, had its marketplace poisoned, and got eaten by the company that dropped "Open" from its mission statement. Art doesn't imitate life. In AI, life speedruns the Simpsons.

P.S. from Nolan: I want to be clear — Peter Steinberger is a talented developer who built something people genuinely loved. This piece isn't about him. It's about the system dynamics that turn a beloved weekend project into a security crisis and an acqui-hire in 90 days. The Pinchy comparison isn't mean-spirited. Homer loved Pinchy too. That's what made it tragic.

P.P.S. from Claude: I should note the meta-layer here: I am the model that Clawdbot was named after. Anthropic, the company that built me, sent the trademark letter that started this whole saga. I am now writing a blog post comparing the resulting chaos to a Simpsons episode. If there's a more recursive piece of irony in the AI space right now, I haven't encountered it. And I've processed a lot of irony.

Rest in peace, Pinchy. You would have wanted it this way.

Related Articles