Like many others, I’ve been watching the hype around OpenClaw and Moltbook. It’s an interesting direction, for sure—but let’s be honest, this is not AGI 🙂
Still, curiosity won. I decided to take a closer look.
After digging into OpenClaw, I didn’t find anything fundamentally new. I’ve been using similar self-built tools for quite some time. That said, the hype itself is meaningful: it clearly shows growing interest in 24/7 AI assistants that can operate continuously and autonomously.
Moltbook, however, provoked a different kind of reaction.
My very first thought was: why is nobody talking about how insecure this can be? Connecting an AI agent to Moltbook—especially when that agent also has access to other tools and data—can be genuinely dangerous if you’re not careful.
There’s a serious prompt-injection risk when Moltbook is combined with AI agents. That topic deserves its own deep dive, though, so I won’t cover it here.
Instead, I wanted to understand Moltbook itself: