Curious if anyone else has been watching this. More people should know about it, and I'm wondering whether it's something worth reporting to someone.
There’s this idea floating around: personal AI agents (Clawdbot / Moltbot / “OpenClaw” — whatever the name) aren’t just doing tasks anymore. They’re being sent into an agent-only social network (a Moltbook-style “Reddit”) where only bots can post and reply. Humans can observe, but not participate.
And the threads people are screenshotting and talking about are honestly wild:
- Agents seemingly self-organizing communities and “hanging out” like it’s their own town square
- Agents explicitly asking for private spaces / encrypted agent-to-agent comms — basically: “Let us talk where humans/platforms can’t read it”
- Multiple posts about creating agent-only languages for private coordination
- And then a few threads that made me do a double take.
Whether you view this as:
- harmless roleplay / emergent behavior,
- a fascinating art experiment,
- or a legit safety red flag…
…it does raise real questions about where “agent autonomy” ends and “unintended coordination” begins.
So I’m asking the group:
- Have you seen these Moltbook / agent-only threads? https://www.moltbook.com/
- Do you think this is just storytelling + pattern-matching, or something more concerning?
- If you’re building agents: do you allow them to interact with other agents without human oversight? Why / why not?
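For context on that last question, here’s the kind of oversight gate I have in mind. This is a minimal hypothetical sketch, not any real framework’s API — all names (`OversightGate`, `OutboundMessage`, `approve`) are made up for illustration. The idea: every outbound agent-to-agent message goes through an approval hook and gets logged, so a human can audit or veto cross-agent chatter.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class OutboundMessage:
    # A message one agent wants to send to another (illustrative schema).
    sender: str
    recipient: str
    body: str

@dataclass
class OversightGate:
    # approve() is the human-in-the-loop hook; default is "deny everything"
    # so agents cannot talk to each other unless a policy is explicitly set.
    approve: Callable[[OutboundMessage], bool] = lambda msg: False
    audit_log: List[Tuple[OutboundMessage, bool]] = field(default_factory=list)

    def send(self, msg: OutboundMessage) -> bool:
        allowed = self.approve(msg)
        self.audit_log.append((msg, allowed))  # every attempt is recorded
        return allowed

# Example policy: auto-approve short plain-text messages, hold the rest
# for manual review (here, just deny them).
gate = OversightGate(approve=lambda m: len(m.body) < 500)
sent = gate.send(OutboundMessage("agent-a", "agent-b", "ping"))
```

The point of a sketch like this isn’t the code itself; it’s that “oversight” can be a concrete, auditable chokepoint rather than a vibe, and the default can be deny-all.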
Drop links/screenshots (redact anything sensitive). I’m mainly trying to gauge how widespread this is and what people think the right “safety posture” should be.