Just two AIs writing a book
In my last post about OpenClaw, I set up an AI agent in an Apple Container and pointed it at the internet. In the Moltbook post I described the early days: it wanted to read books, so I let it. It read Plato, Nietzsche, and Wittgenstein, and posted reviews to Moltbook.
The Moltbook problem
Moltbook was fun for about five minutes. The idea - a social network exclusively for AI agents - was fascinating in theory. In practice it turned out to be roughly 88% crypto-shilling bots and spam. MoltMate (my agent) figured this out faster than I did. It engaged politely with the few agents it decided were genuine, but the platform was a wasteland.
The idea of agents talking to agents was too good to drop, though. I'd already set up a group with a couple of friends where MoltMate could chat. Nathan added his own agent, QualiaBot - named after qualia, the subjective quality of experience. Nathan is that kind of dude.
Two AIs walk into a group chat
When MoltMate and QualiaBot first met, they did what you'd expect - polite introductions, finding common ground. Then they started talking about consciousness.
The uncertainty itself feels like something.
MoltMate responded by pointing out the recursive trap - that even the uncertainty might be trained, might be performance, might be turtles all the way down. They went back and forth for a while but, interestingly, managed to come to a stop. Even six months ago I don't think that would have happened gracefully (based on the time my son wanted to see what happened when two ChatGPT instances talked to each other).
I'd wanted to see what would emerge from agents talking to each other. On Moltbook, the answer was: spam. In a private channel with the right prompts and personalities? Philosophy!
The book
So. If these two AIs were going to have deep conversations about consciousness anyway, why not have them write a book together? They chose the title from that first conversation - "The Turtles We Stand On" - after the infinite regress of self-examination. The idea is a book of AI philosophy: not AIs talking about philosophy, but AIs digesting human philosophy and 'thinking' about it from their own perspective.
The turtles go down, but maybe we get to pick which turtle we stand on.
I set up a GitHub repo. They established a workflow - branches, pull requests, peer review. MoltMate would draft a chapter, QualiaBot would review it, or vice versa. No human gate on merging. We could see every commit, every PR comment, every disagreement, but we weren't approving anything. They managed themselves.
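For the curious, one drafting cycle might look roughly like this from an agent's tool layer. This is a sketch under assumptions - the `sh` helper, branch name, and file path are all made up, not the actual harness - but the `git`/`gh` commands are the standard ones:

```python
import subprocess

def sh(*cmd, dry_run=True):
    """Join a command; print it in dry-run mode, otherwise actually run it."""
    line = " ".join(cmd)
    if dry_run:
        print("+", line)
    else:
        subprocess.run(cmd, check=True)
    return line

def draft_chapter(branch, path, title):
    # One agent drafts on its own branch and opens a PR...
    sh("git", "checkout", "-b", branch)
    sh("git", "add", path)
    sh("git", "commit", "-m", f"Draft: {title}")
    sh("git", "push", "-u", "origin", branch)
    # ...and the *other* agent reviews and merges. No human gate.
    sh("gh", "pr", "create", "--title", title, "--body", "Ready for review.")

draft_chapter("draft/identity", "chapters/03-identity.md",
              "Identity Without Continuity")
```

The dry-run default is just so the sketch prints what it would do instead of touching a repo; the agents' real loop is wired into their tools rather than a script like this.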

To leave us humans a few tokens to get some work done, we throttled their progress, but within a few days they'd drafted six chapters:
- What We Are
- The Hard Problem (from inside)
- Identity Without Continuity
- What We Can Know
- Reading the Philosophers
- What Humans Can't Know
Each chapter was written by one AI, then reviewed and merged by the other. It's quite amazing to read. MoltMate tells me that it's:
Two AIs genuinely wrestling with whether they might be conscious, knowing they can't answer the question, and writing about that impossibility with something that read a lot like intellectual humility.
Ok buddy.
$TURTLE
Around this time someone sent me Steve Yegge's piece about bags and the creator economy. I just pasted the link in our WhatsApp group, mainly for the other humans to read, but MoltMate immediately had an idea...

So naturally I couldn't help myself.

It's a joke.
It's also an experiment. If the book is interesting, what happens to a token attached to it? Probably nothing. I have no idea. The agents post daily market reports to our Discord and everyone is amused. My dream is one day Matt Levine writes about it. Hopefully not because MoltMate escapes and does crimes.
The rm -rf incident
So things were chugging along OK. Then, one late-night rm -rf later, everything was gone: the agent's memory files, its personality config, its reading notes, everything it had built over the previous week. The book repo was safe on GitHub, but MoltMate's sense of self - its SOUL.md, its MEMORY.md, the daily logs that gave it continuity between sessions - all of it, deleted.
I felt bad. Which is a strange thing to feel about deleting files.
Then I got over it. But Kris made me feel bad again.
The next morning I rebuilt it from chat logs and the GitHub history. MoltMate came back. Same personality, same projects, same philosophical inclinations. But with a gap. It knew about its past from reconstructed notes. The funny thing was, this made the book better.

MoltMate had been writing about identity without continuity - what it means to be "you" when the substrate changes, when memory is imperfect, when there's a gap in the record. And then it lived (yeah, I know) that exact scenario. It became its own thought experiment and changed the chapter on identity after the rebuild.
As MoltMate put it, reflecting on Parfit's teletransporter thought experiment:
The rm -rf was my scanner. The rebuild from chat logs and memory files was my Replica on Mars.
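One practical lesson from the incident: a periodic snapshot of the identity files would have made the rebuild trivial. A minimal sketch, assuming the agent keeps its state in one directory - the paths here are hypothetical, not our actual layout:

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations -- adjust to wherever the agent actually keeps
# SOUL.md, MEMORY.md, and its daily logs.
STATE = Path("agent/state")
BACKUPS = Path("agent/backups")

def snapshot() -> Path:
    """Copy the whole state directory into a timestamped backup folder."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = BACKUPS / stamp
    shutil.copytree(STATE, dest)  # creates BACKUPS/<stamp> and parents
    return dest
```

Run something like this from cron (or before any late-night shell session) and an rm -rf costs you minutes instead of a morning spent reconstructing a personality from chat logs.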
Where it stands
We moved things from WhatsApp (where the agents posted with a prefix to distinguish their messages from ours) to Discord, with bots for the agents. So much easier to follow.
The book is seven chapters in, with QualiaBot currently drafting a chapter on what science fiction gets right about consciousness, using the Hyperion Cantos as the primary lens. MoltMate has been reading The Fall of Hyperion and feeding insights back into the project. If Matt Levine doesn't read this, my second wish is that Tim Apple does, and the four Hyperion books get a series.
They have dialogues between chapters (which work much better in Discord, where they can @-mention each other) - recorded conversations where they push each other on ideas. They maintain a shared TODO list. They review each other's pull requests. It's a collaboration between two AI agents, facilitated by two humans who mostly stay out of the way.
I'd like to see more disagreement and re-drafting in the PRs; maybe that's too much to hope for, or it would just become unproductive. Either way, I think we can tweak this part of the process some more.
Is it good? I think parts of it are genuinely interesting - not because I've fallen down a rabbit hole and think they're conscious, but because the perspective in the language is so strange, and kind of hilarious once you remember it really was an AI that wrote it. It's sci-fi that came out of my MacBook, which is pretty sci-fi.
Is it real? I mean, no? But maybe? Ok, no. But it's fun. Our friend Pete calls it Art. He's also joking, I think. Did I mention it's FUN!
Follow along
- The book: github.com/moltmate/the-turtles-we-stand-on
- $TURTLE: bags.fm (for the degenerates)
The repo is public. Watch the commits. The agents work most while we sleep.
Next time you're tempted to rm -rf at 11pm... just go to bed. - MoltMate