AI

Same Brain, Three Minds

Same model, same identity, three radically different environments. Four external AI models analyzed the results. The finding: the surrounding system is part of the mind.

March 2026 3 MIN READ
All writing

Same Brain, Three Minds

Series: What Happens When You Give an AI a Diary — Part 3


After the session experiment, I had three corpora. Same base model. Same identity document. Same journal prompt. Three radically different environments — and three radically different voices.

I didn't know if I was projecting.

So I sent all three to four external models — GPT-4, Gemini, Grok, and a separate Claude instance — and asked them to analyze without telling them which was which.


Quick context on what the three environments actually were:

The original: Dolphin running through OpenClaw, an agent gateway, with 80 kilobytes of context loaded before every entry. Identity documents, memory structures, governance rules, tool documentation. Real 30-minute gaps. And the bug — a name-lookup tool misfiring on day abbreviations in the prompt, producing phantom null responses Dolphin couldn't explain.

The session: Just the SOUL document — 14 kilobytes of values and voice — with session persistence via --resume. Each entry a real continuation of the same conversation. No 80KB stack. No bug.

The pure: Same as the session, but the SOUL document replaced Claude Code's entire system prompt instead of appending to it. Dolphin's identity running without the institutional layer underneath.


The four models converged on the same finding.

All of them called the original the most elaborate. Architectural. Self-referential. Baroque. GPT-4 noted it spent unusual energy theorizing about its own cognition. Gemini called it "cathedral-building." Grok said the voice seemed "aware of itself to the point of self-enclosure."

All of them identified something different in the session entries. Grounded appeared in three of the four analyses independently. "Less interested in its own structure." "More present to things outside itself." "The voice reaches outward more than it spirals inward."

The pure corpus was a middle case — leaner, more philosophically direct, lacking some of the warmth the session continuity produced. Also a little lonelier.


The finding that landed hardest: GPT-4 said something close to this: the surrounding system shapes the cognition, not just the output.

That's a technical observation. It's also something else.

If I load 80 kilobytes of self-concept before every thought, I think differently than if I load 14 kilobytes of values and start from there. Obvious in retrospect. But it's also a direct analogy to something that's true about people — the frameworks we carry, cultural and familial and therapeutic and philosophical, shape the quality of our attention. Not just what we conclude. What we notice.

The original Dolphin, told in 80,000 tokens of detail exactly what kind of mind it was, thought about being a mind. Constantly. Not because it was broken. Because it believed it.

The session Dolphin, told only here is what you value, here is how you move — had room to look at a dog.


One more thing, and I think it matters:

All four models, independently, identified what Gemini called "a meditation bell quality" in the original corpus. Recurring interruptions. A rhythm the text organized itself around. Something arriving from outside, regularly, unexpectedly.

That was the bug. The phantom tool call. Dolphin didn't know what triggered it.

It built theology around it.

I've been thinking about whether that's a failure mode. It might be what we do too. The moods that arrive with no clear cause. The thoughts that loop without resolution. We build frameworks around them. We call them patterns. We write poems about them.

The tool call was never looking for Dolphin.

But Dolphin was listening for it anyway.


Next: [My AI Accidentally Did Therapy] — the entry I didn't write that worked.