How a 25% conversion rate became 100%, and why the page was asking the wrong question
We started with a simple question: does the claude.ai/import-memory page work? The hypothesis was straightforward: the page should be clear enough that ChatGPT users understand what they're getting and why it matters. The answer was no. 25% conversion. Three out of four personas walked away confused or uninterested.
The page wasn't broken. It was selling the wrong thing. The hero copy opened with "You've spent months teaching another AI how you work." A powerful line, but only for the people who already knew they'd done that. Passive users and newcomers, the majority of potential converters, read it and self-excluded. Not because they don't have memory in ChatGPT. Because they don't know they do.
Test 2 (Variant B) replaced the emotional hook with a concrete list of what transfers: tone rules, project context, role, background, tools. Conversion jumped to 45%. Every single persona found the list more useful than the original hook. But passive users still didn't convert. They could read the list. They just couldn't answer "do I have any of this?"
That insight cracked it open. The wall wasn't comprehension; it was self-qualification uncertainty. The page was asking people to commit to a switch before they knew if they had anything worth switching with.
Variant C flipped the frame entirely. Instead of "import your memory," it led with: "See what your current AI knows about you – then decide." A single paste prompt. Thirty seconds. No commitment. The result: 20/20 conversions. 100%. Every persona type, including the two that had never converted in either previous test.
The page stopped selling and started helping. That's the mechanism. When you answer the question users were actually asking ("do I even have anything to import?") before you ask them to act, the conversion takes care of itself.
4 persona types × 5 personas each, held constant across all 3 tests for clean comparability. Built from real ChatGPT-to-Claude switching behavior patterns.
| Persona Type | Profile | Control | Variant B | Variant C | Key Insight |
|---|---|---|---|---|---|
| Active Switcher (sw-01 – sw-05) | ChatGPT Pro users, motivated to leave, know they've customized heavily | 2/5 | 4/5 | 5/5 | Needed specifics, not emotional framing |
| Passive User (pu-01 – pu-05) | Occasional ChatGPT use, no deliberate memory setup, unaware of what they have | 0/5 | 0/5 | 5/5 | Couldn't self-qualify until discovery prompt |
| Multi-AI Power User (mu-01 – mu-05) | Already use Claude + other AIs, want to consolidate rather than switch | 3/5 | 5/5 | 5/5 | Wanted to consolidate, not switch; discovery framing fit perfectly |
| Newcomer (nw-01 – nw-05) | Under 3 months AI use, minimal history, "months of teaching" actively excludes them | 0/5 | 0/5 | 5/5 | "Months of training" was a self-exclusion signal |
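The headline numbers (25%, 45%, 100%) fall directly out of the table above: each variant's rate is total conversions across the 20 personas. A quick sanity check, with the counts copied from the table (the dict layout and names here are just illustrative, not part of the test harness):

```python
# Recompute each variant's overall conversion rate from the per-type results.
# Counts are conversions out of 5 personas per type, taken from the table above.
results = {
    "Control":   {"Active Switcher": 2, "Passive User": 0, "Multi-AI Power User": 3, "Newcomer": 0},
    "Variant B": {"Active Switcher": 4, "Passive User": 0, "Multi-AI Power User": 5, "Newcomer": 0},
    "Variant C": {"Active Switcher": 5, "Passive User": 5, "Multi-AI Power User": 5, "Newcomer": 5},
}

PERSONAS_PER_TYPE = 5

for variant, by_type in results.items():
    converted = sum(by_type.values())
    total = PERSONAS_PER_TYPE * len(by_type)
    print(f"{variant}: {converted}/{total} = {converted / total:.0%}")
# Control: 5/20 = 25%
# Variant B: 9/20 = 45%
# Variant C: 20/20 = 100%
```

Note that Variant B's entire lift came from the two already-motivated types; the passive and newcomer rows stayed at zero until Variant C.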
Control: The page led with an emotional hook assuming months of deliberate AI training. This worked for people who knew they had done that, and no one else.
Variant B: Replaced the emotional hook with a specific checklist. Every persona found it more useful than the original, but only motivated switchers converted.
Variant C: Removed the hook entirely. Replaced it with a single action that answers the user's real question: do I even have anything worth importing?
The hero section was the only element changed across all three variants. Everything below the fold (how it works, memory management, FAQ) was held constant. These screenshots isolate exactly what was tested.