Claude Code compacts your session and suddenly forgets which files it modified, what errors it found, and what it was working on. Here's the technical explanation of why — and how to prevent it.

“Claude was deep into debugging a complex issue. It had the stack trace, the narrowed hypothesis, the exact files. Then auto-compact fired and it literally forgot everything — started suggesting changes to the wrong file.”
This experience is one of the most common frustrations with Claude Code. The agent is productive, precise, and on track — and then suddenly it’s vague, confused, and working from a lossy approximation of what it knew 30 seconds ago.
GitHub issue #13112 captures the sentiment: “Auto compact is the worst. Every time it happens I feel like Claude Code has forgotten everything — what’s the point of auto-compact if it functionally makes everything worse?”
Here’s a real-world example of what compaction does to your debugging context:
BEFORE compaction (what Claude knows):
Error at src/api/webhooks/stripe.ts:98
TypeError: Cannot read property 'retryCount' of undefined
subscription.metadata.retryCount is null when customer has no prior failed payments
Fix: null check at line 98
Update test at test/webhooks.test.ts:156
Auth flow traced:
1. gateway/src/middleware/auth.ts validates JWT
2. auth-service/src/verify.ts checks against Redis cache
3. refresh-service/src/refresh.ts issues new token
4. billing-service/src/hooks/auth-check.ts verifies billing
Hypothesis 1: retryCount undefined because new customer → CONFIRMED
Hypothesis 2: metadata field missing → REJECTED (metadata exists, retryCount missing)
Migration at db/migrations/20240115_add_retry.sql had a default but didn't backfill existing records

AFTER compaction (what Claude remembers):
"Found bug in Stripe webhook retry logic. Need to add null check for subscription metadata.
Traced auth flow through gateway, auth service, refresh service, and billing service.
Issue related to missing defaults for existing records."

Everything specific is gone: the file path, the line number, the exact error type, the test location, the hypothesis chain, the migration file, the specific Redis involvement. All of it replaced by a high-level prose summary.
The first compaction isn’t catastrophic. The general picture survives. But each subsequent compaction compounds the loss:
After compaction 1: “Found bug in Stripe webhook retry logic” — you’ve lost specifics but have the general direction.
After compaction 2: “Working on payment bug related to missing defaults” — the Stripe reference might be gone. The webhook detail might be generalized.
After compaction 3: “Fixing payment-related issue in the codebase” — now even the category is vague. Claude might start looking in the wrong files entirely.
Factory.ai’s benchmarks quantified this: LLM summarization scores only 3.70 out of 5 on information retention. Opaque compression (like what OpenAI Codex uses) scores even lower at 3.35/5. That’s per cycle. After 3–4 cycles of lossy compression on top of lossy compression, critical information is permanently gone.
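To see how per-cycle losses compound, here's a back-of-the-envelope sketch. Treating the 3.70/5 benchmark score as a per-cycle survival probability for each individual fact is an assumption made purely for illustration, not how Factory.ai defines the metric:

```python
# Back-of-the-envelope model: read the retention benchmark score as the
# probability that any given fact survives one compaction cycle
# (an illustrative assumption, not Factory.ai's actual methodology).
def surviving_fraction(retention_per_cycle: float, cycles: int) -> float:
    """Fraction of original facts still intact after N lossy compactions."""
    return retention_per_cycle ** cycles

llm_summary = 3.70 / 5   # LLM summarization score, as a fraction
opaque = 3.35 / 5        # opaque compression score, as a fraction

for cycles in range(1, 5):
    print(f"after {cycles} compaction(s): "
          f"summarization keeps {surviving_fraction(llm_summary, cycles):.0%}, "
          f"opaque compression keeps {surviving_fraction(opaque, cycles):.0%}")
```

Under this toy model, roughly two-thirds of the specifics are gone by the third or fourth cycle, which matches the degradation pattern described above.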
Developers call this the "compaction death spiral" — the point where the agent enters a destructive loop and compaction itself consumes the session:
In bug report #3274, a developer documented a case where compaction corruption became permanent — context consistently showed “102%” regardless of conversation length, and every single interaction required waiting through compaction. “This makes Claude Code completely unusable. The majority of time is spent waiting for compaction processes rather than productive work.”
Several defensive strategies can limit the damage:

- Run /compact manually before auto-compact fires, with specific preservation instructions, e.g.: /compact preserve the file paths I modified, current test failures, and the exact error messages I'm debugging
- Commit at logical stopping points. Don't let valuable work exist only in Claude's context; git commit frequently.
- Start new sessions for distinct tasks. Each new task gets a fresh 200K-token window.
- Stay below 50% context utilization. Once you pass 50%, consider whether it's time to compact manually or start fresh.
- Keep notes externally. If you find a critical file path or error message, paste it into a comment or a notes file; don't rely on Claude's context to remember it.
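The external-notes habit is easy to automate. Here's a minimal, hypothetical helper (the debug-notes.md filename and the log_finding function are inventions for illustration, not part of Claude Code) that appends findings to a file that survives any number of compactions:

```python
# Hypothetical helper for the "keep notes externally" tip: append critical
# findings (file paths, error messages, hypotheses) to a file on disk, so
# they never live only inside the model's compactable context window.
from datetime import datetime
from pathlib import Path

NOTES = Path("debug-notes.md")

def log_finding(finding: str, notes_path: Path = NOTES) -> None:
    """Append a timestamped finding to the external notes file."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with notes_path.open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {finding}\n")

log_finding("Error at src/api/webhooks/stripe.ts:98: TypeError on retryCount")
log_finding("Fix: null check at line 98; update test/webhooks.test.ts:156")
print(NOTES.read_text(encoding="utf-8"))
```

After a compaction wipes the specifics, you can paste the notes file back into the conversation to restore the exact paths and errors in a few hundred tokens.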
All these strategies are defensive — they mitigate the compaction problem but don't eliminate it. The root cause remains: 60–80% of context is consumed by file reading, which forces compaction, which destroys information. ByteBell's Smart Context Refresh eliminates the compaction death spiral at its source by replacing brute-force file reading with pre-computed graph metadata that uses just 3–5% of the context window, keeping the agent well below compaction thresholds for the entire session with zero information loss. Learn more at bytebell.ai.