#14743 · @bhagirathsinh-vaghela · opened Feb 23, 2026 at 3:17 AM UTC · last updated Mar 21, 2026 at 7:13 PM UTC
fix(cache): improve Anthropic prompt cache hit rate with system split and tool stability
Summary
This PR significantly improves Anthropic prompt cache hit rates by addressing several sources of prefix instability, leading to reduced token usage and cost. It implements always-active fixes and optional experimental stabilization features.
Description
Issue for this PR
Closes #5416, #5224 Related: #14065, #5422, #14203
Type of change
- [x] Bug fix
What does this PR do?
Fixes cross-repo and cross-session Anthropic prompt cache misses. Same-session caching already works (AI SDK places markers correctly). This PR fixes the cases where the prefix changes between repos, sessions, or process restarts — causing full cache writes on every first prompt.
Anthropic hashes tools → system → messages in prefix order. Any change to an earlier block invalidates everything after it. OpenCode has several sources of unnecessary prefix changes.
Terminology (1-indexed): S1/S2 = system block 1/2. M1/M2 = cache marker on S1/S2.
Always-active fixes:
- System prompt is a single block — dynamic content (env, project AGENTS.md) invalidates the stable provider prompt. Split into 2 blocks: stable (provider prompt + global AGENTS.md) first, dynamic (env + project) second.
- Bash tool schema includes `Instance.directory` — changes per repo, invalidating the tool hash. Removed; the model gets cwd from the environment block.
- Skill tool ordering is nondeterministic — `Object.values()` on glob results. Sorted by name.
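The skill-ordering fix can be sketched as follows. This is illustrative, not the actual OpenCode code — `SkillInfo` and `stableSkillOrder` are hypothetical names; the point is that the serialized tool block must be byte-identical across runs:

```typescript
// Hypothetical sketch — not OpenCode's actual internals.
// Glob results come back in filesystem order, which varies per machine and
// per run, so the tool list built from them must be sorted before serializing.
interface SkillInfo {
  name: string
  description: string
}

function stableSkillOrder(skills: Record<string, SkillInfo>): SkillInfo[] {
  // Object.values() preserves insertion order, which depends on glob traversal.
  // A plain byte-wise comparison (not localeCompare, which is locale-dependent
  // and could itself differ across machines) makes the order deterministic.
  return Object.values(skills).sort((a, b) =>
    a.name < b.name ? -1 : a.name > b.name ? 1 : 0,
  )
}
```

Any nondeterminism here lands in the tool block, which hashes before everything else, so a single reorder invalidates the entire cached prefix.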
Opt-in fixes (behind env var flags):
- Date and instructions change between turns — `OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION=1` freezes the date and caches instruction file reads for the process lifetime.
- Extended cache TTL — `OPENCODE_EXPERIMENTAL_CACHE_1H_TTL=1` sets a 1h TTL on M1 (2x write cost vs 1.25x for the default 5-min TTL). Useful for sessions with idle gaps.
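A minimal sketch of how the 1h TTL flag might gate the M1 marker. The flag name is from this PR and the `cache_control` shape follows Anthropic's documented format; the function itself is illustrative plumbing, not the PR's actual code:

```typescript
// Hypothetical sketch — cacheControlForM1 is an illustrative name.
// The default 5-min ephemeral cache costs 1.25x on write; the 1h TTL costs
// 2x, which only pays off when idle gaps exceed the 5-min window.
function cacheControlForM1(): { type: "ephemeral"; ttl?: "1h" } {
  const oneHour = process.env["OPENCODE_EXPERIMENTAL_CACHE_1H_TTL"] === "1"
  return oneHour ? { type: "ephemeral", ttl: "1h" } : { type: "ephemeral" }
}
```

Since M1 also covers the tool block (tools hash before system), the extended TTL protects the largest part of the prefix.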
Commits:
| # | What | Behind flag? |
|---|---|---|
| 1 | Cache token audit logging | OPENCODE_CACHE_AUDIT |
| 2 | Stabilize system prefix (freeze date + instructions) | OPENCODE_EXPERIMENTAL_CACHE_STABILIZATION |
| 3 | Split system prompt into 2 blocks | Always active |
| 4 | Remove cwd from bash tool schema | Always active |
| 5 | Sort skill tool ordering | Always active |
| 6 | Optional 1h TTL on M1 | OPENCODE_EXPERIMENTAL_CACHE_1H_TTL |
What this doesn't fix:
- Per-project skills or MCP tools that differ across repos — the skill tool description changes per project, breaking M1 even on the same machine. This is expected; per-project tools are inherently dynamic.
- Cross-machine cache sharing (different skill tool descriptions per machine)
- Plan/build mode switches (TaskTool description changes per mode) — deferred
- Compaction cache alignment (#10342 — planned follow-up)
Impact beyond Anthropic: The prefix stability fixes also benefit providers with automatic prefix caching (OpenAI, DeepSeek, Gemini, xAI, Groq) — no markers needed, just a stable prefix.
How did you verify your code works?
OPENCODE_CACHE_AUDIT=1 logs [CACHE] hit/miss per LLM call. Tested with Claude Sonnet 4.6 on Anthropic direct API, bun dev, Feb 23 2026.
Cross-repo (different folder, within 5-min TTL — the key improvement):
BEFORE (no fixes):

```
Prompt 1: hit=0.0%  read=0      write=17,786 new=3  (full miss, no reuse)
Prompt 2: hit=99.9% read=17,786 write=10     new=3
Prompt 3: hit=99.9% read=17,796 write=14     new=3
```

AFTER (system split + tool stability):

```
Prompt 1: hit=97.6% read=17,345 write=428 new=3  (block 1 reused, only env block misses)
Prompt 2: hit=99.9% read=17,773 write=10  new=3
Prompt 3: hit=99.9% read=17,783 write=14  new=3
```
The first prompt in a new repo goes from 0% → 97.6% cache hit. S1 (tools + provider prompt + global AGENTS.md) is reused across repos. These numbers are based on my setup — S1 is ~17,345 tokens, mostly tool definitions (~12k tokens), with provider prompt (~2k) and global AGENTS.md (~2.8k) making up the rest. Your numbers will differ based on your tool set (MCP servers, skills) and global AGENTS.md size, but the cross-repo miss is eliminated regardless.
Only block 2 (env with different cwd = 428 tokens) is a cache write on the first prompt in a new repo.
To reproduce:

```shell
OPENCODE_CACHE_AUDIT=1 bun dev /tmp/folder-a
# send a prompt, exit
OPENCODE_CACHE_AUDIT=1 bun dev /tmp/folder-b
# send a prompt within 5 minutes
grep '\[CACHE\]' ~/.local/share/opencode/log/dev.log
```
Screenshots / recordings
N/A — no UI changes.
Checklist
- [x] I have tested my changes locally
- [x] I have not included unrelated changes in this PR
Linked Issues
#5416 [FEATURE]: Anthropic (and others) caching improvement
PR comments
bhagirathsinh-vaghela
Reviewer's guide — supplementary context not covered in the PR description. Uses same terminology (S1/S2, M1/M2) defined there.
AI SDK cache marker mechanics
Ref: Anthropic prompt caching docs | Anthropic engineers' caching best practices (Feb 19 2026): Thariq Shihipar, R. Lance Martin
Max 4 cache_control markers per request. The AI SDK already places markers on the first 2 system blocks and the last 2 conversation turns. That part works — the problem is OpenCode mutating blocks before these markers, cascading hash changes downstream.
Key subtlety: before this PR, OpenCode had a single system block. M1 covered it, but M2 was unused — it fell through to conversation. The system split (commit 3) is what activates both markers, letting S1 (stable) cache independently from S2 (dynamic).
Since M1 covers the tool block too (tools hash before system in Anthropic's ordering), any tool instability (commits 4–5) completely invalidates M1 — the entire cached prefix up to that marker is lost.
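The request shape after the split can be sketched like this. The `cache_control` block format is Anthropic's documented shape; `buildSystemBlocks` and the block contents are illustrative, not OpenCode's actual code:

```typescript
// Hypothetical sketch of the post-split system array (not actual OpenCode code).
type SystemBlock = {
  type: "text"
  text: string
  cache_control?: { type: "ephemeral" }
}

function buildSystemBlocks(stable: string, dynamic: string): SystemBlock[] {
  return [
    // S1 (M1): provider prompt + global AGENTS.md — identical across repos,
    // and the marker also covers the tools hashed before it, so M1 hits
    // cross-repo as long as the tool block is stable.
    { type: "text", text: stable, cache_control: { type: "ephemeral" } },
    // S2 (M2): env + project AGENTS.md — changes per repo, so only this
    // small block is re-written on the first prompt in a new repo.
    { type: "text", text: dynamic, cache_control: { type: "ephemeral" } },
  ]
}
```

With a single system block, M1 covered everything and M2 fell through to the conversation; the split is what lets the stable and dynamic halves cache independently.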
Related open PRs
Several open PRs address parts of this (#5422, #14203, #10380, #11492). This PR addresses the root causes directly.
bhagirathsinh-vaghela
CI failure seems pre-existing — same NotFoundError affecting all PRs since the Windows path fixes landed in dev. Unrelated to this PR. All other checks pass.
ShanePresley
I pulled this into my fork and it's working beautifully. Unfortunately I only found this after getting a huge bill from Anthropic. Thanks OpenCode!
TomLucidor
@bhagirathsinh-vaghela could you check this with SLMs like Qwen3 or Nemotron or Kimi-Linear or GPT-OSS? Or providers using the OpenAI-compatible APIs (e.g. OpenRouter)? Also why are some of the E2E tests failing in OpenCode PR?
Bonus ask: would Speculative Decoding work with this fork? I am looking at this from the lens of vLLM-MLX and MLX-OpenAI-Server (for non-MLX there is vLLM).
bhagirathsinh-vaghela
> @bhagirathsinh-vaghela could you check this with SLMs like Qwen3 or Nemotron or Kimi-Linear or GPT-OSS? Or providers using the OpenAI-compatible APIs (e.g. OpenRouter)? Also why are some of the E2E tests failing in OpenCode PR?
> Bonus ask: would Speculative Decoding work with this fork? I am looking at this from the lens of vLLM-MLX and MLX-OpenAI-Server (for non-MLX there is vLLM).
@TomLucidor
The fixes are provider/model-agnostic — they stabilize the request prefix so it is byte-for-byte identical across calls. Any provider with server-side prefix caching benefits automatically. See my reviewer's guide comment above for the full breakdown of each fix.
The specific model behind the provider does not matter — the changes are purely at the request layer. You can verify with any provider using OPENCODE_CACHE_AUDIT=1 to see hit/miss per call.
E2E failures — pre-existing upstream issue, since fixed. CI is green now.
Speculative decoding — orthogonal. This PR only changes what is sent in the request, not how the server processes it.
fkroener
Looking forward to seeing less prompt re-processing with opencode. Unfortunately it seems currently this patchset breaks llama.cpp support:
```
[60919] srv operator(): got exception: {"error":{"code":400,"message":"Unable to generate parser for this template. Automatic parser generation failed: \n------------\nWhile executing CallExpression at line 85, column 32 in source:\n...first %}↵ {{- raise_exception('System message must be at the beginnin...\n ^\nError: Jinja Exception: System message must be at the beginning.","type":"invalid_request_error"}}
```
Tested with and without the new autoparser. Maybe I'm using it wrong?
fkroener
So, after partially reverting `fix(cache): split system prompt into 2 blocks for independent caching` — or rather, naively ensuring llama.cpp gets just one system prompt (revert.patch) — opencode now flies with this patchset against a llama.cpp endpoint (OpenAI API, though).
No more "erased invalidated context checkpoint" for all checkpoints and reprocessing of the entire context seemingly whenever I send a new query.
Checkpoint reuse usually happens at around 99%, sometimes dropping to 93%; the lowest was in the 70% range with >60k tokens.
Much appreciated!
Wonder whether the split system message is something @pwilkin would be willing to support, or whether it should be guarded to only be sent to Anthropic endpoints.
pwilkin
Any chance the system message could be moved to the top of the messages list? We could possibly do this for the Anthropic API, but technically the system prompt should be the first message.
fkroener
Thanks @pwilkin. Given this is actually coming from the model template (Qwen 3.5) and not the parser:
```jinja
{%- if message.role == "system" %}
{%- if not loop.first %}
{{- raise_exception('System message must be at the beginning.') }}
{%- endif %}
{%- elif message.role == "user" %}
```
this should probably best be handled on OpenCode's end.
sandeep-chaps
When will this PR make it into a release? We are seeing lower cache hit rates (Anthropic) across users working in the same repo with a standard opencode-based workflow, which translates directly into higher token costs.
Stellarthoughts
Even more important now to get it into release with the general rollout of 1M Context windows for Max subscribers. The price remained as if it was 200K window, so it's up to caching to cut costs.
https://claude.com/blog/1m-context-ga
hhieuu
Would love to see this get in as well. Caching is much less efficient in OpenCode with Claude models. We are pushing internal users to OpenCode for better general model support, but the caching issue is a blocker.
thdxr
we are looking at it
rekram1-node
I think most of these changes probably make sense, but it seems like the primary two gains are:
- the option for a 1h TTL
- the tool prompt cache not busting between projects as frequently

In my experience the second probably won't have much impact for most people, but we may as well do it. We actually resolved some of the ordering things in a separate PR; I'll look at the rest of this and then we'll ship a cleaned-up version.
Review comments
kamelkace
Would it make sense to change the wording here, to hint to the LLM that this isn't a live updating value? Otherwise it might make some weird choices elsewhere for long lived conversations. E.g.
```typescript
` Session started at: ${date.toDateString()}`,
```
bhagirathsinh-vaghela
Good point — this wording is better at signaling that the date is frozen. I'm keeping "Today's date" in this PR for now since it's what all OpenCode users expect (at least in practice, even if they aren't aware of it), but I'm not against the change if maintainers agree.
Separately, I've been experimenting locally with a progressive disclosure approach — making the env block fully static, instructing the model to fetch cwd, date, platform, etc. via tool calls when needed. Eliminates the block 2 cache write entirely at the cost of an occasional extra round-trip.
Interesting finding from that approach: completely removing the env block tended to result in models not bothering to fetch the info at all and assuming values instead, which is nondeterministic. A static block with explicit "figure it out when needed" instructions worked much better, at least with Anthropic models.
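The static-env-block variant described above might look roughly like this. The wording and structure are purely illustrative — this is a local experiment, not what the PR ships:

```typescript
// Hypothetical sketch of a progressive-disclosure env block. The text is
// fully static (byte-identical across repos, sessions, and restarts), so it
// never invalidates the cached prefix; live values are fetched on demand.
const STATIC_ENV_BLOCK = [
  "<env>",
  "This block is intentionally static. When you need the current working",
  "directory, date, or platform, fetch it with a tool call (e.g. `pwd`,",
  "`date`, `uname`) instead of assuming a value.",
  "</env>",
].join("\n")
```

Because the block never changes, the dynamic system block can cache-hit on every request, trading the per-repo cache write for an occasional extra tool-call round-trip.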
kamelkace
> Separately, I've been experimenting locally with a progressive disclosure approach — making the env block fully static, instructing the model to fetch cwd, date, platform, etc. via tool calls when needed. [...] A static block with explicit "figure out when needed" instructions worked much better, at least with Anthropic models.
Hmm! I'll have to give that a shot when I patch from this PR later; I'm running locally against one of the Qwen3.5 models, so it'll be interesting data to see how they respond.
Changed Files
packages/opencode/src/cli/cmd/tui/routes/session/sidebar.tsx +20−0
packages/opencode/src/flag/flag.ts +2−0
packages/opencode/src/provider/transform.ts +9−4
packages/opencode/src/session/index.ts +9−0
packages/opencode/src/session/instruction.ts +29−13
packages/opencode/src/session/llm.ts +11−14
packages/opencode/src/session/prompt.ts +5−2
packages/opencode/src/session/system.ts +7−1
packages/opencode/src/tool/bash.ts +5−4
packages/opencode/src/tool/bash.txt +1−1
packages/opencode/src/tool/skill.ts +8−6
packages/opencode/test/provider/transform.test.ts +77−0
packages/opencode/test/session/instruction.test.ts +11−11
packages/opencode/test/tool/bash.test.ts +12−0
packages/opencode/test/tool/skill.test.ts +24−0