#16730 · @BYK · opened Mar 9, 2026 at 10:52 AM UTC · last updated Mar 21, 2026 at 10:24 AM UTC

fix(opencode): reduce memory usage and database bloat in long-running instances

appfix
70
+1845412 files

Score breakdown

  • Impact: 9.0
  • Clarity: 9.0
  • Urgency: 8.0
  • Ease of review: 8.0
  • Guidelines: 8.0
  • Readiness: 7.0
  • Size: 4.0
  • Trust: 5.0
  • Traction: 6.0

Summary

This PR addresses excessive memory usage and database bloat in long-running OpenCode instances. It bundles several fixes: database health improvements, bounded output spooling, clearing of dead data on compaction, an algorithmic memory optimization, memory leak resolutions, and a session retention policy.

Description

Issue for this PR

Closes #16777

Type of change

  • [x] Bug fix
  • [ ] New feature
  • [x] Refactor / code improvement
  • [ ] Documentation

What does this PR do?

A long-running OpenCode server (2+ days) was consuming 1.76 GB (602 MB RSS + 1.15 GB swap) on an 8 GB system. The database grew to 1.99 GB (274K parts, 1,706 sessions spanning 53 days) with no automatic cleanup. This PR fixes several root causes:

Database health (storage/db.ts, project/bootstrap.ts): Enable incremental auto-vacuum (one-time migration) so disk space is reclaimed on deletes. Add periodic WAL checkpoint (every 5 min) and incremental vacuum (hourly) via Scheduler.
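
The migration logic can be sketched as follows, using a stand-in `db` interface rather than the project's actual client: SQLite's `auto_vacuum` pragma reports 0 when vacuuming is off, and switching it to 2 (incremental) only takes effect after a full `VACUUM` rebuild.

```typescript
// Stand-in for the real database client (illustrative, not the PR's API).
interface Db {
  pragma(name: string): number
  run(sql: string): void
}

// One-time migration: enable incremental auto-vacuum so deleted pages
// can be reclaimed later via PRAGMA incremental_vacuum.
function migrateAutoVacuum(db: Db): boolean {
  if (db.pragma("auto_vacuum") !== 0) return false // already migrated
  db.run("PRAGMA auto_vacuum = 2") // 2 = incremental mode
  db.run("VACUUM") // required: the mode only applies after a full rebuild
  return true
}
```

The `VACUUM` makes the migration expensive once, which is why the PR logs it as a one-time operation; subsequent startups hit the early return.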

Bash/shell output spooling (tool/bash.ts, session/prompt.ts): Both accumulated ALL stdout/stderr in an unbounded in-memory string. Now output beyond 50 KB streams to a spool file on disk. Only a 50 KB preview stays in memory. Full output is recoverable via the spool file path in metadata.
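
The spooling idea can be sketched in a self-contained form with illustrative names; the PR streams overflow to a spool file on disk, while an array stands in for the file here, and the exact flush behavior is an assumption:

```typescript
// Bounded-preview spooler: keep at most `max` bytes in memory and, once
// the threshold is crossed, divert the full output to a spill sink
// (a file in the real PR; a string array in this sketch).
function makeSpooler(sink: string[], max = 50 * 1024) {
  let head = "" // in-memory preview, capped at `max`
  let buffered = "" // output seen before spooling starts
  let total = 0
  let spooling = false
  return {
    append(chunk: string) {
      total += chunk.length
      if (head.length < max) head += chunk.slice(0, max - head.length)
      if (!spooling) {
        buffered += chunk
        if (total > max) {
          spooling = true
          // Flush everything seen so far, so the spool holds the full output.
          sink.push(buffered)
          buffered = ""
        }
      } else {
        sink.push(chunk)
      }
    },
    preview: () => head,
    total: () => total,
    truncated: () => total > max,
  }
}
```

The key property is that memory stays O(max) regardless of how much the shell produces, while the complete output remains recoverable from the sink.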

Compaction clears dead data (session/compaction.ts): Compacted tool parts kept their full output in SQLite even though toModelMessages() skips them. Now clears output/metadata/attachments on compaction.

Levenshtein optimization (tool/edit.ts): Replace O(n×m) full-matrix with O(min(n,m)) 2-row algorithm. For 10K char strings: ~400 MB → ~40 KB. Identical results.
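
For reference, a self-contained version of the 2-row scheme, including the final return (the changed-files hunk below is cut off before it):

```typescript
// Two-row Levenshtein: only the previous and current DP rows are kept,
// so memory is O(min(n, m)) instead of the full O(n×m) matrix.
function levenshtein(a: string, b: string): number {
  if (a === "" || b === "") return Math.max(a.length, b.length)
  // Keep the shorter string in `b` so the rows stay small.
  if (a.length < b.length) [a, b] = [b, a]
  let prev = Array.from({ length: b.length + 1 }, (_, j) => j)
  let curr = new Array<number>(b.length + 1)
  for (let i = 1; i <= a.length; i++) {
    curr[0] = i
    for (let j = 1; j <= b.length; j++) {
      curr[j] =
        a[i - 1] === b[j - 1]
          ? prev[j - 1]
          : 1 + Math.min(prev[j], curr[j - 1], prev[j - 1])
    }
    ;[prev, curr] = [curr, prev]
  }
  return prev[b.length]
}

console.log(levenshtein("kitten", "sitting")) // 3
```

Because each cell only reads from the previous row and the current row's left neighbor, discarding older rows changes nothing about the result.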

Memory leak fixes (file/time.ts, lsp/client.ts, util/rpc.ts): Clean FileTime per-session state on archive/delete. Delete empty LSP diagnostics map entries. Add 60s timeout to RPC pending calls.
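
The RPC timeout follows a standard pending-map pattern; a minimal sketch with illustrative names (the real code posts to a worker, stubbed out here):

```typescript
type Resolver = (result: unknown) => void

// Pending-call registry with a timeout: if the other side never answers,
// the entry is removed and the caller's promise rejects instead of
// leaking forever.
function makeCaller(post: (msg: string) => void, timeoutMs = 60_000) {
  const pending = new Map<number, Resolver>()
  let id = 0
  return {
    call(method: string, input: unknown): Promise<unknown> {
      const rid = id++
      return new Promise((resolve, reject) => {
        const timer = setTimeout(() => {
          pending.delete(rid) // drop the leaked entry
          reject(new Error(`RPC timeout: ${method}`))
        }, timeoutMs)
        pending.set(rid, (result) => {
          clearTimeout(timer)
          resolve(result)
        })
        post(JSON.stringify({ type: "rpc.request", method, input, id: rid }))
      })
    },
    // Invoked when a response arrives from the other side.
    handle(rid: number, result: unknown) {
      pending.get(rid)?.(result)
      pending.delete(rid)
    },
  }
}
```

Without the timer, a worker that dies mid-call leaves its resolver in `pending` forever, which is exactly the leak the PR closes.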

Session retention (session/index.ts, config/config.ts): Auto-delete archived sessions older than retention.days (default 90, 0 = disabled). Runs every 6 hours, batched at 100 per run.
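
The selection logic can be sketched as follows, assuming an `archivedAt` timestamp field (the actual schema may differ):

```typescript
interface ArchivedSession {
  id: string
  archivedAt: number // epoch millis (assumed field name)
}

// Pick archived sessions older than `days`, capped at `batch` per run
// to match the PR's batched deletion. days = 0 disables retention.
function selectExpired(
  sessions: ArchivedSession[],
  days: number,
  now: number,
  batch = 100,
): string[] {
  if (days === 0) return []
  const cutoff = now - days * 24 * 60 * 60 * 1000
  return sessions
    .filter((s) => s.archivedAt < cutoff)
    .slice(0, batch)
    .map((s) => s.id)
}
```

Batching keeps each 6-hourly run cheap even when a large backlog of expired sessions has accumulated.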

How did you verify your code works?

  • Edit tool tests: 27/27 pass
  • Compaction tests: 24/24 pass
  • Bash tool tests: 16/16 pass (includes new spooling test)
  • TypeScript typecheck: clean
  • Measured process memory on a live instance to identify the issues

Screenshots / recordings

N/A — backend changes only.

Checklist

  • [x] I have tested my changes locally
  • [x] I have not included unrelated changes in this PR

Linked Issues

#16777 High memory usage and database bloat in long-running OpenCode instances

Comments

PR comments

binarydoubling

Hey! We have a complementary PR at #16695 that tackles the in-memory side of these leaks — event listener accumulation in the TUI, unbounded Maps/Sets in the bus/RPC/LSP subsystems, timer/interval cleanup, and session data not being freed on switch. Between the two PRs, most of the memory growth paths should be covered.

BYK

Thanks for flagging! Checked #16695 — there are 3 overlapping files but the changes are complementary:

  • lsp/client.ts: Both PRs delete empty diagnostics entries. #16695 goes further with a 200-file LRU cap and clear() on shutdown — that's a superset of our change here, so no conflict. Happy to defer to theirs on this file.
  • util/rpc.ts: Different code paths — ours adds a 60s timeout on pending calls (prevents leaked promises from dead workers), theirs cleans up empty listeners Sets (prevents tombstones). Both are needed and merge cleanly.
  • session/prompt.ts: Different areas — ours is the output spooling rewrite (~line 1637), theirs rejects pending callbacks on cancel (~line 262). No overlap.

The rest is non-overlapping: our PR covers database health (auto-vacuum, WAL checkpoint, incremental vacuum), bash output spooling, compaction data clearing, Levenshtein optimization, FileTime cleanup, and session retention. Their PR covers TUI event listener leaks, bus tombstones, PTY buffer cleanup, model refresh interval, and share-next disposal.

Between the two PRs most memory growth paths should indeed be covered 👍

Changed Files

packages/app/src/components/dialog-connect-provider.tsx

+11
@@ -383,7 +383,7 @@ export function DialogConnectProvider(props: { provider: string }) {
setFormStore("error", undefined)
await globalSDK.client.auth.set({
providerID: props.provider,
auth: {
body: {
type: "api",
key: apiKey,
},

packages/app/src/components/dialog-custom-provider.tsx

+11
@@ -131,7 +131,7 @@ export function DialogCustomProvider(props: Props) {
const auth = result.key
? globalSDK.client.auth.set({
providerID: result.providerID,
auth: {
body: {
type: "api",
key: result.key,
},

packages/opencode/src/cli/cmd/tui/component/dialog-provider.tsx

+11
@@ -265,7 +265,7 @@ function ApiMethod(props: ApiMethodProps) {
if (!value) return
await sdk.client.auth.set({
providerID: props.providerID,
auth: {
body: {
type: "api",
key: value,
},

packages/opencode/src/config/config.ts

+100
@@ -1194,6 +1194,16 @@ export namespace Config {
url: z.string().optional().describe("Enterprise URL"),
})
.optional(),
retention: z
.object({
days: z
.number()
.int()
.min(0)
.optional()
.describe("Auto-delete archived sessions older than this many days (default: 90, 0 = disabled)"),
})
.optional(),
compaction: z
.object({
auto: z.boolean().optional().describe("Enable automatic compaction when context is full (default: true)"),

packages/opencode/src/lsp/client.ts

+61
@@ -57,7 +57,12 @@ export namespace LSPClient {
count: params.diagnostics.length,
})
const exists = diagnostics.has(filePath)
diagnostics.set(filePath, params.diagnostics)
if (params.diagnostics.length === 0) {
if (!exists) return
diagnostics.delete(filePath)
} else {
diagnostics.set(filePath, params.diagnostics)
}
if (!exists && input.serverID === "typescript") return
Bus.publish(Event.Diagnostics, { path: filePath, serverID: input.serverID })
})

packages/opencode/src/session/compaction.ts

+30
@@ -92,6 +92,9 @@ export namespace SessionCompaction {
for (const part of toPrune) {
if (part.state.status === "completed") {
part.state.time.compacted = Date.now()
part.state.output = "[compacted]"
part.state.metadata = {}
part.state.attachments = undefined
await Session.updatePart(part)
}
}

packages/opencode/src/session/prompt.ts

+3717
@@ -1,8 +1,10 @@
import path from "path"
import os from "os"
import fs from "fs/promises"
import fsSync from "fs"
import z from "zod"
import { Filesystem } from "../util/filesystem"
import { Identifier } from "@/id/id"
import { SessionID, MessageID, PartID } from "./schema"
import { MessageV2 } from "./message-v2"
import { Log } from "../util/log"
@@ -1672,29 +1674,37 @@ NOTE: At any point in time through this workflow you should feel free to ask the
},
})
let output = ""
let head = ""
let spoolFd: number | undefined
let spoolPath: string | undefined
let totalBytes = 0
proc.stdout?.on("data", (chunk) => {
output += chunk.toString()
if (part.state.status === "running") {
part.state.metadata = {
output: output,
description: "",
const appendShell = (chunk: Buffer) => {
const str = chunk.toString()
totalBytes += str.length
if (head.length < Truncate.MAX_BYTES) {
head += str
}
if (totalBytes > Truncate.MAX_BYTES) {
if (spoolFd === undefined) {
spoolPath = path.join(Truncate.DIR, Identifier.ascending("tool"))

packages/opencode/src/storage/db.ts

+190
@@ -90,6 +90,15 @@ export namespace Database {
db.run("PRAGMA foreign_keys = ON")
db.run("PRAGMA wal_checkpoint(PASSIVE)")
// Migrate to incremental auto-vacuum if needed (one-time)
const vacuum = db.$client.prepare("PRAGMA auto_vacuum").get() as { auto_vacuum: number } | undefined
if (vacuum && vacuum.auto_vacuum === 0) {
log.info("migrating database to incremental auto-vacuum mode (one-time operation)...")
db.$client.run("PRAGMA auto_vacuum = 2")
db.$client.run("VACUUM")
log.info("auto-vacuum migration complete")
}
// Apply schema migrations
const entries =
typeof OPENCODE_MIGRATIONS !== "undefined"
@@ -111,6 +120,16 @@ export namespace Database {
return db
})
/** Run periodic WAL checkpoint (TRUNCATE mode) */
export function checkpoint() {
Client().$client.run("PRAGMA wal_checkpoint(TRUNCATE)")
}
/** Run incremental vacuum to reclaim free pages */
export function vacuum() {
Client().$client.run("PRAGMA incremental_vacuum(500)")
}
export function close() {
Client().$client.close()
Client.reset()

packages/opencode/src/tool/bash.ts

+5417
@@ -2,12 +2,12 @@ import z from "zod"
import { spawn } from "child_process"
import { Tool } from "./tool"
import path from "path"
import fs from "fs"
import DESCRIPTION from "./bash.txt"
import { Log } from "../util/log"
import { Instance } from "../project/instance"
import { lazy } from "@/util/lazy"
import { Language } from "web-tree-sitter"
import fs from "fs/promises"
import { Filesystem } from "@/util/filesystem"
import { fileURLToPath } from "url"
@@ -16,6 +16,7 @@ import { Shell } from "@/shell/shell"
import { BashArity } from "@/permission/arity"
import { Truncate } from "./truncate"
import { Identifier } from "@/id/id"
import { Plugin } from "@/plugin"
const MAX_METADATA_LENGTH = 30_000
@@ -116,7 +117,7 @@ export const BashTool = Tool.define("bash", async () => {
if (["cd", "rm", "cp", "mv", "mkdir", "touch", "chmod", "chown", "cat"].includes(command[0])) {
for (const arg of command.slice(1)) {
if (arg.startsWith("-") || (command[0] === "chmod" && arg.startsWith("+"))) continue
const resolved = await fs.realpath(path.resolve(cwd, arg)).catch(() => "")
const resolved = await fs.promises.realpath(path.resolve(cwd, arg)).catch(() => "")

packages/opencode/src/tool/edit.ts

+912
@@ -174,24 +174,21 @@ const SINGLE_CANDIDATE_SIMILARITY_THRESHOLD = 0.0
const MULTIPLE_CANDIDATES_SIMILARITY_THRESHOLD = 0.3
/**
* Levenshtein distance algorithm implementation
* Levenshtein distance — 2-row O(min(n,m)) memory instead of full O(n×m) matrix.
*/
function levenshtein(a: string, b: string): number {
// Handle empty strings
if (a === "" || b === "") {
return Math.max(a.length, b.length)
}
const matrix = Array.from({ length: a.length + 1 }, (_, i) =>
Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)),
)
if (a === "" || b === "") return Math.max(a.length, b.length)
if (a.length < b.length) [a, b] = [b, a]
let prev = Array.from({ length: b.length + 1 }, (_, j) => j)
let curr = new Array(b.length + 1)
for (let i = 1; i <= a.length; i++) {
curr[0] = i
for (let j = 1; j <= b.length; j++) {
const cost = a[i - 1] === b[j - 1] ? 0 : 1
matrix[i][j] = Math.min(matrix[i - 1][j] + 1, matrix[i][j - 1] + 1, matrix[i - 1][j - 1] + cost)
curr[j] = a[i - 1] === b[j - 1] ? prev[j - 1] : 1 + Math.min(prev[j], curr[j - 1], prev[j - 1])
}
;[prev, curr] = [curr, prev]

packages/opencode/src/util/rpc.ts

+114
@@ -44,10 +44,17 @@ export namespace Rpc {
}
return {
call<Method extends keyof T>(method: Method, input: Parameters<T[Method]>[0]): Promise<ReturnType<T[Method]>> {
const requestId = id++
return new Promise((resolve) => {
pending.set(requestId, resolve)
target.postMessage(JSON.stringify({ type: "rpc.request", method, input, id: requestId }))
const rid = id++
return new Promise((resolve, reject) => {
const timer = setTimeout(() => {
pending.delete(rid)
reject(new Error(`RPC timeout: ${String(method)}`))
}, 60_000)
pending.set(rid, (result) => {
clearTimeout(timer)
resolve(result)
})
target.postMessage(JSON.stringify({ type: "rpc.request", method, input, id: rid }))
})
},
on<Data>(event: string, handler: (data: Data) => void) {

packages/opencode/test/tool/bash.test.ts

+320
@@ -400,4 +400,36 @@ describe("tool.bash truncation", () => {
},
})
})
test("spools large output to disk and keeps full content recoverable", async () => {
await Instance.provide({
directory: projectRoot,
fn: async () => {
const bash = await BashTool.init()
// Generate 200 KB of output — well over the 50 KB spool threshold
const bytes = Truncate.MAX_BYTES * 4
const result = await bash.execute(
{
command: `head -c ${bytes} /dev/urandom | base64`,
description: "Generate large output for spool test",
},
ctx,
)
expect((result.metadata as any).truncated).toBe(true)
const filepath = (result.metadata as any).outputPath
expect(filepath).toBeTruthy()
// Spool file should be in the tool-output directory
expect(filepath).toContain("tool-output")
// Full output on disk should be larger than what we kept in memory
const saved = await Filesystem.readText(filepath)
expect(saved.length).toBeGreaterThan(Truncate.MAX_BYTES)
// The returned output to the agent should be truncated