#17670 · @dmitryryabkov · opened Mar 15, 2026 at 11:16 PM UTC · last updated Mar 21, 2026 at 8:08 PM UTC

feat(opencode): dynamic model discovery for local providers (LM Studio, llama.cpp, etc.)

appfeat
70
+702 −3 · 3 files

Score breakdown

  • Impact: 9.0
  • Clarity: 9.0
  • Urgency: 6.0
  • Ease of review: 9.0
  • Guidelines: 8.0
  • Readiness: 8.0
  • Size: 1.0
  • Trust: 5.0
  • Traction: 8.0

Summary

This PR introduces dynamic model discovery for OpenAI-compatible providers, eliminating the need for manual opencode.json configuration. It adds a dynamicModelList option that fetches models from the /models API, significantly improving user experience for local AI setups. The solution is robustly tested and comes with clear examples and UI screenshots.


Description

Issue for this PR

Closes #6231

Type of change

  • [ ] Bug fix
  • [x] New feature
  • [ ] Refactor / code improvement
  • [ ] Documentation

What does this PR do?

Adds an option for dynamic model list population for OpenAI-compatible providers supporting the /models API (as opposed to manually typing the models into opencode.json):

  • There's a new option which can be added to a "provider" in opencode.json: "dynamicModelList": true
  • If the option is not set, the old behavior takes precedence (backward compatibility)
  • When set to true, OpenCode issues an API request and retrieves the list of models supported by the provider
  • If the list is returned successfully, populates the models from the response
  • If an explicit list of models is provided, it takes precedence even when the "dynamicModelList": true flag is set

This change makes working with local AI inference engines much nicer, because it eliminates the need to update opencode.json every time a new model is added, or if the engine is started with the specific model as a parameter. The model list is now completely driven by the inference engine itself.
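The discovery flow described above can be sketched roughly as follows. This is an illustrative sketch, not the PR's actual code: the helper names (`discoverModels`, `toModelMap`) and the simplified model shape are hypothetical.

```typescript
// Minimal shape of an OpenAI-compatible /models response entry.
interface ModelEntry {
  id: string
}

interface ModelListResponse {
  object: string
  data: ModelEntry[]
}

// Convert a /models payload into a provider model map keyed by model ID.
// (Hypothetical helper; the PR's real model type carries more fields.)
function toModelMap(body: ModelListResponse): Record<string, { name: string }> {
  return Object.fromEntries(body.data.map((m) => [m.id, { name: m.id }]))
}

// Fetch the model list from the provider's baseURL. On any failure
// (HTTP error or engine down) return an empty map so the caller can
// fall back to the statically configured models.
async function discoverModels(baseURL: string, apiKey?: string): Promise<Record<string, { name: string }>> {
  const headers: Record<string, string> = apiKey ? { Authorization: `Bearer ${apiKey}` } : {}
  try {
    const res = await fetch(`${baseURL}/models`, { headers })
    if (!res.ok) return {}
    return toModelMap((await res.json()) as ModelListResponse)
  } catch {
    return {}
  }
}
```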

Example configuration (opencode.json):

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llama.cpp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama.cpp (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "dynamicModelList": true
    },
    "lm_studio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "dynamicModelList": true
    }
  }
}

will result in the following model list in OpenCode, for LM Studio with "Just-in-Time Model Loading" enabled:

<img width="340" height="61" alt="image" src="https://github.com/user-attachments/assets/6cb49d59-3510-4986-bb8d-73638290dee3" />

or llama.cpp started without specifying a model:

<img width="340" height="63" alt="image" src="https://github.com/user-attachments/assets/ccd7b181-a307-4b0d-a47f-efa1314a48b8" />

The selected model will be loaded by the inference engine once the first /chat/completions request is received.
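For context, an OpenAI-compatible /models endpoint returns a payload of the following shape; the model ID and field values below are illustrative:

```json
{
  "object": "list",
  "data": [
    {
      "id": "qwen2.5-7b-instruct",
      "object": "model",
      "created": 1715367049,
      "owned_by": "organization_owner"
    }
  ]
}
```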

If, however, the inference engine is only serving specific models, only those models will be returned and made available in the OpenCode model selector. For LM Studio with "Just-in-Time Model Loading" disabled:

<img width="300" height="58" alt="image" src="https://github.com/user-attachments/assets/781400f7-7ba6-4319-93a7-768358cfaa1e" />

or llama.cpp started with a specific model:

<img width="300" height="58" alt="image" src="https://github.com/user-attachments/assets/5bd14412-7461-4cb2-adaa-1616bf96c875" />

I'm aware that there are a number of open PRs for this issue already (e.g. #15732, #13234); however, I believe this is the most robust implementation because:

  • it doesn't hard-code any specific provider name
  • it maintains the existing behavior by default (if the new flag is not used)
  • it works with any provider that supports the /models endpoint (not just local ones; e.g. Ollama Cloud also works just fine)
  • it adds a bunch of tests

🤖 Developed with some help from OpenCode running against a local model hosted in llama.cpp!

How did you verify your code works?

Ran OpenCode with models hosted in LM Studio and llama.cpp, and tried various scenarios:

  • with a single model loaded vs dynamic model loading (see the screenshots above)
  • with/without auth
  • with the inference engine down

Also tested it against Ollama Cloud.

Screenshots / recordings

See above

Checklist

  • [x] I have tested my changes locally
  • [x] I have not included unrelated changes in this PR

Linked Issues

#6231 Auto-discover models from OpenAI-compatible provider endpoints


Comments

No comments.

Changed Files

packages/opencode/src/config/config.ts

+1 −0
@@ -997,6 +997,7 @@ export namespace Config {
}),
)
.optional(),
dynamicModelList: z.boolean().optional().describe("Enable automatic model discovery from OpenAI-compatible /models endpoint"),
options: z
.object({
apiKey: z.string().optional(),

packages/opencode/src/provider/provider.ts

+142 −3
@@ -431,9 +431,9 @@ export namespace Provider {
const location = String(
provider.options?.location ??
Env.get("GOOGLE_VERTEX_LOCATION") ??
Env.get("GOOGLE_CLOUD_LOCATION") ??
Env.get("VERTEX_LOCATION") ??
Env.get("GOOGLE_VERTEX_LOCATION") ??
Env.get("GOOGLE_CLOUD_LOCATION") ??
Env.get("VERTEX_LOCATION") ??
"us-central1",
)
@@ -750,6 +750,141 @@ export namespace Provider {
},
}
/**
* Populate the provider models dynamically using provider config
* Returns models, or an empty object on failure
*/
async function populateDynamicModels(providerID: string, provider: any): Promise<Record<string, Model>> {
// Get base URL from config, or throw an exception
const baseURL = provider.options?.baseURL
if (!baseURL) {
log.error("Missing baseURL for dynamic model discovery", { providerID })
throw new InitError({ providerID: providerID })
}
// Get auth credentials
const key = provider.options?.apiKey
const auth = key ? {"type": "api", "key": key} : await Auth.get(providerID)
// Discover models
const discoveredModels

packages/opencode/test/provider/provider.dynamic-discovery.test.ts

+559 −0
@@ -0,0 +1,559 @@
import { test, expect, beforeEach, afterEach, mock } from "bun:test"
import path from "path"
import { tmpdir } from "../fixture/fixture"
import { Instance } from "../../src/project/instance"
import { Provider } from "../../src/provider/provider"
import { ProviderID } from "@/provider/schema"
// Mock fetch for dynamic model discovery tests
let originalFetch: typeof global.fetch
let mockFetch: ReturnType<typeof mock>
beforeEach(() => {
originalFetch = global.fetch
mockFetch = mock((url: string | URL | Request) =>
Promise.resolve({
ok: true,
json: () => Promise.resolve({ data: [] }),
}),
)
global.fetch = mockFetch as any
})
afterEach(() => {
global.fetch = originalFetch
mockFetch.mockReset()
})
test("dynamic model discovery - successful", async () => {
await using tmp = await tmpdir({
init: async (dir) => {
await Bun.write(
path.join(dir, "opencode.json"),
JSON.stringify({
$schema: "https://opencode.ai/config.json",
provider: {
"test-openai-compatible": {
npm: "@ai-sdk/openai-compatible",
options: {