#17670 · @dmitryryabkov · opened Mar 15, 2026 at 11:16 PM UTC · last updated Mar 21, 2026 at 8:08 PM UTC
feat(opencode): dynamic model discovery for local providers (LM Studio, llama.cpp, etc.)
Summary
This PR introduces dynamic model discovery for OpenAI-compatible providers, eliminating the need for manual opencode.json configuration. It adds a dynamicModelList option that fetches models from the /models API, significantly improving user experience for local AI setups. The solution is robustly tested and comes with clear examples and UI screenshots.
Description
Issue for this PR
Closes #6231
Type of change
- [ ] Bug fix
- [x] New feature
- [ ] Refactor / code improvement
- [ ] Documentation
What does this PR do?
Adds an option to populate the model list dynamically for OpenAI-compatible providers that support the `/models` API (as opposed to manually typing models into opencode.json):
- There's a new option which can be added to a `"provider"` in opencode.json: `"dynamicModelList": true`
- If the option is not set, the old behavior takes precedence (backward compatibility)
- When set to `true`, opencode issues an API request and retrieves the list of models supported by the provider
- If the list is returned successfully, the models are populated from the response
- If an explicit list of models is provided, it takes precedence even when the `"dynamicModelList": true` flag is set
This change makes working with local AI inference engines much nicer because it eliminates the need to update opencode.json every time a new model is added, or when the engine is started with a specific model as a parameter. The model list is now completely driven by the inference engine itself.
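The discovery flow described above can be sketched roughly as follows. This is an illustrative standalone sketch, not the code from this PR; `fetchDynamicModels`, `parseModelList`, and the `ModelEntry` shape are hypothetical names:

```typescript
// Minimal sketch of dynamic model discovery against an OpenAI-compatible
// /models endpoint. All identifiers here are illustrative.
interface ModelEntry {
  id: string
}

// Parse the standard OpenAI-style list response:
// { "object": "list", "data": [{ "id": "..." }, ...] }
function parseModelList(body: unknown): ModelEntry[] {
  const data = (body as { data?: Array<{ id?: unknown }> }).data ?? []
  return data
    .filter((m): m is { id: string } => typeof m.id === "string")
    .map((m) => ({ id: m.id }))
}

async function fetchDynamicModels(baseURL: string, apiKey?: string): Promise<ModelEntry[]> {
  const res = await fetch(`${baseURL}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  })
  // On failure (engine down, auth error), return nothing so the caller
  // can fall back to the statically configured model list.
  if (!res.ok) return []
  return parseModelList(await res.json())
}
```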
Example configuration (opencode.json):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llama.cpp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama.cpp (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "dynamicModelList": true
    },
    "lm_studio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "dynamicModelList": true
    }
  }
}
```
will result in the following model list in OpenCode, for LM Studio with "Just-in-Time Model Loading" enabled:

<img width="340" height="61" alt="image" src="https://github.com/user-attachments/assets/6cb49d59-3510-4986-bb8d-73638290dee3" />

or llama.cpp started without specifying a model:

<img width="340" height="63" alt="image" src="https://github.com/user-attachments/assets/ccd7b181-a307-4b0d-a47f-efa1314a48b8" />

The selected model will be loaded by the inference engine once the first /chat/completions request is received.
If, however, the inference engine is only serving specific models, only those models will be returned and available in the OpenCode model selector. For LM Studio with "Just-in-Time Model Loading" disabled:

<img width="300" height="58" alt="image" src="https://github.com/user-attachments/assets/781400f7-7ba6-4319-93a7-768358cfaa1e" />

or llama.cpp started with a specific model:

<img width="300" height="58" alt="image" src="https://github.com/user-attachments/assets/5bd14412-7461-4cb2-adaa-1616bf96c875" />

I'm aware that there are a number of open PRs for this issue already (e.g. #15732, #13234); however, I believe this is the most robust implementation because:
- it doesn't hard-code any specific provider name
- it maintains the existing behavior by default (if the new flag is not used)
- it works with any provider which supports the `/models` endpoint (not just local ones; e.g. Ollama Cloud also works just fine)
- it adds a bunch of tests
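For context, the `/models` endpoint this relies on returns the standard OpenAI list shape, roughly like the following (the model IDs shown are illustrative):

```json
{
  "object": "list",
  "data": [
    { "id": "qwen2.5-7b-instruct", "object": "model", "owned_by": "organization_owner" },
    { "id": "llama-3.1-8b-instruct", "object": "model", "owned_by": "organization_owner" }
  ]
}
```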
🤖 Developed with some help from OpenCode running against a local model hosted in llama.cpp!
How did you verify your code works?
Ran OpenCode with models hosted in LM Studio and llama.cpp. Tried various scenarios:
- with a single model loaded vs dynamic model loading (see the screenshots above)
- with/without auth
- with the inference engine down
Also tested it against Ollama Cloud.
Screenshots / recordings
See above
Checklist
- [x] I have tested my changes locally
- [x] I have not included unrelated changes in this PR
Linked Issues
#6231 Auto-discover models from OpenAI-compatible provider endpoints
Comments
No comments.
Changed Files
packages/opencode/src/config/config.ts +1−0
packages/opencode/src/provider/provider.ts +142−3
packages/opencode/test/provider/provider.dynamic-discovery.test.ts +559−0