#7984 · @dgokeeffe · opened Jan 12, 2026 at 11:31 AM UTC · last updated Mar 21, 2026 at 4:14 AM UTC
feat(opencode): Add Databricks provider support
Summary
This PR adds Databricks provider support to opencode, enabling connections to Databricks Foundation Model APIs with dynamic model discovery and SDK-based authentication. It addresses issue #7983 with detailed architectural explanations and extensive unit testing.
Description
Summary
Adds Databricks Foundation Model APIs as a new provider, enabling opencode users to connect to their Databricks workspace's pay-per-token LLM endpoints.
Fixes #7983
Changes
- Provider implementation (`provider.ts`): Full Databricks provider with per-model SDK routing (Claude -> Anthropic SDK, GPT/Codex -> OpenAI Responses API, Gemini/others -> OpenAI-compatible)
- SDK auth (`@databricks/sdk-experimental`): Uses the official Databricks JS SDK for all authentication; supports PAT, OAuth, Azure AD, and all other SDK-supported auth methods automatically
- Dynamic model discovery: All models are discovered dynamically via the Serving Endpoints API - no hardcoded model list. Family defaults (context window, capabilities) are assigned based on model name pattern matching.
- Codex support: Codex models are routed through the OpenAI Responses API (required by Databricks). A stream transformer normalizes Databricks' mismatched item IDs to be compatible with the AI SDK. OpenAI-specific features (encrypted reasoning content, reasoning summaries) are excluded.
- Auth guidance (`auth.ts`, `dialog-provider.tsx`): Added Databricks to the auth login flow with SDK-based credential detection
- Test cleanup (`preload.ts`): Clears Databricks env vars between tests
- Unit tests (`databricks.test.ts`): Tests covering config parsing, auth, URL handling, model capabilities, SDK routing, dynamic discovery, prompt caching, and Codex integration
Authentication
Uses @databricks/sdk-experimental for authentication, which supports all Databricks auth methods automatically:
- PAT token via `DATABRICKS_TOKEN`
- OAuth M2M via `DATABRICKS_CLIENT_ID` + `DATABRICKS_CLIENT_SECRET`
- Azure AD, Databricks CLI profiles, and all other SDK-supported methods
The SDK handles credential resolution, token refresh, and header injection.
Model discovery
All models come from dynamic discovery via the Serving Endpoints API - there are no hardcoded model definitions. The provider:
- Lists all serving endpoints in the workspace
- Filters for Foundation Model API endpoints with chat task type
- Matches model names to known families (claude, gpt, codex, gemini) for capability defaults
- Only includes models from families known to reliably support tool calling
Users can add custom model endpoints via opencode.json to override or supplement discovered models.
Architecture
`databricksFetch`: Custom fetch wrapper that:
- Injects SDK auth headers
- Fixes empty content responses (Databricks rejects `content: ""`)
- Normalizes Responses API streams - Databricks' proxy uses different item IDs in `output_item.added` events vs `content_part`/`output_text` events. The transformer tracks IDs by `output_index` and rewrites mismatched `item_id` fields to be consistent.
- Transforms Gemini streaming format (array content to string)

`getModel`: Routes each model to the appropriate AI SDK:
- GPT/Codex -> `@ai-sdk/openai` Responses API (required for Codex)
- Claude -> `@ai-sdk/anthropic` (native Anthropic API)
- Others -> `@ai-sdk/openai-compatible` (chat completions)

`toProviderModel`: Detects model families to set correct capabilities (streaming, reasoning, prompt caching)

`transform.ts`: Databricks models skip OpenAI-specific provider options (encrypted reasoning content, reasoning summaries, `previous_response_id`) that the Databricks proxy doesn't support

Prompt caching is enabled for Databricks-hosted Claude models via `transform.ts` integration.
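The family-based routing above can be sketched roughly as follows. This is a minimal illustration, not the actual `provider.ts` code: the function and type names are hypothetical, and the gpt-oss exclusion is an assumption based on the chat-completions handling described elsewhere in this PR.

```typescript
// Hypothetical sketch of per-model SDK routing; names do not match provider.ts.
type SdkRoute = "openai-responses" | "anthropic" | "openai-compatible"

export function routeModel(modelName: string): SdkRoute {
  const name = modelName.toLowerCase()
  // Claude models use the native Anthropic API.
  if (name.includes("claude")) return "anthropic"
  // GPT and Codex models must use the Responses API on Databricks.
  // (Assumption: gpt-oss stays on chat completions, so it is excluded here.)
  if (!name.includes("gpt-oss") && (name.includes("gpt") || name.includes("codex"))) {
    return "openai-responses"
  }
  // Everything else (Gemini, Llama, ...) falls back to OpenAI-compatible chat completions.
  return "openai-compatible"
}
```

A substring match on the endpoint name is enough here because Databricks foundation model endpoints follow a predictable `databricks-<family>-<version>` naming scheme.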
Verification
- All 36 tests pass: `bun test packages/opencode/test/provider/databricks.test.ts`
- Tested locally with PAT authentication against a Databricks workspace
- Verified raw SSE stream from Databricks Responses API to confirm the ID mismatch root cause
- Full typecheck and build pass (pre-push hook verified)
Linked Issues
#7983 Support for Databricks Foundation Model APIs provider
PR comments
cbcoutinho
@dgokeeffe I am really looking forward to this PR landing, especially after seeing your post on LinkedIn regarding running opencode on a cluster via databricks ssh .... Great work!
I'm interested in running this locally, although without a Databricks PAT if possible. Can you provide a comment regarding auth via azure-cli or databricks-cli?
mdlam92
is this PR supposed to make databricks show up as a provider in the /connect command?
i checked out your branch locally and built it and was trying to use it to use models in databricks but not sure if this implements all that
mdlam92
[screenshot: 2026-01-21-124311_1670x490_scrot]

> is this PR supposed to make databricks show up as a provider in the `/connect` command? i checked out your branch locally and built it and was trying to use it to use models in databricks but not sure if this implements all that
oh i got it working, i guess i had to manually add my provider to my opencode.json?? im not entirely sure why that worked tho
dgokeeffe
@mdlam92 - I pulled the trigger a bit too quick on this PR. As I tested it more and more, I had to make some changes. Databricks should now appear as a provider on the Connect screen.
There were a few changes I had to make that are in this PR:
- Empty content handling - The AI SDK sends content: "" for assistant messages that only have tool calls. Databricks Model Serving (which uses OpenAI-compatible endpoints) rejects these empty strings. This PR filters out empty content and transforms it to content: null where needed.
- Prompt caching - Added support for Databricks prompt caching via cache_control on system messages and recent conversation turns. This works for models that support it (GPT, Gemini, Claude via Databricks).
- Host URL prompt in Connect screen - Unlike other providers with fixed API endpoints, Databricks requires your workspace-specific URL (e.g., https://your-workspace.cloud.databricks.com). I added an extra prompt in the /connect flow to capture the host URL along with your API key.
Authentication options (in order of precedence):
- PAT Token - Set DATABRICKS_HOST and DATABRICKS_TOKEN environment variables
- Databricks CLI - Run databricks auth login, and the provider will use your cached token from ~/.databricks/token-cache.json
- Azure CLI (for Azure Databricks) - If you're logged in with az login, it will use that for workspaces on *.azuredatabricks.net
- OAuth M2M - Set DATABRICKS_HOST, DATABRICKS_CLIENT_ID, and DATABRICKS_CLIENT_SECRET
Quick start
Option 1: Using Databricks CLI (easiest)
databricks auth login --host https://your-workspace.cloud.databricks.com
Option 2: Using environment variables
export DATABRICKS_HOST="https://your-workspace.cloud.databricks.com"
export DATABRICKS_TOKEN="your-pat-token"
Once auth is configured, Databricks should auto-load with default models (Claude, GPT-5, Gemini). You shouldn't need to manually edit opencode.json unless you want to add custom model endpoints.
Let me know if you run into any issues, and anything I need to do to get this merged in.
hellomikelo
Just tested this PR and can confirm Databricks can be connected as a model provider.
But gpt 5.2 endpoint needs to change to use the /responses endpoint.
Bad Request: Model databricks-gpt-5-1-codex-max only supports the Responses API. Please use /serving-endpoints/responses instead.
elementalvoid
I'm super stoked to see this! I searched for such a provider a few weeks ago but it had to have been just before you created the issue. Last week I took a go at adding support, but I did so as a plugin where you decided to integrate into the CLI. Official in-built support will be great but I had no idea what I was getting myself into so I wanted to start a bit more out of band. I got a working version and was just publishing it internally to my company today for folks to try out when someone informed me of your PR!
I've a couple of thoughts/questions as I read and compare where I landed...
- I am spoiled by the GH Copilot provider autoloading available models, so I chose to use a Databricks API (`${workspace_host}/api/2.0/serving-endpoints`) to enumerate the available models.
  - This meant my plugin picks up the foundational models plus any custom and external models automatically. This API provides detailed model information (cost, capabilities, etc.).
  - It meant that I only show available models, whether that be due to permissions or other availability requirements (related to your comment that "These are the pay-per-token endpoints available in most workspaces").
  - Sadly, this API does not provide context window sizing, so maybe that's reason enough to use hardcoded model configs.
- Model costs: Since the models returned from (1) included costs in DBU units, I created a configurable `dbu_rate` so that if my DBU cost or rate was different than someone else's it could be adjusted. But I see you are already converting from DBUs to USD. Super transparently, I have no clue if their DBUs are static across users or not. Do you know?
I ran into some other issues (errors about stream_options, parallel tool calls, gemini thoughtSignatures, etc.) that led me to do much more transformation than you did. I'm hoping that was due to my implementation. But I'll try to test your version out soon and see if I run into any similar issues.
elementalvoid
I got a chance to test last night. I'm excited to see this work happening but I had some pretty major troubles....
I'm going to detail my issues below. I'll post a second comment with a summary of all the translations that I had to make. Sorry/not-sorry about the amount of text that is about to appear.
I'm happy to participate however you find it most helpful. I'm unsure at the moment about making my provider plugin public but I can pursue that internally if it would help.
1. /connect and auth login don't work
Like @mdlam92, /connect (and opencode auth login) does not show the new provider. I exported the host/key env vars and it gets enabled automatically.
2. GPT Codex doesn't work
I can replicate @hellomikelo's databricks-gpt-5-1-codex-max issue.
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode run --model databricks/databricks-gpt-5-1-codex-max 'hello'
Error: Bad Request: Model databricks-gpt-5-1-codex-max only supports the Responses API. Please use /serving-endpoints/responses instead.
I will note that non-codex gpt models work fine:
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode run --model databricks/databricks-gpt-5-1 'hello'
Hi! What are you working on today, or how can I help with your code?
3. Model availability
Not all of the hard coded models are available. Notably the gemma, gpt-oss, and llama models are missing.
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode models
databricks/databricks-claude-3-7-sonnet
databricks/databricks-claude-haiku-4-5
databricks/databricks-claude-opus-4-1
databricks/databricks-claude-opus-4-5
databricks/databricks-claude-sonnet-4
databricks/databricks-claude-sonnet-4-5
databricks/databricks-gemini-2-5-flash
databricks/databricks-gemini-2-5-pro
databricks/databricks-gemini-3-flash
databricks/databricks-gemini-3-pro
databricks/databricks-gpt-5
databricks/databricks-gpt-5-1
databricks/databricks-gpt-5-1-codex-max
databricks/databricks-gpt-5-2
databricks/databricks-gpt-5-mini
databricks/databricks-gpt-5-nano
4. Gemini models don't work
4a. MCP tool calls
When using a Databricks hosted Gemini model, MCP Tool calls need to have the JSON Schema sanitized to remove the $schema entry. Without this we get the following:
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode run --model databricks/databricks-gemini-3-flash 'hello'
Error: Bad Request: {
"error": {
"code": 400,
"message": "Invalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[0].parameters': Cannot find field.\nInvalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[1].parameters': Cannot find field.\nInvalid JSON payload received.",
"status": "INVALID_ARGUMENT",
"details": [
{
"@type": "type.googleapis.com/google.rpc.BadRequest",
"fieldViolations": [
{
"field": "tools[0].function_declarations[0].parameters",
"description": "Invalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[0].parameters': Cannot find field."
},
{
"field": "tools[0].function_declarations[1].parameters",
"description": "Invalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[1].parameters': Cannot find field."
}
]
}
]
}
}
I condensed that to two tools but it is dependent on how many tools you have.
I have a set of changes that resolve this but they're very likely not the right way to resolve it as I sanitized both the gemini model requests and all mcp tool requests (for all models and providers .. gross).
4b. Response structure string vs. array
With the MCP schema issue resolved, requests with a Gemini model now get a response, but Databricks returns an array of objects for the content instead of the expected string:
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode run --model databricks/databricks-gemini-3-flash 'hello'
Error: AI_TypeValidationError: Type validation failed: Value: {"model":"gemini-3-flash-preview","choices":[{"delta":{"role":"assistant","content":[{"type":"text","text":"Hello! How can I help you with your codebase today?","thoughtSignature":"CiEBjz1rXyr7BGOKA1RF3FdIrRxBsGEHM2WT7j+0CLfurTo="}]},"index":0,"finish_reason":"stop"}],"usage":{"prompt_tokens":18295,"completion_tokens":12,"total_tokens":18307},"object":"chat.completion.chunk","id":null,"created":1770142691}.
Error message: [{"code":"invalid_union","errors":[[{"expected":"string","code":"invalid_type","path":["choices",0,"delta","content"],"message":"Invalid input: expected string, received array"}],[{"expected":"object","code":"invalid_type","path":["error"],"message":"Invalid input: expected object, received undefined"}]],"path":[],"message":"Invalid input"}]
We need to transform the response's `choices.content` (returned as an array of objects) back into a string, and deal with the `thoughtSignature` embedded in each object. But we quickly spiral into other transformations too (see next comment).
elementalvoid
Regarding the rest of the transformations I had to make, I had Opus create the following summary of all of the specializations that I added. I honestly don't know if all of them are strictly required, but with these I found that all of my models worked without issue.
1. Gemini-Specific Transformations
Tool Schema Sanitization (utils/sanitize-tools.ts)
Gemini has strict JSON Schema requirements. The following sanitizations are applied:
- Strips `$schema` field - Gemini doesn't support this standard JSON Schema field
- Resolves `$ref` inline - Gemini poorly handles JSON Schema references, so they're inlined
- Removes `definitions`/`$defs` - After inlining refs, these are deleted
- Strips non-standard `ref` field - Some tools use `ref` (without the `$`), which Gemini interprets incorrectly
- Preserves only `$ref`, `description`, and `default` when a ref is present
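The `$schema`-stripping step can be sketched as below. This is a simplified illustration only; the real sanitizer in `utils/sanitize-tools.ts` also inlines `$ref`s and handles the other cases listed above, and the function name here is hypothetical.

```typescript
// Recursively remove the "$schema" key from a JSON Schema object,
// since Gemini rejects this standard field in tool parameter schemas.
export function stripSchemaField(schema: unknown): unknown {
  if (Array.isArray(schema)) return schema.map(stripSchemaField)
  if (schema && typeof schema === "object") {
    const out: Record<string, unknown> = {}
    for (const [key, value] of Object.entries(schema as Record<string, unknown>)) {
      if (key === "$schema") continue // Gemini: Unknown name "$schema" ... Cannot find field
      out[key] = stripSchemaField(value)
    }
    return out
  }
  return schema
}
```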
Tool Call History Handling (convert-to-openai-messages.ts:281-314)
Gemini has thought_signature requirements that break multi-turn tool conversations:
- Historical tool calls → text - Converted to `[Called tool: name(args)]` format
- Historical tool results → user messages - Become user messages with `[Tool result for name: content]` format
- Only current-turn tool calls preserved - The most recent assistant message retains the actual tool call structure
Thought Signature Preservation (databricks-chat-model.ts:471-556)
Gemini's thoughtSignature is extracted and passed through providerMetadata.databricks.thoughtSignature.
2. Stream Options Handling
Some models reject stream_options (databricks-chat-model.ts:232-237):
| Model Family | stream_options |
|--------------|------------------|
| Llama | Omitted |
| Qwen | Omitted |
| Gemma | Omitted |
| GPT-OSS | Omitted |
| Claude | Included |
| Gemini | Included |
| Other | Included |
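The table above reduces to a small family check. A minimal sketch (the function name is hypothetical; the real check lives in `databricks-chat-model.ts:232-237`):

```typescript
// Model families whose Databricks endpoints reject the stream_options parameter.
const NO_STREAM_OPTIONS = ["llama", "qwen", "gemma", "gpt-oss"]

export function supportsStreamOptions(modelName: string): boolean {
  const name = modelName.toLowerCase()
  // Claude, Gemini, GPT, and anything else keep stream_options included.
  return !NO_STREAM_OPTIONS.some((family) => name.includes(family))
}
```

Note that matching the full `"gpt-oss"` string keeps ordinary GPT models (e.g. `databricks-gpt-5-1`) on the "included" path.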
3. Message Format Transformations
AI SDK → OpenAI-compatible format (convert-to-openai-messages.ts)
| Message Type | Transformation |
|--------------|----------------|
| System | Passed through directly |
| User | Text and file parts converted; images formatted to data URLs |
| Assistant | Text extracted, tool calls converted, empty content → null |
| Tool | Result output converted to string (handles text, json, error-text, error-json types) |
Empty Content Normalization
When the assistant message has tool calls but no text, content becomes `null` (not an empty string) - this is a Databricks requirement: `content: textContent || (toolCalls.length > 0 ? null : '')`
4. Image Formatting (utils/format-image.ts)
Normalizes image data to proper URLs:
| Input Type | Output |
|------------|--------|
| URL objects | String URL |
| Binary data (Uint8Array/ArrayBuffer) | Base64 data URL (chunked encoding to avoid stack overflow) |
| String data URLs | Passed through |
| HTTP/HTTPS URLs | Passed through |
| Raw base64 string | Prefixed with data:{mediaType};base64, |
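The chunked base64 path from the table can be sketched as follows. This is a simplified illustration of the technique rather than the actual `utils/format-image.ts` code; `btoa` is available in Bun, Node 16+, and browsers.

```typescript
// Encode binary image data as a base64 data URL. Encoding in chunks avoids
// the stack overflow that String.fromCharCode(...bytes) can hit when the
// spread argument list exceeds the engine's argument limit on large images.
export function toDataUrl(data: Uint8Array, mediaType: string): string {
  const CHUNK = 8192
  let binary = ""
  for (let i = 0; i < data.length; i += CHUNK) {
    binary += String.fromCharCode(...data.subarray(i, i + CHUNK))
  }
  return `data:${mediaType};base64,${btoa(binary)}`
}
```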
5. Finish Reason Mapping (map-openai-finish-reason.ts)
Maps OpenAI finish reasons to AI SDK format:
| OpenAI | AI SDK |
|--------|--------|
| stop | stop |
| length | length |
| content_filter | content-filter |
| tool_calls | tool-calls |
| function_call | tool-calls |
| (other) | unknown |
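This mapping table translates directly into a small switch. A sketch consistent with the table (the actual `map-openai-finish-reason.ts` may differ in naming):

```typescript
type FinishReason = "stop" | "length" | "content-filter" | "tool-calls" | "unknown"

// Map an OpenAI-style finish_reason string to the AI SDK's hyphenated variants.
export function mapFinishReason(reason: string | null | undefined): FinishReason {
  switch (reason) {
    case "stop":
      return "stop"
    case "length":
      return "length"
    case "content_filter":
      return "content-filter"
    case "tool_calls":
    case "function_call": // legacy function-calling responses map the same way
      return "tool-calls"
    default:
      return "unknown"
  }
}
```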
6. Response Content Handling
Handles both response formats (databricks-chat-model.ts:291-309, 433-458):
- String content - Directly used
- Array of content parts - Text parts extracted and concatenated (for multi-modal responses)
Key Design Decisions
- `parallel_tool_calls` omitted - Databricks doesn't support this parameter
- No `cache_control` - Stripped from text parts (Anthropic-specific)
- Model-agnostic tool format - Converts to the OpenAI function calling format universally
elementalvoid
I gave this another test this morning and most of what I've had issues with is resolved. Thanks again for your work on this!
Remaining issues that I experienced in testing:
- Codex models need to use the `responses` endpoint: `Error: Bad Request: Model databricks-gpt-5-1-codex-max only supports the Responses API. Please use /serving-endpoints/responses instead.`
- Since Databricks is not listed on models.dev, we don't get dynamic model availability updates - only the hard-coded list, which is already out of date. My account has Sonnet 4.6, Opus 4.6, GPT 5.2 Codex, and 5.1 Codex Mini available. But in order for me to use them I must hard-code them in my config, and to do that I have to dig around in the Serving UI and docs to figure out the context windows, modality, etc. For custom models this of course makes sense, but I would hope for a more dynamic way to autodiscover the foundational models. Honestly though, I understand this is a bit of a non-functional ask, so... ¯\_(ツ)_/¯
dgokeeffe
Update: Squashed to a single commit and fixed Codex model support.
What changed since last update
- Codex models now work via the Responses API. The root cause was that Databricks' Responses API proxy uses different item IDs in `output_item.added` events (short ID like `msg_05cf...`) vs `content_part`/`output_text` events (long ID like `msg_01ad...`). The AI SDK maps `text-start` from the first and looks up `text-delta` from the second — when they don't match, it errors with `"text part msg_... not found"`. Fixed by adding a stream normalizer in `databricksFetch` that rewrites `item_id` in delta events to match the registered `output_item.added` ID.
- Squashed 7 commits → 1 clean commit for easier review.
- Transform guards added so Databricks GPT/Codex models don't request OpenAI-specific features (`encrypted_content`, `reasoningSummary`) that the proxy doesn't support.
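For illustration, the ID-rewriting idea can be sketched on parsed event objects. The event shape and names here are hypothetical; the real normalizer in `databricksFetch` operates on the raw SSE stream.

```typescript
// Minimal shape for the Responses API stream events we care about.
type ResponsesEvent = {
  type: string
  output_index?: number
  item?: { id: string }
  item_id?: string
}

// Returns a stateful normalizer that tracks the canonical item ID per
// output_index (from output_item.added) and rewrites mismatched item_id
// fields on subsequent delta events to match it.
export function makeIdNormalizer() {
  const idsByIndex = new Map<number, string>()
  return function normalize(event: ResponsesEvent): ResponsesEvent {
    if (event.type === "response.output_item.added" && event.item && event.output_index !== undefined) {
      idsByIndex.set(event.output_index, event.item.id)
    } else if (event.item_id !== undefined && event.output_index !== undefined) {
      // Databricks' proxy emits a different (long) ID here; rewrite it to the
      // (short) ID the AI SDK registered from output_item.added.
      const canonical = idsByIndex.get(event.output_index)
      if (canonical !== undefined) event.item_id = canonical
    }
    return event
  }
}
```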
CI status
The only failing test (tool.registry > loads tools with external dependencies without crashing) is a pre-existing upstream issue from #12227 — bun install --no-cache doesn't resolve the cowsay dependency in CI. All Databricks tests pass (36/36). Typecheck, compliance, standards, and nix-eval all pass.
Testing done
- All 36 unit tests pass locally
- Verified raw SSE stream from Databricks Responses API via curl to confirm the ID mismatch root cause
- Tested with PAT auth against a live Databricks workspace
- Full typecheck and build pass
@thdxr @fwang — would appreciate a review when you get a chance. Happy to make any changes needed.
Nozzie
What is the progress on this PR? I would love to be able to use Databricks as a model provider.
chiggly007
Agree - what is missing here? Honestly, opencode team, this is literally lost revenue.
Review comments
elementalvoid
Looks like this works for (most?) built-in tools, but not MCP tools.
With context7 enabled:
❯ grep -A5 '"mcp":' .opencode/opencode.jsonc
"mcp": {
"context7": {
"enabled": true,
"type": "remote",
"url": "https://mcp.context7.com/mcp",
},
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode run --model databricks/databricks-gemini-3-flash 'hello'
Error: Bad Request: {
"error": {
"code": 400,
"message": "Invalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[31].parameters': Cannot find field.\nInvalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[32].parameters': Cannot find field.",
"status": "INVALID_ARGUMENT",
"details": [
{
"@type": "type.googleapis.com/google.rpc.BadRequest",
"fieldViolations": [
{
"field": "tools[0].function_declarations[31].parameters",
"description": "Invalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[31].parameters': Cannot find field."
},
{
"field": "tools[0].function_declarations[32].parameters",
"description": "Invalid JSON payload received. Unknown name \"$schema\" at 'tools[0].function_declarations[32].parameters': Cannot find field."
}
]
}
]
}
}
If we disable context7 we get a fun new error on the builtin question tool:
❯ grep -A5 '"mcp":' .opencode/opencode.jsonc
"mcp": {
"context7": {
"enabled": false,
"type": "remote",
"url": "https://mcp.context7.com/mcp",
},
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode run --model databricks/databricks-gemini-3-flash 'hello'
Error: Bad Request: {
"error": {
"code": 400,
"message": "Schema.ref 'QuestionOption' was set alongside unsupported fields. If a schema node has Schema.ref set, then only description and default can be set alongside it; other fields they would be replaced by the expanded reference.",
"status": "INVALID_ARGUMENT"
}
}
elementalvoid
I'm not sure the best way to populate this, but I think models here being empty causes this error when no auth is configured (not in auth.json and not via env var):
❯ env | grep -c DATABRICKS
0
❯ grep -ic databricks ~/.local/share/opencode/auth.json
0
❯ ./packages/opencode/dist/opencode-darwin-arm64/bin/opencode
{
"name": "UnknownError",
"data": {
"message": "TypeError: undefined is not an object (evaluating 'Provider2.sort(Object.values(item.models))[0].id')\n at <anonymous> (src/server/routes/provider.ts:68:93)\n at o3 (../../node_modules/.bun/remeda@2.26.0/node_modules/remeda/dist/chunk-3ZJAREUD.js:1:137)\n at <anonymous> (src/server/routes/provider.ts:68:20)\n at processTicksAndRejections (native:7:39)"
}
}
A similar error condition might exist in packages/opencode/src/provider/provider.ts where the databricks provider is added to the models.dev "database". It doesn't appear that that location uses the models array in any way, but I'm not positive one way or the other.
elementalvoid
What this means in a usability sense is that you cannot use /connect inside of the TUI to configure the provider. You can use env vars or you can use opencode auth login though. Once env vars are set, models are accessible.
elementalvoid
I noticed a PR today that fixes just the Gemini issues... https://github.com/anomalyco/opencode/pull/12292
Might wait for / integrate that? ¯\_(ツ)_/¯
elementalvoid
Gemini models are working for me now.
elementalvoid
This too is resolved!
fjakobs
much of this code can be replaced by the JS SDK https://www.npmjs.com/package/@databricks/sdk-experimental
As a bonus you'll also get consistency on how the other Databricks tools handle auth
dgokeeffe
Done! Refactored to use @databricks/sdk-experimental for all auth. The SDK's Config class handles credential resolution (PAT, OAuth, Azure AD, CLI profiles, etc.) and token refresh, and WorkspaceClient.servingEndpoints.list() handles model discovery. Manual CLI token cache reading and custom auth logic have been removed.
See commits 0c1ad14d8 (refactor) and 2e5dbe1c4 (bug fixes).
Changed Files
- bun.lock (+417 −329)
- package.json (+1 −0)
- packages/opencode/package.json (+1 −0)
- packages/opencode/src/auth/index.ts (+9 −1)
- packages/opencode/src/cli/cmd/auth.ts (+36 −0)
- packages/opencode/src/cli/cmd/tui/component/dialog-model.tsx (+16 −2)
- packages/opencode/src/cli/cmd/tui/component/dialog-provider.tsx (+367 −1)
- packages/opencode/src/cli/cmd/tui/component/prompt/index.tsx (+10 −4)
- packages/opencode/src/cli/cmd/tui/context/local.tsx (+12 −1)
- packages/opencode/src/cli/cmd/tui/context/sync.tsx (+3 −0)
- packages/opencode/src/cli/cmd/tui/routes/session/footer.tsx (+29 −0)
- packages/opencode/src/cli/cmd/tui/ui/dialog-prompt.tsx (+10 −0)
- packages/opencode/src/provider/auth.ts (+2 −0)
- packages/opencode/src/provider/databricks-profile.ts (+23 −0)
- packages/opencode/src/provider/provider.ts (+463 −0)
- packages/opencode/src/provider/transform.ts (+101 −5)
- packages/opencode/src/server/routes/config.ts (+6 −1)
- packages/opencode/src/server/routes/provider.ts (+19 −1)
- packages/opencode/src/session/message-v2.ts (+14 −6)
- packages/opencode/src/session/prompt.ts (+8 −1)
- packages/opencode/test/preload.ts (+7 −0)
- packages/opencode/test/provider/databricks-profile.test.ts (+60 −0)
- packages/opencode/test/provider/databricks.test.ts (+1521 −0)
- packages/opencode/test/provider/transform.test.ts (+1478 −0)
- packages/sdk/js/src/v2/gen/types.gen.ts (+19 −1)
- packages/ui/package.json (+2 −0)
- packages/web/src/content/docs/providers.mdx (+149 −0)