---
title: Providers
description: Using any LLM provider in OpenCode.
---

import config from "../../../../config.mjs"

export const console = config.console

OpenCode uses the [AI SDK](https://ai-sdk.dev/) and [Models.dev](https://models.dev) to support **75+ LLM providers**, and it supports running local models.

To add a provider, you need to:

1. Add the provider's API keys using the `/connect` command.
2. Configure the provider in your OpenCode config.
---

### Credentials

When you use the `/connect` command to add a provider's API keys, they are stored in `~/.local/share/opencode/auth.json`.
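The exact layout of `auth.json` is an internal detail of OpenCode and may change between versions; as a rough, hypothetical sketch, it keys the stored credentials by provider ID:

```jsonc
// ~/.local/share/opencode/auth.json — hypothetical layout, not a stable format
{
  "anthropic": { "type": "api", "key": "sk-..." },
  "openrouter": { "type": "api", "key": "sk-or-..." }
}
```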
---

### Config

You can customize provider configuration through the `provider` section of your OpenCode config.
---

#### Base URL

You can customize the base URL for any provider by setting the `baseURL` option. This is useful when using a proxy service or a custom endpoint.

```json title="opencode.json" {6}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "options": {
        "baseURL": "https://api.anthropic.com/v1"
      }
    }
  }
}
```
---

## OpenCode Zen

OpenCode Zen is a curated list of models from the OpenCode team that have been tested and verified to work well with OpenCode. [Learn more](/docs/zen).

:::tip
If you are new to OpenCode, we recommend starting with OpenCode Zen.
:::

1. Run the `/connect` command in the TUI, select opencode, and head to [opencode.ai/auth](https://opencode.ai/auth).

   ```txt
   /connect
   ```

2. Sign in, add your billing details, and copy your API key.

3. Paste in your API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run `/models` in the TUI to see our list of recommended models.

   ```txt
   /models
   ```

It works just like any other provider in OpenCode and is completely optional to use.
---

## Directory

Let's look at some of these providers in detail. If you'd like to add a provider to the list, feel free to open a PR.

:::note
Don't see your provider here? Submit a PR.
:::
---

### 302.AI

1. Head over to the [302.AI console](https://302.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **302.AI**.

   ```txt
   /connect
   ```

3. Enter your 302.AI API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```
---

### Amazon Bedrock

To use Amazon Bedrock with OpenCode:

1. Head over to the **Model catalog** in the Amazon Bedrock console and request access to the models you want.

:::tip
You need to have access to the models you want to use in Amazon Bedrock.
:::

2. **Configure authentication using one of the following methods**:

#### Environment variables (quick start)

Set one of the following environment variables when running opencode:

```bash
# Option 1: Using AWS access keys
AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=YYY opencode

# Option 2: Using a named AWS profile
AWS_PROFILE=my-profile opencode

# Option 3: Using a Bedrock bearer token
AWS_BEARER_TOKEN_BEDROCK=XXX opencode
```

Or add them to your bash profile:

```bash title="~/.bash_profile"
export AWS_PROFILE=my-dev-profile
export AWS_REGION=us-east-1
```

#### Config file (recommended)

For project-specific or persistent configuration, use `opencode.json`:

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "amazon-bedrock": {
      "options": {
        "region": "us-east-1",
        "profile": "my-aws-profile"
      }
    }
  }
}
```

**Available options:**

- `region` - AWS region (e.g. `us-east-1`, `eu-west-1`)
- `profile` - Named AWS profile from `~/.aws/credentials`
- `endpoint` - Custom endpoint URL for VPC endpoints (alias for the common `baseURL` option)

:::tip
Config file options take precedence over environment variables.
:::

#### Advanced: VPC endpoints

If you use VPC endpoints for Bedrock:

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "amazon-bedrock": {
      "options": {
        "region": "us-east-1",
        "profile": "production",
        "endpoint": "https://bedrock-runtime.us-east-1.vpce-xxxxx.amazonaws.com"
      }
    }
  }
}
```

:::note
The `endpoint` option is an alias for the common `baseURL` option, using AWS-specific terminology. If both `endpoint` and `baseURL` are specified, `endpoint` takes precedence.
:::

#### Authentication methods

- **`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY`**: Create an IAM user and generate access keys in the AWS console.
- **`AWS_PROFILE`**: Use a named profile from `~/.aws/credentials`. Configure it first with `aws configure --profile my-profile` or `aws sso login`.
- **`AWS_BEARER_TOKEN_BEDROCK`**: Generate a long-term API key from the Amazon Bedrock console.
- **`AWS_WEB_IDENTITY_TOKEN_FILE` / `AWS_ROLE_ARN`**: For EKS IRSA (IAM Roles for Service Accounts) or other Kubernetes environments with OIDC federation. These environment variables are injected automatically by Kubernetes when using service account annotations.
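For reference, the `AWS_PROFILE` method reads from the standard AWS shared credentials file; a minimal named profile looks like this (placeholder values):

```ini title="~/.aws/credentials"
[my-profile]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```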
#### Authentication precedence

Amazon Bedrock uses the following authentication precedence:

1. **Bearer token** - the `AWS_BEARER_TOKEN_BEDROCK` environment variable or a token from `/connect`
2. **AWS credential chain** - profiles, access keys, shared credentials, IAM roles, web identity tokens (EKS IRSA), instance metadata

:::note
When a bearer token is set (via `/connect` or `AWS_BEARER_TOKEN_BEDROCK`), it takes precedence over all AWS credential methods, including configured profiles.
:::

3. Run the `/models` command to select the model you want.

   ```txt
   /models
   ```

:::note
For custom inference profiles, use the model and provider names in the key and set the `id` property to the ARN. This ensures correct caching:

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "amazon-bedrock": {
      // ...
      "models": {
        "anthropic-claude-sonnet-4.5": {
          "id": "arn:aws:bedrock:us-east-1:xxx:application-inference-profile/yyy"
        }
      }
    }
  }
}
```

:::
---

### Anthropic

1. Once signed up, run the `/connect` command and select Anthropic.

   ```txt
   /connect
   ```

2. Here you can select the **Claude Pro/Max** option; this opens your browser and asks you to authenticate.

   ```txt
   ┌ Select auth method
   │
   │ Claude Pro/Max
   │ Create an API Key
   │ Manually enter API Key
   └
   ```

3. Now all the Anthropic models should be available when you use the `/models` command.

   ```txt
   /models
   ```

:::info
Using your Claude Pro/Max subscription in OpenCode is not officially supported by [Anthropic](https://anthropic.com).
:::

##### Using an API key

If you don't have a Pro/Max subscription, you can also select **Create an API Key**. This also opens your browser, asks you to log in to Anthropic, and gives you a code you can paste into the terminal.

Alternatively, if you already have an API key, you can select **Manually enter API Key** and paste it into the terminal.
---

### Azure OpenAI

:::note
If you run into a "Sorry, but I can't assist with that request" error, try changing the content filter on your Azure resource from **DefaultV2** to **Default**.
:::

1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:
   - **Resource name**: This becomes part of your API endpoint (`https://RESOURCE_NAME.openai.azure.com/`)
   - **API key**: `KEY 1` or `KEY 2` from your resource

2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.

:::note
The deployment name must match the model name for opencode to work.
:::

3. Run the `/connect` command and search for **Azure**.

   ```txt
   /connect
   ```

4. Enter your API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

5. Set your resource name as an environment variable:

   ```bash
   AZURE_RESOURCE_NAME=XXX opencode
   ```

   Or add it to your bash profile:

   ```bash title="~/.bash_profile"
   export AZURE_RESOURCE_NAME=XXX
   ```

6. Run the `/models` command to select the model you deployed.

   ```txt
   /models
   ```
---

### Azure Cognitive Services

1. Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need:
   - **Resource name**: This becomes part of your API endpoint (`https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/`)
   - **API key**: `KEY 1` or `KEY 2` from your resource

2. Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model.

:::note
The deployment name must match the model name for opencode to work.
:::

3. Run the `/connect` command and search for **Azure Cognitive Services**.

   ```txt
   /connect
   ```

4. Enter your API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

5. Set your resource name as an environment variable:

   ```bash
   AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX opencode
   ```

   Or add it to your bash profile:

   ```bash title="~/.bash_profile"
   export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX
   ```

6. Run the `/models` command to select the model you deployed.

   ```txt
   /models
   ```
---

### Baseten

1. Head over to [Baseten](https://app.baseten.co/), create an account, and generate an API key.

2. Run the `/connect` command and search for **Baseten**.

   ```txt
   /connect
   ```

3. Enter your Baseten API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```
---

### Cerebras

1. Head over to the [Cerebras console](https://inference.cerebras.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **Cerebras**.

   ```txt
   /connect
   ```

3. Enter your Cerebras API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.

   ```txt
   /models
   ```
---

### Cloudflare AI Gateway

Cloudflare AI Gateway lets you access models from OpenAI, Anthropic, Workers AI, and more through a unified endpoint. With [Unified Billing](https://developers.cloudflare.com/ai-gateway/features/unified-billing/) you don't need separate API keys for each provider.

1. Head over to the [Cloudflare dashboard](https://dash.cloudflare.com/), navigate to **AI** > **AI Gateway**, and create a new gateway.

2. Set your account ID and gateway ID as environment variables.

   ```bash title="~/.bash_profile"
   export CLOUDFLARE_ACCOUNT_ID=your-32-character-account-id
   export CLOUDFLARE_GATEWAY_ID=your-gateway-id
   ```

3. Run the `/connect` command and search for **Cloudflare AI Gateway**.

   ```txt
   /connect
   ```

4. Enter your Cloudflare API token.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

   Or set it as an environment variable.

   ```bash title="~/.bash_profile"
   export CLOUDFLARE_API_TOKEN=your-api-token
   ```

5. Run the `/models` command to select a model.

   ```txt
   /models
   ```

You can also add models through your opencode config.

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "cloudflare-ai-gateway": {
      "models": {
        "openai/gpt-4o": {},
        "anthropic/claude-sonnet-4": {}
      }
    }
  }
}
```
---

### Cortecs

1. Head over to the [Cortecs console](https://cortecs.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **Cortecs**.

   ```txt
   /connect
   ```

3. Enter your Cortecs API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi K2 Instruct_.

   ```txt
   /models
   ```
---

### DeepSeek

1. Head over to the [DeepSeek console](https://platform.deepseek.com/), create an account, and click **Create new API key**.

2. Run the `/connect` command and search for **DeepSeek**.

   ```txt
   /connect
   ```

3. Enter your DeepSeek API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a DeepSeek model like _DeepSeek Reasoner_.

   ```txt
   /models
   ```
---

### Deep Infra

1. Head over to the [Deep Infra dashboard](https://deepinfra.com/dash), create an account, and generate an API key.

2. Run the `/connect` command and search for **Deep Infra**.

   ```txt
   /connect
   ```

3. Enter your Deep Infra API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```
---

### Firmware

1. Head over to the [Firmware dashboard](https://app.firmware.ai/signup), create an account, and generate an API key.

2. Run the `/connect` command and search for **Firmware**.

   ```txt
   /connect
   ```

3. Enter your Firmware API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```
---

### Fireworks AI

1. Head over to the [Fireworks AI console](https://app.fireworks.ai/), create an account, and click **Create API Key**.

2. Run the `/connect` command and search for **Fireworks AI**.

   ```txt
   /connect
   ```

3. Enter your Fireworks AI API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi K2 Instruct_.

   ```txt
   /models
   ```
---

### GitLab Duo

GitLab Duo provides agentic AI chat with native tool calling through GitLab's AI gateway.

1. Run the `/connect` command and select GitLab.

   ```txt
   /connect
   ```

2. Select your auth method:

   ```txt
   ┌ Select auth method
   │
   │ OAuth (Recommended)
   │ Personal Access Token
   └
   ```

#### Using OAuth (recommended)

Select **OAuth** and your browser will open to authorize.

#### Using a Personal Access Token

1. Go to [GitLab User Settings > Access Tokens](https://gitlab.com/-/user_settings/personal_access_tokens)
2. Click **Add new token**
3. Name: `OpenCode`, Scopes: `api`
4. Copy the token (it starts with `glpat-`)
5. Paste it in the terminal

3. Run the `/models` command to see the available models.

   ```txt
   /models
   ```

Claude-based models are available:

- **duo-chat-haiku-4-5** (default) - Fast responses for quick tasks
- **duo-chat-sonnet-4-5** - Balanced performance for most workflows
- **duo-chat-opus-4-5** - Most capable, for complex analysis
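To pin one of these as your default model, the usual opencode `provider/model` ID convention applies; a sketch (assuming you want Sonnet as the default):

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "model": "gitlab/duo-chat-sonnet-4-5"
}
```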
:::note
You can also set the `GITLAB_TOKEN` environment variable if you prefer not to store the token in the opencode auth store.
:::

##### Self-hosted GitLab

:::note[Compliance note]
OpenCode uses a small model for some AI tasks, such as generating session titles. By default, this is configured to use gpt-5-nano, hosted by Zen. To use a model hosted by your own GitLab instance instead, add the following to your `opencode.json` file. It is also recommended to disable session sharing.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "small_model": "gitlab/duo-chat-haiku-4-5",
  "share": "disabled"
}
```

:::

For self-hosted GitLab instances:

```bash
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_TOKEN=glpat-...
```

If your instance runs a custom AI gateway:

```bash
GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
```

Or add them to your bash profile:

```bash title="~/.bash_profile"
export GITLAB_INSTANCE_URL=https://gitlab.company.com
export GITLAB_AI_GATEWAY_URL=https://ai-gateway.company.com
export GITLAB_TOKEN=glpat-...
```

:::note
Your GitLab administrator must enable the following:

1. [Duo Agent Platform](https://docs.gitlab.com/user/gitlab_duo/turn_on_off/) for the user, group, or instance
2. Feature flags (via the Rails console):
   - `agent_platform_claude_code`
   - `third_party_agents_enabled`
:::

##### OAuth for self-hosted instances

For OAuth to work with your self-hosted instance, you need to create a new application (**Settings** → **Applications**) with the callback URL `http://127.0.0.1:8080/callback` and the following scopes:

- `api` (access the API on your behalf)
- `read_user` (read your personal information)
- `read_repository` (read-only access to repositories)

Then expose the application ID as an environment variable:

```bash
export GITLAB_OAUTH_CLIENT_ID=your_application_id_here
```

More documentation is available on the [opencode-gitlab-auth](https://www.npmjs.com/package/@gitlab/opencode-gitlab-auth) homepage.

##### Configuration

Customize through `opencode.json`:

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "gitlab": {
      "options": {
        "instanceUrl": "https://gitlab.com",
        "featureFlags": {
          "duo_agent_platform_agentic_chat": true,
          "duo_agent_platform": true
        }
      }
    }
  }
}
```

##### GitLab API tools (optional, but highly recommended)

To get access to GitLab tools (merge requests, issues, pipelines, CI/CD, and more):

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["@gitlab/opencode-gitlab-plugin"]
}
```

The plugin provides comprehensive GitLab repository management, including MR reviews, issue tracking, pipeline monitoring, and more.
---

### GitHub Copilot

To use your GitHub Copilot subscription with opencode:

:::note
Some models may require a [Pro+ subscription](https://github.com/features/copilot/plans).

Some models need to be manually enabled in your [GitHub Copilot settings](https://docs.github.com/en/copilot/how-tos/use-ai-models/configure-access-to-ai-models#setup-for-individual-use).
:::

1. Run the `/connect` command and search for GitHub Copilot.

   ```txt
   /connect
   ```

2. Navigate to [github.com/login/device](https://github.com/login/device) and enter the code.

   ```txt
   ┌ Login with GitHub Copilot
   │
   │ https://github.com/login/device
   │
   │ Enter code: 8F43-6FCF
   │
   └ Waiting for authorization...
   ```

3. Now run the `/models` command to select the model you want.

   ```txt
   /models
   ```
---

### Google Vertex AI

To use Google Vertex AI with OpenCode:

1. Head over to the **Model Garden** in the Google Cloud console and check which models are available in your region.

:::note
You need a Google Cloud project with the Vertex AI API enabled.
:::

2. Set the required environment variables:
   - `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID
   - `VERTEX_LOCATION` (optional): Region for Vertex AI (defaults to `global`)
   - Authentication (choose one):
     - `GOOGLE_APPLICATION_CREDENTIALS`: Path to a service account JSON key file
     - Authenticate with the gcloud CLI: `gcloud auth application-default login`

   Set them when running opencode.

   ```bash
   GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id opencode
   ```

   Or add them to your bash profile.

   ```bash title="~/.bash_profile"
   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
   export GOOGLE_CLOUD_PROJECT=your-project-id
   export VERTEX_LOCATION=global
   ```

:::tip
The `global` region improves availability and reduces errors at no extra cost. Use regional endpoints (e.g., `us-central1`) for data residency requirements. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#regional_and_global_endpoints)
:::

3. Run the `/models` command to select the model you want.

   ```txt
   /models
   ```
---

### Groq

1. Head over to the [Groq console](https://console.groq.com/), click **Create API Key**, and copy the key.

2. Run the `/connect` command and search for Groq.

   ```txt
   /connect
   ```

3. Enter your Groq API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select the model you want.

   ```txt
   /models
   ```
---

### Hugging Face

[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) provides access to open models supported by 17+ providers.

1. Head over to [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) to create a token with permission to make calls to Inference Providers.

2. Run the `/connect` command and search for **Hugging Face**.

   ```txt
   /connect
   ```

3. Enter your Hugging Face token.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_.

   ```txt
   /models
   ```
---

### Helicone

[Helicone](https://helicone.ai) is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway routes your requests to the appropriate provider automatically based on the model.

1. Head over to [Helicone](https://helicone.ai), create an account, and generate an API key from your dashboard.

2. Run the `/connect` command and search for **Helicone**.

   ```txt
   /connect
   ```

3. Enter your Helicone API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```

For more providers and advanced features like caching and rate limiting, check the [Helicone documentation](https://docs.helicone.ai).

#### Optional configuration

If some Helicone features or models are not configured automatically through opencode, you can always configure them yourself.

Here's [Helicone's Model Directory](https://helicone.ai/models); you'll need it to grab the IDs of the models you want to add.

```jsonc title="~/.config/opencode/opencode.jsonc"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai",
      },
      "models": {
        "gpt-4o": {
          // Model ID (from Helicone's model directory page)
          "name": "GPT-4o", // Your own custom name for the model
        },
        "claude-sonnet-4-20250514": {
          "name": "Claude Sonnet 4",
        },
      },
    },
  },
}
```

#### Custom headers

Helicone supports custom headers for features like caching, user tracking, and session management. Add them to your provider config with `options.headers`:

```jsonc title="~/.config/opencode/opencode.jsonc"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "helicone": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Helicone",
      "options": {
        "baseURL": "https://ai-gateway.helicone.ai",
        "headers": {
          "Helicone-Cache-Enabled": "true",
          "Helicone-User-Id": "opencode",
        },
      },
    },
  },
}
```

##### Session tracking

Helicone's [Sessions](https://docs.helicone.ai/features/sessions) feature lets you group related LLM requests together. Use the [opencode-helicone-session](https://github.com/H2Shami/opencode-helicone-session) plugin to automatically log each OpenCode conversation as a session in Helicone.

```bash
npm install -g opencode-helicone-session
```

Add it to your config.

```json title="opencode.json"
{
  "plugin": ["opencode-helicone-session"]
}
```

The plugin injects `Helicone-Session-Id` and `Helicone-Session-Name` headers into your requests. In Helicone's Sessions page, you'll see each OpenCode conversation as a separate session.

##### Common Helicone headers

| Header                     | Description                                                  |
| -------------------------- | ------------------------------------------------------------ |
| `Helicone-Cache-Enabled`   | Enable response caching (`true`/`false`)                     |
| `Helicone-User-Id`         | Track metrics per user                                       |
| `Helicone-Property-[Name]` | Add custom properties (e.g. `Helicone-Property-Environment`) |
| `Helicone-Prompt-Id`       | Associate requests with a prompt version                     |

See the [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory) for all available headers.
---

### llama.cpp

You can configure opencode to use local models through [llama.cpp's](https://github.com/ggml-org/llama.cpp) llama-server utility.

```json title="opencode.json" "llama.cpp" {5, 6, 8, 10-15}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "llama.cpp": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "llama-server (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "qwen3-coder:a3b": {
          "name": "Qwen3-Coder: a3b-30b (local)",
          "limit": {
            "context": 128000,
            "output": 65536
          }
        }
      }
    }
  }
}
```

In this example:

- `llama.cpp` is the custom provider ID. This can be any string.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
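For the config above to work, llama-server must be listening on the same port as `baseURL`. A sketch of starting it (the GGUF path is a placeholder; `-m`, `--port`, and `-c` are standard llama-server flags):

```bash
# Serve a local GGUF model over an OpenAI-compatible API at http://127.0.0.1:8080/v1
llama-server -m ./models/qwen3-coder-30b-a3b.gguf --port 8080 -c 32768
```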
---

### IO.NET

IO.NET offers 17 models optimized for a variety of use cases:

1. Head over to the [IO.NET console](https://ai.io.net/), create an account, and generate an API key.

2. Run the `/connect` command and search for **IO.NET**.

   ```txt
   /connect
   ```

3. Enter your IO.NET API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model.

   ```txt
   /models
   ```
---

### LM Studio

You can configure opencode to use local models through LM Studio.

```json title="opencode.json" "lmstudio" {5, 6, 8, 10-14}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "models": {
        "google/gemma-3n-e4b": {
          "name": "Gemma 3n-e4b (local)"
        }
      }
    }
  }
}
```

In this example:

- `lmstudio` is the custom provider ID. This can be any string.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.
---

### Moonshot AI

To use Kimi K2 from Moonshot AI:

1. Head over to the [Moonshot AI console](https://platform.moonshot.ai/console), create an account, and click **Create API key**.

2. Run the `/connect` command and search for **Moonshot AI**.

   ```txt
   /connect
   ```

3. Enter your Moonshot API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select _Kimi K2_.

   ```txt
   /models
   ```
---

### MiniMax

1. Head over to the [MiniMax API Console](https://platform.minimax.io/login), create an account, and generate an API key.

2. Run the `/connect` command and search for **MiniMax**.

   ```txt
   /connect
   ```

3. Enter your MiniMax API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _M2.1_.

   ```txt
   /models
   ```
---

### Nebius Token Factory

1. Head over to the [Nebius Token Factory console](https://tokenfactory.nebius.com/), create an account, and click **Add Key**.

2. Run the `/connect` command and search for **Nebius Token Factory**.

   ```txt
   /connect
   ```

3. Enter your Nebius Token Factory API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _Kimi K2 Instruct_.

   ```txt
   /models
   ```
---

### Ollama

You can configure opencode to use local models through Ollama.

:::tip
Ollama can automatically configure itself for OpenCode. See the [Ollama integration docs](https://docs.ollama.com/integrations/opencode) for details.
:::

```json title="opencode.json" "ollama" {5, 6, 8, 10-14}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama2": {
          "name": "Llama 2"
        }
      }
    }
  }
}
```

In this example:

- `ollama` is the custom provider ID. This can be any string.
- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API.
- `name` is the display name for the provider in the UI.
- `options.baseURL` is the endpoint for the local server.
- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list.

:::tip
If tool calls are stalling, try increasing `num_ctx` in Ollama. Start with something around 16k to 32k.
:::
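One way to persist a larger `num_ctx` is to bake it into a derived model with a Modelfile (standard Ollama Modelfile syntax; the base model here is just the example used in the config above):

```txt title="Modelfile"
FROM llama2
PARAMETER num_ctx 32768
```

Build it with `ollama create llama2-32k -f Modelfile`, then reference `llama2-32k` in your `models` map.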
---

### Ollama Cloud

To use Ollama Cloud with OpenCode:

1. Head over to [https://ollama.com/](https://ollama.com/) and sign in or create an account.

2. Navigate to **Settings** > **Keys** and click **Add API key** to generate a new API key.

3. Copy the API key to use in OpenCode.

4. Run the `/connect` command and search for **Ollama Cloud**.

   ```txt
   /connect
   ```

5. Enter your Ollama Cloud API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

6. **Important**: Before using cloud models in OpenCode, you must pull the model information locally:

   ```bash
   ollama pull gpt-oss:20b-cloud
   ```

7. Run the `/models` command to select your Ollama Cloud model.

   ```txt
   /models
   ```
---

### OpenAI

We recommend signing up for [ChatGPT Plus or Pro](https://chatgpt.com/pricing).

1. Once signed up, run the `/connect` command and select OpenAI.

   ```txt
   /connect
   ```

2. Here you can select the **ChatGPT Plus/Pro** option; this opens your browser and asks you to authenticate.

   ```txt
   ┌ Select auth method
   │
   │ ChatGPT Plus/Pro
   │ Manually enter API Key
   └
   ```

3. Now all the OpenAI models should be available when you use the `/models` command.

   ```txt
   /models
   ```

##### Using an API key

If you already have an API key, you can select **Manually enter API Key** and paste it into the terminal.
---

### OpenCode Zen

OpenCode Zen is a curated list of models from the OpenCode team that have been tested and verified. [Learn more](/docs/zen).

1. Sign in to **<a href={console}>OpenCode Zen</a>** and click **Create API Key**.

2. Run the `/connect` command and search for **OpenCode Zen**.

   ```txt
   /connect
   ```

3. Enter your OpenCode API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _Qwen 3 Coder 480B_.

   ```txt
   /models
   ```
---

### OpenRouter

1. Head over to the [OpenRouter dashboard](https://openrouter.ai/settings/keys), click **Create API Key**, and copy the key.

2. Run the `/connect` command and search for OpenRouter.

   ```txt
   /connect
   ```

3. Enter your OpenRouter API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. A number of OpenRouter models are preloaded by default; run the `/models` command to select the one you want.

   ```txt
   /models
   ```

You can also add other models through your opencode config.

```json title="opencode.json" {6}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openrouter": {
      "models": {
        "somecoolnewmodel": {}
      }
    }
  }
}
```

5. You can also customize models through your opencode config. Here's an example that specifies the provider routing order.

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openrouter": {
      "models": {
        "moonshotai/kimi-k2": {
          "options": {
            "provider": {
              "order": ["baseten"],
              "allow_fallbacks": false
            }
          }
        }
      }
    }
  }
}
```
---

### SAP AI Core

SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform.

1. Go to your [SAP BTP Cockpit](https://account.hana.ondemand.com/), navigate to your SAP AI Core service instance, and create a service key.

:::tip
The service key is a JSON object containing `clientid`, `clientsecret`, `url`, and `serviceurls.AI_API_URL`. You can find your AI Core instance under **Services** > **Instances and Subscriptions** in the BTP Cockpit.
:::

2. Run the `/connect` command and search for **SAP AI Core**.

   ```txt
   /connect
   ```

3. Enter your service key JSON.

   ```txt
   ┌ Service key
   │
   │
   └ enter
   ```

   Or set the `AICORE_SERVICE_KEY` environment variable:

   ```bash
   AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' opencode
   ```

   Or add it to your bash profile:

   ```bash title="~/.bash_profile"
   export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}'
   ```

4. (Optional) Set a deployment ID and resource group:

   ```bash
   AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group opencode
   ```

:::note
These settings are optional and should be configured based on your SAP AI Core setup.
:::

5. Run the `/models` command to choose from the 40+ available models.

   ```txt
   /models
   ```
---

### OVHcloud AI Endpoints

1. Head over to the [OVHcloud panel](https://ovh.com/manager). Navigate to the `Public Cloud` section, `AI & Machine Learning` > `AI Endpoints`, and in the `API Keys` tab, click **Create a new API key**.

2. Run the `/connect` command and search for **OVHcloud AI Endpoints**.

   ```txt
   /connect
   ```

3. Enter your OVHcloud AI Endpoints API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _gpt-oss-120b_.

   ```txt
   /models
   ```
---

### Scaleway

To use [Scaleway Generative APIs](https://www.scaleway.com/en/docs/generative-apis/) with opencode:

1. Head over to the [Scaleway Console IAM settings](https://console.scaleway.com/iam/api-keys) to generate a new API key.

2. Run the `/connect` command and search for **Scaleway**.

   ```txt
   /connect
   ```

3. Enter your Scaleway API key.

   ```txt
   ┌ API key
   │
   │
   └ enter
   ```

4. Run the `/models` command to select a model like _devstral-2-123b-instruct-2512_ or _gpt-oss-120b_.

   ```txt
   /models
   ```
---
|
||
|
||
### Together AI

1. Head over to the [Together AI console](https://api.together.ai), create an account, and click **Add Key**.

2. Run the `/connect` command and search for **Together AI**.

```txt
/connect
```

3. Enter your Together AI API key.

```txt
┌ API key
│
│
└ enter
```

4. Run the `/models` command to select a model like _Kimi K2 Instruct_.

```txt
/models
```

---
### Venice AI

1. Head over to the [Venice AI console](https://venice.ai), create an account, and generate an API key.

2. Run the `/connect` command and search for **Venice AI**.

```txt
/connect
```

3. Enter your Venice AI API key.

```txt
┌ API key
│
│
└ enter
```

4. Run the `/models` command to select a model like _Llama 3.3 70B_.

```txt
/models
```

---
### Vercel AI Gateway

Vercel AI Gateway gives you access to models from OpenAI, Anthropic, Google, xAI, and more through a unified endpoint. Models are offered at list price with no markup.

1. Head over to the [Vercel dashboard](https://vercel.com/), navigate to the **AI Gateway** tab, and click **API keys** to create a new API key.

2. Run the `/connect` command and search for **Vercel AI Gateway**.

```txt
/connect
```

3. Enter your Vercel AI Gateway API key.

```txt
┌ API key
│
│
└ enter
```

4. Run the `/models` command to select a model.

```txt
/models
```

You can also customize models through your OpenCode config. Here's an example that specifies the provider routing order.

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "vercel": {
      "models": {
        "anthropic/claude-sonnet-4": {
          "options": {
            "order": ["anthropic", "vertex"]
          }
        }
      }
    }
  }
}
```

Some useful routing options:

| Option              | Description                                          |
| ------------------- | ---------------------------------------------------- |
| `order`             | The order in which providers are tried               |
| `only`              | Restrict routing to specific providers               |
| `zeroDataRetention` | Only use providers with a zero data retention policy |
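
For instance, `only` and `zeroDataRetention` could be combined like this. The model and provider slugs are illustrative, and the exact value shapes may differ; check the Vercel AI Gateway docs for your account.

```json title="opencode.json"
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "vercel": {
      "models": {
        "anthropic/claude-sonnet-4": {
          "options": {
            "only": ["anthropic", "vertex"],
            "zeroDataRetention": true
          }
        }
      }
    }
  }
}
```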

---
### xAI

1. Head over to the [xAI console](https://console.x.ai/), create an account, and generate an API key.

2. Run the `/connect` command and search for **xAI**.

```txt
/connect
```

3. Enter your xAI API key.

```txt
┌ API key
│
│
└ enter
```

4. Run the `/models` command to select a model like _Grok Beta_.

```txt
/models
```

---
### Z.AI

1. Head over to the [Z.AI API console](https://z.ai/manage-apikey/apikey-list), create an account, and click **Create a new API key**.

2. Run the `/connect` command and search for **Z.AI**.

```txt
/connect
```

If you are subscribed to the **GLM Coding Plan**, select **Z.AI Coding Plan** instead.

3. Enter your Z.AI API key.

```txt
┌ API key
│
│
└ enter
```

4. Run the `/models` command to select a model like _GLM-4.7_.

```txt
/models
```

---
### ZenMux

1. Head over to the [ZenMux dashboard](https://zenmux.ai/settings/keys), click **Create API Key**, and copy the key.

2. Run the `/connect` command and search for **ZenMux**.

```txt
/connect
```

3. Enter your ZenMux API key.

```txt
┌ API key
│
│
└ enter
```

4. Several ZenMux models are preloaded by default. Run the `/models` command to select the one you want.

```txt
/models
```

You can also add other models through your OpenCode config.

```json title="opencode.json" {6}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "zenmux": {
      "models": {
        "somecoolnewmodel": {}
      }
    }
  }
}
```

---
## Custom providers

To add any **OpenAI-compatible** provider that isn't listed in the `/connect` command:

:::tip
You can use any OpenAI-compatible provider with OpenCode. Most modern AI providers offer an OpenAI-compatible API.
:::

1. Run the `/connect` command and scroll down to **Other**.

```bash
$ /connect

┌ Add credential
│
◆ Select provider
│ ...
│ ● Other
└
```

2. Enter a unique ID for your provider.

```bash
$ /connect

┌ Add credential
│
◇ Enter provider id
│ myprovider
└
```

:::note
Pick an ID that's easy to remember; you'll use it in your config file.
:::

3. Enter your provider's API key.

```bash
$ /connect

┌ Add credential
│
▲ This only stores a credential for myprovider - you will need to configure it in opencode.json, check the docs for examples.
│
◇ Enter your API key
│ sk-...
└
```

4. Create or update your `opencode.json` file in your project directory:

```json title="opencode.json" ""myprovider"" {5-15}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My AI Provider Display Name",
      "options": {
        "baseURL": "https://api.myprovider.com/v1"
      },
      "models": {
        "my-model-name": {
          "name": "My Model Display Name"
        }
      }
    }
  }
}
```

Here are the configuration options:

- **npm**: The AI SDK package to use; `@ai-sdk/openai-compatible` for OpenAI-compatible providers.
- **name**: The display name shown in the UI.
- **models**: The models that are available.
- **options.baseURL**: The API endpoint URL.
- **options.apiKey**: Optionally set an API key if you're not using the `/connect` credentials.
- **options.headers**: Optionally set custom headers.

See the example below for more on the advanced options.

5. Run the `/models` command; your custom provider and models will show up in the selection list.

---
##### Example

Here's an example that sets the `apiKey`, `headers`, and model `limit` options.

```json title="opencode.json" {9,11,17-20}
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My AI Provider Display Name",
      "options": {
        "baseURL": "https://api.myprovider.com/v1",
        "apiKey": "{env:ANTHROPIC_API_KEY}",
        "headers": {
          "Authorization": "Bearer custom-token"
        }
      },
      "models": {
        "my-model-name": {
          "name": "My Model Display Name",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        }
      }
    }
  }
}
```

Configuration details:

- **apiKey**: Set using the `env` variable syntax. [Learn more](/docs/config#env-vars).
- **headers**: Custom headers sent with every request.
- **limit.context**: The maximum input tokens the model accepts.
- **limit.output**: The maximum tokens the model can generate.

The `limit` fields let OpenCode know how much context you have left. For standard providers, these are pulled automatically from models.dev.
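
As a rough illustration of how such limits act as a token budget (a sketch of the idea only, not OpenCode's actual accounting):

```python
# Sketch of budgeting against the `limit` values from the config above.
# The subtraction of the output reservation is an assumption for
# illustration, not OpenCode's real implementation.
limits = {"context": 200_000, "output": 65_536}

def remaining_context(used_tokens: int, limits: dict) -> int:
    """Input tokens still available after reserving room for the output."""
    return max(0, limits["context"] - limits["output"] - used_tokens)

print(remaining_context(50_000, limits))  # 84464
```

Once the remaining budget approaches zero, a client would need to compact or truncate the conversation before the next request.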

---
## Troubleshooting

If you run into issues configuring a provider, check the following:

1. **Check the auth setup**: Run `opencode auth list` to see if the credentials for the provider have been added.

   This doesn't apply to providers like Amazon Bedrock that rely on environment variables for authentication.

2. For custom providers, check your OpenCode config and make sure:
   - The provider ID used in the `/connect` command matches the ID in your OpenCode config.
   - The correct npm package is used for the provider. For example, use `@ai-sdk/cerebras` for Cerebras. For all other OpenAI-compatible providers, use `@ai-sdk/openai-compatible`.
   - The API endpoint used in the `options.baseURL` field is correct.