EdgeOne Pages Node Functions Proxy for CNB LLM API, fully compatible with OpenAI standard interface, solving CORS issues.
English | 简体中文
Mirror Repositories: CNB, GitHub
This project proxies the CNB AI Chat Completions API, converting it to OpenAI standard interface format for easy use with various OpenAI-compatible clients.
```bash
curl --request POST \
  --url https://api.cnb.cool/{repo}/-/ai/chat/completions \
  --header 'Accept: application/json' \
  --header 'Authorization: Bearer <YOUR_CNB_TOKEN>' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      {
        "content": "string",
        "role": "string"
      }
    ],
    "model": "string",
    "stream": true
  }'
```
Authentication: Requires CNB access token with repo-code:r permission, passed via Authorization: Bearer <token> header.
```bash
curl --request POST \
  --url https://your-edge-pages-domain/v1/chat/completions \
  --header 'Authorization: Bearer YOUR_CNB_TOKEN' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      {
        "content": "string",
        "role": "string"
      }
    ],
    "model": "string",
    "stream": true
  }'
```
| Comparison | Original CNB API | Proxied (OpenAI Standard) |
|---|---|---|
| Endpoint | POST /{repo}/-/ai/chat/completions | POST /v1/chat/completions |
| Authentication | Authorization: Bearer <token> | Authorization: Bearer <token> (pass-through) |
| Repository Config | Specified in URL path | Configured via CNB_REPO environment variable |
| API Key | CNB Token | CNB Token, optional `sk-` prefix (compatibility mechanism) |

| Endpoint | Method | Description |
|---|---|---|
| /v1/chat/completions | POST | Chat completions endpoint |
| /v1/embeddings | POST | Embeddings endpoint |
| /v1/models | GET | Models list endpoint |
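The proxying described above can be sketched as two small helpers. The function names and the upstream URL construction here are illustrative assumptions, not the project's actual source:

```javascript
// Sketch: accept API keys with or without the "sk-" prefix
// (the compatibility mechanism), yielding the raw CNB token.
function stripSkPrefix(token) {
  return token.startsWith('sk-') ? token.slice(3) : token;
}

// Sketch: build the upstream CNB URL from the CNB_REPO and
// CNB_AI_PATH environment variables (hypothetical helper).
function buildUpstreamUrl(env) {
  const repo = env.CNB_REPO; // e.g. "owner/project/repo"
  const path = env.CNB_AI_PATH || '/-/ai/chat/completions';
  return `https://api.cnb.cool/${repo}${path}`;
}
```

This is why clients can pass either the raw CNB token or an `sk-`-prefixed key: the proxy normalizes the credential before forwarding it.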
Connect your Git repository; EdgeOne Pages will automatically detect the Node Functions.
In EdgeOne Pages console: Project Settings → Environment Variables, add:
| Variable | Description | Example | Required |
|---|---|---|---|
| CNB_REPO | CNB repository path (owner/project/repo) | Mintimate/code-nest/cnb-edge-gateway | ✅ |
| CNB_AI_PATH | Custom AI endpoint path; leave empty for the default | /-/ai/chat/completions | ❌ |
| CNB_EMBEDDINGS_PATH | Embeddings endpoint path; required for the embeddings feature | No default | ❌ |
| CUSTOM_MODELS | Custom model list, comma-separated | model-a,model-b,model-c | ❌ |
Note:

- `CNB_AI_PATH` defaults vary by endpoint:
  - `/v1/chat/completions` → `/-/ai/chat/completions`
  - `/v1/models` → `/-/ai/models`
- `CNB_EMBEDDINGS_PATH` is used for the embeddings endpoint. It is required if you want to use the embeddings feature (e.g., `/-/ai/embeddings`).
- `CUSTOM_MODELS` sets the model list returned by the `/v1/models` endpoint. If set, `owned_by` will be `custom`; if not set, the default model `hunyuan-2.0-instruct` is used with `owned_by` as `cnb-default`.
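The `CUSTOM_MODELS` behaviour described above can be sketched as follows; the function name is hypothetical and only illustrates the documented defaults:

```javascript
// Sketch: build an OpenAI-style /v1/models response from the
// comma-separated CUSTOM_MODELS environment variable.
function buildModelList(customModels) {
  const names = (customModels || '')
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean);
  // With CUSTOM_MODELS set, owned_by is "custom"; otherwise fall
  // back to the default model with owned_by "cnb-default".
  const ownedBy = names.length ? 'custom' : 'cnb-default';
  const ids = names.length ? names : ['hunyuan-2.0-instruct'];
  return {
    object: 'list',
    data: ids.map((id) => ({ id, object: 'model', owned_by: ownedBy })),
  };
}
```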
Push code or manually trigger deployment.
| Configuration | Value |
|---|---|
| Base URL | https://your-edge-pages-domain/v1 |
| API Key | Your CNB Token (with or without sk- prefix) |
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-edge-pages-domain/v1",
    api_key="YOUR_CNB_TOKEN"
)

# Streaming call
stream = client.chat.completions.create(
    model="any",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
```javascript
const response = await fetch('https://your-edge-pages-domain/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_CNB_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'any',
    stream: true,
    messages: [{ role: 'user', content: 'Hello!' }],
  }),
});

// Handle SSE stream
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  console.log(decoder.decode(value));
}
```
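The loop above prints raw SSE frames. To extract just the text deltas, you can parse the `data:` lines, assuming OpenAI-style `chat.completion.chunk` payloads (a sketch; not part of the proxy itself):

```javascript
// Sketch: pull content deltas out of a buffer of SSE text.
function extractDeltas(sseText) {
  const out = [];
  for (const line of sseText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) out.push(delta);
  }
  return out.join('');
}
```

In a real client you would accumulate decoded chunks into a buffer first, since an SSE frame can be split across reads.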
All error responses follow OpenAI standard format:
```json
{
  "error": {
    "message": "Error description",
    "type": "error_type",
    "param": null,
    "code": null
  }
}
```
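A response in this shape can be produced by a small helper; this is an illustrative sketch, not the project's actual error handler:

```javascript
// Sketch: wrap an error in the OpenAI-standard envelope,
// pairing it with an HTTP status code.
function openAIError(message, type, status = 500) {
  return {
    status,
    body: {
      error: { message, type, param: null, code: null },
    },
  };
}
```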
| Status Code | Type | Description |
|---|---|---|
| 401 | authentication_error | Missing Authorization header |
| 500 | server_error | Server configuration error or internal error |
```
cnb-edge-gateway/
├── node-functions/
│   └── v1/
│       ├── chat/
│       │   └── completions/
│       │       └── index.js   # POST /v1/chat/completions
│       ├── embeddings/
│       │   └── index.js       # POST /v1/embeddings
│       └── models/
│           └── index.js       # GET /v1/models
└── README.md
```
MIT