Model Naming & Routing
1. Convention for the model field
ApiSet.ai Gateway uses a single `model` field to represent both the provider and the model, separated by a colon:
- Format:
provider:modelId
Examples:
- DeepSeek chat model: `deepseek:deepseek-chat`
- DeepSeek chat model (calling DeepSeek’s official model directly): `deepseek-chat`
- AliyunBailian‑hosted DeepSeek chat model: `AliyunBailian:deepseek-chat`
- DeepSeek reasoning model: `deepseek:deepseek-reasoner`
- OpenAI GPT‑4.1 mini: `openai:gpt-4.1-mini`
Where:
- provider – the provider identifier used inside the gateway, for example: `deepseek`, `openai`, `siliconflow`, `AliyunBailian`
- modelId – the concrete model name, consistent with each provider’s official docs, for example: `deepseek-chat`, `deepseek-reasoner`, `gpt-4.1-mini`
You can optionally omit the provider part.
For example, if you just send `deepseek-chat`, the gateway can fall back to the default provider configuration for that model.
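The convention above can be sketched as a simple parse with a default-provider fallback. This is illustrative only, not gateway source: the `DEFAULT_PROVIDERS` table below is a hypothetical stand-in for whatever default mapping the gateway maintains internally.

```python
# Illustrative sketch of the provider:modelId convention (not gateway source).
# DEFAULT_PROVIDERS is a hypothetical fallback table for provider-less model names.
DEFAULT_PROVIDERS = {
    "deepseek-chat": "deepseek",
    "deepseek-reasoner": "deepseek",
    "gpt-4.1-mini": "openai",
}

def parse_model(model: str) -> tuple[str, str]:
    """Split a gateway model string into (provider, modelId)."""
    if ":" in model:
        provider, model_id = model.split(":", 1)
        return provider, model_id
    # No provider prefix: fall back to the default provider for this model.
    return DEFAULT_PROVIDERS[model], model

print(parse_model("AliyunBailian:deepseek-chat"))  # ('AliyunBailian', 'deepseek-chat')
print(parse_model("deepseek-chat"))                # ('deepseek', 'deepseek-chat')
```

Note that the split uses only the first colon, so a `modelId` containing a colon would still round-trip correctly.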
2. Routing behavior (transparent to callers)
When you send a request:
- The gateway parses `provider` and `modelId` from the `model` string.
- It chooses the actual upstream endpoint and the real model name accordingly.
- Before forwarding the request, it:
  - Rewrites the `model` field to the provider’s real model name (for example `deepseek-chat`).
  - Replaces the auth header with the correct provider key.
You don’t need to worry about how provider/model mappings are maintained internally.
As long as `model` is set correctly, the gateway will route your request to the expected provider and model.
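Putting those steps together, the pre-forward rewrite can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the `UPSTREAMS` table, endpoint paths, and key values are placeholders, not the gateway's real configuration.

```python
# Rough sketch of the pre-forward rewrite (illustrative; endpoints and keys are assumptions).
UPSTREAMS = {
    "deepseek": {"base_url": "https://api.deepseek.com/v1", "api_key": "sk-deepseek-placeholder"},
    "openai": {"base_url": "https://api.openai.com/v1", "api_key": "sk-openai-placeholder"},
}

def route(request: dict) -> dict:
    """Rewrite an incoming gateway request into an upstream provider request."""
    provider, model_id = request["body"]["model"].split(":", 1)
    upstream = UPSTREAMS[provider]
    return {
        "url": upstream["base_url"] + "/chat/completions",
        # Replace the caller's gateway key with the provider's real key.
        "headers": {"Authorization": f"Bearer {upstream['api_key']}"},
        # Rewrite the model field to the provider's real model name.
        "body": {**request["body"], "model": model_id},
    }

out = route({"body": {"model": "deepseek:deepseek-chat",
                      "messages": [{"role": "user", "content": "Hi"}]}})
print(out["url"], out["body"]["model"])
```

All other body fields pass through untouched, which is what makes the gateway transparent to callers.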
3. Example: migrating from DeepSeek to the gateway
Original call directly to DeepSeek:
curl https://api.deepseek.com/v1/chat/completions \
  -H "Authorization: Bearer {deepseek_api_key}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role":"user","content":"Hi"}]
  }'
Equivalent call via ApiSet.ai Gateway:
curl https://apiset.ai/api/v1/chat/completions \
  -H "Authorization: Bearer {api_set_key}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role":"user","content":"Hi"}]
  }'
You only changed two things:
- Base URL: from `https://api.deepseek.com` to `https://apiset.ai/api`
- Auth key: from your `deepseek_api_key` to your `api_set_key`
All other fields (model, messages, temperature, max_tokens, etc.) remain unchanged.
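The two-line nature of the migration can be made explicit by laying both requests side by side. A sketch using placeholder key names from the examples above:

```python
# The same request payload works against both endpoints; only URL and key differ.
payload = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hi"}],
}

direct = {
    "url": "https://api.deepseek.com/v1/chat/completions",
    "key": "{deepseek_api_key}",   # placeholder: your DeepSeek key
    "body": payload,
}
via_gateway = {
    "url": "https://apiset.ai/api/v1/chat/completions",
    "key": "{api_set_key}",        # placeholder: your ApiSet.ai key
    "body": payload,
}

# Everything except the URL and key is identical.
changed = {k for k in direct if direct[k] != via_gateway[k]}
print(sorted(changed))  # ['key', 'url']
```

Because the body is untouched, existing request-building code can usually be reused as-is after swapping the client's base URL and credential.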
4. Supported providers & models
In the ApiSet.ai console’s Pricing page you can view:
- The list of supported `provider` values.
- The available `modelId` options under each provider.
- Detailed billing information for each model (per‑1K‑token pricing, per‑request pricing, etc.).

Before integrating, we recommend confirming in the console that the target provider and `modelId` are enabled for your account, and reviewing their pricing rules.