# Models

Model identifiers and capabilities.
lmchat uses a `provider/model` naming convention and exposes a models endpoint for discovery. Routing policies can map a logical model name to specific providers.
## Model naming

### Convention

Use identifiers of the form `provider/model`, for example `anthropic/claude`, `openai/gpt`, or `google/gemini`.
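Client code often needs the provider and model halves separately. A minimal sketch, assuming the `provider/model` convention above; the helper name is illustrative, not part of lmchat:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider/model' identifier into (provider, model).

    Illustrative helper, not an lmchat API. Splits on the first '/'
    so model names containing further slashes stay intact.
    """
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

print(parse_model_id("anthropic/claude"))  # ('anthropic', 'claude')
```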
```json
{
  "model": "anthropic/claude",
  "messages": [{"role": "user", "content": "Hello"}]
}
```

## List models
```http
GET /api/v1/models
```

The response is an OpenAI-style list envelope where possible.
```json
{
  "object": "list",
  "data": [
    {
      "id": "anthropic/claude",
      "object": "model",
      "created": 1710000000,
      "owned_by": "anthropic",
      "capabilities": {
        "tools": true,
        "vision": false,
        "json_mode": true
      }
    }
  ]
}
```

## Choosing a model
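One programmatic approach is to filter the `/api/v1/models` response by its `capabilities` flags. A sketch, assuming a parsed JSON body in the shape shown above; the HTTP fetch is omitted, and the second entry's capability values are made up for illustration:

```python
# `response` stands in for the parsed JSON body of GET /api/v1/models.
response = {
    "object": "list",
    "data": [
        {"id": "anthropic/claude", "object": "model",
         "capabilities": {"tools": True, "vision": False, "json_mode": True}},
        {"id": "openai/gpt", "object": "model",  # capability values invented
         "capabilities": {"tools": False, "vision": True, "json_mode": True}},
    ],
}

# Keep only models that advertise tool support; treat a missing
# capabilities object as "not supported".
tool_models = [
    m["id"] for m in response["data"]
    if m.get("capabilities", {}).get("tools")
]
print(tool_models)  # ['anthropic/claude']
```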
- Start with a general-purpose reasoning model for mixed workloads.
- Prefer smaller, faster models for high-QPS traffic, short outputs, or classification.
- Use tool-capable models for agents and workflows.
- For higher availability, configure routing and fallbacks so requests can fail over between providers.
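lmchat's routing policies can handle fallbacks server-side; the same idea can be sketched client-side as a loop that tries models in preference order. The function and error handling here are illustrative assumptions, not lmchat API:

```python
def complete_with_fallback(models, send):
    """Try each model id in order; return the first successful result.

    `send` is any callable that issues a completion request for one
    model and raises on failure (hypothetical, for illustration).
    """
    last_err = None
    for model in models:
        try:
            return send(model)
        except Exception as err:  # in practice, catch provider errors only
            last_err = err
    raise RuntimeError("all models failed") from last_err
```

For example, `complete_with_fallback(["anthropic/claude", "openai/gpt"], send)` returns the first completion that succeeds and only raises once every model in the list has failed.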