The unified interface for LLMs

One API for any model.

Better prices, better uptime, no subscriptions. Route across providers, keep latency low, and control logging with data policies—without changing your OpenAI-compatible client.

OpenAI-compatible
SDKs work out of the box
Provider routing
Fallbacks + sorting strategies
Data policies
Control logging & retention
Higher availability

Reliable inference with routing and fallbacks when a provider is degraded or rate-limited.
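The fallback idea can be sketched in a few lines. This is a minimal illustration, not the router's actual implementation; the provider names, the `RateLimited` exception, and the callables are hypothetical stand-ins for real provider calls.

```python
class RateLimited(Exception):
    """Raised by a provider call when it is rate-limited (e.g. HTTP 429)."""

def call_with_fallback(providers, prompt):
    """Try providers in preference order; fall back when one is degraded
    or rate-limited. `providers` is a list of (name, callable) pairs."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except (RateLimited, ConnectionError) as exc:
            errors[name] = exc  # record the failure and try the next provider
    raise RuntimeError(f"all providers failed: {errors}")

# Hypothetical providers: the preferred one is rate-limited, the backup works.
def flaky(prompt):
    raise RateLimited("429 Too Many Requests")

def healthy(prompt):
    return f"echo: {prompt}"

used, reply = call_with_fallback([("primary", flaky), ("backup", healthy)], "hi")
print(used, reply)  # backup echo: hi
```

The router does this server-side, so a single client request succeeds even when the first-choice provider is down.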

Price & performance

Sort providers by cost, latency, or quality—keep control without rewriting clients or prompts.
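Sorting by cost or latency is conceptually just ordering the candidate providers by a metric before dispatch. A toy sketch, with made-up provider names and numbers standing in for live telemetry:

```python
# Hypothetical per-provider metrics; in practice these come from live data.
providers = [
    {"name": "alpha", "usd_per_1m_tokens": 0.60, "p50_latency_ms": 900},
    {"name": "beta",  "usd_per_1m_tokens": 0.15, "p50_latency_ms": 1400},
    {"name": "gamma", "usd_per_1m_tokens": 0.30, "p50_latency_ms": 600},
]

# "Cheapest" strategy: order candidates by price per million tokens.
cheapest = sorted(providers, key=lambda p: p["usd_per_1m_tokens"])
# "Fastest" strategy: order candidates by median latency.
fastest = sorted(providers, key=lambda p: p["p50_latency_ms"])

print([p["name"] for p in cheapest])  # ['beta', 'gamma', 'alpha']
print([p["name"] for p in fastest])   # ['gamma', 'alpha', 'beta']
```

The client only names the strategy; the ordering and dispatch happen on the router.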

Custom data policies

Decide what gets logged, where it goes, and what leaves your boundary—per key or org.

How teams ship

Keep your client. Gain leverage.

You talk to one OpenAI-compatible API. We handle provider selection, retries, and policy enforcement—so you can focus on product and prompts.

Get an API key
01
Create a key scoped to your app, team, and data policy.
Choose routing
02
Pick a strategy: cheapest, fastest, or provider preferences.
Ship with one endpoint
03
Use your existing OpenAI SDK client and point it at lmchat.
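With the official OpenAI SDK this means changing only the base URL and API key when constructing the client. The standard-library sketch below shows the equivalent raw HTTP request; the lmchat base URL, model name, and key are placeholders, not real values.

```python
import json
from urllib import request

# Hypothetical lmchat endpoint; the path mirrors OpenAI's /v1/chat/completions.
LMCHAT_BASE = "https://api.lmchat.example/v1"

def chat_request(api_key, model, messages):
    """Build an OpenAI-compatible chat completion request for lmchat.
    Returns a urllib Request; callers would pass it to request.urlopen()."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return request.Request(
        f"{LMCHAT_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("sk-demo", "gpt-4o-mini",
                   [{"role": "user", "content": "Hello"}])
print(req.full_url)  # https://api.lmchat.example/v1/chat/completions
```

Because the request shape is unchanged, existing OpenAI SDK clients work as-is once their `base_url` points at the router.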
Start
Get an API key and start routing.
Credits-based billing, transparent usage, and policy controls designed for production.