Quickstart

Ship with one endpoint.

Point your OpenAI-compatible client at /api/v1, send a bearer token in the Authorization header, and select a model in provider/model form.

Base URL & headers

Base URL: https://lmchat.net/api/v1
Authorization: Bearer $LMCHAT_API_KEY
Content-Type: application/json

# Optional attribution headers:
HTTP-Referer: https://yourapp.com
X-Title: YourAppName
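Because the API is plain HTTP, any client can talk to it, not just the SDKs below. As a sketch, here is how the base URL and headers above might be assembled with Python's standard library (the build_request helper is hypothetical, not part of any SDK):

```python
import json
import os
import urllib.request

BASE_URL = "https://lmchat.net/api/v1"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Assemble a POST request carrying the auth and attribution headers."""
    return urllib.request.Request(
        url=BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['LMCHAT_API_KEY']}",
            "Content-Type": "application/json",
            # Optional attribution headers:
            "HTTP-Referer": "https://yourapp.com",
            "X-Title": "YourAppName",
        },
        method="POST",
    )
```

Sending it is then `urllib.request.urlopen(build_request("/chat/completions", payload))`, though the SDK examples below are the more convenient path.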

curl (chat completions)

curl https://lmchat.net/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LMCHAT_API_KEY" \
  -H "HTTP-Referer: https://yourapp.com" \
  -H "X-Title: YourAppName" \
  -d '{
    "model": "anthropic/claude",
    "messages": [
      {"role":"system","content":"You are a concise assistant."},
      {"role":"user","content":"Give me 3 naming options for an AI router."}
    ],
    "temperature": 0.4
  }'
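The response body follows the OpenAI chat-completion schema, so the assistant's reply lives at choices[0].message.content. A minimal sketch of pulling it out of the parsed JSON (the sample body below is illustrative, not real API output):

```python
import json

def extract_reply(response_body: str) -> str:
    """Return the assistant message from a chat-completion response body."""
    body = json.loads(response_body)
    return body["choices"][0]["message"]["content"]

# Illustrative response body in the standard chat-completion shape:
sample = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "anthropic/claude",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Routerly, ModelMux, Relay."},
            "finish_reason": "stop",
        }
    ],
})

print(extract_reply(sample))  # → Routerly, ModelMux, Relay.
```

On the command line, `| jq -r '.choices[0].message.content'` after the curl call does the same extraction.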

JavaScript/TypeScript (OpenAI SDK)

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.LMCHAT_API_KEY,
  baseURL: "https://lmchat.net/api/v1",
  defaultHeaders: {
    "HTTP-Referer": "https://yourapp.com",
    "X-Title": "YourAppName",
  },
});

const res = await client.chat.completions.create({
  model: "anthropic/claude",
  messages: [{ role: "user", content: "Say hello." }],
});

console.log(res.choices[0]?.message?.content);

Python (OpenAI-compatible client)

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ["LMCHAT_API_KEY"],
    base_url="https://lmchat.net/api/v1",
    default_headers={
        "HTTP-Referer": "https://yourapp.com",
        "X-Title": "YourAppName",
    },
)

resp = client.chat.completions.create(
    model="anthropic/claude",
    messages=[{"role": "user", "content": "Write a 1-sentence tagline."}],
)

print(resp.choices[0].message.content)
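Streaming is covered in the detailed docs; as a preview, a streamed response arrives as a sequence of chunks whose choices[0].delta.content fragments concatenate into the full reply. A sketch of that assembly, assuming the standard OpenAI streaming chunk shape (the chunks below are illustrative, and assemble_stream is a hypothetical helper):

```python
def assemble_stream(chunks: list[dict]) -> str:
    """Concatenate the content deltas from streamed chat-completion chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # The final chunk's delta typically omits "content" entirely.
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Illustrative chunks in the standard streaming shape:
chunks = [
    {"choices": [{"delta": {"role": "assistant", "content": "Route "}}]},
    {"choices": [{"delta": {"content": "smarter."}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]

print(assemble_stream(chunks))  # → Route smarter.
```

With the SDKs, passing stream=True (Python) or stream: true (JS) to the create call yields these chunks as an iterator instead of one final response.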

Next steps

Continue with the detailed docs on models, chat completions, streaming, tools, routing, and error handling.