One API key.

Every frontier model. 40 models from 7 providers. Fixed monthly price.

curl https://api.lightweight.one/v1/chat/completions \
  -H "Authorization: Bearer $LW_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Every frontier model, zero config: 40 models from 7 providers behind one API key, with no vendor lock-in.

GPT-5 (OpenAI, 128K context)
GPT-4.1 (OpenAI, 1M context)
o3 (OpenAI, 200K context)
Grok 3 (xAI, 131K context)
Llama 4 Maverick (Meta, 256K context)
DeepSeek R1 (DeepSeek, 164K context)
Phi-4 Reasoning (Microsoft, 16K context)
+32 more: Browse all models

Three steps. That's it.

01. Get your API key

Sign up in seconds. Generate a single key that grants access to every frontier model instantly.

02. Point your client

Update your base URL to api.lightweight.one/v1 and use your key.
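With the official openai npm client, for example, pointing at Lightweight is a single constructor option (a sketch; the `LW_API_KEY` env var name is illustrative):

```typescript
import OpenAI from 'openai';

// Only the base URL changes; everything else stays OpenAI-compatible.
const client = new OpenAI({
  baseURL: 'https://api.lightweight.one/v1',
  apiKey: process.env.LW_API_KEY, // your key from step 1
});
```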

03. Build anything

const response = await client.chat.completions.create({
  model: 'gpt-5',
  messages: [...]
});

Compatible with

OpenCode, Aider, Cursor, Zed, Continue, Cline, Windsurf, Ollama

Simple, predictable pricing.

No per-token billing. No surprise costs. Just pick a plan.

pricing.ts
export const plans = {
  supporter:  { price: "$1/mo",   tokens: "2M",    slots: 1,  rpm: 10  },
  patron:     { price: "$5/mo",   tokens: "12M",   slots: 2,  rpm: 20  },
  champion:   { price: "$10/mo",  tokens: "30M",   slots: 3,  rpm: 30  },
  legend:     { price: "$15/mo",  tokens: "50M",   slots: 3,  rpm: 40  },
  core:       { price: "$50/mo",  tokens: "200M",  slots: 5,  rpm: 60  },
  ultra:      { price: "$100/mo", tokens: "500M",  slots: 10, rpm: 100 },
  titan:      { price: "$200/mo", tokens: "1.2B",  slots: 25, rpm: 200 },
} as const
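One way to compare the tiers is tokens per dollar. A quick sketch over the quota strings above (the helpers are ours, for illustration only):

```typescript
// Convert the quota strings from pricing.ts ("2M", "1.2B") into raw token counts.
function parseTokens(quota: string): number {
  const n = parseFloat(quota);
  return quota.endsWith('B') ? n * 1e9 : n * 1e6;
}

// Tokens per dollar for a plan, e.g. Patron: 12M tokens at $5/mo.
function tokensPerDollar(quota: string, pricePerMonth: number): number {
  return parseTokens(quota) / pricePerMonth;
}

console.log(tokensPerDollar('12M', 5));    // 2,400,000 tokens per dollar
console.log(tokensPerDollar('1.2B', 200)); // 6,000,000 tokens per dollar
```

Tokens per dollar roughly triples between Supporter and the top tiers, so heavier workloads get cheaper per token as you move up.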
$2,750 per month at retail token prices vs. $5/mo with Lightweight: 550× more value.

By routing requests efficiently and aggregating volume, we bring the true cost of intelligence down to a flat subscription. Stop worrying about individual token prices.

What you can build

For Everyone

Supporter ($1): Personal AI assistant, quick Q&A
Patron ($5): Daily coding help, content writing
Champion ($10): Full-stack development, team prototyping
Legend ($15): Production apps, high-volume automation

For Developers

Core ($50): SaaS products, multi-tenant apps
Ultra ($100): Enterprise APIs, real-time pipelines
Titan ($200): Platform-scale, white-label solutions

Understanding token consumption

Operation          Input  Output  Total  ~Cost (retail)
Quick question       500     200    700  $0.004
Code review           2K      1K     3K  $0.018
Document analysis     5K      2K     7K  $0.042
Chat conversation     1K     500   1.5K  $0.009
Complex reasoning     3K      3K     6K  $0.054

Per-model cost breakdown

Frontier Models (per 1M tokens)

GPT-4o (most capable): $2.50 in / $10.00 out
Grok 3 (xAI flagship): $3.00 in / $15.00 out
DeepSeek-R1 (deep reasoning): $1.35 in / $5.40 out

Efficient Models (per 1M tokens)

GPT-4o mini (fast, cheap): $0.15 in / $0.60 out
Llama 3.3 70B (open weights): $0.71 in / $0.71 out
Phi-4 (small powerhouse): $0.13 in / $0.50 out

How it works: tokens consumed equal input + output, and bigger models cost more per token. For example, a GPT-4o request with 1K input and 500 output tokens costs roughly $0.0075 at retail, and a Supporter plan's 2M-token quota covers about 1,300 such requests per month. Your plan's quota applies across all models; we handle the conversion automatically.
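The arithmetic above can be sketched as a small helper (ours, for illustration; the rates are the per-1M-token retail prices listed in the breakdown):

```typescript
// Retail cost of one request, given per-1M-token input and output rates.
function retailCost(inputTokens: number, outputTokens: number,
                    inPerM: number, outPerM: number): number {
  return (inputTokens * inPerM + outputTokens * outPerM) / 1_000_000;
}

// GPT-4o at $2.50 in / $10.00 out, with 1K input and 500 output tokens:
console.log(retailCost(1_000, 500, 2.5, 10)); // 0.0075
```

Swapping in the rates for a cheaper model, such as GPT-4o mini, drops the same request to well under a tenth of a cent.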

Infrastructure backed by industry leaders.

NVIDIA, Nebius, Portkey

Start building in 30 seconds

terminal
# Install any OpenAI-compatible client
npm install openai

# Set your endpoint
export OPENAI_BASE_URL=https://api.lightweight.one/v1
export OPENAI_API_KEY=lw-your-api-key

Built in the open.

Transparency isn't an afterthought. The entire Lightweight platform, from our high-performance router to the dashboard, is open source. Inspect the code, run it yourself, or contribute.

templarsco/lightweight
THE REIMAGINERS OF XXI