Every frontier model. One API. Zero config.
curl https://api.lightweight.one/v1/chat/completions \
  -H "Authorization: Bearer lw_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4.6",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
One API key
Works with any OpenAI-compatible client
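The same call in TypeScript, a minimal sketch with plain `fetch`, assuming only the OpenAI-compatible request shape shown in the curl example above:

```typescript
const BASE_URL = "https://api.lightweight.one/v1";

// Build the same JSON body the curl example sends.
function buildChatRequest(model: string, content: string) {
  return {
    model,
    messages: [{ role: "user" as const, content }],
  };
}

// POST to the chat-completions endpoint with a bearer key.
async function chat(apiKey: string, model: string, content: string) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatRequest(model, content)),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Because the request shape matches OpenAI's, the official SDKs also work once pointed at this base URL.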
Beta-only pricing: slots are limited, and these prices apply only during the beta.
// Beta-only pricing — these prices exist only during beta
const plans = {
  supporter: { price: 1.0, tokens: "15M", slots: 100, rpm: 60 },
  patron: { price: 5.0, tokens: "50M", slots: 200, rpm: 150 },
  champion: { price: 10.0, tokens: "150M", slots: 300, rpm: 300 },
  legend: { price: 15.0, tokens: "500M", slots: 500, rpm: 600 }, // ≈ Claude Max ($200/mo)
  core: { price: 50.0, tokens: "1B", rpm: 1000 }, // ~20× Claude Max value
  ultra: { price: 100.0, tokens: "3B", rpm: 1500 },
  titan: { price: 200.0, tokens: "10B", rpm: 2000 },
} as const
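The per-dollar comparisons fall straight out of these plan numbers. A quick sketch of the arithmetic, with the "M"/"B" token strings written as plain numbers:

```typescript
// Plan numbers from the pricing table above, tokens as plain numbers.
const tiers = {
  supporter: { price: 1.0, tokens: 15e6 },
  legend: { price: 15.0, tokens: 500e6 },
  core: { price: 50.0, tokens: 1e9 },
} as const;

// Millions of tokens per dollar for a tier.
function tokensPerDollar(tier: { price: number; tokens: number }): number {
  return tier.tokens / tier.price / 1e6;
}

console.log(tokensPerDollar(tiers.legend));
// ≈ 33.3 million tokens per dollar, vs ~1 for Claude Max (~200M tokens for $200)
```

The same arithmetic for Supporter gives 15M tokens per dollar, against roughly 0.5M per dollar on Claude Pro (~10M for $20).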
"Claude Max costs $200/mo for ~200M tokens."
Our Legend tier: 500M tokens for $15/mo — 2.5× more, 1/13th the price.
"Claude Pro gives you ~10M tokens for $20/mo."
Our $1 Supporter tier gives you 15M tokens — 1.5× more for 1/20th the price.
Your tokens are yours. No surprise cutoffs.
Other providers charge $200/mo and still cut you off mid-prompt. You can't finish your thought, your refactor dies halfway, your context vanishes. We don't do that. Every token in your plan is yours to use — no invisible walls, no degraded responses, no "please upgrade" mid-conversation. You finish what you started.
Token for token, that's more than 30× the value per dollar.
Every tier unlocks real work. Here's what your tokens actually get you.
Every request uses tokens. Bigger models use more. Here's what to expect.
| Operation | Avg tokens | On Supporter (15M) | On Legend (500M) | On Core (1B) |
|---|---|---|---|---|
| Explain a function | ~2K | 7,500× | 250,000× | 500,000× |
| Generate a component | ~5K | 3,000× | 100,000× | 200,000× |
| Refactor a file | ~10K | 1,500× | 50,000× | 100,000× |
| Long coding session (w/ context) | ~100K | 150× | 5,000× | 10,000× |
| Full codebase review | ~500K | 30× | 1,000× | 2,000× |
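Each multiplier in the table is just the plan's token budget divided by the operation's average cost. A quick sketch:

```typescript
// How many times an operation fits in a plan's token budget.
function runs(planTokens: number, opTokens: number): number {
  return Math.floor(planTokens / opTokens);
}

// Reproducing a few cells from the table above.
runs(15e6, 2_000);     // Supporter, explain a function: 7,500
runs(500e6, 100_000);  // Legend, long coding session: 5,000
runs(1e9, 500_000);    // Core, full codebase review: 2,000
```

These are averages; a long context window or a heavier model pushes the per-operation cost up.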
Heavier models cost more per token. Here's how the top models are priced.
Prices shown per 1M tokens (input / output)
▸ Frontier
▸ Reasoning
▸ Efficient
▸ Compact
Full pricing for all 71+ models in the documentation.
Infrastructure backed by industry leaders.
Drop-in replacement for any existing tool.
# Install lightweight CLI
curl -fsSL https://lightweight.one/install | bash
# Or use with any existing tool
export OPENAI_BASE_URL=https://api.lightweight.one/v1
export OPENAI_API_KEY=lw_sk_your_key_here
# That's it. Every model, one endpoint.
Lightweight is open source. The CLI, the SDK, the docs.
The only thing that isn't open source is our infrastructure.
Integrations
Use Lightweight models wherever you code — terminal, IDE, or API aggregator.