One API key.

Every frontier model. One API. Zero config.

bash
curl https://api.lightweight.one/v1/chat/completions \
  -H "Authorization: Bearer lw_sk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4.6",
    "messages": [{"role": "user", "content": "Hello"}]
  }'

Every frontier model. Zero config.


Claude Opus 4.6 · Anthropic · 200K tokens
GPT-5.3 Codex · OpenAI · 272K tokens
Gemini 3.1 Pro · Google · 1M tokens
GPT-5 · OpenAI · 400K tokens
o3 · OpenAI · 200K tokens
Claude Sonnet 4.6 · Anthropic · 200K tokens
Grok 3 · xAI · 131K tokens
Llama 4 Maverick · Meta · 1M tokens
DeepSeek R1 · DeepSeek · 164K tokens
Mistral Medium · Mistral · 131K tokens
Phi-4 Reasoning · Microsoft · 32K tokens
+ more
01. Get your key
export OPENAI_API_KEY=lw_sk_...
02. Point any tool
--api-base https://api.lightweight.one/v1
03. Use any model
model: claude-opus-4.6
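The three steps above amount to one POST against an OpenAI-compatible endpoint. A minimal TypeScript sketch, assuming the endpoint and key format shown in the curl example (`buildChatRequest` and `chat` are illustrative helper names, not part of any Lightweight SDK):

```typescript
// Minimal sketch of the curl example above in TypeScript.
// The payload builder is separated from the network call so the
// request body can be inspected without an API key.
const BASE_URL = "https://api.lightweight.one/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Hypothetical helper: assembles the same JSON body the curl example sends.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages };
}

// Hypothetical helper: sends the request (requires a real lw_sk_ key).
async function chat(apiKey: string, model: string, messages: ChatMessage[]) {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatRequest(model, messages)),
  });
  return res.json();
}

const body = buildChatRequest("claude-opus-4.6", [
  { role: "user", content: "Hello" },
]);
```

Swapping models is just a different `model` string; the rest of the request is unchanged.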

Works with any OpenAI-compatible client

OpenCode Aider Cursor Zed Continue Cline Windsurf Ollama

Pricing.

Beta pricing with limited slots. These prices are valid only during the beta.

pricing.ts
// Beta-only pricing — these prices exist only during beta
const plans = {
  supporter: { price: 1.0, tokens: "15M", slots: 100, rpm: 60 },
  patron: { price: 5.0, tokens: "50M", slots: 200, rpm: 150 },
  champion: { price: 10.0, tokens: "150M", slots: 300, rpm: 300 },
  legend: { price: 15.0, tokens: "500M", slots: 500, rpm: 600 },  // ≈ Claude Max ($200/mo)
  core: { price: 50.0, tokens: "1B", rpm: 1000 },  // ~20× Claude Max value
  ultra: { price: 100.0, tokens: "3B", rpm: 1500 },
  titan: { price: 200.0, tokens: "10B", rpm: 2000 },
} as const
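The value multiples in the `pricing.ts` comments can be sanity-checked directly. This sketch assumes Claude Max's quota is ~200M tokens for $200/mo (the figure quoted on this page), i.e. roughly 1M tokens per dollar:

```typescript
// Tokens-per-dollar for each upper-tier plan, expressed as a multiple
// of Claude Max's baseline (~200M tokens for $200/mo ≈ 1M tokens/$).
const MAX_TOKENS_PER_DOLLAR = 200_000_000 / 200;

const plans = {
  legend: { price: 15, tokens: 500_000_000 },
  core: { price: 50, tokens: 1_000_000_000 },
  ultra: { price: 100, tokens: 3_000_000_000 },
  titan: { price: 200, tokens: 10_000_000_000 },
};

// How many times more tokens-per-dollar a plan delivers than Claude Max.
function valueMultiple(plan: { price: number; tokens: number }): number {
  return plan.tokens / plan.price / MAX_TOKENS_PER_DOLLAR;
}
```

Under that baseline, Core works out to 20×, Ultra to 30×, and Titan to 50×, matching the tier labels.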

"Claude Max costs $200/mo for ~200M tokens."

Our Legend tier: 500M tokens for $15/mo — 2.5× more, 1/13th the price.

"Claude Pro gives you ~10M tokens for $20/mo."

Our $1 Supporter tier gives you 15M tokens — 1.5× more for 1/20th the price.

What 50M tokens costs elsewhere:
  • Anthropic API: $2,750/mo
  • Claude Max (5×): $1,000/mo
  • Us (Patron): $5/mo

Your tokens are yours. No surprise cutoffs.

Other providers charge $200/mo and still cut you off mid-prompt. You can't finish your thought, your refactor dies halfway, your context vanishes. We don't do that. Every token in your plan is yours to use — no invisible walls, no degraded responses, no "please upgrade" mid-conversation. You finish what you started.

Complex projects eat 1B+ tokens fast.
Long codebases, multi-file refactors, deep reasoning — we get it. Core ($50) is where serious dev work starts. We're here to help you ship, not bill you into oblivion.
OG 🏆 INHERITED
Early testers get 2× tokens on any plan. Forever. Can't be bought.
Founders ($1,000) 🔒 LOCKED
100 seats. Once they're gone, they're gone forever.
Frontiers ($5,000) 🔒 LOCKED
Unlimited tokens. Unlimited RPM. The final frontier.
BYOK (Free)
Bring your own API keys. We just route.

The math doesn't lie.

50M tokens on Anthropic API: $2,750/mo
50M tokens on Lightweight: $5/mo

That's 550× more value per dollar.
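The 550× figure is just the ratio of the two monthly prices quoted above for the same 50M tokens:

```typescript
// Price comparison for the same 50M tokens, per the figures above.
const anthropicApiMonthly = 2750; // $/mo on the Anthropic API (figure from the page)
const patronMonthly = 5; // $/mo on the Patron plan

const multiple = anthropicApiMonthly / patronMonthly; // 550
```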

What can you build?

Every tier unlocks real work. Here's what your tokens actually get you.

Supporter · 15M tokens
$1/mo
  • ✓ Small scripts & quick prototypes
  • ✓ Learning & experimentation
  • ✓ Code reviews & explanations
  • ✓ ~150 long coding sessions
Patron · 50M tokens
$5/mo
  • ✓ Personal projects & side projects
  • ✓ Small apps (CRUD, APIs, CLIs)
  • ✓ Freelance client work
  • ✓ ~500 long coding sessions
Champion · 150M tokens
$10/mo
  • ✓ Medium apps & multiple repos
  • ✓ Full-stack development
  • ✓ Startup MVPs & team projects
  • ✓ ~1,500 long coding sessions
Legend · 500M tokens ≈ Claude Max
$15/mo
  • ✓ Large codebases & monorepos
  • ✓ Complex architectures
  • ✓ Daily heavy usage, all models
  • ✓ ~5,000 long coding sessions
Core · 1B tokens ~20× Claude Max
$50/mo
  • ✓ Enterprise apps & microservices
  • ✓ Multi-service architectures
  • ✓ CI/CD integrated AI workflows
  • ✓ ~10,000 long coding sessions
Ultra · 3B tokens ~30× Claude Max
$100/mo
  • ✓ Complex platforms & AI-heavy apps
  • ✓ Continuous development teams
  • ✓ Agency-scale multi-project work
  • ✓ ~30,000 long coding sessions
Titan · 10B tokens ~50× Claude Max
$200/mo
  • ✓ Massive codebases (millions of LoC)
  • ✓ Multiple teams, multiple projects
  • ✓ 24/7 automated AI pipelines
  • ✓ ~100,000 long coding sessions

How tokens are spent

Every request uses tokens. Bigger models use more. Here's what to expect.

| Operation | Avg tokens | On Supporter (15M) | On Legend (500M) | On Core (1B) |
| --- | --- | --- | --- | --- |
| Explain a function | ~2K | 7,500× | 250,000× | 500,000× |
| Generate a component | ~5K | 3,000× | 100,000× | 200,000× |
| Refactor a file | ~10K | 1,500× | 50,000× | 100,000× |
| Long coding session (w/ context) | ~100K | 150× | 5,000× | 10,000× |
| Full codebase review | ~500K | 30× | 1,000× | 2,000× |
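The multipliers in the table reduce to a plan's monthly token budget divided by the per-operation cost. A quick sketch (the operation names and average costs are the rough figures from the table):

```typescript
// Monthly token budgets for three of the plans above.
const planTokens = {
  supporter: 15_000_000,
  legend: 500_000_000,
  core: 1_000_000_000,
};

// Rough average token cost per operation, per the table.
const opTokens = {
  explainFunction: 2_000,
  generateComponent: 5_000,
  refactorFile: 10_000,
  longSession: 100_000,
  codebaseReview: 500_000,
};

// How many times an operation fits inside a plan's monthly budget.
function requestsPerPlan(planBudget: number, opCost: number): number {
  return Math.floor(planBudget / opCost);
}
```

For example, Supporter's 15M tokens cover 7,500 function explanations, and Core's 1B tokens cover 2,000 full codebase reviews.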

Cost per model

Heavier models consume more of your token budget per request. Here's what the top models cost.

Prices shown per 1M tokens (input / output)

▸ Frontier

Claude Opus 4.6 · Input: $5.00/1M · Output: $25.00/1M
Anthropic's most capable model
GPT-5.3 Codex · Input: $1.75/1M · Output: $14.00/1M
OpenAI's latest coding powerhouse
Gemini 3.1 Pro · Input: $2.00/1M · Output: $12.00/1M
Google's flagship with 1M context

▸ Reasoning

o3 · Input: $2.00/1M · Output: $8.00/1M
Advanced reasoning model
DeepSeek R1 · Input: $0.30/1M · Output: $1.20/1M
Deep reasoning, chain-of-thought
Phi-4 Reasoning · Input: $0.14/1M · Output: $0.56/1M
Microsoft's reasoning specialist

▸ Efficient

GPT-5 · Input: $1.25/1M · Output: $10.00/1M
Most capable general-purpose model
Claude Sonnet 4.6 · Input: $3.00/1M · Output: $15.00/1M
Speed meets intelligence
Grok 3 · Input: $3.00/1M · Output: $15.00/1M
xAI's flagship model

▸ Compact

GPT-5-mini · Input: $0.25/1M · Output: $2.00/1M
Fast, affordable, versatile
Llama 4 Maverick · Input: $0.15/1M · Output: $0.60/1M
Open-source with 1M context
Mistral Medium · Input: $0.40/1M · Output: $2.00/1M
European excellence
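Given per-1M rates like those above, the dollar cost of a single request is easy to estimate. A sketch using the Claude Opus 4.6 rates from the Frontier group (`requestCost` is a hypothetical helper, not part of any Lightweight SDK):

```typescript
// Estimate the dollar cost of one request from its input/output token
// counts and the per-1M-token rates (input and output priced separately).
function requestCost(
  inputTokens: number,
  outputTokens: number,
  inputPer1M: number,
  outputPer1M: number,
): number {
  return (inputTokens / 1_000_000) * inputPer1M +
    (outputTokens / 1_000_000) * outputPer1M;
}

// Example: a 10K-token prompt with a 2K-token reply on Claude Opus 4.6
// ($5.00 in / $25.00 out per 1M tokens) comes to roughly $0.10.
const opusCost = requestCost(10_000, 2_000, 5, 25);
```

The same helper works for any model in the list; only the two per-1M rates change.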

Showing 12 of 71+ models. Full model pricing is in the documentation.

Microsoft Amazon Google Meta NVIDIA Cloudflare

Infrastructure backed by industry leaders.

Ready when you are.

Drop-in replacement for any existing tool.

terminal
# Install lightweight CLI
curl -fsSL https://lightweight.one/install | bash

# Or use with any existing tool
export OPENAI_BASE_URL=https://api.lightweight.one/v1
export OPENAI_API_KEY=lw_sk_your_key_here

# That's it. Every model, one endpoint.

Built in the open.

Lightweight is open source. The CLI, the SDK, the docs.
The only thing that isn't open source is our infrastructure.

github.com/templarsco

THE REIMAGINERS OF XXI