Paste any text to see how many tokens it consumes across major LLM tokenizers, and how much of each model's context window it would fill.
Characters: 0 · Words: 0 · Lines: 0
GPT-4o, GPT-4.1: 0 tokens · ✓ Exact (o200k_base)
GPT-4, GPT-3.5: 0 tokens · ✓ Exact (cl100k_base)
Claude (Opus/Sonnet/Haiku): 0 tokens · ~ Estimate (3.5 chars/token)
Llama 3, Mistral: 0 tokens · ~ Estimate (3.8 chars/token)
Context Window Used
How much of each model's context window your input would consume:
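The fill calculation is just the token count divided by the model's window size. A minimal sketch follows; the window sizes in it are illustrative assumptions, not authoritative figures for any particular model version.

```python
# Context-window fill: token count as a percentage of the window size.
# The sizes below are illustrative assumptions only.
CONTEXT_WINDOWS = {
    "GPT-4o": 128_000,         # assumed
    "Claude Sonnet": 200_000,  # assumed
    "Llama 3": 8_192,          # assumed
}


def window_used(token_count: int, window: int) -> float:
    """Fraction of the context window consumed, as a percentage."""
    return 100.0 * token_count / window


for model, window in CONTEXT_WINDOWS.items():
    print(f"{model}: {window_used(1_000, window):.2f}% of {window:,} tokens")
```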
Rough Cost Estimate (Input Only)
Approximate cost to send this as input to each model. Output tokens are billed separately and usually cost more.
Model | Input price (per 1M tok) | Estimated cost
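The estimate itself is straightforward arithmetic: input tokens divided by one million, times the per-million price. A sketch, with a placeholder price rather than any provider's real rate:

```python
# Input-cost estimate: tokens / 1e6 * price per million input tokens.
# The price used below is a placeholder, not a real provider rate.
def input_cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of sending `tokens` input tokens at a given rate."""
    return tokens / 1_000_000 * price_per_million


# e.g. 12,000 input tokens at a hypothetical $2.50 per 1M tokens
print(f"${input_cost(12_000, 2.50):.4f}")
```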
Honest Caveats
OpenAI counts are exact — they use the published tiktoken tokenizer.
Claude and Llama counts are estimates: Anthropic does not publish its tokenizer, and both estimates here use a character-ratio heuristic. Real counts can vary by ±20% depending on language, code, and special characters.
For an exact Claude count, use Anthropic's count_tokens API endpoint.
Cost figures are approximate as of early 2026 and may have changed. Check the provider's pricing page before relying on them for budgeting.
This tool runs entirely in your browser. Nothing is sent anywhere.
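For the exact-Claude-count route mentioned in the caveats, Anthropic's count_tokens endpoint takes the same model and messages shape as a normal Messages request. The sketch below only builds the request body, it sends nothing; the model name is an illustrative assumption, and a real call needs an API key.

```python
# Request-body shape for Anthropic's count_tokens endpoint, which
# returns an exact Claude token count. Nothing is sent here; the model
# name is an illustrative assumption.
import json


def count_tokens_payload(text: str, model: str) -> str:
    """JSON body for POST /v1/messages/count_tokens."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": text}],
    }
    return json.dumps(body)


payload = count_tokens_payload("Paste any text here.", "claude-sonnet-4")
print(payload)
# POST this to https://api.anthropic.com/v1/messages/count_tokens with
# your x-api-key and anthropic-version headers.
```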