LLM Token Counter & Context Window Estimator

Paste any text to see how many tokens it consumes across major LLM tokenizers, and how much of each model's context window it would fill.

Characters: 0 · Words: 0 · Lines: 0

GPT-4o, GPT-4.1: 0 tokens (✓ Exact, o200k_base)
GPT-4, GPT-3.5: 0 tokens (✓ Exact, cl100k_base)
Claude (Opus/Sonnet/Haiku): 0 tokens (~ Estimate, 3.5 chars/token)
Llama 3, Mistral: 0 tokens (~ Estimate, 3.8 chars/token)
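The estimate rows above can be sketched in a few lines. Exact counts would come from the real tokenizer (for example, the tiktoken library for the OpenAI encodings), while the estimates simply divide character count by an average chars-per-token ratio; the ratios below mirror the ones the tool shows and are assumptions, not official figures.

```python
# Chars-per-token ratios matching the tool's estimate rows (assumed averages,
# not published tokenizer specs).
CHARS_PER_TOKEN = {
    "Claude (Opus/Sonnet/Haiku)": 3.5,
    "Llama 3, Mistral": 3.8,
}

def estimate_tokens(text: str, chars_per_token: float) -> int:
    """Rough token count: character length divided by an average ratio."""
    if not text:
        return 0
    # Any non-empty text costs at least one token.
    return max(1, round(len(text) / chars_per_token))

sample = "Paste any text here to see roughly how many tokens it uses."
for model, ratio in CHARS_PER_TOKEN.items():
    print(f"{model}: ~{estimate_tokens(sample, ratio)} tokens")
```

Character-based estimates drift with content type (code and non-English text tokenize less efficiently), which is why the tool flags them with "~" rather than "✓".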

Context Window Used

How much of each model's context window your input would consume:

Rough Cost Estimate (Input Only)

Approximate cost to send this as input to each model. Output tokens are billed separately and usually cost more.

Model | Input price (per 1M tokens) | Estimated cost

Honest Caveats