OpenAI Context Window Calculator

Check whether your prompts fit within the GPT-4o, GPT-4 Turbo, and GPT-3.5 Turbo context windows.

Max output for GPT-4o: 4,096 tokens
Example usage (default inputs):

System tokens: 0
User tokens: 0
Output tokens: 500
Total context usage: 0.4% of 128.0K
Remaining: 127,500 tokens
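The arithmetic behind the calculator is straightforward. A minimal sketch in Python (the function name and the per-model limits table are our own illustration; 128,000 and 16,385 tokens are the documented windows for GPT-4o/GPT-4 Turbo and GPT-3.5 Turbo):

```python
# Context window limits per model, in tokens.
CONTEXT_LIMITS = {
    "gpt-4o": 128_000,
    "gpt-4-turbo": 128_000,
    "gpt-3.5-turbo": 16_385,
}

def context_usage(model, system_tokens, user_tokens, output_tokens):
    """Return (used, remaining, percent_used) for a planned request."""
    limit = CONTEXT_LIMITS[model]
    used = system_tokens + user_tokens + output_tokens
    remaining = limit - used
    percent = 100 * used / limit
    return used, remaining, percent

# The default example above: 0 system + 0 user + 500 output on GPT-4o.
used, remaining, pct = context_usage("gpt-4o", 0, 0, 500)
print(used, remaining, round(pct, 1))  # 500 127500 0.4
```

Note that the reserved output tokens count toward usage even before the model generates anything, which is why the remaining figure is 127,500 rather than 128,000.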


FAQ

How large is GPT-4o's context window?
GPT-4o and GPT-4o mini both have a 128,000-token context window — roughly 96,000 words or about 200 pages of text. GPT-3.5 Turbo has a smaller 16,000-token window.
Does the system prompt count against the context window?
Yes. Every token in the system prompt, all previous user and assistant messages, and the new user message counts against the context window. The model's generated output shares the same window, so you must leave room for the expected output tokens as well.
What happens if I exceed the GPT-4o context limit?
OpenAI will return a context_length_exceeded error. You must reduce your prompt length, shorten the conversation history, or switch to a model with a larger context window.

Calculate context window usage for OpenAI GPT models. Enter system prompt, user message, and expected output tokens to see if your request fits within GPT-4o (128K), GPT-4 Turbo (128K), or GPT-3.5 Turbo (16K) limits.