AI Streaming JSON Viewer
Parse and pretty-print SSE stream chunks from OpenAI and similar APIs.
Related Tools
Fix broken LLM JSON: strip markdown fences, trailing commas, single quotes, and more.
Extract and fix JSON from mixed LLM output — handles prose, markdown, and concatenation.
Visually build JSON schemas for AI function calling and structured output.
Validate AI JSON output against a JSON Schema — check types, required fields, enums.
Generate a structured output prompt from a JSON example or schema.
Learn More
FAQ
- What is SSE (Server-Sent Events) in the context of AI APIs?
- When you call an AI API with stream: true, the server sends the response incrementally as a series of Server-Sent Events: text lines in the format "data: {json}", each followed by a blank line. Each such line is one chunk. SSE lets the client display text as it is generated rather than waiting for the full response.
- How do I get SSE stream output to paste into this tool?
- In your browser's DevTools, open the Network tab, select the streaming API request, and copy the raw response body. Alternatively, if you call the API programmatically, log the raw response body before parsing it. Then paste the raw "data: ..." lines into the tool.
- What does the [DONE] line mean?
- "data: [DONE]" is the standard SSE termination signal used by OpenAI-compatible APIs to indicate that the stream has ended. This tool automatically skips that line and does not count it as a chunk.
Paste raw Server-Sent Events (SSE) stream text from OpenAI chat completions or similar APIs. The viewer parses each line starting with "data: ", skips [DONE] markers, attempts JSON.parse on each chunk, and displays a numbered list of pretty-printed JSON objects. It also shows the total chunk count. Useful for debugging streaming LLM responses and understanding the chunk structure.
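
The parsing described above can be sketched in a few lines of JavaScript. This is an illustrative sketch of the same steps, not the tool's actual source; the function name and the error-object shape are assumptions:

```javascript
// Parse raw SSE text into an array of JSON chunk objects:
// keep lines starting with "data:", skip the [DONE] sentinel,
// and JSON.parse each remaining payload.
function parseSseChunks(raw) {
  const chunks = [];
  for (const line of raw.split(/\r?\n/)) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skips blank lines, comments, event: fields
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") continue; // termination sentinel, not counted as a chunk
    try {
      chunks.push(JSON.parse(payload));
    } catch {
      // Surface malformed chunks instead of silently dropping them
      // (hypothetical shape, for illustration only).
      chunks.push({ parseError: true, raw: payload });
    }
  }
  return chunks;
}

const raw = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]',
].join("\n");

const chunks = parseSseChunks(raw);
console.log(chunks.length); // 2 — the [DONE] line is skipped
```

Concatenating the delta.content fields of the resulting chunks reconstructs the streamed text, which is exactly what a client UI does as chunks arrive.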