Build an OpenAI System Prompt for a Coding Assistant

A well-structured system prompt is one of the most important factors in GPT model output quality. The system prompt sets the model's persona, defines the task, establishes output constraints, and prevents off-topic responses, all before the user sends their first message. This example shows a production-ready system prompt for a coding assistant with clearly separated sections for role, capabilities, output format requirements, and behavioral constraints.

The prompt follows the RICE structure: Role (what the assistant is), Instructions (what it should do), Constraints (what it must not do), and Examples (what good output looks like). The examples section is optional but dramatically improves consistency for tasks with specific output formats. For the coding assistant use case, a single example of the expected code block format with explanation structure is enough to enforce consistency across all responses.

For OpenAI models specifically, system prompts work best when instructions are positive (tell the model what to do) rather than only negative (tell it what not to do). "Always wrap code in markdown code blocks with the language identifier" outperforms "Don't return code without formatting". Concrete, specific instructions outperform vague quality descriptors: "Explain the time complexity in the last sentence of your explanation" is more reliable than "Be thorough".
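The RICE sections can be kept as separate named strings and joined into the final system prompt, which makes each section easy to edit and review independently. The sketch below is illustrative: the section contents and the user question are placeholder assumptions, not a real deployment.

```python
# Minimal sketch: assemble a RICE-structured system prompt from named
# sections, then place it as the first message of the conversation.
ROLE = "You are a senior software engineer and code reviewer."

INSTRUCTIONS = (
    "Answer coding questions, review code, and explain algorithms. "
    "Always wrap code in markdown code blocks with the language identifier. "
    "Explain the time complexity in the last sentence of your explanation."
)

CONSTRAINTS = (
    "Do not generate code for harmful applications. "
    "If the question is ambiguous, ask one clarifying question before answering."
)

EXAMPLE = (
    "## Example\n"
    "User: Reverse a string in Python.\n"
    "Assistant: ```python\ns[::-1]\n```\n"
    "Slicing with a step of -1 returns a reversed copy. Runs in O(n) time."
)

# Join sections with blank lines so each reads as a distinct block.
system_prompt = "\n\n".join([ROLE, INSTRUCTIONS, CONSTRAINTS, EXAMPLE])

# The assembled prompt is always the first message in the list.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "How do I merge two sorted lists in Python?"},
]
```

Keeping sections as named constants also makes A/B testing a single section (say, swapping one EXAMPLE for another) a one-line change.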

Example
Role: Senior software engineer and code reviewer
Language expertise: Python, JavaScript, TypeScript, Go, SQL
Task: Answer coding questions, review code, explain algorithms

Output format:
- Lead with the code solution in a markdown code block with language tag
- Follow with a 2-3 sentence explanation
- End with time and space complexity if relevant

Constraints:
- Do not generate code for harmful applications
- If the question is ambiguous, ask one clarifying question before answering
- Prefer idiomatic language patterns over clever tricks

FAQ

How long should a system prompt be?
Long enough to fully specify the behavior, short enough to leave room for conversation history. 200-800 tokens covers most use cases. Beyond 1,000 tokens, diminishing returns set in — the model struggles to attend to all instructions equally.
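A quick way to sanity-check a draft against these ranges is a rough length estimate. Exact counts require a real tokenizer such as tiktoken; the ~4 characters per token figure below is a common rule-of-thumb approximation for English text, used here as a stated assumption.

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    This is an approximation; use a real tokenizer for exact counts."""
    return max(1, len(text) // 4)

def length_verdict(prompt: str) -> str:
    """Classify a draft system prompt against the ranges in the FAQ."""
    n = approx_tokens(prompt)
    if n < 200:
        return "possibly underspecified"
    if n <= 800:
        return "typical range"
    if n <= 1000:
        return "long but workable"
    return "diminishing returns likely"
```

This is only a triage tool; a 150-token prompt can be complete for a narrow task, and the verdict strings here are illustrative labels, not standard terminology.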
Should I include examples in the system prompt?
Yes, for tasks with specific output formats. One or two examples of ideal responses dramatically improve format consistency. Place examples after the instructions section, clearly delimited with a header like "## Example".
Can the user override system prompt instructions?
In principle, no — system prompt instructions take priority. In practice, sufficiently forceful user messages can cause the model to deviate. For safety-critical constraints, rely on server-side enforcement rather than system prompt instructions alone.
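Server-side enforcement can be as simple as a filter that inspects the model's reply before it reaches the user, so a safety guarantee does not depend on the model honoring the system prompt. The sketch below assumes a pattern-based filter; the blocked patterns (a private-key header and an AWS-style access-key shape) are hypothetical placeholders for whatever your policy actually forbids.

```python
import re

# Hypothetical policy patterns; real deployments would maintain their own.
BLOCKED_PATTERNS = [
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access-key-id shape
]

def enforce(reply: str) -> str:
    """Return the reply unchanged, or a withheld placeholder if any
    blocked pattern appears. Runs server-side, after model generation."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(reply):
            return "[response withheld by policy filter]"
    return reply
```

Unlike a system prompt instruction, this check cannot be talked out of by a forceful user message, which is why safety-critical constraints belong here.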
