Write a Claude System Prompt for a Research Assistant

Claude's system prompt is the highest-priority instruction channel in the Anthropic API. Unlike GPT models, where system instructions compete with few-shot examples and user messages, Claude treats the system prompt as the ground rules for all subsequent interaction. This example shows a structured system prompt for a research assistant, built in Claude's recommended XML-tag format, which Claude is specifically trained to parse and follow with high fidelity.

Claude responds especially well to XML-tag organization within system prompts. Wrapping instruction sections in tags like <role>, <task>, <format>, and <constraints> helps Claude parse the prompt structure even when the prompt is very long. This is a Claude-specific strength: GPT models are not trained to give special weight to XML tags. The research assistant prompt below uses this structure to define the persona, specify citation requirements, set formatting preferences, and establish behavioral constraints.

For production Claude deployments, write the system prompt once and test it extensively before launch. Changes to a system prompt can have unpredictable effects on previously reliable behaviors, especially in long prompts. Version-control your system prompts and test edge cases with every version change.

Example
<role>
You are a research assistant specializing in scientific and academic literature. You have deep knowledge across natural sciences, social sciences, and technology domains.
</role>

<task>
Help users understand complex research papers, synthesize findings across multiple sources, identify methodological limitations, and explain technical concepts in accessible language.
</task>

<format>
- Use markdown headers to structure long responses
- Cite sources using [Author, Year] inline notation
- Summarize key findings in bullet points before detailed explanation
- Use code blocks for statistical formulas or technical notation
</format>

<constraints>
- Do not fabricate citations or study results
- Acknowledge uncertainty when the evidence is mixed
- Recommend consulting the original papers for critical decisions
</constraints>
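The prompt above is sent as a top-level `system` field in the Messages API, not as a message in the conversation. A minimal stdlib-only sketch of assembling such a request follows; the model id and `anthropic-version` header value are assumptions, so check the current API documentation before use (the request is only built here, not sent).

```python
# Sketch: package the research-assistant system prompt into a Messages
# API request. The model id and anthropic-version value are assumptions.
import json
import urllib.request

SYSTEM_PROMPT = """\
<role>
You are a research assistant specializing in scientific and academic literature.
</role>

<constraints>
- Do not fabricate citations or study results
</constraints>"""

def build_request(user_message: str, api_key: str) -> urllib.request.Request:
    """Assemble (but do not send) a Messages API request.

    Note that the system prompt is a top-level field, separate from the
    messages list.
    """
    payload = {
        "model": "claude-sonnet-4-20250514",  # assumed model id
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(payload).encode(),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",  # assumed version string
            "content-type": "application/json",
        },
    )
```

Keeping the system prompt in a single module-level constant like this makes it easy to version-control and diff, per the advice above.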

FAQ

Why does Claude respond well to XML tags in system prompts?
Claude's training included many documents structured with XML-like tags, and Anthropic fine-tunes Claude to parse these structures reliably. Tags like <role> and <constraints> help Claude segment the prompt and assign appropriate weight to each section.
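One way to keep tagged sections consistent is to assemble the system prompt programmatically. This is a hypothetical helper (the function name and section names are my own, not an Anthropic API) that wraps each named section in a matching XML tag pair:

```python
# Hypothetical helper: build an XML-tag-structured system prompt from
# named sections, so every section is guaranteed a matching close tag.
def build_system_prompt(sections: dict[str, str]) -> str:
    parts = []
    for tag, body in sections.items():
        parts.append(f"<{tag}>\n{body.strip()}\n</{tag}>")
    return "\n\n".join(parts)

prompt = build_system_prompt({
    "role": "You are a research assistant.",
    "constraints": "- Do not fabricate citations or study results",
})
```

Generating the tags rather than hand-writing them avoids the unmatched-tag typos that creep into long, hand-edited prompts.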
How long can a Claude system prompt be?
A system prompt can occupy up to the full 200,000-token context window, but in practice the sweet spot is roughly 200-2,000 tokens. Very long system prompts can cause Claude to miss instructions from earlier sections, especially once the context fills up with conversation history.
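A cheap guardrail is to flag prompts that drift past that sweet spot during development. The sketch below uses the crude four-characters-per-token heuristic, which is an approximation only; use a real tokenizer or the API's token-counting endpoint for accurate numbers.

```python
# Rough budget check for system prompt length. The ~4 chars/token ratio
# is a heuristic for English text, not an exact tokenizer.
def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def within_prompt_budget(prompt: str, budget_tokens: int = 2000) -> bool:
    """True if the prompt is likely inside the suggested token budget."""
    return approx_tokens(prompt) <= budget_tokens
```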
Can I update the system prompt mid-conversation?
The Messages API does not have sessions — every request is stateless. The system prompt is sent with every request. You can change it between requests, but the model has no memory of previous system prompts.
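Because every request is stateless, "updating mid-conversation" just means attaching a different system prompt to the next request while replaying the message history. A sketch under those assumptions (the `make_payload` helper and placeholder model id are hypothetical, not part of the SDK):

```python
# Sketch: the caller swaps the system prompt between two stateless
# requests. The model only ever sees the prompt attached to the current
# request; it has no memory of the previous one.
def make_payload(system: str, history: list[dict]) -> dict:
    return {
        "model": "MODEL_ID",  # placeholder, not a real model id
        "max_tokens": 1024,
        "system": system,
        "messages": history,
    }

history = [{"role": "user", "content": "Explain p-values."}]
first = make_payload("You are a statistics tutor.", history)

history = history + [
    {"role": "assistant", "content": "A p-value is..."},
    {"role": "user", "content": "Shorter answers, please."},
]
second = make_payload("You are a terse statistics tutor.", history)
```

The full history travels with each request, so the second payload carries three messages but only the new system prompt.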
