Prompt Engineering Basics: A Practical Guide
Prompt engineering is the practice of crafting inputs to language models that reliably produce useful outputs. While LLMs are remarkably flexible, the way you phrase a request dramatically affects the quality, format, and accuracy of the response. This guide covers the foundational techniques used by AI engineers — from basic instructions to few-shot prompting and chain-of-thought reasoning — with concrete examples for each.
The Anatomy of an Effective Prompt
Every effective prompt has four components: (1) a role that sets the model's persona and expertise level; (2) a task that clearly states what the model must do; (3) context that provides the information the model needs to complete the task; and (4) an output format that specifies how the response should be structured. Not every prompt needs all four, but including them reduces ambiguity and improves consistency. A weak prompt says "Summarise this text." A strong prompt says "You are a technical writer. Summarise the following code review for a product manager who does not know TypeScript. Use three bullet points and plain English."
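The four components above can be sketched as a simple template. This is an illustrative helper, not a standard API — the function and field names are made up for the example:

```python
# A minimal sketch of the four-part prompt structure: role, task,
# context, output format. Template layout is illustrative.

def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a prompt from the four components described above."""
    return (
        f"{role}\n\n"
        f"Task: {task}\n\n"
        f"Context:\n{context}\n\n"
        f"Output format: {output_format}"
    )

prompt = build_prompt(
    role="You are a technical writer.",
    task="Summarise the following code review for a product manager "
         "who does not know TypeScript.",
    context="<paste the code review here>",
    output_format="Three bullet points, plain English.",
)
print(prompt)
```

Keeping the components as separate parameters makes it easy to swap context per request while holding the role and format stable.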
Role Prompting
Telling the model to "act as" a specific expert significantly improves the relevance and depth of its responses. "You are a security engineer" produces a different (and usually better) security review than no role at all. Role prompting works because it activates patterns in the model's training data associated with that persona — the typical reasoning style, vocabulary, and priorities of that expert. Use specific roles rather than generic ones: "You are a PostgreSQL database engineer who specialises in query optimisation" outperforms "You are a database expert". Claude responds particularly well to role prompting when the role is placed in a <role> XML tag.
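A specific role wrapped in the tag suggested above might be assembled like this (the helper is illustrative; only the `<role>` tag name comes from the text):

```python
# Sketch: a specific, rather than generic, role wrapped in a <role> tag.

def with_role(role: str, task: str) -> str:
    return f"<role>{role}</role>\n\n{task}"

prompt = with_role(
    "You are a PostgreSQL database engineer who specialises in "
    "query optimisation.",
    "Review the query below and suggest any missing indexes.",
)
print(prompt)
```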
Few-Shot Prompting
Few-shot prompting provides 2–5 examples of the desired input-output pairs before the actual task. This technique is one of the most reliable ways to control output format and style, because the model infers the pattern from the examples rather than requiring lengthy instructions. For example, to extract structured data from free text, show two or three examples with input text and the desired JSON output, then present the real input. The model will format its output to match the examples. Zero-shot prompting (no examples) is faster but less consistent for structured outputs.
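A few-shot extraction prompt of the kind described above can be assembled like this. The example data and helper name are invented for illustration:

```python
import json

# Sketch of a few-shot prompt for structured extraction: the examples
# teach the output format, and the model continues the pattern.
examples = [
    ("Alice Smith joined in 2019 as a backend engineer.",
     {"name": "Alice Smith", "year": 2019, "role": "backend engineer"}),
    ("Bob Jones has led the data team since 2021.",
     {"name": "Bob Jones", "year": 2021, "role": "data team lead"}),
]

def few_shot_prompt(examples, new_input: str) -> str:
    parts = ["Extract name, year, and role as JSON.\n"]
    for text, output in examples:
        parts.append(f"Input: {text}\nOutput: {json.dumps(output)}\n")
    parts.append(f"Input: {new_input}\nOutput:")
    return "\n".join(parts)

print(few_shot_prompt(examples, "Carol Lee started in 2023 as a designer."))
```

Ending the prompt with a bare `Output:` cues the model to complete the pattern with JSON alone, matching the examples.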
Chain-of-Thought Reasoning
Chain-of-thought (CoT) prompting asks the model to reason through a problem step by step before giving the final answer. Adding "Think step by step" to a complex reasoning task (mathematics, multi-step logic, debugging) improves accuracy significantly on tasks that require intermediate reasoning steps. For sensitive decisions, use "Show your reasoning before giving your final answer" to make the model's assumptions visible and checkable. Some models build this in: OpenAI's o-series reasoning models reason internally before answering, and recent Claude models offer extended thinking, a dedicated reasoning pass activated via the thinking parameter. For simpler tasks, CoT is unnecessary and wastes tokens.
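The "only when needed" rule above can be sketched as a small helper; the `complex_task` flag stands in for whatever heuristic you use to classify requests:

```python
# Sketch: append a chain-of-thought trigger only for complex tasks,
# since CoT wastes tokens on simple ones.

def with_cot(task: str, complex_task: bool) -> str:
    if complex_task:
        return (task + "\n\nThink step by step. "
                "Show your reasoning before giving your final answer.")
    return task

print(with_cot("Debug why this recursive function overflows the stack.",
               complex_task=True))
```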
Output Format Control
Specifying the output format in detail produces more consistent and parseable responses. For structured data, ask for JSON and provide the schema: "Return a JSON object with keys name (string), score (1-10), and issues (array of strings)." For prose, specify the section structure: "Respond with exactly three paragraphs: background, analysis, recommendation." For code, specify the language and whether to include explanations. Explicitly forbid preamble text like "Sure, here's the JSON you requested:" — such preamble breaks JSON parsing. Use "Return only the JSON object, with no additional text" to suppress it.
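Even with the instruction above, a defensive parser is a cheap guard against stray preamble. A minimal sketch, assuming the reply contains exactly one JSON object:

```python
import json

# Sketch: pair the format instruction with defensive parsing that
# tolerates preamble text around a single JSON object.
FORMAT_INSTRUCTION = (
    "Return a JSON object with keys name (string), score (1-10), and "
    "issues (array of strings). Return only the JSON object, with no "
    "additional text."
)

def parse_json_reply(reply: str) -> dict:
    """Extract the first-to-last brace span and parse it as JSON."""
    start = reply.index("{")
    end = reply.rindex("}") + 1
    return json.loads(reply[start:end])

reply = ('Sure, here\'s the JSON you requested:\n'
         '{"name": "auth.ts", "score": 7, "issues": ["no input validation"]}')
print(parse_json_reply(reply))
```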
Common Prompt Engineering Mistakes
The most common mistakes are: (1) vague tasks ("write something about X") that give the model too much latitude; (2) contradictory instructions ("be concise" followed by "provide comprehensive detail"); (3) too many instructions at once — models follow the last instruction most reliably, so put the most important instruction last; (4) assuming the model knows your codebase, business context, or conventions — always include relevant context explicitly; (5) not testing prompts against edge cases before deploying to production, where adversarial inputs can produce unexpected outputs.
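Some of these mistakes can be caught mechanically before a prompt ships. A toy rule-based check for two of them — vague task verbs and the concise-vs-comprehensive contradiction — where the phrase lists are illustrative, not an exhaustive linter:

```python
# Toy prompt checks: flag vague task phrasing and one common
# contradiction. Word lists are made up for illustration.
VAGUE_PHRASES = ("write something", "talk about", "do something")

def check_prompt(prompt: str) -> list[str]:
    issues = []
    lowered = prompt.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lowered:
            issues.append(f"vague task: '{phrase}'")
    if "concise" in lowered and "comprehensive" in lowered:
        issues.append("possible contradiction: 'concise' vs 'comprehensive'")
    return issues

print(check_prompt("Write something about caching. "
                   "Be concise but comprehensive."))
```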
FAQ
- Is prompt engineering still necessary with newer, smarter models?
- Yes, but the gap between a good prompt and a mediocre one narrows as models improve. Modern models handle ambiguity better, but for production tasks requiring consistent structured outputs, explicit prompt engineering remains essential for reliability.
- Should I use a system prompt or a user message for instructions?
- Use the system prompt for stable instructions that apply to every message (role, output format, constraints). Use the user message for per-request context and the actual task. This separation also enables prompt caching, which reduces costs significantly for stable system prompts.
- How many few-shot examples should I provide?
- Two to five examples is the sweet spot for most tasks. More examples improve consistency but add token costs. If the output format is simple (JSON with 2-3 fields), two examples are usually sufficient. If the task involves complex reasoning or nuanced judgment, four or five examples produce more reliable results.
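The system/user split described in the FAQ can be sketched as a generic request payload. The shape is illustrative and not tied to any particular provider SDK:

```python
# Sketch: stable instructions live in the system slot (cacheable),
# per-request content in the user message.
SYSTEM_PROMPT = (
    "You are a security engineer. Review code for vulnerabilities. "
    "Return findings as a JSON array of strings."
)

def make_request(user_input: str) -> dict:
    return {
        "system": SYSTEM_PROMPT,  # identical across requests, so cacheable
        "messages": [{"role": "user", "content": user_input}],
    }

req = make_request("Review this login handler: ...")
print(req)
```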