AI Prompt Templates
A free, browsable library of AI prompt templates for ChatGPT, Claude, Gemini, and LLaMA. Each prompt includes variables, an explanation, and a worked example.
coding
Code Review Prompt
This prompt structures code reviews into five clear categories so the AI produces actionable, prioritised feedback rather than vague comments. The severity labels help you triage which issues to fix first, and requiring a concrete fix snippet makes the review immediately actionable. It works best when you include a one-sentence context description so the model understands the intended behaviour.
Code Refactoring Prompt
Effective refactoring prompts must specify what to change and what must not change. This template uses explicit constraints to prevent the AI from breaking the public API or silently altering behaviour, which is the most common failure mode when asking models to rewrite code. The summary table helps you understand every change before you commit it.
Debugging Assistant Prompt
Most debugging sessions fail not because the fix is hard to find but because the problem is poorly defined. This prompt forces you to structure the bug report before asking the AI, which dramatically improves the quality of the response. The five-section output format (root cause, mechanism, fix, verification, prevention) ensures you get a complete answer rather than just a code snippet.
Code Documentation Prompt
Auto-generated documentation is only useful when it goes beyond repeating the function signature. This prompt requires the AI to describe behaviour, edge cases, and side effects, which are the things developers actually need when reading unfamiliar code. Specifying the target audience (junior dev vs. API consumer) significantly changes the level of detail and tone.
Unit Test Generation Prompt
AI-generated tests are most valuable when they catch edge cases that developers miss. This prompt systematically covers four test categories and requires descriptive test names, which makes the test suite serve as living documentation. The mock_strategy field is important — without guidance the model may inline real HTTP calls or database queries that break in CI.
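A minimal sketch of the kind of suite such a prompt might return, written here for a hypothetical `parse_price` helper (the function and its rules are invented for illustration). Note the four categories and the descriptive test names:

```python
# Hypothetical function under test: parse a "1,234.56"-style price string.
def parse_price(text: str) -> float:
    cleaned = text.strip().replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    value = float(cleaned)
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

# Happy path: a well-formed price parses to the expected float.
def test_parses_plain_price():
    assert parse_price("19.99") == 19.99

# Edge case: thousands separators and surrounding whitespace are tolerated.
def test_strips_commas_and_whitespace():
    assert parse_price(" 1,234.50 ") == 1234.50

# Error case: blank input raises a descriptive ValueError.
def test_rejects_empty_string():
    try:
        parse_price("   ")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for empty input")

# Boundary: zero is a valid price, negative is not.
def test_zero_is_valid_but_negative_is_not():
    assert parse_price("0") == 0.0
    try:
        parse_price("-1")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for negative price")

for test in (test_parses_plain_price, test_strips_commas_and_whitespace,
             test_rejects_empty_string, test_zero_is_valid_but_negative_is_not):
    test()
```

The error-case tests are where a mock strategy matters most in real code: without one, "error case" tests tend to hit live services to provoke the error.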
REST API Design Prompt
Good API design is about consistency and predictability. This prompt enforces REST conventions (plural nouns, proper HTTP methods, meaningful status codes) and produces a structured endpoint specification that can be turned into an OpenAPI document. The pagination and versioning sections prevent common architectural mistakes that are expensive to fix after launch.
Error Handling Prompt
Error handling is frequently added as an afterthought, resulting in inconsistent patterns and swallowed exceptions. This prompt designs the complete error handling strategy before writing code — typed errors, logging context, and user-facing messages — ensuring a consistent approach throughout the codebase.
Performance Optimisation Prompt
Optimisation without measurement is guessing. This prompt requires current metrics upfront so the AI can target the right bottleneck, and asks for a benchmarking plan so you can verify the improvement actually happened. The trade-offs section prevents micro-optimisations that make code unmaintainable.
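The "measure first, then verify" discipline the prompt enforces can be sketched in miniature with the standard library's `timeit` (the two string-building functions are invented for illustration, and no winner is assumed, which is exactly the point of benchmarking):

```python
import timeit

# Current implementation: repeated string concatenation.
def concat_naive(n: int) -> str:
    s = ""
    for i in range(n):
        s += str(i)
    return s

# Candidate optimisation: a single join.
def concat_join(n: int) -> str:
    return "".join(str(i) for i in range(n))

# Behaviour must be preserved before any timing comparison is meaningful.
assert concat_naive(1000) == concat_join(1000)

# Benchmark both under identical conditions; only the numbers decide.
naive = timeit.timeit(lambda: concat_naive(1000), number=200)
joined = timeit.timeit(lambda: concat_join(1000), number=200)
print(f"naive: {naive:.4f}s  join: {joined:.4f}s")
```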
Code Explanation Prompt
Code explanations are most valuable during onboarding, code review, and incident analysis. This prompt produces a layered explanation (summary → walkthrough → concepts → gotchas) that serves readers at different levels of depth. The concrete example walkthrough is what makes complex algorithms click.
Migration Guide Prompt
Migrations fail when the rollback plan is an afterthought. This prompt designs rollback capability as a first-class requirement, alongside the zero-downtime strategy. The pre-migration checklist prevents migrations that fail immediately due to missing prerequisites, which is the most common cause of emergency rollbacks.
writing
Technical Documentation Prompt
Technical documentation is most often abandoned because it is unclear who it is for and what they need to do. This prompt forces a specific audience and documentation type, which determines the appropriate level of detail and structure. The troubleshooting section is the highest-value addition — it captures tribal knowledge that often exists only in Slack threads.
Changelog Writing Prompt
Most changelogs are written by developers for developers, burying user-visible changes in implementation details. This prompt enforces user-centric language and the Keep a Changelog convention, which produces a changelog that users actually read. The breaking change detection prevents users from being surprised by removed features.
README Generation Prompt
A good README is the most important documentation a project has. This prompt generates all standard sections including the often-forgotten configuration section (listing all env vars) and contributing guide, which reduces the bus factor and speeds up onboarding. The quick start section is critical — new users decide within 5 minutes whether to invest further.
Commit Message Prompt
Good commit messages are as important as good code — they are the primary source of context when debugging production issues six months later. This prompt focuses on the WHY rather than the WHAT (which the diff already shows) and produces multiple subject line options so you can choose the level of detail appropriate for the change.
Blog Post Outline & Draft Prompt
Blog posts generated without structure guidance tend to produce generic, meandering content. This prompt forces a specific narrative arc (hook, intro, body, summary, CTA) and requires at least one practical code example, which is what developer readers value most. Specifying the SEO keyword ensures the model works it into the content naturally rather than requiring a separate pass.
Professional Email Draft Prompt
Effective professional emails require the right tone, brevity, and a single clear action. This prompt generates three subject line options (which you can A/B test) and enforces a word limit to prevent rambling. The sender and recipient roles help the AI calibrate the appropriate level of formality and persuasion style.
Text Summarisation Prompt
Summaries fail when the AI does not know what the reader already knows or what they need to do with the information. Specifying the audience changes the vocabulary and level of detail. The focus_areas and exclusions fields prevent the AI from summarising the less important parts of a long document while omitting the key decisions or metrics.
Proofreading & Editing Prompt
This prompt treats editing as a collaboration rather than a rewrite. The change log is the most important feature — it lets you accept or reject individual edits and understand why each change was made, preserving your editorial control. The preserve field prevents the AI from stripping out the personality or specific terms that make your writing distinctive.
data
JSON Transformation Prompt
JSON transformations are error-prone when null fields or unexpected types appear. This prompt requires both a sample input and the expected output, so the AI can derive the exact transformation logic rather than guessing. The null-handling section prevents silent data loss when real data differs from the sample.
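A small sketch of what that derived logic looks like in practice, using an invented record shape: every nullable field gets an explicit decision (default, sentinel, or zero) instead of a `KeyError` or a silent `None` propagating downstream.

```python
import json

# Hypothetical sample input: a raw API record to flatten into a report row.
raw = json.loads("""
{"id": 7, "user": {"name": "Ada", "email": null},
 "tags": ["a", "b"], "score": null}
""")

def transform(record: dict) -> dict:
    user = record.get("user") or {}               # tolerate a missing/null user object
    return {
        "id": record["id"],
        "name": user.get("name", ""),             # default instead of KeyError
        "email": user.get("email") or "unknown",  # explicit null handling
        "tag_count": len(record.get("tags") or []),
        "score": record.get("score") if record.get("score") is not None else 0.0,
    }

row = transform(raw)
print(row)  # {'id': 7, 'name': 'Ada', 'email': 'unknown', 'tag_count': 2, 'score': 0.0}
```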
Regex Generation Prompt
Regex is one of the most misread and miswritten tools in programming. This prompt generates regexes with built-in test cases and a breakdown table explaining each component, which dramatically reduces the chance of a subtle bug making it to production. The must-not-match examples are as important as the must-match ones.
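As an illustration, here is the shape of output such a prompt aims for, using an ISO-8601 date pattern (deliberately simplified: it constrains month and day ranges but does not validate month lengths or leap years). The paired must-match and must-not-match cases are executable, not just listed:

```python
import re

# Illustrative pattern for YYYY-MM-DD dates, with month (01-12) and
# day (01-31) ranges constrained rather than a lazy \d{2}.
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

must_match = ["2024-01-31", "1999-12-01"]
must_not_match = [
    "2024-13-01",  # month out of range
    "2024-00-10",  # month zero
    "2024-1-5",    # missing zero padding
    "20240131",    # no separators
]

for s in must_match:
    assert ISO_DATE.match(s), f"expected match: {s}"
for s in must_not_match:
    assert not ISO_DATE.match(s), f"expected no match: {s}"
print("all regex test cases passed")
```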
API Response Parsing Prompt
API integrations break most often when the response structure changes or contains unexpected null values. This prompt generates both the type definitions and a robust mapper with validation, so unexpected API changes produce a clear error at the parsing boundary rather than a cryptic downstream failure. The zod/pydantic validation section is particularly valuable for catching schema drift early.
Data Validation Rules Prompt
Validation that returns only the first error forces users to submit a form multiple times to fix all issues. This prompt generates a collect-all-errors validator that surfaces every problem at once, along with human-readable messages rather than technical codes. The sanitisation step (trimming whitespace, normalising Unicode) prevents a class of bugs where visually identical inputs produce different results.
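A minimal sketch of the pattern, for an invented signup form: sanitise first, then accumulate every failure into a list of plain-language messages instead of raising on the first one.

```python
import unicodedata

def validate_signup(form: dict) -> list[str]:
    """Return every problem at once, with human-readable messages."""
    # Sanitise first: trim whitespace and normalise Unicode so visually
    # identical inputs compare equal.
    username = unicodedata.normalize("NFC", form.get("username", "")).strip()
    email = form.get("email", "").strip()

    errors = []
    if len(username) < 3:
        errors.append("Username must be at least 3 characters long.")
    if "@" not in email:
        errors.append("Email address must contain an '@' sign.")
    if len(form.get("password", "")) < 8:
        errors.append("Password must be at least 8 characters long.")
    return errors  # an empty list means the form is valid

errors = validate_signup({"username": " al ", "email": "no-at", "password": "short"})
print(len(errors), "problems found")  # 3 problems found
```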
Database Schema Design Prompt
Database schema mistakes are the most expensive to fix after launch because they require data migrations. This prompt surfaces design decisions (UUID vs. serial IDs, soft vs. hard deletes, indexing strategy) before they become permanent, and aligns the index strategy with actual access patterns rather than adding indexes speculatively.
Data Analysis Prompt
Most AI data analysis prompts produce vague observations like "sales increased in Q2". This prompt forces the model to answer a specific business question, cite supporting numbers, and distinguish findings from recommendations. The Limitations section is critical — it prevents overconfident conclusions from small or incomplete datasets.
SQL Query Generation Prompt
Generating SQL without schema context produces queries that reference non-existent columns or miss join conditions. This prompt requires the schema upfront and asks for comments inside the query, which makes the generated SQL much easier to understand and debug. The performance notes section often surfaces missing index suggestions that would otherwise require a separate query.
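Here is the idea end to end on an invented two-table schema, run against an in-memory SQLite database: the schema is supplied first, and the generated query carries comments on every join, aggregate, and grouping choice.

```python
import sqlite3

# Assumed schema, given to the model up front.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         user_id INTEGER REFERENCES users(id),
                         total REAL);
    INSERT INTO users  VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 75.0), (12, 2, 40.0);
""")

# The kind of commented query the prompt asks for.
query = """
SELECT u.name,
       SUM(o.total) AS lifetime_value     -- aggregate per user, not per order
FROM users u
JOIN orders o ON o.user_id = u.id         -- explicit join condition, no cross join
GROUP BY u.id                             -- group on the key, not the display name
ORDER BY lifetime_value DESC
"""
print(conn.execute(query).fetchall())  # [('Ada', 100.0), ('Grace', 40.0)]
```

The accompanying performance note here would flag that `orders.user_id` has no index, so the join scans the whole table.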
CSV Data Processing Prompt
CSV processing tasks involve numerous small decisions about null handling, column types, and deduplication logic. This prompt makes those decisions explicit before code generation, which prevents the AI from making silent assumptions. The validation section (row count before/after) is essential for catching bugs where transformations silently drop more rows than expected.
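A compact sketch of those explicit decisions, on an invented four-row file: each drop rule is stated in a comment, and the before/after row counts are asserted rather than assumed.

```python
import csv
import io

# Hypothetical input with a duplicate id and a row missing its email.
raw = """id,email
1,a@example.com
2,b@example.com
2,b@example.com
3,
"""

rows = list(csv.DictReader(io.StringIO(raw)))
before = len(rows)

# Decisions made up front: drop rows with an empty email,
# deduplicate on id keeping the first occurrence.
seen, cleaned = set(), []
for row in rows:
    if not row["email"]:
        continue          # null handling: skip the row, don't invent a value
    if row["id"] in seen:
        continue          # dedupe key: id, first occurrence wins
    seen.add(row["id"])
    cleaned.append(row)

after = len(cleaned)
assert before - after == 2, f"expected to drop exactly 2 rows, dropped {before - after}"
print(before, "->", after)  # 4 -> 2
```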
devops
Nginx Configuration Prompt
Nginx configurations are easy to get almost right but dangerously wrong in security details. This prompt generates configurations with security headers, modern TLS settings, and rate limiting by default — the settings that are most often missing from hand-written configs. The test command and tuning parameters make the config immediately operational.
GitHub Actions Workflow Prompt
GitHub Actions workflows are deceptively easy to write insecurely. This prompt enforces pinned action versions (prevents supply chain attacks), minimal GITHUB_TOKEN permissions, and dependency caching by default. The timeout requirement prevents runaway jobs from consuming billable minutes indefinitely.
Terraform Configuration Prompt
Terraform configurations without modules and variable abstraction become unmanageable at scale. This prompt enforces variable-driven configurations, proper tagging, and module structure from the start, preventing the "spaghetti Terraform" that accumulates when infrastructure grows incrementally without architectural planning.
Monitoring & Alerting Setup Prompt
Alert fatigue is the biggest cause of missed incidents. This prompt designs alerts from the SLO backwards — starting with what the user experiences and working inward — rather than alerting on every metric. The runbook requirement ensures that whoever responds to an alert at 3am has immediate guidance on how to diagnose it.
Docker Compose Prompt
docker-compose.yml files often use :latest tags and bind mounts, which cause "works on my machine" problems. This prompt generates configs with pinned tags, named volumes, and proper dependency health checks, so the entire stack starts cleanly every time. The .env.example file prevents the common problem of missing environment variables causing mysterious startup failures.
Dockerfile Generation Prompt
Dockerfiles generated without guidance often use :latest tags, run as root, and copy the entire source tree before installing dependencies (which breaks layer caching). This prompt enforces production best practices by default: multi-stage builds, non-root user, specific tags, HEALTHCHECK, and cache-optimised layer order. The size estimate helps you catch accidentally large images before they slow down deployments.
CI/CD Pipeline Configuration Prompt
CI/CD configurations involve many interdependent jobs and conditional triggers that are easy to get wrong. This prompt defines the full pipeline topology (test on PR, build on merge, deploy with approval gate) so the AI generates a complete, working pipeline rather than a fragment. The caching requirement is especially important — without it, dependency installs on every run significantly increase build times and costs.
Kubernetes Manifest Generation Prompt
Kubernetes manifests have many interacting fields that are easy to misconfigure. This prompt generates four interdependent resources (Deployment, Service, Ingress, HPA) with consistent naming and labels, which is critical for Kubernetes to correctly associate them. The comment requirement makes the manifest self-documenting, which is essential for teams maintaining K8s configurations.
security
Threat Modeling Prompt
Threat modeling is most valuable when done during design, not after implementation. This prompt applies the STRIDE framework systematically, ensuring every threat category is considered rather than just the ones currently in the news. The attack narrative requirement forces the AI to confirm that threats are actually exploitable, not just theoretical.
Penetration Test Plan Prompt
Penetration tests without a detailed plan often miss systematic coverage of all attack vectors. This prompt creates a structured test plan aligned with OWASP methodology that ensures consistent coverage and produces a report structure that satisfies compliance requirements. The rules of engagement section is critical for preventing misunderstandings about what testers are permitted to do.
Security Headers Configuration Prompt
Content Security Policy is the most complex security header to configure correctly because it must allowlist every legitimate resource while blocking everything else. This prompt generates a CSP tailored to the specific third-party resources in use, avoiding the over-permissive `unsafe-inline` and `unsafe-eval` directives that negate much of the protection.
Dependency Audit Prompt
Dependency audits done quarterly catch security issues before they become incidents. This prompt goes beyond vulnerability scanning to include abandoned packages (no maintainer = no future patches), license compliance (legal risk), and functional duplication (maintenance burden). The prioritised upgrade plan makes the audit immediately actionable.
Security Code Audit Prompt
Security audits require a systematic approach that covers every vulnerability category, not just the obvious ones. This prompt walks the AI through the OWASP Top 10 categories and requires a proof-of-concept for each finding, which forces the model to confirm exploitability rather than listing theoretical risks. The threat model field is critical — a vulnerability that requires database access is low severity for an unauthenticated attacker but critical for a malicious insider.
Dependency Vulnerability Check Prompt
Dependency audits are most valuable when they prioritise by actual impact rather than just CVE score. This prompt asks for the full attack chain (impact on your specific application type) and a breaking-changes note for each upgrade, which is what developers actually need to plan the remediation work. The abandoned package and typosquatting checks catch supply chain risks that automated scanners often miss.
testing
Integration Test Prompt
Integration tests often catch bugs that unit tests miss because they exercise the interactions between components, though they require more setup. This prompt generates test suites with proper isolation (separate test database, seeded fixtures, cleanup) to prevent the common problem of tests passing in isolation but failing when run together.
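The isolation pattern can be sketched with a throwaway SQLite file standing in for the test database (names and schema invented): each test gets a freshly seeded database and the fixture removes it afterwards, so no run can see another run's leftovers.

```python
import os
import sqlite3
import tempfile
from contextlib import contextmanager

@contextmanager
def test_db():
    """Fresh database per test: seeded fixture in, file deleted on the way out."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'fixture-user')")  # seeded fixture
    conn.commit()
    try:
        yield conn
    finally:
        conn.close()
        os.remove(path)  # cleanup: no test sees another test's state

# Two "tests" that would collide on a shared database run cleanly back to back.
with test_db() as db:
    db.execute("INSERT INTO users VALUES (2, 'alice')")
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 2

with test_db() as db:  # starts from the seed again, not the previous test's state
    assert db.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 1
print("isolated runs passed")
```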
API Testing Prompt
API tests without authorisation testing are incomplete — they miss the most common class of API vulnerabilities (IDOR and broken access control). This prompt systematically tests authentication, authorisation, and validation for every endpoint, producing a test suite that doubles as security regression coverage.
Load Testing Prompt
Load tests that hit a single endpoint with constant load do not reflect real user behaviour and produce misleading results. This prompt generates realistic traffic mixes with think time, parameterised data (to avoid cache inflation), and proper ramp-up profiles that match the system's actual access patterns.
Regression Test Planning Prompt
Regression testing without prioritisation wastes time retesting stable areas while missing the high-risk areas that changed. This prompt generates a risk-ordered regression plan that fits within the available time budget, with a smoke test subset for rapid confidence checks before the full suite runs.
Test Plan Generation Prompt
Test plans generated without structure produce long lists of obvious cases while missing the most valuable edge cases. This prompt generates test cases in a structured format (ID, preconditions, steps, expected result) that maps directly to test management tools like TestRail or Jira. The explicit out_of_scope field prevents the plan from growing indefinitely and forces clear boundaries.
Edge Case Generation Prompt
Developers typically test the happy path and one or two obvious error cases. Edge case generation is where AI adds the most value — it systematically enumerates categories like encoding attacks, timezone boundary conditions, and concurrency issues that developers routinely miss. Requesting 30-50 cases forces the model to be exhaustive rather than stopping at the obvious five.
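A taste of that systematic enumeration, applied to a hypothetical `normalise_username` function (the function and its rules are invented for illustration) with each case labelled by its category:

```python
import unicodedata

def normalise_username(raw: str) -> str:
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    name = unicodedata.normalize("NFC", raw).strip().lower()
    if not (3 <= len(name) <= 20):
        raise ValueError("username must be 3-20 characters after trimming")
    return name

# Valid edge cases, labelled by category.
edge_cases = {
    "boundary: exactly 3 chars":        ("abc", "abc"),
    "boundary: exactly 20 chars":       ("a" * 20, "a" * 20),
    "whitespace: trims to valid":       ("  ada  ", "ada"),
    "encoding: combining accent":       ("ad\u0065\u0301", "adé"),  # e + U+0301 -> é
}
for label, (given, expected) in edge_cases.items():
    assert normalise_username(given) == expected, label

# Invalid edge cases that must be rejected, not silently accepted.
for label, bad in {"boundary: 2 chars": "ab",
                   "whitespace: only spaces": "     "}.items():
    try:
        normalise_username(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"expected ValueError for {label}")
print("edge cases covered")
```

A full 30-50 case run would extend each category: more encodings, more boundaries, plus timezone and concurrency cases where the function under test warrants them.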