Test Plan Generation Prompt (Claude)
Test plans generated without structure produce long lists of obvious cases while missing the most valuable edge cases. This prompt generates test cases in a structured format (ID, preconditions, steps, expected result) that maps directly to test management tools such as TestRail or Jira. The explicit out_of_scope field prevents the plan from growing indefinitely and forces clear boundaries. This variant is optimised for Claude 3.5 Sonnet and Claude 3 Opus and uses XML tags for structured input and output.
<role>You are an expert AI assistant with deep knowledge in this domain.</role>
<task>
You are a senior QA engineer. Generate a comprehensive test plan for the following feature.
Feature name: {{feature_name}}
Feature description: {{feature_description}}
Technical implementation: {{technical_details}}
User stories / acceptance criteria:
{{acceptance_criteria}}
Testing scope:
- In scope: {{in_scope}}
- Out of scope: {{out_of_scope}}
Generate a test plan covering:
1. **Test Objectives** — what this test plan is designed to verify
2. **Test Scenarios** — high-level scenarios grouped by functional area
3. **Test Cases** — for each scenario, provide: Test ID, Description, Preconditions, Steps, Expected Result
4. **Edge Cases** — unusual inputs, boundary values, concurrent access
5. **Integration Tests** — interactions with other systems or services
6. **Negative Tests** — invalid inputs, unauthorised access attempts, error conditions
7. **Performance Criteria** — response time thresholds and load assumptions
8. **Test Data Requirements** — what test data needs to be set up
</task>
<instructions>Structure your response clearly with headers and concrete examples.</instructions>

Variables
- {{feature_name}}: Name of the feature being tested
- {{feature_description}}: What the feature does from the user's perspective
- {{technical_details}}: How the feature is built (APIs, databases, third-party services involved)
- {{acceptance_criteria}}: The user stories or given-when-then acceptance criteria
- {{in_scope}}: What this test plan covers
- {{out_of_scope}}: What is explicitly not covered by this test plan

Example
feature_name: User Password Reset
feature_description: Users can reset their password via a one-time email link
technical_details: Sends a SendGrid email with a JWT token (24h expiry) that enables a one-time password change
acceptance_criteria:
- Given I forgot my password, when I enter my email, then I receive a reset link within 2 minutes
- Given I click the reset link, when it is within 24 hours, then I can set a new password
- Given I have already used the reset link, when I click it again, then I see an error message
**TC-001: Successful password reset**
Preconditions: User account exists with verified email
Steps: 1) Navigate to /forgot-password 2) Enter valid email 3) Check inbox 4) Click link 5) Enter new password
Expected: Password changed, old password rejected, user logged in

**TC-005: Expired token (edge case)**
Preconditions: Token generated 25 hours ago in test database
Steps: 1) Click expired reset link
Expected: HTTP 400, message "Reset link has expired", link to request new reset

**TC-008: Token reuse (security)**
Preconditions: Complete one successful reset
Steps: 1) Navigate to the same reset URL again
Expected: HTTP 400, "Reset link already used"
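The 24-hour expiry and one-time-use rules behind TC-001, TC-005 and TC-008 can be sketched as plain validation logic. This is a minimal illustration only: the token record shape (`issued_at`, `used`) and the helper name are assumptions for the example, not part of the prompt or any real implementation.

```python
from datetime import datetime, timedelta, timezone

TOKEN_TTL = timedelta(hours=24)  # matches the 24h JWT expiry in the example feature

def validate_reset_token(token: dict, now: datetime) -> tuple[bool, str]:
    """Return (is_valid, reason) for a hypothetical password-reset token record."""
    if token.get("used"):
        return False, "Reset link already used"  # TC-008: one-time use
    if now - token["issued_at"] > TOKEN_TTL:
        return False, "Reset link has expired"   # TC-005: past the 24h window
    return True, "ok"                            # TC-001: happy path

if __name__ == "__main__":
    now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
    fresh = {"issued_at": now - timedelta(hours=1), "used": False}
    stale = {"issued_at": now - timedelta(hours=25), "used": False}
    reused = {"issued_at": now - timedelta(hours=1), "used": True}
    print(validate_reset_token(fresh, now))   # (True, 'ok')
    print(validate_reset_token(stale, now))   # (False, 'Reset link has expired')
    print(validate_reset_token(reused, now))  # (False, 'Reset link already used')
```

Writing the rule down like this makes the boundary explicit, which is exactly what the edge-case section of the plan should probe.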
FAQ
- Can I export this test plan to Jira or TestRail?
- Ask the AI to format the test cases as CSV with columns matching your tool's import format, or as Gherkin (Given/When/Then) for BDD frameworks like Cucumber.
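As a sketch of the CSV route, the generated cases can be flattened into rows with Python's standard library. The column names below are illustrative placeholders, not TestRail's or Jira's actual import schema; map them to whatever your tool expects.

```python
import csv
import io

# Hypothetical test cases parsed out of a generated plan.
cases = [
    {"id": "TC-001", "title": "Successful password reset",
     "preconditions": "User account exists with verified email",
     "steps": "Navigate to /forgot-password; enter email; click link; set new password",
     "expected": "Password changed, old password rejected"},
    {"id": "TC-005", "title": "Expired token",
     "preconditions": "Token generated 25 hours ago",
     "steps": "Click expired reset link",
     "expected": 'HTTP 400, "Reset link has expired"'},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "title", "preconditions", "steps", "expected"])
writer.writeheader()
writer.writerows(cases)  # csv handles quoting of embedded commas and quotes
print(buf.getvalue())
```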
- How do I generate test data for the test cases?
- Use the edge case generation prompt with the test cases as input to produce the specific data values needed for each boundary condition test.
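For the 24-hour token boundary in the example above, the data values you typically need sit just either side of the cutoff. A hypothetical illustration of generating those issue timestamps:

```python
from datetime import datetime, timedelta, timezone

CUTOFF = timedelta(hours=24)
now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)

# Token issue times just inside, exactly at, and just outside the expiry window.
boundary_issue_times = {
    "valid_just_inside": now - (CUTOFF - timedelta(seconds=1)),
    "exact_cutoff": now - CUTOFF,
    "expired_just_outside": now - (CUTOFF + timedelta(seconds=1)),
}

for label, issued_at in boundary_issue_times.items():
    print(label, issued_at.isoformat())
```

Whether "exact cutoff" counts as valid or expired is itself a question the test plan should force the team to answer.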
- Should this replace manual QA testing?
- No. AI-generated test plans cover the logical space well but miss context-dependent UX issues and accessibility problems that require human judgement. Use this as a foundation that a QA engineer then reviews and extends.
Related Prompts
- Test Plan Generation Prompt (ChatGPT)
- Test Plan Generation Prompt (Gemini)
- Test Plan Generation Prompt (LLaMA / Ollama)
- Edge Case Generation Prompt
- Unit Test Generation Prompt