
Integration Test Prompt (Claude)

Integration tests catch real bugs that unit tests miss because they exercise the interactions between components; the trade-off is that they require more setup. This prompt generates test suites with proper isolation (a separate test database, seeded fixtures, cleanup) to prevent the common problem of tests passing in isolation but failing when run together. This variant is formatted for Claude, optimised for Claude 3.5 Sonnet and Claude 3 Opus, and uses XML tags for structured input and output.

Prompt Template
<role>You are an expert AI assistant with deep knowledge in this domain.</role>

<task>
You are a senior test engineer specialising in integration and end-to-end testing.

Write integration tests for the following:

System under test: {{system}}
What to test: {{test_target}}
Test framework: {{framework}}
Database: {{database}}
External services to mock: {{mock_services}}
Test environment setup: {{test_env}}

The integration tests must:
1. Test the full flow from HTTP request to database and back
2. Use a real test database (not mocks) seeded with known fixtures
3. Test success paths, error paths, and boundary conditions
4. Clean up all test data after each test (test isolation)
5. Verify side effects: database state, emails sent, events published

For each test scenario provide:
- Test description
- Setup (fixtures, mocks)
- Request/action
- Assertions (response body, status code, database state)
- Teardown
</task>

<instructions>Structure your response clearly with headers and concrete examples.</instructions>

Variables

{{system}}: System being tested, e.g., "Express.js REST API", "Django web app", "Go microservice"
{{test_target}}: Specific feature or flow to test, e.g., "user registration and email verification flow"
{{framework}}: Test framework: Jest + Supertest, pytest, Go test, RSpec, etc.
{{database}}: Database type and how to handle it, e.g., "PostgreSQL — use test database, run migrations before suite"
{{mock_services}}: External services to mock, e.g., "SendGrid email, Stripe API", or "None"
{{test_env}}: How the test environment is configured, e.g., "docker-compose test stack", "in-memory SQLite"

Example

Input
system: Express.js REST API
test_target: User login and JWT token issuance
framework: Jest with Supertest
database: PostgreSQL — create test database, run migrations
mock_services: None (login is fully internal)
test_env: NODE_ENV=test with .env.test
Output
const request = require('supertest');
const app = require('../src/app'); // Express app export (path is illustrative)
const db = require('../src/db');   // Knex instance (path is illustrative)

describe('POST /auth/login', () => {
  beforeAll(async () => {
    // Bring the test database to a known state before the suite runs.
    await db.migrate.latest();
    await db.seed.run();
  });

  afterAll(async () => {
    // Close the connection pool so Jest can exit cleanly.
    await db.destroy();
  });

  it('returns 200 and a JWT for valid credentials', async () => {
    const res = await request(app)
      .post('/auth/login')
      .send({ email: '[email protected]', password: 'Password1!' });
    expect(res.status).toBe(200);
    expect(res.body.token).toMatch(/^eyJ/);
  });

  it('returns 401 for wrong password', async () => {
    const res = await request(app)
      .post('/auth/login')
      .send({ email: '[email protected]', password: 'wrong' });
    expect(res.status).toBe(401);
  });
});

FAQ

Should integration tests use a real database or an in-memory one?
Use a real database (same engine as production) in a Docker container or test schema. In-memory databases (SQLite for PostgreSQL apps) behave differently and hide bugs that only appear with the real database engine.
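As a sketch, a disposable PostgreSQL container for a test run could be started like this (container name, port, credentials, and image tag are all illustrative):

```shell
# Start a throwaway PostgreSQL 16 container for the test suite.
# --rm deletes the container when it stops, so no state leaks between runs.
docker run --rm -d \
  --name app-test-db \
  -e POSTGRES_PASSWORD=test \
  -e POSTGRES_DB=app_test \
  -p 54321:5432 \
  postgres:16

# Tear it down after the suite finishes.
docker stop app-test-db
```

Mapping to a non-default host port (54321 here) avoids collisions with a locally running PostgreSQL instance.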
How do I handle test data that depends on order of insertion?
Use database fixtures or factories (e.g., fishery, factory_boy) that create all required entities atomically. Each test should be independent — never rely on data created by a previous test.
How do I mock external HTTP services in integration tests?
Use nock (Node.js) or responses (Python) to intercept HTTP calls at the library level. Alternatively, run a local mock server (WireMock, Prism) against an OpenAPI spec. Avoid setting process.env variables to point at fake URLs — it is fragile.
