
API Testing Prompt (ChatGPT)

API tests without authorisation coverage are incomplete: they miss the most common class of API vulnerabilities, IDOR and broken access control. This prompt systematically tests authentication, authorisation, and validation for every endpoint, producing a test suite that doubles as security regression coverage. This variant is formatted for ChatGPT and optimised for GPT-4o and GPT-4 Turbo: it uses markdown formatting and system/user message separation.

Prompt Template
## System
You are an expert AI assistant. Respond using clear markdown formatting.

## User
You are a senior QA engineer specialising in API testing.

Generate a comprehensive API test suite for the following API:

API description: {{api_description}}
Base URL: {{base_url}}
Authentication: {{auth_method}}
Test framework: {{framework}}
Endpoints to test:
{{endpoints}}

For each endpoint, generate tests covering:
1. **Happy path** — successful request with valid data
2. **Authentication** — missing token, expired token, invalid token
3. **Authorisation** — accessing resources belonging to another user
4. **Validation** — missing required fields, invalid formats, boundary values
5. **Error responses** — verify correct HTTP status codes and error message structure
6. **Response structure** — assert all expected fields are present with correct types

Also generate:
- A setup function for authentication token retrieval
- A shared assertion helper for common response validations
- Data cleanup after each test
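
The shared assertion helper requested above might, for instance, validate the structure of error bodies. A minimal sketch (the `error` and `code` field names are assumptions about the API's error shape, not part of the prompt):

```javascript
// Sketch of a shared assertion helper: collects structural problems
// in an error-response body instead of asserting inline everywhere.
// The `error`/`code` field names are assumed; adjust to your API.
function validateErrorBody(body) {
  const problems = [];
  if (typeof body.error !== 'string') problems.push('error must be a string');
  if (typeof body.code !== 'number') problems.push('code must be a number');
  return problems;
}

// In a Jest test this backs a one-line assertion:
//   expect(validateErrorBody(res.data)).toEqual([]);
```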

Variables

- `{{api_description}}` — What the API does, e.g., "REST API for a task management app"
- `{{base_url}}` — Base URL, e.g., "https://api.example.com/v1" or "http://localhost:3000"
- `{{auth_method}}` — How the API is authenticated: JWT Bearer, API key, OAuth2, Basic auth
- `{{framework}}` — Testing tool: Postman/Newman, pytest with requests, Jest with axios, k6, etc.
- `{{endpoints}}` — List of endpoints to test, e.g., "GET /users, POST /users, GET /users/{id}, DELETE /users/{id}"

Example

Input
api_description: Task management REST API
base_url: http://localhost:3000/api/v1
auth_method: JWT Bearer token
framework: Jest with axios
endpoints:
GET /tasks — list current user's tasks
POST /tasks — create a task
GET /tasks/:id — get a specific task
PATCH /tasks/:id — update a task
DELETE /tasks/:id — delete a task
Output
const axios = require('axios');

const api = axios.create({ baseURL: 'http://localhost:3000/api/v1' });

// Log in and return a JWT for the given test user.
async function getToken(email, password) {
  const { data } = await api.post('/auth/login', { email, password });
  return data.token;
}

describe('GET /tasks/:id', () => {
  it('returns the task for the owner', async () => {
    const token = await getToken('[email protected]', 'Password1!');
    const res = await api.get('/tasks/1', { headers: { Authorization: `Bearer ${token}` } });
    expect(res.status).toBe(200);
    expect(res.data).toMatchObject({ id: 1, title: expect.any(String) });
  });

  it('returns 403 when accessing another user\'s task', async () => {
    const token = await getToken('[email protected]', 'Password2!');
    await expect(api.get('/tasks/1', { headers: { Authorization: `Bearer ${token}` } }))
      .rejects.toMatchObject({ response: { status: 403 } });
  });
});
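
A full run of the prompt should also produce the data cleanup it requests. One framework-agnostic way to sketch that (the helper and its injected `deleteFn` are hypothetical, not part of the example output above):

```javascript
// Track resources created during a test and delete them afterwards.
// `deleteFn` is injected (e.g. id => api.delete(`/tasks/${id}`, authHeaders))
// so the helper stays independent of the HTTP client. Hypothetical design.
function makeCleanup(deleteFn) {
  const created = [];
  return {
    track(id) { created.push(id); },
    async run() {
      // Delete in reverse creation order so dependents go first.
      while (created.length) await deleteFn(created.pop());
    },
  };
}

// Usage in Jest:
//   const cleanup = makeCleanup(id => api.delete(`/tasks/${id}`, authHeaders));
//   afterEach(() => cleanup.run());
```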

FAQ

Should I test every possible combination of parameters?
No — use pairwise testing or equivalence partitioning to select a representative subset. Test one value per equivalence class (valid, too short, too long, wrong type) rather than every possible invalid input.
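
For example, validation tests for a task `title` field might pick one representative per class rather than enumerating every bad string (the field name and 255-character limit are assumptions for illustration):

```javascript
// Hypothetical validator mirroring the API's title rules.
function isValidTitle(value) {
  return typeof value === 'string' && value.length > 0 && value.length <= 255;
}

// One case per equivalence class: valid, empty, too long, wrong type.
const titleCases = [
  { name: 'valid',      value: 'Buy milk',      ok: true  },
  { name: 'empty',      value: '',              ok: false },
  { name: 'too long',   value: 'x'.repeat(256), ok: false },
  { name: 'wrong type', value: 42,              ok: false },
];

// In Jest this table drives a data-driven test:
//   test.each(titleCases)('$name', ({ value, ok }) => ...)
```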
How do I generate Postman collections from this prompt?
Change the framework to "Postman collection JSON" and the AI will generate a Postman v2.1 collection you can import directly. The tests will use Postman's pm.test() assertions instead of Jest.
How do I test WebSocket or GraphQL APIs?
For GraphQL, specify the framework as "Apollo Client test utils" or "Postman" and describe your queries/mutations as the endpoints. For WebSockets, use socket.io-client or ws in Jest tests — specify this in the framework field.
