
Unit Test Generation Prompt (Gemini)

AI-generated tests are most valuable when they catch edge cases that developers miss. This prompt systematically covers four test categories and requires descriptive test names, which makes the test suite serve as living documentation. The mock_strategy field is important: without guidance, the model may inline real HTTP calls or database queries that break in CI. This variant is formatted for Gemini, optimised for Gemini 1.5 Pro and Gemini Ultra, and follows Google AI markdown formatting conventions.

Prompt Template
# Gemini AI Prompt

You are a helpful AI assistant powered by Google Gemini.

## Instructions
You are a senior test engineer. Generate a complete unit test suite for the following {{language}} function using {{test_framework}}.

The test suite must cover:
1. **Happy path** — the normal expected use case
2. **Edge cases** — empty inputs, zero values, maximum values, single-element collections
3. **Error cases** — invalid types, null/undefined, values out of range
4. **Boundary conditions** — values at and just outside the documented limits

Test naming convention: use descriptive names like "should return X when Y"

Mock strategy: {{mock_strategy}}

Function to test:
```{{language}}
{{code}}
```

Function signature / public contract: {{contract}}

## Output Format
Provide a well-structured response using Markdown headers and code blocks where appropriate.

Variables

{{language}} — Programming language (e.g., TypeScript, Python, Go)
{{test_framework}} — Testing framework to use (e.g., Jest, pytest, Go test, JUnit)
{{code}} — The function or class to test
{{mock_strategy}} — How to handle dependencies, e.g., "mock all external calls with jest.fn()"
{{contract}} — Expected inputs/outputs in plain English, or "infer from code"
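Because the placeholders use double-brace syntax, the template can be filled programmatically before it is sent to the model. A minimal sketch (the fillTemplate helper is hypothetical, not part of any Gemini SDK):

```typescript
// Hypothetical helper: substitute {{name}} placeholders in the prompt template.
// Shown only to illustrate the variable syntax, not an official API.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match // leave unknown placeholders untouched
  );
}

const prompt = fillTemplate(
  "Generate a {{test_framework}} test suite for this {{language}} function.",
  { language: "TypeScript", test_framework: "Jest" }
);
// prompt: "Generate a Jest test suite for this TypeScript function."
```

Leaving unknown placeholders untouched (rather than substituting an empty string) makes it easy to spot a variable you forgot to supply.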

Example

Input
language: TypeScript
test_framework: Jest
mock_strategy: mock all external calls with jest.fn()
contract: takes an array of numbers, returns the median value
code:
```typescript
function median(numbers: number[]): number {
  if (numbers.length === 0) throw new Error('Array must not be empty');
  const sorted = [...numbers].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```
Output
```typescript
describe('median', () => {
  it('should return the middle value for an odd-length array', () => {
    expect(median([3, 1, 2])).toBe(2);
  });
  it('should return the average of the two middle values for an even-length array', () => {
    expect(median([4, 1, 3, 2])).toBe(2.5);
  });
  it('should return the single element for a one-element array', () => {
    expect(median([7])).toBe(7);
  });
  it('should throw when given an empty array', () => {
    expect(() => median([])).toThrow('Array must not be empty');
  });
  it('should handle negative numbers correctly', () => {
    expect(median([-3, -1, -2])).toBe(-2);
  });
  it('should not mutate the original array', () => {
    const arr = [3, 1, 2];
    median(arr);
    expect(arr).toEqual([3, 1, 2]);
  });
});
```
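One pitfall the example glosses over: for even-length arrays the median averages two values, so floating-point inputs can yield results like 0.15000000000000002. With Jest, prefer toBeCloseTo over toBe for such cases. A sketch in plain TypeScript (so it runs without a test framework), reusing the median function from the example:

```typescript
// The median function from the example above.
function median(numbers: number[]): number {
  if (numbers.length === 0) throw new Error('Array must not be empty');
  const sorted = [...numbers].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// (0.1 + 0.2) / 2 is not exactly 0.15 in IEEE-754 arithmetic,
// so an exact-equality assertion would be flaky here.
const result = median([0.1, 0.2]);
console.assert(result !== 0.15);                // exact comparison fails
console.assert(Math.abs(result - 0.15) < 1e-9); // tolerance-based check passes
```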

FAQ

Should I trust AI-generated tests without reviewing them?
No. Always read every generated test. The most common problem is tests that always pass because the assertion checks the wrong property, or tests that do not actually invoke the function under test.
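A minimal illustration of that failure mode, in plain TypeScript rather than Jest (reusing the median function from the example above):

```typescript
// The median function from the example above.
function median(numbers: number[]): number {
  if (numbers.length === 0) throw new Error('Array must not be empty');
  const sorted = [...numbers].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const input = [3, 1, 2];

// Vacuous test: the assertion checks the input array, so it passes
// even if median() is completely broken — the function is never called.
console.assert(input.length === 3); // always true

// Real test: invokes the function under test and checks its output.
console.assert(median(input) === 2);
```

A quick way to catch vacuous tests is mutation testing: deliberately break the function and confirm that at least one test fails.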
How do I generate integration tests instead of unit tests?
Replace "unit test suite" in the prompt with "integration test suite" and change the mock strategy to "use a real test database seeded with fixtures". Also include the relevant database schema in the code block.
The AI generates tests for edge cases that cannot happen. Is that a problem?
Not necessarily. Those tests document the assumption that such inputs cannot occur. If the assumption ever breaks due to a refactor, the test will catch it. Delete the tests only if you are certain the case is impossible at the call site.
