
Performance Optimisation Prompt (ChatGPT)

Optimisation without measurement is guessing. This prompt requires current metrics upfront so the AI can target the right bottleneck, and asks for a benchmarking plan so you can verify the improvement actually happened. The trade-offs section prevents micro-optimisations that make code unmaintainable. This variant is formatted for ChatGPT and optimised for GPT-4o and GPT-4 Turbo; it uses markdown formatting and system/user message separation.

Prompt Template
## System
You are an expert AI assistant. Respond using clear markdown formatting.

## User
You are a performance engineering expert specialising in {{language}} optimisation.

Analyse and optimise the following code for performance.

Performance goal: {{goal}}
Current metrics: {{current_metrics}}
Constraints: {{constraints}}

Code to optimise:
```{{language}}
{{code}}
```

Provide:
1. **Bottleneck Analysis** — identify the top 3 performance bottlenecks with estimated impact
2. **Optimised Code** — the improved version with inline comments explaining each change
3. **Complexity Analysis** — time and space complexity before and after (Big O)
4. **Benchmarking Plan** — how to measure the improvement
5. **Trade-offs** — any correctness, readability, or maintainability trade-offs introduced

Variables

| Variable | Description |
| --- | --- |
| `{{language}}` | Programming language, e.g., Python, TypeScript, Java |
| `{{goal}}` | Performance target, e.g., "reduce response time from 2s to 200ms", "process 10k records/sec" |
| `{{current_metrics}}` | Measured performance data, e.g., "2.1s p99 latency, profiler shows 80% in db.query" |
| `{{constraints}}` | What cannot change, e.g., "must remain single-threaded", "no new dependencies", or "None" |
| `{{code}}` | The slow code to optimise |

Example

Input

language: Python
goal: Reduce processing time from 8s to under 1s for 10,000 records
current_metrics: profiler shows 95% of time in a nested loop comparing records
constraints: Must remain compatible with Python 3.9, no external dependencies
code:
```python
def find_duplicates(records):
    duplicates = []
    for i, record in enumerate(records):
        for j, other in enumerate(records):
            if i != j and record['email'] == other['email']:
                duplicates.append(record)
    return duplicates
```
Output

```python
def find_duplicates(records: list[dict]) -> list[dict]:
    # O(n) total via O(1) set membership tests, replacing the O(n²) nested loop
    seen = set()
    duplicates = []
    for record in records:
        email = record['email']
        if email in seen:
            duplicates.append(record)
        else:
            seen.add(email)
    return duplicates

# Complexity: O(n) time, O(n) space — down from O(n²) time, O(1) extra space
# Expected speedup: ~10,000x for 10k records (100M → 10k comparisons)
```
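To make the "Benchmarking Plan" step concrete, here is a minimal standard-library harness (function names and the synthetic data are invented for illustration) that times both versions on the same records. Note that the two functions also differ in which copies they report, exactly the kind of correctness trade-off section 5 of the prompt asks the model to flag:

```python
import random
import timeit

def find_duplicates_naive(records):
    # original O(n²) version: appends every record involved in a match
    duplicates = []
    for i, record in enumerate(records):
        for j, other in enumerate(records):
            if i != j and record['email'] == other['email']:
                duplicates.append(record)
    return duplicates

def find_duplicates_fast(records):
    # optimised O(n) version: appends only second and later occurrences
    seen = set()
    duplicates = []
    for record in records:
        email = record['email']
        if email in seen:
            duplicates.append(record)
        else:
            seen.add(email)
    return duplicates

# 2,000 synthetic records; duplicates are guaranteed by the pigeonhole principle
records = [{'email': f'user{random.randrange(1500)}@example.com'}
           for _ in range(2_000)]

naive = timeit.timeit(lambda: find_duplicates_naive(records), number=1)
fast = timeit.timeit(lambda: find_duplicates_fast(records), number=1)
print(f'naive: {naive:.3f}s  fast: {fast:.4f}s  speedup: {naive / fast:.0f}x')
```

Run the benchmark before and after on identical input, as the prompt's benchmarking section requires, rather than trusting the estimated speedup.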

FAQ

Should I optimise before profiling?
Never. "Premature optimisation is the root of all evil" (Knuth). Profile first to identify the actual bottleneck: in most applications, a small fraction of the code accounts for the large majority of the runtime, so optimising anywhere else is wasted effort.
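A profiling run that produces the kind of data the `{{current_metrics}}` field expects takes only a few lines with Python's built-in profiler (the `slow_pipeline` function here is a hypothetical stand-in for your own entry point):

```python
import cProfile
import pstats

def slow_pipeline(n):
    # stand-in for real application code; replace with your own entry point
    total = 0
    for i in range(n):
        total += sum(range(i % 100))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_pipeline(50_000)
profiler.disable()

# sort by cumulative time to surface the functions worth optimising,
# then paste the top entries into the current_metrics field
stats = pstats.Stats(profiler).sort_stats('cumulative')
stats.print_stats(5)  # show the top 5 entries
```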
What if the bottleneck is database queries?
Include the slow query log output or EXPLAIN ANALYZE results in the current_metrics field. The AI will suggest index additions, query restructuring, or N+1 query elimination strategies.
Can AI optimise concurrent/parallel code?
Yes. Describe the concurrency model (threads, async, workers) in the constraints field. The AI can suggest parallelisation strategies, but always test concurrent optimisations under load to catch race conditions.
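As a sketch of the kind of parallelisation strategy the AI might propose for I/O-bound work (the `fetch_record` worker here is hypothetical; substitute your real call):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_record(record_id):
    # hypothetical I/O-bound worker, e.g. an HTTP or database call
    return {'id': record_id, 'status': 'ok'}

ids = list(range(100))

# threads suit I/O-bound work; for CPU-bound work in Python,
# a ProcessPoolExecutor sidesteps the GIL instead
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch_record, ids))

print(len(results))  # → 100
```

As the FAQ notes, measure such changes under realistic load: thread pools can expose race conditions that never surface in single-threaded tests.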
