
Threat Modeling Prompt (Claude)

Threat modeling is most valuable when done during design, not after implementation. This prompt applies the STRIDE framework systematically, ensuring every threat category is considered rather than only the ones currently in the news. The attack-narrative requirement forces the AI to show that each threat is actually exploitable, not merely theoretical. This variant is optimised for Claude (Claude 3.5 Sonnet and Claude 3 Opus) and uses XML tags for structured input and output.

Prompt Template
<role>You are an expert AI assistant with deep knowledge in this domain.</role>

<task>
You are a security architect specialising in threat modeling and risk assessment.

Perform a STRIDE threat model for the following system:

System name: {{system_name}}
System description: {{description}}
Data flows: {{data_flows}}
Trust boundaries: {{trust_boundaries}}
External actors: {{actors}}
Tech stack: {{tech_stack}}

For each STRIDE category (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege):

1. **Threats** — list the specific threats applicable to this system
2. **Attack scenarios** — one concrete attack narrative per threat
3. **Mitigations** — technical controls to prevent or detect the threat
4. **Residual risk** — risk level after mitigations (High/Medium/Low)

Then provide:
- Top 5 highest-priority security findings
- Recommended security controls to implement immediately
- Data Flow Diagram description (textual, suitable for converting to a diagram)
</task>

<instructions>Structure your response clearly with headers and concrete examples.</instructions>

Variables

{{system_name}} — Name of the system or feature being modelled
{{description}} — What the system does and its business purpose
{{data_flows}} — How data moves through the system, e.g., "user → web app → API → database"
{{trust_boundaries}} — Where trust changes, e.g., "internet/DMZ boundary, DMZ/internal network boundary"
{{actors}} — External users and systems interacting with the system
{{tech_stack}} — Technologies used, e.g., "Node.js, PostgreSQL, Redis"
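The `{{variable}}` placeholders can be filled with plain string substitution before the prompt is sent. A minimal sketch (the `fillPrompt` helper is illustrative, not part of any SDK):

```javascript
// Replace each {{name}} placeholder with the matching value.
// Unknown placeholders are left intact so a missing variable is easy to spot.
function fillPrompt(template, variables) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(variables, name) ? variables[name] : match
  );
}

const template = "System name: {{system_name}}\nTech stack: {{tech_stack}}";
const filled = fillPrompt(template, {
  system_name: "User Authentication Service",
  tech_stack: "Node.js, PostgreSQL, SendGrid, Redis",
});
console.log(filled);
```

Leaving unknown placeholders untouched, rather than substituting an empty string, means an incomplete threat-model input fails visibly rather than silently.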

Example

Input
system_name: User Authentication Service
description: Handles login, registration, password reset, and MFA for a SaaS platform
data_flows: browser → HTTPS → auth API → PostgreSQL; auth API → email service → user inbox
trust_boundaries: internet/API gateway, API/database
actors: end users, admin users, email service provider
tech_stack: Node.js, PostgreSQL, SendGrid, Redis (session store)
Output
**Spoofing — Account Takeover via Credential Stuffing**
Attack: Attacker uses a list of leaked username/password pairs to log in as real users
Mitigation: Rate limiting per IP + per username, CAPTCHA after 3 failures, breach password detection (HaveIBeenPwned API)
Residual Risk: Low after mitigations

**Tampering — JWT Token Modification**
Attack: Attacker changes the user_id claim in a JWT to access another user's data
Mitigation: Sign JWTs with RS256 (asymmetric), verify signature on every request, short token expiry
Residual Risk: Low

FAQ

What is STRIDE and when should I use it?
STRIDE is a threat categorisation framework: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege. Use it during system design and for any significant feature that handles sensitive data or authentication.
How often should I perform threat modeling?
Perform an initial threat model before building new systems. Update it when: adding new data flows, changing authentication, integrating new third-party services, or after a security incident reveals a blind spot.
Can AI replace a professional threat modeling session?
No. AI threat models are a useful starting point and checklist supplement, but they miss business-logic-specific threats that require domain knowledge. Use the AI output as input for a team threat modeling workshop.