AI Failure Pattern Analyzer

Detect hedging, refusal, truncation, repetition, and format violations in LLM output.



FAQ

What failure patterns does the analyzer detect?
The analyzer detects five pattern types: hedging ("I think", "possibly", "perhaps", "I'm not sure"); refusal ("I can't", "I'm unable", "As an AI"); truncation (text ends mid-sentence or with "..."); repetition (any phrase of three or more words repeated three or more times); and format violation (text claims to be JSON but fails to parse as valid JSON).
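The five heuristics above can be sketched in a few lines of Python. This is a minimal illustration, not the analyzer's actual implementation: the regex patterns and thresholds below are assumptions reconstructed from the descriptions, and the tool's real patterns may differ.

```python
import json
import re

# Assumed phrase lists, taken from the FAQ's examples; extend as needed.
HEDGING = re.compile(r"\b(I think|possibly|perhaps|I'm not sure)\b", re.IGNORECASE)
REFUSAL = re.compile(r"\b(I can't|I'm unable|As an AI)\b", re.IGNORECASE)

# Truncation: output ends with "..." or stops mid-sentence
# (here approximated as ending on a lowercase letter or comma).
TRUNCATION = re.compile(r"(\.\.\.)\s*$|[a-z,]\s*$")

def has_repetition(text: str, n: int = 3, times: int = 3) -> bool:
    """True if any n-word phrase occurs `times` or more times (overlaps count)."""
    words = text.lower().split()
    counts: dict[str, int] = {}
    for i in range(len(words) - n + 1):
        phrase = " ".join(words[i:i + n])
        counts[phrase] = counts.get(phrase, 0) + 1
    return any(c >= times for c in counts.values())

def violates_json_claim(text: str) -> bool:
    """True if the text mentions JSON but its braced block fails to parse."""
    if "json" not in text.lower():
        return False  # no format claim, nothing to violate
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return True  # claims JSON but contains no object at all
    try:
        json.loads(match.group(0))
        return False
    except json.JSONDecodeError:
        return True
```

Real detectors would also track line numbers for each match and map each pattern type to a severity level, but the core matching logic stays this simple.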
What do the severity levels mean?
Critical patterns indicate the output is unusable (refusal, truncation, format violation). Warning patterns indicate quality issues that may affect usefulness (repetition). Info patterns flag stylistic concerns that might indicate uncertainty (hedging).
Can I use this for automated LLM testing?
The tool runs entirely in the browser and is designed for manual review. For automated testing pipelines, you can reimplement the same heuristics in your backend code, using the regex patterns displayed alongside each result.
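As a sketch of what that backend port could look like, the check below fails a test whenever a model response matches a critical pattern. The pattern strings and function names here are illustrative assumptions; substitute the regexes the analyzer displays for your own outputs.

```python
import re

# Assumed critical patterns, mirroring the analyzer's refusal and
# truncation heuristics; adapt these to the regexes shown in the tool.
CRITICAL_PATTERNS = {
    "refusal": re.compile(r"\b(I can't|I'm unable|As an AI)\b", re.IGNORECASE),
    "truncation": re.compile(r"\.\.\.\s*$"),
}

def check_output(text: str) -> list[str]:
    """Return the names of the critical patterns found in `text`."""
    return [name for name, rx in CRITICAL_PATTERNS.items() if rx.search(text)]

def test_model_response():
    # Stand-in string for a real model call in your pipeline.
    response = "Here is the full answer to your question."
    assert check_output(response) == [], f"critical patterns: {check_output(response)}"
```

Run under pytest (or any test runner), this turns a silent quality regression into a failing build.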

Paste any LLM output to automatically detect common failure patterns: hedging language, refusal phrases, truncation (text cut off mid-sentence), phrase repetition, and format violations. Each detected pattern is reported with its severity (critical/warning/info), the matched text, and the line numbers where it occurs. Useful for evaluating model outputs, debugging prompts, and prototyping checks for automated LLM testing.