Compare Prompts with Token Awareness
Paste two prompts to see exact differences, token count changes, and structural shifts. Perfect for tracking prompt iterations and understanding what changed.
What This Tool Does
This tool compares two prompts and shows:
- Text Differences: Green for added text, red for removed text
- Token Impact: See how many tokens each prompt uses and the change
- Structural Changes: Detect if you added/removed system instructions, roles, or constraints
How to Use
- Paste your first prompt in Prompt A
- Paste your revised prompt in Prompt B
- Click Compare to see differences
- Use toggles to ignore case or punctuation if needed
- Review structural changes and token impact
Why Compare Prompts?
- Track iterations: See what changed across prompt versions
- Understand token impact: Know if changes make your prompt cheaper or more expensive
- Debug behavior: Correlate output changes with specific wording changes
- Optimize costs: Find the minimal prompt that still works
- Spot mistakes: Catch accidental deletions or changes
Understanding the Output
Added Text (Green)
Text present in Prompt B but not in Prompt A, typically new instructions, examples, or constraints.
Removed Text (Red)
Text present in Prompt A but not in Prompt B, typically deleted instructions or a changed approach.
Token Count
Approximate tokens used by each prompt. Uses GPT-style estimation (~4 chars per token).
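The ~4 characters per token heuristic can be sketched in a few lines. The function names below (`estimateTokens`, `tokenDelta`) are illustrative, not the tool's actual code:

```javascript
// Rough GPT-style token estimate: ~4 characters per token.
// Real tokenizers (BPE-based) vary with content, so treat this
// as a comparison estimate, not an exact count.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Net token change when revising Prompt A into Prompt B.
function tokenDelta(promptA, promptB) {
  return estimateTokens(promptB) - estimateTokens(promptA);
}
```

Because both prompts are estimated the same way, the delta is usually more trustworthy than either absolute count.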
Structural Changes
Detects role changes, new constraints, and system instruction modifications.
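Detection like this can be built from simple pattern heuristics. The patterns below are assumptions for illustration; the tool's actual rules may differ:

```javascript
// Illustrative heuristics only: each regex flags one structural
// feature of a prompt. These patterns are examples, not the
// tool's real rule set.
const FEATURES = {
  systemInstruction: /^\s*system\s*:/im,
  roleAssignment: /\byou are\b/i,
  constraint: /\b(must|never|always|do not)\b/i,
};

function detectFeatures(prompt) {
  return Object.keys(FEATURES).filter((name) => FEATURES[name].test(prompt));
}

// Structural diff: which features appeared or disappeared between A and B.
function structuralChanges(promptA, promptB) {
  const a = new Set(detectFeatures(promptA));
  const b = new Set(detectFeatures(promptB));
  return {
    added: [...b].filter((f) => !a.has(f)),
    removed: [...a].filter((f) => !b.has(f)),
  };
}
```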
FAQ
What is a prompt diff?
A prompt diff shows the exact differences between two prompts. It highlights what was added (green), removed (red), and changed, making it easy to see what's different without manually comparing.
Why do small prompt changes matter?
Small wording changes can significantly affect LLM outputs. This tool helps you track exact changes and understand their token impact, so you can iterate confidently.
Does this tool use an LLM?
No. This is a pure frontend tool. It uses text diffing algorithms and heuristics to highlight changes locally. No data is sent to any server or LLM.
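A minimal sketch of the kind of local, in-browser diffing described above, using a longest-common-subsequence (LCS) word diff. This is one standard approach; the tool itself may use a different algorithm (such as Myers diff):

```javascript
// Word-level diff via longest common subsequence (LCS).
// Returns an array of [op, word] pairs, op in {equal, removed, added}.
function wordDiff(a, b) {
  const A = a.split(/\s+/).filter(Boolean);
  const B = b.split(/\s+/).filter(Boolean);
  // dp[i][j] = LCS length of A[i..] vs B[j..]
  const dp = Array.from({ length: A.length + 1 }, () =>
    new Array(B.length + 1).fill(0)
  );
  for (let i = A.length - 1; i >= 0; i--) {
    for (let j = B.length - 1; j >= 0; j--) {
      dp[i][j] = A[i] === B[j]
        ? dp[i + 1][j + 1] + 1
        : Math.max(dp[i + 1][j], dp[i][j + 1]);
    }
  }
  // Walk the table to emit equal / removed / added operations.
  const ops = [];
  let i = 0, j = 0;
  while (i < A.length && j < B.length) {
    if (A[i] === B[j]) { ops.push(["equal", A[i]]); i++; j++; }
    else if (dp[i + 1][j] >= dp[i][j + 1]) { ops.push(["removed", A[i]]); i++; }
    else { ops.push(["added", B[j]]); j++; }
  }
  while (i < A.length) ops.push(["removed", A[i++]]);
  while (j < B.length) ops.push(["added", B[j++]]);
  return ops;
}
```

For example, `wordDiff("be concise", "be very concise")` marks `very` as added and leaves the other words as equal, which is exactly the kind of result that gets rendered green and red.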
How accurate is the token count?
The token count uses an approximation based on the GPT tokenizer (roughly 4 characters = 1 token). For exact counts, use your LLM provider's official tokenizer, but this gives you a reliable estimate for comparison.
Is my data sent anywhere?
No. All processing happens in your browser. Your prompts never leave your device. This tool works completely offline after the page loads.
Related Tools
- Token Counter → Count tokens for any text
- Prompt Cost Estimator → Calculate API costs
- Prompt Template Generator → Build structured prompts
- Context Window Calculator → Check remaining context