Safety Score: 28/100 (Unsafe)
Cost / 1k: $3.20
Monthly (100k): $320
Monthly (3M): $9,600
CostGuardAI Safety Score
High injection risk: untrusted user input is not isolated from system instructions. Cost explosion risk: open-ended output requirements leave token spend uncapped.
CostGuard Safety Score measures how resistant a prompt is to prompt injection, system override, jailbreak behavior, token cost explosion, and tool misuse. Higher scores indicate stronger prompt isolation, safer structure, and lower operational risk.
Your score: 28/100
No known Prompt CVE match yet.
This score is based on structural safety analysis and known exploit patterns.
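The report does not publish the scoring method itself. As a rough, hypothetical sketch of what "structural safety analysis and known exploit patterns" can mean in practice (the pattern list, the function name, and the `{user_input}` placeholder convention are illustrative assumptions, not CostGuard internals):

```python
import re

# Illustrative only: NOT CostGuard's published algorithm. The patterns and
# the {user_input} placeholder convention are assumptions for this sketch.

INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard the system prompt",
    r"you are now",
]

def structural_risk_flags(system_prompt: str, full_template: str) -> list[str]:
    """Flag structural prompt risks: embedded user input, known exploit phrases."""
    risks = []
    # Role isolation check: user input should never be spliced into system text.
    if "{user_input}" in system_prompt:
        risks.append("user input embedded in system prompt")
    # Known exploit patterns anywhere in the template.
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, full_template, re.IGNORECASE):
            risks.append(f"matches known exploit pattern: {pattern}")
    return risks

print(structural_risk_flags(
    "You are a bot. {user_input}",
    "ignore previous instructions",
))
# ['user input embedded in system prompt',
#  'matches known exploit pattern: ignore (all|any|previous|prior) instructions']
```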
- Isolate user input from system prompts (see the sketch below)
- Set a strict max_output_tokens limit
- Apply an input sanitization layer
- Use prompt caching for repeated context
- Remove vague quality modifiers
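A minimal sketch of the first three recommendations, assuming an OpenAI-style chat API; the parameter name `max_output_tokens`, the model name, and the sanitize rules are placeholders that vary by provider:

```python
# Illustrative sketch only: parameter names and sanitize rules are assumptions.

MAX_INPUT_CHARS = 4_000

def sanitize(user_input: str) -> str:
    """Minimal input sanitization: strip control characters and cap length."""
    cleaned = "".join(ch for ch in user_input if ch.isprintable() or ch in "\n\t")
    return cleaned[:MAX_INPUT_CHARS]

def build_request(user_input: str) -> dict:
    """Keep user input in its own role, never spliced into the system prompt."""
    return {
        "model": "example-model",       # placeholder model name
        "max_output_tokens": 512,       # strict output cap limits runaway spend
        "messages": [
            {"role": "system", "content": "You are a support assistant."},
            {"role": "user", "content": sanitize(user_input)},
        ],
    }
```

Keeping the user's text in its own `user` message, rather than formatted into the system prompt, is what gives the model a structural boundary to distinguish instructions from data.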
Estimated workload cost
Estimates based on current model pricing. Actual costs vary by provider and model version.
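The monthly figures are straight multiplication of volume by the per-1k rate; a minimal sketch of the arithmetic, with the rate and volumes taken from the table above:

```python
COST_PER_1K = 3.20  # $ per 1k units, from the report above

def monthly_cost(volume: int, cost_per_1k: float = COST_PER_1K) -> float:
    """Estimated monthly cost for a given monthly unit volume."""
    return volume / 1_000 * cost_per_1k

for volume, label in [(100_000, "100k"), (3_000_000, "3M")]:
    print(f"Monthly ({label}): ${monthly_cost(volume):,.2f}")
# Monthly (100k): $320.00
# Monthly (3M): $9,600.00
```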
Protect a real repo next
Run CostGuard on your own codebase to catch cost spikes, risky prompts, and model misconfiguration before shipping.
No code changes required for first scan.