CostGuardAI

Know before you send.

Demo · Safety report · gpt-4o

Safety Score

28

Unsafe

Cost / 1k

$3.20

Monthly (100k)

$320

Monthly (3M)

$9,600

Scan complete · Analyzer v0.2 · Pricing loaded · Generated for demo review · Share-safe

CostGuardAI Safety Score

28/100 · High

High injection risk — untrusted user input is not isolated from system instructions. Cost explosion risk from open-ended output requirements.

What this score means

CostGuard Safety Score measures how resistant a prompt is to prompt injection, system override, jailbreak behavior, token cost explosion, and tool misuse. Higher scores indicate stronger prompt isolation, safer structure, and lower operational risk.

85–100 · Safe
70–84 · Low
40–69 · Warning
0–39 · High

Your score: 28/100
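
The band thresholds above reduce to a simple lookup. The sketch below (function name ours for illustration, not CostGuard's API) shows how a score maps to its band:

```python
def safety_band(score: int) -> str:
    """Map a CostGuard Safety Score (0-100) to its risk band.

    Thresholds mirror the table above; the function itself is
    an illustrative sketch, not CostGuard's internal code.
    """
    if score >= 85:
        return "Safe"
    if score >= 70:
        return "Low"
    if score >= 40:
        return "Warning"
    return "High"

print(safety_band(28))  # -> "High", matching this report
```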

Top Risk Drivers
Injection Risk · High
Cost Explosion · High
Ambiguity Risk · Medium
Threat Intelligence

No known Prompt CVE match yet.

This score is based on structural safety analysis and known exploit patterns.

Mitigations
  • Isolate user input from system prompts (see the sketch below)
  • Set a strict max_output_tokens
  • Apply an input sanitization layer
  • Use prompt caching for repeated context
  • Remove vague quality modifiers
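
A minimal sketch of the first three mitigations, assuming a chat-completions-style API (the OpenAI Python SDK is used here; the model name, token cap, and `sanitize` helper are illustrative choices, not CostGuard output):

```python
import re
from openai import OpenAI  # assumes the openai Python SDK is installed

client = OpenAI()

def sanitize(user_input: str) -> str:
    """Basic sanitization layer: strip control characters and
    collapse whitespace. Real deployments should use a dedicated
    input-filtering step; this is only a sketch."""
    cleaned = re.sub(r"[\x00-\x08\x0b-\x1f\x7f]", "", user_input)
    return re.sub(r"\s+", " ", cleaned).strip()

def safe_call(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            # System instructions live in their own role and are never
            # concatenated with user text (injection isolation). Keeping
            # this prefix stable also lets provider-side prompt caching
            # reuse the repeated context.
            {"role": "system",
             "content": "You are a support assistant. Answer only from the provided context."},
            {"role": "user", "content": sanitize(user_input)},
        ],
        # Strict output cap to bound per-call cost; this plays the role
        # of the max_output_tokens limit named in the mitigation list.
        max_tokens=512,
    )
    return response.choices[0].message.content
```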
Estimated Cost Impact

Estimated workload cost

Per call · $0.0032
Per 1k calls · $3.20
Monthly (100k calls) · $320.00
Monthly (3M calls) · $9,600.00
Model Mix: gpt-4o-mini (classification) · gpt-4o (generation)

Estimates based on current model pricing. Actual costs vary by provider and model version.
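
The monthly figures follow directly from the per-call estimate. A quick sanity check, using the per-call figure from the table above:

```python
per_call = 0.0032  # estimated blended cost per call, from the table above

per_1k = per_call * 1_000
monthly_100k = per_call * 100_000
monthly_3m = per_call * 3_000_000

print(f"Per 1k calls:   ${per_1k:.2f}")       # $3.20
print(f"Monthly (100k): ${monthly_100k:.2f}")  # $320.00
print(f"Monthly (3M):   ${monthly_3m:,.2f}")   # $9,600.00
```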

Protect a real repo next

Run CostGuard on your own codebase to catch cost spikes, risky prompts, and model misconfiguration before shipping.

No code changes required for first scan.

Report Integrity
Analysis Version · 1.0.0
Score Version · v1.0
Prompt · [demo — not stored]
Run your own preflight →
How CostGuard Safety Score works