# CostGuardAI Docs

Preflight analysis for LLM prompts — catch cost overruns and failure risk before they ship.
## Install

```shell
npm install -g @camj78/costguardai
```
## Quick Start

```shell
# Analyze a prompt file
costguardai analyze my-prompt.txt

# CI gate — block if Safety Score <= 30 (risk_score >= 70)
costguardai ci --fail-on-risk 70

# Initialize config in your repo
costguardai init
```
## CostGuardAI Safety Score
Every analysis produces a CostGuardAI Safety Score from 0 to 100. Higher is safer. The score combines five weighted factors: context pressure, output collision risk, output cap risk, prompt verbosity, and estimation uncertainty. Scores below 60 indicate meaningful failure risk and should be reviewed before deployment.
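To make the scoring model concrete, here is a minimal sketch of how five weighted factors could combine into a 0–100 safety score. The factor names follow the list above, but the weights, type names, and function are illustrative assumptions, not CostGuardAI's actual implementation.

```typescript
// Hypothetical sketch — weights and the Factors shape are assumptions,
// not CostGuardAI's real internals.
type Factors = {
  contextPressure: number; // each factor scored 0-100, higher = riskier
  outputCollision: number;
  outputCapRisk: number;
  verbosity: number;
  uncertainty: number;
};

// Illustrative weights; they must sum to 1 so risk stays in 0-100.
const WEIGHTS: Record<keyof Factors, number> = {
  contextPressure: 0.3,
  outputCollision: 0.25,
  outputCapRisk: 0.2,
  verbosity: 0.15,
  uncertainty: 0.1,
};

function safetyScore(f: Factors): number {
  // Weighted risk in 0-100; the safety score is its complement,
  // matching the doc's "Safety Score <= 30 (risk_score >= 70)" pairing.
  const risk = (Object.keys(WEIGHTS) as (keyof Factors)[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * f[k],
    0
  );
  return Math.round(100 - risk);
}

const score = safetyScore({
  contextPressure: 80,
  outputCollision: 60,
  outputCapRisk: 40,
  verbosity: 20,
  uncertainty: 50,
});
console.log(score, score < 60 ? "review before deploy" : "ok");
```

With these sample inputs the weighted risk is 55, so the safety score is 45 and falls in the "review before deployment" range described above.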
## Commands
| Command | Description |
| --- | --- |
| `analyze <path>` | Analyze a file or directory of prompt files |
| `ci [path]` | CI-native scan with exit codes |
| `trends` | Risk trend intelligence from git history |
| `init` | Bootstrap CostGuardAI config in your repo |
Run `costguardai --help` for full options.
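Since `ci` signals risk through exit codes, it can gate a pipeline directly. A minimal sketch for GitHub Actions — the workflow layout is an assumption; only the install and `ci --fail-on-risk 70` commands come from this doc:

```yaml
# Hypothetical workflow sketch; adapt to your pipeline.
name: prompt-safety
on: [pull_request]
jobs:
  costguard:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g @camj78/costguardai
      # Nonzero exit when risk_score >= 70 (Safety Score <= 30)
      # fails the job before the prompt ships.
      - run: costguardai ci --fail-on-risk 70
```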