Open Source · Deterministic prompt governance · Zero LLM Calls Inside

The control plane
for AI prompts

Score, enforce policy, lock config, and audit every prompt decision — before it reaches any LLM.

Deterministic. Offline. Hash-chained audit trail. Policy enforcement built in.

[Interactive demo: Prompt Optimization — live analysis, raw prompt vs. PCP analysis]

Routes across 4 providers · 11 models

Anthropic · OpenAI · Google · Perplexity

See all supported models →
The Problem

Prompts run blind today

Every prompt goes straight to the model with no quality check, no cost estimate, and no governance. Here’s what that costs you.

🌫️

Vague prompts waste tokens

“Make the code better” gives the model no constraints, no target, and no success criteria — leading to unpredictable results that cost 3× what they should.

💸

Cost is invisible until after

Most teams have no idea how many tokens their prompts consume across Haiku, Sonnet, Opus, GPT-4o, and Gemini — until the bill arrives.

🎲

Wrong model every time

Without routing logic, every prompt hits the same expensive model. A simple factual question doesn’t need Opus. PCP routes it to Gemini Flash instead — 94% cheaper.

🚫

No governance or sign-off gate

Claude starts working immediately on whatever you type. There’s no review step, no policy check, and no way to enforce prompt standards across a team.

📜

No audit trail

Enterprise compliance needs to know what prompts ran, when, at what risk score, and what policy decision was made. There’s no log of any of that today.

How It Works

Five steps from rough prompt to production-ready. All analysis is deterministic — no AI calls inside.

1

Score your prompt

Get a quality score (0–100) with dimensional breakdown. Every deduction has a traceable reason.

2

Detect ambiguities

Deterministic rules catch scope explosion, missing constraints, hallucination risk, and vague instructions.

3

Compile a structured version

Adds role, goal, constraints, workflow, and output format. Choose your target: Claude XML, OpenAI system/user, or Markdown.

4

Know the cost before you run

Estimates token count and cost across 11 models from Anthropic, OpenAI, Google, and Perplexity.

5

Review & approve

Review the compiled result, answer blocking questions if needed, then approve. Nothing executes without your sign-off.
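Steps 1 and 2 can be pictured as a rule-based scorer: every deduction comes from a fixed rule, so the same prompt always gets the same score and the same reasons. The sketch below is purely illustrative — the rules, weights, and thresholds are hypothetical, not PCP's actual rule set.

```python
# Illustrative sketch of deterministic, rule-based prompt scoring.
# Rule names and weights are hypothetical, not PCP's actual rules.

VAGUE_TERMS = {"better", "improve", "fix"}

def score_prompt(prompt: str) -> tuple[int, list[str]]:
    """Return (score 0-100, traceable reason for every deduction)."""
    score, reasons = 100, []
    words = prompt.lower().split()
    if len(words) < 8:
        score -= 20
        reasons.append("-20: under 8 words, likely missing constraints")
    if any(w in VAGUE_TERMS for w in words):
        score -= 25
        reasons.append("-25: vague instruction with no success criteria")
    if "output" not in prompt.lower() and "format" not in prompt.lower():
        score -= 10
        reasons.append("-10: no output format specified")
    return max(score, 0), reasons

# No randomness anywhere: same input, same score, same reasons, every run.
score, reasons = score_prompt("make the code better")
```

Because each deduction carries its own reason string, the final score is fully explainable — exactly the "traceable reason" property step 1 describes.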

Explainer

How it works in 5 minutes

See a vague prompt get analyzed, improved, routed to the right model, and approved — all without a single LLM call.


Enterprise Controls

The governance layer you need before approving AI in production. All managed through a license-gated Enterprise Console.

🖥

Enterprise Console

A web-based governance control panel, accessible only with a valid Enterprise license key. Configure policy, inspect audit logs, build custom rules, and manage session retention — all from one authenticated place. License verification is fully offline.

🔒

Policy-Locked Configuration

Lock your governance config with a passphrase derived from your enterprise license. No one can change policy, strictness, or audit settings without authenticating through the console. Every attempt — successful or blocked — is audit-logged.

🛡

Policy Enforcement

Switch from advisory to enforce mode. BLOCKING rules (built-in + custom) gate every prompt. Risk threshold gating blocks high-risk approvals. Deterministic — same input, same verdict, every time.

📜

Hash-Chained Audit Trail

Every action generates a JSONL audit entry with SHA-256 hash chaining. If any line is deleted or modified, the chain breaks. Local-only, opt-in, never stores prompt content. Explore and verify your audit logs visually in the Enterprise Console.

🗑

Data Lifecycle Management

Delete individual sessions or bulk-purge by age policy. Dry-run mode previews what would be removed. A configurable retention window protects your newest sessions. Purge only touches session files — config, audit, and license are never deleted.
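The hash-chained audit trail described above can be sketched in a few lines: each JSONL entry embeds the previous entry's SHA-256 hash, so deleting or editing any line breaks verification. Field names (`prev`, `hash`, `event`) are hypothetical here, not PCP's actual schema — this only illustrates the chaining mechanism.

```python
# Illustrative sketch of SHA-256 hash chaining over JSONL audit entries.
# Field names are hypothetical placeholders, not PCP's real log schema.
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list[str], event: dict) -> None:
    prev = json.loads(log[-1])["hash"] if log else GENESIS
    body = {"event": event, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    log.append(json.dumps(body, sort_keys=True))

def verify_chain(log: list[str]) -> bool:
    prev = GENESIS
    for line in log:
        entry = json.loads(line)
        claimed = entry.pop("hash")
        recomputed = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or recomputed != claimed:
            return False  # a line was edited, deleted, or reordered
        prev = claimed
    return True

log: list[str] = []
append_entry(log, {"action": "policy_change", "verdict": "blocked"})
append_entry(log, {"action": "approve", "risk": "low"})
assert verify_chain(log)       # intact chain verifies
log.pop(0)                     # delete one line...
assert not verify_chain(log)   # ...and the whole chain breaks
```

Note the property the feature card claims: tampering anywhere downstream of a link invalidates verification, without the log ever needing to store prompt content.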

Benchmarks

Real results on real prompts

Every prompt reaches 90/100 after optimization. Average improvement: +32 points. Deterministic — same prompt, same result, every time.

Prompt                              Type      Before  After  Δ    Model   Blocked?
“make the code better”              other     48      90     +42  sonnet  —
“fix the login bug”                 debug     51      90     +39  opus    ⚠ 3 BQs
Multi-task (4 in 1)                 refactor  51      90     +39  opus    ⚠ 3 BQs
Auth middleware (well-specified)    refactor  76      90     +14  opus    —
Retry logic (precise)               code      61      90     +29  sonnet  —
Create REST API server              create    51      90     +39  opus    ⚠ 2 BQs
LinkedIn post (technical)           writing   59      90     +31  sonnet  —
Redis vs Memcached research         research  56      90     +34  sonnet  —
Data transformation (CSV)           data      56      90     +34  haiku   —

BQ = Blocking Question. Prompts in ANALYZING state require refinement before compilation completes. All results are deterministic.
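Deterministic routing of the kind the table shows is, at its core, a fixed lookup from task type to model — no sampling, no heuristic drift. The mapping below mirrors the benchmark rows but is a hypothetical sketch, not PCP's actual routing policy.

```python
# Illustrative sketch of deterministic model routing: a fixed lookup table,
# so the same task type always routes to the same model. The mapping mirrors
# the benchmark table above; it is hypothetical, not PCP's real policy.

ROUTE = {
    "data": "haiku",      # simple transformations: cheapest capable model
    "other": "sonnet",
    "code": "sonnet",
    "writing": "sonnet",
    "research": "sonnet",
    "debug": "opus",      # high-stakes, multi-step work: strongest model
    "refactor": "opus",
    "create": "opus",
}

def route(task_type: str) -> str:
    return ROUTE.get(task_type, "sonnet")  # conservative default for unknown types
```

A plain dictionary is the whole point: routing stays auditable and reproducible because there is nothing probabilistic to explain away.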

Verified

Why teams trust it

Every result is reproducible, every decision is auditable, and nothing leaves your machine.

100%
Reproducible
Same prompt, same result. Every time. No randomness, no drift, no surprises between runs.
0
External Calls
Runs entirely on your machine. No data sent anywhere. No API keys needed. No network dependency.
4
Providers Covered
Accurate cost estimates across Anthropic, OpenAI, Google, and Perplexity — computed entirely offline.
<1s
Response Time
No LLM in the loop means instant feedback. Score, route, and estimate cost in milliseconds.
20
Tools, One Install
Scoring, routing, compression, policy enforcement, audit logging, session management — all included.
Audit Trail
Hash-chained log of every decision. Tamper-evident: break one link and verification fails. Know who changed what and when.

Extensively tested. Fully open for inspection. Run the suite yourself to verify.

Pricing

Free tier includes 10 optimizations, unlimited scoring, and all 20 tools. No credit card required.

Free
₹0
forever
  • 10 optimizations total
  • Unlimited scoring & checking
  • 5 per minute rate limit
  • All 20 tools
  • All 3 output formats
Get Started
Power
₹899
per month
  • Unlimited optimizations
  • Unlimited scoring & checking
  • 60 per minute rate limit
  • Always-on mode
  • Priority support
Get Power
Enterprise
Custom
contact us
  • Unlimited optimizations
  • 120 per minute rate limit
  • Policy enforcement + config lock
  • Hash-chained audit logging
  • Custom rules + session retention
  • Dedicated support & SLA
Contact Sales
CLI

Full engine on the command line

Run the same scoring, routing, and policy enforcement from your terminal or CI pipeline. No MCP needed.

Pre-flight — the lead command

Classify, score, route, and enforce policy in one call. The single command that covers 90% of use cases.

$ pcp preflight "Review auth module security" --json

task_type: review · risk: medium · model: opus
score: 62/100 · policy: advisory

All subcommands

pcp preflight — full pre-flight analysis
pcp optimize — optimization pipeline
pcp check — quick quality check
pcp score — detailed scoring breakdown
pcp classify — task classification
pcp route — model routing
pcp cost — cost estimation
pcp compress — context compression
pcp config — show governance config
pcp doctor — validate environment
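Cost estimation of the kind `pcp cost` performs boils down to a token estimate times a per-model input rate. The sketch below uses the common ~4-characters-per-token heuristic and hypothetical placeholder prices — not current provider pricing and not PCP's actual rate table.

```python
# Illustrative sketch of pre-run cost estimation: rough token count times a
# per-model input rate. Prices are hypothetical placeholders, not real pricing.

PRICE_PER_MTOK_IN = {   # USD per million input tokens (hypothetical values)
    "haiku": 0.80,
    "sonnet": 3.00,
    "opus": 15.00,
}

def estimate_cost(prompt: str, model: str) -> float:
    tokens = max(1, len(prompt) // 4)  # ~4 characters per token heuristic
    return tokens * PRICE_PER_MTOK_IN[model] / 1_000_000

# Comparing models before anything runs makes routing savings visible up front:
prompt = "Review the auth module for security issues. " * 40
cheap = estimate_cost(prompt, "haiku")
strong = estimate_cost(prompt, "opus")
```

Because the estimate needs only the prompt text and a bundled price table, it works with zero network calls — consistent with the tool's offline design.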

Enterprise Console writes config — CLI enforces it

🖥 Enterprise Console — governance authority: writes config, locks policy
CLI (pcp) — developer workflows: reads config, respects enforce mode
🤖 MCP — AI integration: same policy gates as the CLI

CLI and MCP respect the same governance config. Policy enforcement applies identically across all interfaces.

Install in 10 seconds

Add to any MCP-compatible client: Claude Code, Cursor, or Windsurf.

Works with any codebase — Python, Java, Go, Rust, or any language. Node.js is only needed to run the MCP server, not your project.

MCP config (recommended)
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["-y", "claude-prompt-optimizer-mcp"]
    }
  }
}
Or install globally via npm
npm install -g claude-prompt-optimizer-mcp
Or one-line curl install
curl -fsSL https://prompt-control-plane.pages.dev/install.sh | bash