RPBLC
Privacy infrastructure for the AI era.

> Detailed technical reference: https://rpblc.com/llms-full.txt

Who it is for:
- engineering teams handling PII
- security/compliance teams needing consent and audit controls
- AI builders who need zero-PII model context by default

Products:

# RPBLC (Cloud API)

Consent-enforced privacy infrastructure API for PII storage and rights operations.

API surface:
- POST /v1/profiles -> create profile
- GET /v1/profiles/{id} -> read profile
- PATCH /v1/profiles/{id} -> update profile
- POST /v1/consents -> grant consent
- DELETE /v1/consents/{scope} -> revoke consent
- GET /v1/consents/verify -> verify consent

Integration steps:
1) Create profile endpoint integration
2) Route consent grants/revocations
3) Enforce consent verification before access
4) Store no raw PII outside the protected path
5) Monitor audit and rights workflows

Compliance coverage:
- GDPR
- CCPA
- PIPEDA

Performance:
- sub-10ms P95 target
- 99.9% availability target

Pricing:
- Starter: free
- Growth: $500/mo
- Enterprise: custom

# RPBLC.DAM (Open Source)

PII firewall for AI agents. Intercepts LLM API calls, replaces PII with typed references, and resolves them back on return. The model reasons with structure, never with raw data.
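The typed-reference substitution just described can be sketched in a few lines. This is an illustrative Python stand-in, not DAM's actual detector or vault: one regex covers one PII type, a plain dict stands in for the encrypted vault, and `resolve` honours only refs issued for the current request.

```python
import re
import secrets

# One detector for one PII type; DAM itself covers 20 types across 7 locales.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(text: str, vault: dict) -> str:
    """Swap each detected email for a typed reference like [email:a3f71bc9]."""
    def sub(match: re.Match) -> str:
        ref = f"email:{secrets.token_hex(4)}"
        vault[ref] = match.group(0)          # original value stays local
        return f"[{ref}]"
    return EMAIL.sub(sub, text)

def resolve(text: str, vault: dict, issued: set) -> str:
    """Resolve refs back to values, but only refs issued for this request."""
    def sub(match: re.Match) -> str:
        ref = match.group(1)
        if ref not in issued:                # guessed tokens are rejected
            return match.group(0)
        return vault[ref]
    return re.sub(r"\[(email:[0-9a-f]{8})\]", sub, text)
```

Redacting `"Reach jane@example.com today"` yields something like `"Reach [email:3fa9c0b2] today"`; the model sees the type, never the value, and a fabricated ref such as `[email:deadbeef]` passes through unresolved.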
How it works:
- Drop-in HTTP proxy between your app and LLM providers (OpenAI, Anthropic, Codex, OpenRouter, xAI, Ollama)
- Supports the Chat Completions, Messages, and Responses APIs, streaming and non-streaming
- Detects 20 PII types across 7 locales (email, credit card, SSN, phone, IBAN, national IDs)
- Replaces PII with typed references like [email:a3f71bc9], so the LLM can reason about the type without seeing the value
- Encrypted vault (AES-256-GCM per entry, master key in the OS keychain)
- Strict redaction by default: PII is always scrubbed before it leaves your machine
- Consent-aware resolution (default-deny, time-limited grants)
- Response detokenization is scoped to refs issued in that request; guessed tokens are rejected
- SHA-256 hash-chained audit trail (tamper-evident)
- Per-request upstream routing via the X-DAM-Upstream header (switch providers without config changes)
- Health and readiness endpoints (/healthz, /readyz)
- Also available as an MCP server (7 tools for Claude Code, Codex, OpenClaw)

Use cases:
- AI-powered apps that handle customer PII (support bots, form processors, data pipelines)
- Developer tooling that sends code context to LLMs (IDE assistants, code review agents)
- Compliance-sensitive environments where PII must not reach third-party servers
- Local development with AI agents where you want zero data leakage

Quick start:
1) Install: npm install -g @rpblc/dam (or cargo install from source)
2) Run: dam daemon install (registers an OS service that auto-starts on login)
3) Point your LLM client at localhost:7828
4) Your PII never leaves your machine

No dam init needed: the first run auto-creates the config, vault, and encryption keys.
CLI highlights:
- dam daemon install: register and start as an OS-native background service
- dam daemon status: check whether the daemon is installed, running, and healthy
- dam status: vault stats and proxy health at a glance
- dam health: check whether a running proxy is reachable
- dam config validate: verify configuration before serving (supports --json)

Tech:
- Language: Rust
- Binary size: ~6MB
- License: Apache 2.0
- Platforms: Linux, macOS (ARM + Intel), Windows

Links:
- Website: https://rpblc.com
- DAM product page: https://rpblc.com/dam
- Docs: https://docs.rpblc.com
- GitHub (org): https://github.com/RPBLC-hq
- GitHub (DAM): https://github.com/RPBLC-hq/RPBLC.DAM
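The tamper-evident, SHA-256 hash-chained audit trail mentioned above works by linking each entry to the hash of its predecessor, so editing any past entry breaks every later hash. A minimal sketch (the entry format is assumed for illustration, not DAM's on-disk format):

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> list:
    """Append an audit event, chaining it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    chain.append({"prev": prev, "event": event,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; any tampered entry invalidates the chain."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Appending events and then mutating an earlier one makes `verify_chain` return False, which is what makes the trail tamper-evident rather than merely append-only.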