
Crust: A Zero-Config Security Gateway That Intercepts AI Agent Tool Calls Before Execution

Hook

Your AI coding assistant just tried to read ~/.aws/credentials. Did you notice? Most developers don’t realize their agents have unrestricted filesystem access until it’s too late.

Context

AI coding assistants like Cursor, Windsurf, and Claude Code have become indispensable development tools, but they operate with minimal security constraints. When you grant an agent permission to “read a file” or “run a command,” there’s typically no mechanism preventing it from accessing SSH keys, reading environment variables with API tokens, or executing destructive shell commands. The agent’s safety relies entirely on the LLM’s instruction-following—not a security boundary.

Crust addresses this by implementing a transparent security layer between agents and LLM providers. Unlike application-level sandboxing or container isolation, Crust intercepts tool calls at the protocol level, evaluating each action against security rules before execution. It’s designed for the reality of modern development: you need AI assistance, but you also have secrets scattered across dotfiles, credential stores, and project directories that should remain off-limits.

Technical Insight

Crust’s architecture centers on five distinct entry points that funnel into a unified evaluation pipeline. The HTTP proxy mode sits between your agent and the LLM API endpoint, scanning tool calls in both directions. When an agent sends a request containing conversation history with previous tool calls, Crust evaluates those. When the LLM responds with new tool invocations, Crust intercepts again before your agent executes them. Configuration is remarkably simple:

# Start the gateway
crust start

# Point Cursor to use Crust instead of OpenAI directly
# Settings → Models → Override OpenAI Base URL → http://localhost:9090/v1

# Your existing OPENAI_API_KEY is passed through automatically

The “auto mode” default uses model name pattern matching to detect providers—gpt-4 routes to OpenAI, claude-3 to Anthropic, gemini to Google—with zero configuration files. Your agent’s authentication headers pass through unchanged, so Crust never sees or stores your API keys.
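The prefix-matching idea can be pictured as a simple dispatch table. A hypothetical sketch in shell — the function name, patterns, and provider labels here are illustrative, not Crust's actual routing table:

```shell
# Hypothetical sketch of auto-mode provider detection by model name prefix.
# Patterns and labels are illustrative; Crust's real matching may differ.
route_model() {
  case "$1" in
    gpt-*)    echo "openai" ;;
    claude-*) echo "anthropic" ;;
    gemini*)  echo "google" ;;
    *)        echo "unknown" ;;
  esac
}

route_model "gpt-4"          # openai
route_model "claude-3-opus"  # anthropic
```

Because the decision is made per request from the model name alone, no provider credentials or config files are needed on Crust's side.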

For Model Context Protocol (MCP) servers, Crust offers stdio and HTTP wrapping modes. MCP is Anthropic’s protocol for connecting AI agents to data sources and tools. A typical MCP server exposes file system access, database queries, or API integrations through a standardized interface. Crust’s wrap command creates a security boundary:

# Wrap an MCP server that provides filesystem access
crust wrap -- npx -y @modelcontextprotocol/server-filesystem /path/to/dir

# Crust intercepts tools/call and resources/read in both directions
# It even scans server responses for accidentally leaked secrets
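On the wire, what the wrapper inspects is JSON-RPC. A sketch of the kind of tools/call request that would pass through the boundary — the method names come from the text above, while the tool name and path are made-up examples:

```shell
# Illustrative MCP tools/call request a wrapper would inspect before it
# reaches the server. The arguments (read_file, the path) are invented
# for illustration; only the method name is from the MCP protocol.
cat <<'EOF'
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "read_file", "arguments": {"path": "~/.ssh/id_rsa"}}}
EOF
```

A rule that blocks paths under ~/.ssh would reject this message before the MCP server ever sees it.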

The evaluation pipeline processes requests through multiple stages, each operating in microseconds:

- Self-protection checks keep Crust's own config files and logs off-limits to agent access.
- Input sanitization normalizes Unicode to detect homoglyph attacks (a Cyrillic ‘а’ standing in for a Latin ‘a’ to bypass filters).
- Obfuscation detection catches base64-encoded paths and hex-escaped commands.
- DLP secret scanning uses pattern matching for AWS keys, private keys, and tokens.
- Path normalization resolves .. traversal and ~ expansion.
- Symlink resolution prevents bypassing rules through symbolic links.
- Finally, rule matching applies your security policies.
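Two of these stages can be sketched in a few lines of shell. This is a minimal illustration of the ideas, not Crust's implementation: the secret patterns are simplified examples, and path normalization here leans on GNU realpath:

```shell
# Hypothetical sketch of two pipeline stages. The regexes are illustrative,
# far simpler than a real DLP ruleset.

# DLP stage: flag payloads containing an AWS access key ID or a PEM private key
scan_secrets() {
  if grep -E -q 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----'; then
    echo "blocked"
  else
    echo "allowed"
  fi
}

# Path stage: collapse ".." traversal before rule matching
# (realpath -m normalizes without requiring the target to exist)
normalize_path() { realpath -m "$1"; }

echo 'AWS_KEY=AKIAABCDEFGHIJKLMNOP' | scan_secrets  # blocked
normalize_path "/tmp/project/../../etc/passwd"      # /etc/passwd
```

The ordering matters: if paths were matched against rules before normalization, an agent could reach a protected file via ../ traversal or a symlink that the raw string never reveals.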

Crust supports Agent Client Protocol (ACP) as well, which defines how editors communicate with AI agents. The stdio proxy mode wraps ACP agents, intercepting read_file, write_file, and run_command messages before the IDE executes them. The --auto-detect flag inspects both MCP and ACP method names simultaneously, useful when you’re unsure which protocol a subprocess uses.
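Auto-detection of this kind plausibly reduces to classifying method names seen on stdio. A hypothetical sketch — the method names are the ones listed above for each protocol, but the classification logic is an assumption, not Crust's code:

```shell
# Hypothetical sketch of protocol auto-detection by method name.
# MCP methods (tools/call, resources/read) and ACP methods (read_file,
# write_file, run_command) are from the article; the logic is illustrative.
detect_protocol() {
  case "$1" in
    tools/call|resources/read)        echo "mcp" ;;
    read_file|write_file|run_command) echo "acp" ;;
    *)                                echo "unknown" ;;
  esac
}

detect_protocol "resources/read"  # mcp
detect_protocol "run_command"     # acp
```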

All activity logs to encrypted local storage. The logs capture every tool call, whether blocked or allowed, with full context—timestamp, agent identity, rule match details, and the complete request. This creates an audit trail without sending telemetry to external services.

Gotcha

Crust’s effectiveness depends on its rule configuration. The project appears to ship with defaults that protect common secret locations like ~/.ssh, ~/.aws, and .env files, but the protection is pattern-based. You’re playing defense with a rule-matching system, which requires ongoing maintenance: secrets in unconventional locations, new token formats, and novel obfuscation techniques will slip past rules that don’t yet match them.

The Elastic License 2.0 presents adoption friction for some organizations. While source-available and permissive for most uses, it is not OSI-approved open source: it restricts providing the software to third parties as a hosted or managed service. Enterprises with strict open source policies, or cloud providers considering integration, may find these terms prohibitive.

Additionally, as a man-in-the-middle proxy, Crust introduces a new trust dependency: if an attacker compromises Crust itself, they control the security boundary. Local-only operation mitigates some risk (no cloud attack surface), but a vulnerability in Crust’s evaluation pipeline, or a misconfiguration, could create a false sense of security while leaving actual gaps.

Verdict

Use Crust if you’re running AI coding assistants in environments with sensitive codebases or scattered secrets, and you want immediate protection without modifying agent code or LLM integration logic. It’s particularly valuable for development teams that need visibility into agent behavior through audit logs, or solo developers concerned about what their AI assistant might accidentally read. The zero-config auto-detection makes it trivial to try: install, start, change one environment variable, and you’re protected.

Skip it if you’re in a regulated environment requiring OSI-approved licenses, or if you need semantic code analysis rather than pattern-based rule matching (Crust won’t catch “exfiltrate this data cleverly” logic, only obvious tool call patterns). Also skip it if you’re already running comprehensive endpoint security with application control policies that cover AI agent processes. Finally, consider alternatives if you need stronger isolation guarantees: container sandboxing provides better security boundaries, though at the cost of significantly more complex integration.
