OpenEdison: The Firewall Your AI Agents Actually Need

Hook

Your LangChain agent just read your entire customer database and sent it to an attacker via a carefully crafted prompt injection. Your LLM-based guardrails? They happily approved the request because it looked like “legitimate business analysis.”

Context

The rise of agentic AI has created what security researchers call the “lethal trifecta”: AI agents with tool-calling capabilities, access to sensitive data sources, and vulnerability to prompt injection attacks. Unlike traditional applications where access control is deterministic and auditable, AI agents make autonomous decisions about which tools to call and what data to access based on natural language instructions that can be manipulated by attackers.

The industry’s first response has been LLM-based guardrails—asking the AI to police itself. But this is fundamentally flawed: you’re using a probabilistic system to enforce deterministic security policies. What developers actually need is a security layer that sits outside the LLM’s decision-making process, one that can inspect, log, and block agent actions regardless of what the model thinks is appropriate. OpenEdison implements this as a proxy gateway for the Model Context Protocol (MCP), Anthropic’s standardized way for AI agents to interact with data sources and tools. Instead of letting your agent directly connect to MCP servers that expose your databases, filesystems, or APIs, OpenEdison forces all traffic through a policy enforcement layer where you define exactly what’s allowed—no negotiation with the LLM required.

Technical Insight

[System architecture diagram (auto-generated): the AI agent (LangGraph/LangChain) speaks the MCP protocol to the OpenEdison proxy (FastAPI + WebSocket). The proxy checks each request against a policy engine (access control), forwards allowed requests to the MCP server (PostgreSQL/filesystem), relays responses back, and streams telemetry to a web dashboard for audit and visualization.]

OpenEdison’s architecture is elegantly simple: it’s a man-in-the-middle proxy for MCP connections. When your AI agent wants to connect to an MCP server (say, a PostgreSQL database or a filesystem), it connects to OpenEdison instead. OpenEdison maintains the actual connection to the MCP server and forwards requests after applying security policies. The backend is FastAPI with WebSocket support for real-time MCP protocol handling, while the frontend dashboard provides dataflow visualization and audit logs.
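In outline, the proxy loop is: receive a request from the agent, consult the policy engine, log the decision, then either forward to the real MCP server or return an error. A minimal sketch of that flow (the names, request shape, and return values here are illustrative assumptions, not OpenEdison's actual internals):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MCPRequest:
    tool: str          # e.g. "query"
    resource: str      # e.g. "postgres://customers/42"

def make_proxy(policy_check: Callable[[MCPRequest], bool],
               upstream: Callable[[MCPRequest], dict],
               audit_log: list) -> Callable[[MCPRequest], dict]:
    """Wrap an upstream MCP server behind a policy gate with audit logging."""
    def handle(request: MCPRequest) -> dict:
        allowed = policy_check(request)
        # Every decision is recorded, whether or not the request proceeds.
        audit_log.append((request.tool, request.resource, allowed))
        if not allowed:
            return {"error": "blocked by policy"}
        return upstream(request)  # forward to the real MCP server
    return handle
```

The key design point is that `policy_check` runs outside the model: the agent can be tricked into *asking* for anything, but only requests the gate approves ever reach the MCP server.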

The integration pattern is refreshingly unobtrusive. For observability without changing your agent's behavior, you wrap your agent functions with Edison's tracking decorator:

```python
import edison
from langgraph.prebuilt import create_react_agent

@edison.track(
    session_id="customer-analysis-2024",
    metadata={"user": "analyst@company.com", "workflow": "quarterly-review"}
)
def analyze_customer_data(query: str):
    # Your existing LangGraph or LangChain agent code;
    # `llm` and `tools` are defined elsewhere in your application.
    agent = create_react_agent(llm, tools)
    return agent.invoke({"messages": [("user", query)]})
```

This decorator instruments your agent execution, sending telemetry to Edison’s backend without interfering with the agent’s logic. Every tool call, every MCP resource access, every data retrieval flows through Edison’s audit pipeline. You get complete visibility into what your agent is doing—something that’s nearly impossible to achieve by reading LLM traces alone.

The real power comes from policy enforcement at the MCP gateway level. OpenEdison uses a configuration-driven approach where you define which MCP tools and resources each agent context can access:

```json
{
  "mcp_servers": {
    "postgres-prod": {
      "command": "mcp-server-postgres",
      "args": ["postgresql://localhost/production"],
      "allowed_tools": ["query"],
      "allowed_resources": ["postgres://customers/*"],
      "blocked_resources": ["postgres://customers/*/ssn", "postgres://customers/*/credit_card"]
    }
  },
  "policies": {
    "data_exfiltration_prevention": {
      "max_rows_per_query": 1000,
      "require_approval_for_write_operations": true,
      "block_sensitive_patterns": ["SSN", "credit_card", "password"]
    }
  }
}
```

This configuration creates deterministic boundaries. Even if an attacker successfully prompt-injects your agent into attempting a full table dump or accessing PII columns, Edison blocks the request at the protocol level. The LLM never gets to make that decision.
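To make the deterministic boundary concrete, here is a minimal sketch of how glob-style allow/block rules like the ones above could be evaluated, using a deny-overrides policy (the function name and evaluation order are my assumptions for illustration, not OpenEdison's actual engine):

```python
from fnmatch import fnmatch

def is_resource_allowed(uri: str, allowed: list[str], blocked: list[str]) -> bool:
    """Deny-overrides: any matching block pattern wins over any allow pattern."""
    if any(fnmatch(uri, pattern) for pattern in blocked):
        return False
    return any(fnmatch(uri, pattern) for pattern in allowed)

allowed = ["postgres://customers/*"]
blocked = ["postgres://customers/*/ssn", "postgres://customers/*/credit_card"]

print(is_resource_allowed("postgres://customers/42/email", allowed, blocked))  # True
print(is_resource_allowed("postgres://customers/42/ssn", allowed, blocked))    # False
```

Note that the SSN column is blocked even though it sits under an allowed prefix: the decision is pure pattern matching, so no prompt injection can talk the gateway out of it.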

The architecture also solves the observability problem that plagues agentic AI. Traditional LLM observability tools like LangSmith show you token usage and trace trees, but they don’t give you a security-focused view of data access patterns. Edison’s dashboard visualizes dataflow: which agent sessions accessed which data sources, what queries were executed, how much data was retrieved. For security teams accustomed to database audit logs and API gateway metrics, this is the missing piece that makes AI agents auditable.

Under the hood, OpenEdison implements the MCP protocol’s stdio and SSE transport layers, meaning it’s compatible with any standard MCP server. The proxy maintains WebSocket connections for real-time streaming and implements request buffering for analysis. When a request comes in, Edison’s policy engine evaluates it against configured rules before forwarding to the actual MCP server. Responses are similarly inspected—you can configure Edison to redact sensitive fields or truncate large result sets even if the MCP server returns them.
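Response-side inspection might look something like the following sketch: truncate oversized result sets and redact fields matching sensitive-name patterns before anything reaches the agent (this is an illustrative reconstruction under assumptions about the response shape, not OpenEdison's code):

```python
def sanitize_response(rows: list[dict], max_rows: int,
                      sensitive: list[str]) -> list[dict]:
    """Truncate large result sets and redact sensitive fields,
    even when the upstream MCP server returned them in full."""
    patterns = [s.lower() for s in sensitive]
    return [
        {key: ("[REDACTED]" if any(p in key.lower() for p in patterns) else value)
         for key, value in row.items()}
        for row in rows[:max_rows]  # enforce max_rows_per_query on the way out
    ]

rows = [{"name": "Ada", "credit_card": "4111-1111-1111-1111"}] * 5
clean = sanitize_response(rows, max_rows=2, sensitive=["credit_card", "ssn"])
```

Because this runs in the proxy, the guarantee holds even for a fully compromised agent and a fully cooperative MCP server: the sensitive bytes never cross the boundary.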

Gotcha

The open-source version’s single-user limitation is a real constraint. This isn’t a “seat limit” you can work around—there’s no user authentication system at all. For teams, you’ll need to either share credentials (bad security practice) or upgrade to the commercial EdisonWatch product. This creates an awkward gap: solo developers can use it free, enterprises can pay for multi-tenancy, but mid-sized teams might find themselves in limbo.

The proxy architecture also introduces latency and operational complexity. Every MCP request now requires an extra network hop through Edison’s gateway. For agents making hundreds of tool calls, this adds up. You’re also creating a single point of failure: if Edison goes down, your agents lose access to all MCP servers. The documentation doesn’t provide clear guidance on running Edison in high-availability configurations, and the codebase doesn’t show evidence of horizontal scaling support. For production deployments, you’ll need to carefully consider the operational overhead of running and monitoring another critical service in your agent infrastructure. The Node.js dependency for certain MCP client connections (via mcp-remote) also means your deployment isn’t pure Python, adding to the complexity.

Verdict

Use OpenEdison if you’re deploying AI agents with access to production databases, customer data, or any system where unauthorized access would cause real harm. It’s particularly valuable when you need audit trails for compliance (GDPR, HIPAA, SOC 2) or when your threat model includes prompt injection attacks that could lead to data exfiltration. The deterministic policy enforcement and dataflow visibility justify the operational overhead. Skip it if you’re in early prototyping phases without sensitive data, if your agents only interact with read-only public APIs, or if the added latency and architectural complexity of running a proxy gateway don’t align with your performance requirements. Also skip if you’re a team larger than one person and aren’t prepared to evaluate the commercial version—the single-user limitation makes the open-source edition unsuitable for collaborative development environments.
