Teaching AI Assistants to Hunt Bugs: Inside h1-brain’s MCP Architecture
Hook
What if your AI assistant could remember every vulnerability you’ve ever found, cross-reference 3,600 public bounty reports, and suggest untested attack vectors—all in a single function call?
Context
Bug bounty hunting is knowledge work at its core. Every researcher builds an internal database: which endpoints paid bounties, what weakness types work on specific programs, where you’ve already tested. But this intelligence lives in scattered notes, browser bookmarks, and your own memory. When you return to a program after months away, you’re reconstructing context from scratch.
h1-brain solves this by turning AI assistants into persistent research companions. Built on the Model Context Protocol (MCP), Anthropic's standard for connecting Claude to external data sources, it syncs your HackerOne history into a local SQLite database and exposes it through MCP tools. The AI can query your past findings, analyze weakness patterns across programs, and generate attack briefings that prime it to operate as an offensive security researcher. It ships with a pre-built database of 3,600+ publicly disclosed bounty-awarded reports, so even researchers without extensive personal history gain immediate access to community intelligence.
Technical Insight
The architecture centers on dual SQLite databases and a flagship orchestration tool. Your personal database (h1_data.db) stores reports, programs, and scopes synced from HackerOne’s API via fetch_rewarded_reports and fetch_programs tools. The public database (disclosed_reports.db) ships with the repository—no scraping required. Both are exposed through MCP’s tool protocol, allowing Claude to treat them as callable functions.
The real innovation is the hack(handle) tool, which compresses an entire reconnaissance workflow into a single MCP call. According to the README, when you invoke it, the tool:
- Fetches fresh program scopes from the HackerOne API
- Pulls your past rewarded reports for that program
- Cross-references your full report history for weakness patterns
- Identifies untouched bounty-eligible assets
- Pulls public disclosed reports for this program
- Suggests attack vectors based on weaknesses that paid elsewhere but haven’t been found here
- Returns an attack briefing that puts the AI in offensive mode
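The steps above can be condensed into a toy sketch. Everything here is an illustrative assumption, not the server's actual implementation: the real tool pulls live data from the HackerOne API and SQLite rather than taking it as arguments, and the briefing it returns is far richer than this dictionary.

```python
from dataclasses import dataclass

# Simplified stand-in for a report row; field names are assumptions.
@dataclass
class Report:
    asset: str
    weakness: str

def hack(handle, scopes, my_reports, public_reports):
    """Toy version of the hack(handle) briefing logic."""
    tested = {r.asset for r in my_reports}
    # Assets in scope that you have never reported against
    untouched = [a for a in scopes if a not in tested]
    found_here = {r.weakness for r in my_reports}
    # Weakness types that paid elsewhere but are absent from your findings here
    vectors = sorted({r.weakness for r in public_reports} - found_here)
    return {"handle": handle, "untouched": untouched, "vectors": vectors}

briefing = hack(
    "acme",
    scopes=["api.acme.com", "www.acme.com"],
    my_reports=[Report("www.acme.com", "XSS")],
    public_reports=[Report("api.acme.com", "SSRF"), Report("www.acme.com", "IDOR")],
)
print(briefing["untouched"])  # ['api.acme.com']
print(briefing["vectors"])    # ['IDOR', 'SSRF']
```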
The briefing itself is carefully structured. It opens with Scope (bounty-eligible assets, severity caps), surfaces Your Past Findings (what you've already reported and what paid), flags Untouched Scope (assets with zero coverage), and proposes Attack Vectors by cross-referencing weaknesses that paid bounties elsewhere but haven't been found on this program. A Public Disclosed Reports section adds community intelligence: what other researchers discovered, their vulnerability write-ups, and bounty amounts.
MCP’s tool protocol means these capabilities integrate natively into Claude Desktop or Claude Code. Instead of copy-pasting report data or manually searching HackerOne, you ask: “Run hack() against Shopify” and receive a multi-page briefing with actionable next steps. The AI can then follow up with specific queries like searching your XSS reports across programs using search_reports(weakness='XSS') or querying public disclosures for a specific program.
The dual-database design is deliberate. Personal data queries hit SQLite directly—no API calls, instant results, no rate limits. Only the hack() tool’s scope-fetching step touches the HackerOne API, pulling fresh asset lists. This means you can run unlimited historical analysis without burning API quota. The pre-built public database eliminates the cold-start problem: even researchers with zero personal reports can immediately query 3,600+ community disclosures to understand what weakness types pay bounties.
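One way to picture the dual-database split is SQLite's ATTACH mechanism, which lets a single connection join personal and public data in one query. The schema below is a simplified assumption (in-memory stand-ins for h1_data.db and disclosed_reports.db), not the project's actual layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the personal h1_data.db
conn.execute("CREATE TABLE reports (program TEXT, weakness TEXT, bounty REAL)")
conn.execute("INSERT INTO reports VALUES ('acme', 'XSS', 500.0)")

# ATTACH brings a second database file into the same connection
conn.execute("ATTACH DATABASE ':memory:' AS pub")  # stand-in for disclosed_reports.db
conn.execute("CREATE TABLE pub.reports (program TEXT, weakness TEXT, bounty REAL)")
conn.execute("INSERT INTO pub.reports VALUES ('acme', 'SQLi', 1500.0)")

# Weakness types that paid publicly but are absent from your own history
gaps = conn.execute("""
    SELECT DISTINCT weakness FROM pub.reports
    WHERE weakness NOT IN (SELECT weakness FROM main.reports)
""").fetchall()
print(gaps)  # [('SQLi',)]
```

Because both databases are local files, queries like this cost nothing against the HackerOne API.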
Configuration happens through MCP’s standard environment variable mechanism. You add h1-brain to Claude Desktop’s claude_desktop_config.json with your HackerOne credentials, and the server bootstraps automatically:
```json
{
  "mcpServers": {
    "h1-brain": {
      "command": "/path/to/h1-brain/venv/bin/python",
      "args": ["/path/to/h1-brain/server.py"],
      "env": {
        "H1_USERNAME": "your_hackerone_username",
        "H1_API_TOKEN": "your_api_token"
      }
    }
  }
}
```
The MCP protocol handles communication between Claude and the server process, with each tool invocation becoming a protocol call that streams responses back to the AI.
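On the wire, MCP uses JSON-RPC 2.0 framing, so a hack() invocation from the client side looks roughly like the following (the id and argument values here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "hack",
    "arguments": { "handle": "shopify" }
  }
}
```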
Gotcha
The tool’s utility scales directly with your HackerOne activity. If you’re new to bug bounties with few or no rewarded reports, fetch_rewarded_reports has little to sync and will leave the personal database nearly empty. The weakness pattern analysis and “what worked before” suggestions depend on having personal history to mine. The public disclosed reports database provides baseline intelligence, but the flagship hack() briefing loses its personalization without your own data.
The pre-built disclosed_reports.db is a static, point-in-time snapshot shipped with the repository. It contains 3,600+ reports but won’t automatically update as new bounties are disclosed on HackerOne, and the README doesn’t describe a mechanism for refreshing it.
Platform lock-in is absolute. This only works with HackerOne. If you primarily hunt on Bugcrowd, Intigriti, YesWeHack, or private programs on other platforms, you’d need to fork the entire server and rewrite the API integration layer. The tool interfaces and data structures appear designed specifically for HackerOne’s platform.
Verdict
Use h1-brain if you’re an active HackerOne researcher with rewarded reports who wants AI-augmented reconnaissance. The hack() briefing excels at surfacing untested assets and cross-referencing your vulnerability patterns with community disclosures. It transforms Claude from a general assistant into a domain-specific research tool that remembers your security work. The MCP integration is seamless—no context-switching to external dashboards. Skip it if you’re new to bug bounties (insufficient personal data), work primarily on non-HackerOne platforms (no multi-platform support), or prefer traditional manual workflows without AI intermediation. It’s a force multiplier for experienced hunters, not a learning tool for beginners.