Dotai: Engineering AI Context as Infrastructure for Claude and Cursor
Hook
What if your AI coding assistant’s behavior could be versioned, reviewed in pull requests, and distributed across your team like any other infrastructure-as-code tool?
Context
AI coding assistants like Claude and Cursor have exploded in popularity, but they suffer from a consistency problem. Each developer crafts their own ad-hoc prompts, leading to wildly different behaviors across sessions and team members. Ask Claude to debug code on Monday and you might get stack traces; ask on Tuesday and you get logging suggestions. This inconsistency stems from treating AI context as ephemeral chat history rather than engineered infrastructure.
Dotai takes the radical approach of treating prompt engineering as a compositional, versioned system. Built by the team behind Plate (the popular rich-text editor framework), dotai emerged from their own frustration managing AI workflows across contributors. Instead of copy-pasting ‘good prompts’ into Slack or maintaining a Google Doc of best practices, they built a plugin architecture that injects structured context into AI sessions automatically. The result is a marketplace of reusable ‘skills’ and ‘commands’ that transform AI assistants from stateless chatbots into workflow-aware development partners.
Technical Insight
At its core, dotai is a hook-based context injection system. When you install dotai into a project, it creates two key files: .claude/settings.json for plugin configuration and .claude/prompt.yml for custom prompt injection. These files are version-controlled alongside your code, making AI behavior reproducible across the team.
The architecture revolves around three injection points:
# .claude/prompt.yml
beforeStart: |
  You are working on a TypeScript project.
  Always use functional patterns over classes.
beforeComplete: |
  Before responding, check:
  1. Are there existing tests for this code?
  2. Does this follow our error handling patterns?
afterCompact: |
  When context is pruned, preserve:
  - All error messages from the current session
  - File paths that were modified
beforeStart fires when the AI session initializes, setting ground rules. beforeComplete runs before each AI response, acting as a checklist. afterCompact determines what context survives when token limits force history pruning—crucial for long debugging sessions where early error messages contain vital clues.
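The three hooks suggest a simple composition model. As a rough sketch (the composer function, its name, and the ordering are assumptions, not dotai's documented runtime), the injector might assemble the effective system prompt differently per phase:

```typescript
// Hypothetical sketch of how a hook-based injector could assemble the
// system prompt. Hook names mirror prompt.yml; everything else here
// is an assumption about the runtime, not documented dotai behavior.
interface PromptHooks {
  beforeStart?: string;
  beforeComplete?: string;
  afterCompact?: string;
}

function composeSystemPrompt(
  hooks: PromptHooks,
  phase: "start" | "respond" | "compact"
): string {
  switch (phase) {
    case "start": // session init: ground rules only
      return hooks.beforeStart ?? "";
    case "respond": // before each response: append the checklist
      return [hooks.beforeStart, hooks.beforeComplete].filter(Boolean).join("\n\n");
    case "compact": // after pruning: re-inject survival rules
      return [hooks.beforeStart, hooks.afterCompact].filter(Boolean).join("\n\n");
  }
}
```

The key design point is that beforeStart is always present, so ground rules survive every phase, while the other two hooks are layered in only when their trigger fires.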
Plugins extend this foundation with packaged workflows. The ‘dig’ plugin demonstrates the anti-hallucination pattern:
# In your Claude/Cursor chat
@dig react-hook-form
This command clones the react-hook-form repository into a temporary directory, extracts key source files, and injects them directly into context. Instead of the AI guessing at API signatures from potentially outdated training data, it reads the actual current implementation. For library-heavy work, this is transformative—you get accurate API usage without constantly tab-switching to documentation.
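The clone-and-extract step aside, the injection itself amounts to formatting real source files into a context payload. A minimal sketch, where `buildDigContext` and the payload format are assumptions and `files` stands in for whatever sources the plugin selects:

```typescript
// Sketch of the context payload a command like @dig could inject.
// The clone/extract step is elided; this only shows the formatting.
function buildDigContext(repo: string, files: Map<string, string>): string {
  const sections = [`Source extracted from ${repo} (current HEAD):`];
  for (const [path, source] of files) {
    // Delimit each file so the model can attribute code to a path
    sections.push(`--- ${path} ---\n${source}`);
  }
  return sections.join("\n\n");
}
```

Because the payload carries actual file contents, the model answers API questions from the shipped implementation rather than its training snapshot.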
The TDD skill showcases auto-invoked workflows:
// .claude/settings.json
{
  "skills": [
    {
      "name": "tdd",
      "trigger": "file:create|modify",
      "pattern": "\\.(ts|tsx|js|jsx)$",
      "exclude": "\\.(test|spec)\\.",
      "prompt": "After modifying source files, generate or update corresponding test files following our testing patterns."
    }
  ]
}
When you create or modify a source file, the TDD skill automatically triggers, reminding the AI to generate tests. No need to explicitly ask ‘now write tests for this’—the behavior is baked into your project configuration.
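The pattern/exclude pair from the configuration above is plain regex matching over the changed file's path. A sketch of the evaluation (the function name and the exclude-wins ordering are assumptions):

```typescript
// The regexes from settings.json, with the JSON escaping undone.
const pattern = /\.(ts|tsx|js|jsx)$/;
const exclude = /\.(test|spec)\./;

// Assumed evaluation: exclude wins, so editing a test file
// does not re-trigger test generation.
function skillFires(path: string): boolean {
  return pattern.test(path) && !exclude.test(path);
}
```

So `src/form.ts` triggers the skill, while `src/form.test.ts` and non-source files like `styles/main.css` do not.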
The real power emerges with MCP (Model Context Protocol) integration. MCP servers expose structured data—API documentation, database schemas, design systems—in a format AI models can query. Dotai’s plugin system wraps MCP servers as installable packages:
# In Claude/Cursor chat
@install mcp-server-nextjs
This installs an MCP server that provides Next.js 14 documentation, including App Router patterns, server components, and caching strategies. When you ask about Next.js routing, the AI queries fresh documentation instead of relying on training data that might predate App Router entirely.
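Under the hood, MCP is a JSON-RPC 2.0 protocol, so a documentation lookup reduces to messages like the following. The method names come from the MCP specification; the resource URI is hypothetical:

```typescript
// MCP speaks JSON-RPC 2.0. A docs lookup is two messages: discover
// what the server exposes, then read a specific resource.
const listRequest = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "resources/list", // ask the server what it exposes
};

const readRequest = {
  jsonrpc: "2.0" as const,
  id: 2,
  method: "resources/read",
  params: { uri: "nextjs://docs/app-router/caching" }, // hypothetical URI
};
```

The assistant issues these queries at answer time, which is why the documentation it cites tracks the installed server rather than the model's training cutoff.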
The Compound Engineering plugin takes this to the extreme with 27 specialized agents for code review. Install it and type @review, and a choreographed sequence of agents examines security, performance, accessibility, and architecture—each with focused expertise. This isn’t just prompt engineering; it’s workflow orchestration encoded as configuration.
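The orchestration pattern itself is straightforward to sketch: each agent is a focused prompt run against the same diff, with findings merged and attributed at the end. The agent shape, names, and the sequential strategy below are assumptions, not the plugin's actual implementation:

```typescript
// Sketch of choreographed multi-agent review: run each specialist
// over the same diff, tag findings with the agent that raised them.
type Agent = { name: string; review: (diff: string) => string[] };

function runReview(agents: Agent[], diff: string): string[] {
  const findings: string[] = [];
  for (const agent of agents) {
    for (const finding of agent.review(diff)) {
      findings.push(`[${agent.name}] ${finding}`);
    }
  }
  return findings;
}
```

Note that each agent in this loop would be a separate model invocation in practice, which is where the API-call cost discussed below comes from.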
The CLI-style command syntax inside AI chat is particularly clever. Commands like @dig, @install, and @review are parsed by dotai’s prompt injection layer and transformed into structured instructions the AI understands. You’re not asking the AI to ‘please clone this repo’—you’re invoking a scripted capability that the AI executes following a precise template. The result feels like extending your AI assistant with new shell commands.
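A minimal sketch of that parsing layer, where the parsed shape and the function are assumptions about how dotai might work:

```typescript
// Sketch: turn a chat message like "@dig react-hook-form" into a
// structured command; anything else passes through as ordinary prose.
interface ChatCommand {
  name: string;
  args: string[];
}

function parseChatCommand(message: string): ChatCommand | null {
  const match = message.trim().match(/^@(\w[\w-]*)\s*(.*)$/);
  if (!match) return null; // not a command: leave the message untouched
  const [, name, rest] = match;
  return { name, args: rest ? rest.split(/\s+/) : [] };
}
```

The parsed command would then select a prompt template, which is what makes `@dig` feel like a shell command rather than a polite request.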
Gotcha
The biggest limitation is vendor lock-in. Dotai is tightly coupled to Claude and Cursor—there’s no portability to GitHub Copilot, Cody, or other AI assistants. The prompt injection mechanism relies on how these specific tools handle context, and the plugin marketplace is built around their extension models. If your team uses a mix of AI tools, or if you’re wary of betting on a single vendor, dotai’s value proposition crumbles.
Documentation is the second major pain point. The repository is marked as ‘Shell’ language but contains minimal implementation code. How are plugins actually executed? Where’s the runtime that parses @dig commands? The project appears to rely heavily on conventions and undocumented behavior in Claude/Cursor. For instance, the YAML prompt injection—how does it actually get injected? Is there a daemon watching .claude/ files? Is it compiled into system prompts? The lack of architectural documentation means you’re adopting based on trust rather than understanding. Production use requires clarity that dotai doesn’t yet provide.
Performance and token costs are also concerns. Auto-invoked skills and MCP integrations inject significant context into every session. That TDD skill? It’s burning tokens on every file save, whether you need tests or not. The 27-agent code review? That’s potentially dozens of API calls for a single review command. For teams on usage-based billing, dotai could get expensive fast without careful skill configuration and trigger tuning.
Verdict
Use if: You’re a team standardizing on Claude/Cursor for development and need reproducible AI behavior across contributors. The ability to version-control prompts, share skills via pull requests, and enforce workflows like TDD or structured debugging is genuinely powerful for maintaining consistency. Early adopters comfortable with bleeding-edge tools who can tolerate sparse documentation will find the plugin marketplace approach compelling, especially if you’re already investing in MCP server infrastructure.

Skip if: You use multiple AI assistants (the vendor lock-in is real), prefer the simplicity of .cursorrules files for basic prompt customization, or need production-ready tooling with comprehensive documentation before adopting. Also skip if you’re cost-sensitive on API usage—the auto-invoked skills and context injection can burn through tokens quickly without careful configuration. This is infrastructure for teams treating AI-assisted development as a first-class workflow, not a casual enhancement.