Maestro: Orchestrating Parallel AI Coding Agents with Git Worktrees


Hook

What if you could run multiple AI coding agents simultaneously on the same repository—each on its own branch, in its own directory—without conflicts? That’s not a hypothetical. That’s Git worktrees meets AI orchestration.

Context

AI coding assistants like Claude Code and OpenAI Codex are powerful, but they’re built for single-threaded workflows. You open one project, have one conversation, work on one branch. If you’re juggling multiple features, experiments, or client projects, you’re constantly context-switching—closing sessions, changing directories, losing flow state. Maestro emerged from this frustration: a recognition that developers managing multiple parallel work streams need infrastructure that matches their cognitive load, not tools that force serialization.

The core insight is treating AI agents as first-class citizens in your development workflow. Just as you wouldn’t run all your microservices in a single process, you shouldn’t funnel all your AI-assisted work through one conversation thread. Maestro is a cross-platform desktop application that wraps Claude Code, OpenAI Codex, OpenCode, and Factory Droid with a unified orchestration layer. It’s not a replacement for these tools—it’s a pass-through that preserves your existing MCP tools, authentication, and permissions while adding parallel execution, automated task processing, and keyboard-driven session management. The target user is clear: power users shipping fast across multiple repositories who rarely touch the mouse.

Technical Insight

The architectural centerpiece of Maestro is its Git worktree integration, which solves the fundamental problem of parallel AI development. Git worktrees let you check out multiple branches from the same repository into separate directories simultaneously. While developers have used worktrees for years to work on multiple features in parallel, Maestro extends this pattern to AI agents. From the git branch menu, you can spawn a worktree sub-agent: Maestro creates a new directory, checks out the branch, and launches an isolated AI session. The main repository remains untouched while sub-agents operate independently. Each worktree agent has its own conversation history, workspace state, and terminal access. When an agent completes its work, you can create a PR with one click. This architecture enables true parallel development without conflicts because each agent works in physical isolation until you explicitly integrate changes.
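The worktree flow Maestro automates can be pictured with ordinary git plumbing. The sketch below is a hypothetical illustration, not Maestro's internals: the sibling-directory layout and the `spawn_worktree_agent` helper are assumptions, but the one-command `git worktree add -b` pattern is standard git.

```python
import subprocess
from pathlib import Path

def spawn_worktree_agent(repo: Path, branch: str, dry_run: bool = False) -> list[str]:
    """Check `branch` out into an isolated sibling directory and return the
    git command used. Mirrors the pattern Maestro automates: each agent gets
    its own checkout, so parallel sessions never touch the main repository."""
    worktree_dir = repo.parent / f"{repo.name}-{branch}"
    # `worktree add -b` creates the branch and checks it out in one step,
    # leaving the main working directory on its current branch.
    cmd = ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(worktree_dir)]
    if not dry_run:
        subprocess.run(cmd, check=True)
        # An orchestrator would now launch the AI session with cwd=worktree_dir.
    return cmd
```

Because each agent's checkout is a separate directory, integration back into the main branch is an explicit step (a PR), not an accident of shared state.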

The second major architectural decision is the file-system-based task runner called Auto Run. Instead of building a custom workflow DSL or YAML configuration, Maestro processes markdown checklists. Here’s what a playbook looks like:

# Feature Implementation Playbook

- [ ] Create database migration for user preferences table
- [ ] Implement UserPreferencesService with CRUD operations
- [ ] Add API endpoints for preferences management
- [ ] Write integration tests for preferences workflow
- [ ] Update API documentation with new endpoints

Each unchecked task becomes a prompt sent to a fresh AI session. The agent processes the task, Maestro captures the response, and the checkbox is marked complete. The crucial detail is that each task gets clean context: there is no conversation drift from earlier tasks and no token-limit pressure from accumulated history. This design is what makes the nearly 24 hours of continuous runtime cited in the README possible; because context doesn't degrade over time, agents can process lengthy playbooks unattended. You can run playbooks in loops, track full execution history, and resume from failures. The file-system approach also means playbooks are version-controlled alongside your code, reviewable in PRs, and easily shared across teams.
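The Auto Run loop reduces to a small transformation: parse the checklist, send each unchecked item as a fresh prompt, tick the box on success. A minimal sketch, assuming a plain `- [ ]` / `- [x]` checkbox format and a caller-supplied `run_task` callable (both are illustrative assumptions, not Maestro's actual implementation):

```python
import re

# Matches "- [ ] task" or "- [x] task", capturing the pieces for rewriting.
CHECKBOX = re.compile(r"^(\s*-\s*\[)( |x)(\]\s*)(.+)$")

def run_playbook(markdown: str, run_task) -> str:
    """Send each unchecked task to `run_task` in isolation and return the
    playbook text with completed items marked `[x]`. One call per task means
    context never accumulates across tasks."""
    out_lines = []
    for line in markdown.splitlines():
        m = CHECKBOX.match(line)
        if m and m.group(2) == " ":
            run_task(m.group(4))  # fresh session per task
            line = f"{m.group(1)}x{m.group(3)}{m.group(4)}"
        out_lines.append(line)
    return "\n".join(out_lines)
```

Writing the updated checklist back to disk after each task is what makes the run resumable: a crash simply leaves the remaining boxes unchecked.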

Maestro maintains dual-mode sessions for each agent: an AI Terminal for conversation with the coding assistant and a Command Terminal for shell access. You switch between modes with Cmd+J. This separation respects the mental model that AI conversation and system commands are different activities requiring different contexts. The AI Terminal queues messages when the agent is busy—you can keep typing thoughts without blocking—and sends them automatically when ready. The Command Terminal gives you direct shell access within the agent’s workspace, useful for running tests, checking logs, or manual git operations that don’t require AI assistance.
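The queue-while-busy behavior of the AI Terminal boils down to a small state machine: buffer input while a response is in flight, flush in order once the agent is ready. A hypothetical sketch of that pattern (class and method names are invented for illustration):

```python
from collections import deque

class AgentInbox:
    """Buffers user messages while the agent is busy and delivers them in
    order as the agent becomes ready. A sketch of the AI Terminal's queueing
    behavior, not Maestro's actual code."""

    def __init__(self, send):
        self.send = send       # callable that delivers one message to the agent
        self.busy = False
        self.queue = deque()

    def type_message(self, text: str) -> None:
        if self.busy:
            self.queue.append(text)  # keep typing without blocking
        else:
            self.busy = True
            self.send(text)

    def on_agent_ready(self) -> None:
        if self.queue:
            self.send(self.queue.popleft())  # agent stays busy with the next message
        else:
            self.busy = False
```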

Session discovery is handled through filesystem scanning. Maestro automatically finds existing Claude Code, Codex, and OpenCode sessions by reading their native storage locations. This means all your historical conversations—including those from before you installed Maestro—appear in the session browser. You can search, star, rename, and resume any session. The technical implication is that Maestro doesn’t own your data or lock you into proprietary formats. Your AI provider stores sessions in its native format; Maestro just provides a unified interface for accessing and orchestrating them.
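Filesystem-based discovery amounts to globbing each provider's storage directory for session files. The roots and the `*.jsonl` pattern below are illustrative assumptions (each provider stores sessions in its own location and format), but the scanning pattern is the same:

```python
from pathlib import Path

def discover_sessions(roots: list[Path], pattern: str = "*.jsonl") -> list[Path]:
    """Scan each provider's storage root for session files, newest first.
    Sessions stay in the provider's native location; the scanner only reads,
    never copies or converts."""
    found = []
    for root in roots:
        if root.is_dir():  # skip providers that aren't installed
            found.extend(p for p in root.rglob(pattern) if p.is_file())
    return sorted(found, key=lambda p: p.stat().st_mtime, reverse=True)
```

Because discovery is read-only and runs against the providers' own directories, uninstalling the orchestrator leaves every conversation exactly where it was.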

The keyboard-first design goes beyond standard shortcuts. Maestro implements a mastery tracking system that monitors which shortcuts you use and rewards progression. Think of it as achievement unlocks for keyboard efficiency. The Cmd+K quick actions palette provides fuzzy-search access to all functionality, while dedicated shortcuts handle rapid agent switching, focus management, and common operations. For power users, this design philosophy means the mouse is available but optional, and you can achieve flow state without leaving the keyboard.

Gotcha

Maestro’s architecture creates hard constraints you need to understand upfront. First, you’re limited to the four supported AI coding assistants: Claude Code, OpenAI Codex, OpenCode, and Factory Droid. The pass-through design means Maestro inherits the capabilities and limitations of the underlying provider—it’s an orchestration layer, not an enhancement layer. If your AI assistant can’t handle a specific task, Maestro won’t magically enable it. This is both a strength (you keep your existing tools and configurations) and a weakness (you can’t use Maestro to access different models or capabilities than your provider offers).

Second, while Maestro is a desktop application, it includes remote access through a built-in web server, with QR code access for mobile control. However, the desktop app must be running for remote access to work. The CLI exists for headless operation, but the full GUI requires a desktop environment. Maestro is also specifically designed for coding agents: if you want to orchestrate general-purpose AI workflows, document processing pipelines, or non-coding tasks, this tool isn't built for that use case.

Verdict

Use Maestro if you’re managing three or more active development projects simultaneously and frequently context-switch between them, especially if you’re already comfortable with Git worktrees or want to leverage parallel branch development. It’s ideal for consultants juggling client projects, open-source maintainers handling multiple repositories, or teams doing exploratory feature development where you need to run experiments in parallel. The Auto Run playbooks excel for repetitive workflows like updating documentation across multiple repos, running standardized refactoring tasks, or processing backlogs overnight. If you’re a keyboard power user who finds mouse-driven interfaces slow, the mastery tracking and Cmd+K quick actions will feel like home.

Skip Maestro if you work on one project at a time with linear feature development; the orchestration overhead isn’t worth it for single-threaded workflows. Also skip if you use AI coding tools outside the four supported providers, or prefer IDE-integrated assistants where the AI is embedded in your editor. If you’re orchestrating non-coding AI agents, this tool isn’t designed for your use case. Maestro assumes you’re comfortable with Git, markdown, and keyboard-driven interfaces; if that’s not your workflow, the learning curve will feel steep for limited benefit.
