Vibecraft: Visualizing Claude Code as a 3D Workshop with Zero API Modifications

[ View on GitHub ]

Hook

What if you could watch your AI coding assistant physically walk between a bookshelf, workbench, and terminal as it reads files, edits code, and runs bash commands? That’s exactly what Vibecraft does, and it never touches Anthropic’s binaries.

Context

Claude Code’s command-line interface is powerful but opaque. When you’re running multiple Claude instances simultaneously—one refactoring your authentication layer, another writing tests, a third debugging a deployment script—keeping track of which agent is doing what becomes a cognitive nightmare. You’re context-switching between tmux panes, scrolling through terminal output, and losing track of which Claude just asked you a question.

Vibecraft solves this by creating a spatial representation of AI tool usage. Instead of parsing text logs, you see an animated character move between distinct “stations” in an isometric 3D workshop. When Claude reads a file, it walks to the bookshelf. When it runs bash commands, it visits the terminal station. When it spawns subagents using the task tool, mini-Claudes appear through a glowing portal. The tool uses hook scripts and WebSocket broadcasting to intercept Claude’s tool usage without modifying the actual Claude executable, making it version-agnostic and surprisingly resilient.

Technical Insight

The architecture is deceptively simple. Rather than forking Claude Code or writing a proxy server, Vibecraft configures hooks that intercept tool calls as they happen. During setup, it installs hook scripts that plug into your Claude installation to capture tool invocations:

npx vibecraft setup

This installs hook scripts that capture Claude’s execution flow. When Claude invokes a tool like read_file, the hook captures the JSON payload, forwards it to the Vibecraft server running on port 4003, and lets the original command proceed unchanged. The server parses the tool name and arguments, then broadcasts state updates to any connected browser clients. The Three.js frontend receives these updates and choreographs the character’s movement between stations.
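Conceptually, the hook's job is just translation and forwarding. Here is a minimal sketch, assuming the hook receives the tool payload as JSON and that the Vibecraft server accepts HTTP POSTs on port 4003; the field names and endpoint path are illustrative, not Vibecraft's actual API:

```typescript
// Hypothetical hook payload shape (field names assumed).
interface HookPayload {
  tool_name: string;
  tool_input: Record<string, unknown>;
}

// Event shape matching the JSON protocol described later in this article.
interface VibecraftEvent {
  type: "tool";
  session: string;
  tool: string;
  args: Record<string, unknown>;
}

// Pure translation step: hook payload -> broadcast event.
function toVibecraftEvent(p: HookPayload, session: string): VibecraftEvent {
  return { type: "tool", session, tool: p.tool_name, args: p.tool_input };
}

// Fire-and-forget forward; errors are swallowed so the original
// tool call always proceeds unchanged.
async function forward(event: VibecraftEvent): Promise<void> {
  try {
    await fetch("http://localhost:4003/event", { // endpoint path is assumed
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
    });
  } catch {
    /* never block Claude on a visualization failure */
  }
}
```

The key design property is in that catch block: the visualization layer must fail silently so a dead server never stalls Claude's actual work.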

The station mapping is hardcoded but intuitive. The read_file tool triggers movement to the Bookshelf (complete with book models on shelves), write_to_file sends Claude to the Desk (paper, pencil, ink pot), edit_file goes to the Workbench (wrench and gears), and bash_execute activates the Terminal station with a glowing screen. Each station is a distinct hex tile in the isometric grid, and the character pathfinds between them using A* on the hex coordinate system.
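That mapping and the hex math fit in a few lines. The station identifiers below are illustrative (Vibecraft's actual names may differ), but the distance function is the standard axial-coordinate hex distance, which doubles as an admissible A* heuristic:

```typescript
// Illustrative tool-to-station table; real station ids may differ.
const STATIONS: Record<string, string> = {
  read_file: "bookshelf",
  write_to_file: "desk",
  edit_file: "workbench",
  bash_execute: "terminal",
};

// Axial coordinates, as commonly used for isometric hex grids.
interface Hex { q: number; r: number }

// Standard hex distance between two axial coordinates.
function hexDistance(a: Hex, b: Hex): number {
  const dq = a.q - b.q;
  const dr = a.r - b.r;
  return (Math.abs(dq) + Math.abs(dr) + Math.abs(dq + dr)) / 2;
}
```

For example, `hexDistance({q: 0, r: 0}, {q: 2, r: -1})` is 2, so A* will never overestimate the remaining walk between stations.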

What makes this genuinely useful is the multi-session orchestration. You can spawn up to six Claude instances simultaneously, each running in its own tmux session. The browser UI shows all sessions as glowing zones in the 3D space, with status indicators (idle/working/offline) and keyboard shortcuts (1-6) to switch between them. When you need to send a prompt to a specific Claude, you select its session and type into the browser input field—Vibecraft injects the text into the corresponding tmux session using tmux send-keys:

tmux new -s claude
claude

Once your Claude instance is running inside a named tmux session, the browser’s “Send to tmux” checkbox enables prompt injection. This turns your browser into a visual orchestration layer for multiple AI agents, which is surprisingly powerful when you’re coordinating parallel work streams.
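The injection step itself reduces to shelling out to tmux. A server-side sketch using Node's child_process (the helper names are hypothetical, not Vibecraft's code):

```typescript
import { execFile } from "node:child_process";

// Build the argv for tmux: the literal text, then an Enter keypress.
// The "-t" flag targets a named session, e.g. "claude-1".
function buildSendKeysArgs(session: string, text: string): string[] {
  return ["send-keys", "-t", session, text, "Enter"];
}

// Inject a prompt into the tmux session running Claude.
function sendToTmux(session: string, text: string): void {
  execFile("tmux", buildSendKeysArgs(session, text), (err) => {
    if (err) console.error(`tmux injection failed: ${err.message}`);
  });
}
```

Using `execFile` with an argv array (rather than a shell string) sidesteps quoting problems when the prompt contains spaces or shell metacharacters.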

The visualization isn’t just eye candy—it provides genuine cognitive offloading. Spatial audio means you can hear which session is active even when looking away. Context-aware animations (Claude celebrates on git commits, shakes its head on errors) create peripheral awareness of agent state without requiring active monitoring. Floating labels above stations show file paths and bash commands, so you know Claude is editing src/auth.ts without reading logs. The activity feed captures Claude’s responses in a scrollable panel, giving you both spatial and textual views simultaneously.

The codebase is entirely TypeScript, with the server using Node.js and the client built on Three.js with Vite. The hex grid rendering uses instanced meshes for performance—important when you’re rendering six separate zones with animated characters, particle effects, and dynamic labels. State synchronization happens over WebSocket with a simple JSON protocol: {type: 'tool', session: 'claude-1', tool: 'read_file', args: {path: 'README.md'}}.
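Consuming that protocol on the client is a small amount of glue. A browser-side sketch (the message shape matches the example above; the handler and station-lookup names are made up):

```typescript
// Shape of a state update, per the protocol example above.
interface ToolUpdate {
  type: "tool";
  session: string;
  tool: string;
  args: Record<string, unknown>;
}

// Parse a raw WebSocket frame, returning null for anything unrecognized
// so a malformed frame never crashes the render loop.
function parseUpdate(raw: string): ToolUpdate | null {
  try {
    const msg = JSON.parse(raw);
    return msg?.type === "tool" ? (msg as ToolUpdate) : null;
  } catch {
    return null;
  }
}

// Wiring it up in the browser (port 4003 per the article):
// const ws = new WebSocket("ws://localhost:4003");
// ws.onmessage = (ev) => {
//   const update = parseUpdate(String(ev.data));
//   if (update) walkTo(update.session, update.tool); // hypothetical helper
// };
```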

Gotcha

The Unix dependency is non-negotiable. Vibecraft explicitly requires bash for hook scripts and tmux for session management, with the README clearly stating “Windows not supported - hooks require bash.” This immediately cuts the potential user base by a third, and even WSL won’t help much because Claude Code itself expects a native environment.

The hook configuration requires specific dependencies: jq for JSON parsing and tmux for session management must be installed separately. You’ll need to run brew install jq tmux on macOS or apt install jq tmux on Ubuntu/Debian before the setup process works. This is standard for developer tools but adds friction to the initial setup.

Voice input requires a paid Deepgram API key, which the README documents upfront in the features list. If you want real-time transcription, you’ll need to provide your own API credentials. The tmux-based prompt injection follows documented conventions—if you’re using a different terminal multiplexer like Zellij or Screen, you’ll need to adapt your workflow or type directly into Claude’s stdin.

The tool is local-first by design: it connects to your own Claude Code instances and explicitly states that “no files or prompts are shared.” This means all processing happens on your machine, which is great for privacy but requires manual session management. The web interface at vibecraft.sh is just a hosted frontend that still connects to your local WebSocket server.

Verdict

Use Vibecraft if you’re a macOS/Linux developer regularly running multiple Claude Code instances in parallel and struggling to track which agent is doing what. The spatial visualization genuinely reduces cognitive load when orchestrating three or more simultaneous workflows, and the browser-based prompt injection beats alt-tabbing between tmux panes. It’s particularly valuable for teams doing live demos or streaming AI-assisted development—watching Claude walk around a 3D workshop is far more engaging than showing terminal output. Skip it if you’re on Windows (explicitly unsupported), run Claude infrequently (the setup overhead isn’t worth it for casual use), or aren’t comfortable with terminal-based workflows requiring tmux and hook configuration. Also skip if you need voice input but don’t want to pay for a Deepgram API key. But if you live in the terminal and want to gamify your AI coding workflow, Vibecraft is a delightful piece of developer tooling that actually makes multi-agent orchestration tractable.
