
Attractor: The Coding Agent That Exists Only as a Specification



Hook

What if the future of open source isn’t repositories full of code, but repositories full of instructions for AI agents to generate that code on demand?

Context

The explosion of AI coding assistants has created a paradox: we have powerful tools like GitHub Copilot, Cursor, and Claude that can generate code, but when we want to build automated development pipelines—what some call ‘software factories’—we’re back to writing traditional code. Companies building high-volume code generation systems face a dilemma. Off-the-shelf coding agents are designed for interactive use, sitting beside a human developer. They wait for prompts, ask clarifying questions, and expect someone to review their output. But in a software factory context—generating thousands of API clients, maintaining parallel service implementations, or producing customer-specific code variations—this interactive model breaks down.

StrongDM’s Attractor takes a radical approach: instead of providing an implementation, it provides natural language specifications that describe how to build a non-interactive coding agent. It’s a meta-project, a blueprint that assumes you’ll use an existing coding agent (like Claude or Cursor) to implement the actual system. This inverts the traditional open-source model. Rather than forking code and modifying it, you use your preferred AI tooling to generate an implementation from human-readable specs, getting a system that integrates naturally with your existing stack. The repository contains three core NLSpecs: the Attractor agent itself, a coding agent loop for orchestrating LLM-driven tasks, and a unified LLM client for abstracting model interactions.

Technical Insight

[System architecture — auto-generated diagram] A Task Specification enters the Attractor Loop Controller, which runs an autonomous loop (max 10 iterations) across the Workspace Manager, Context Builder, LLM Client, Change Parser, and Validation Engine: read relevant files, build the system prompt + context, generate code changes, parse and apply them, then validate. A passing validation exits with artifacts; validation errors feed the next iteration; hitting the iteration cap raises MaxIterationsExceeded.

Attractor’s architecture consists of three interconnected specifications that work together as a system blueprint. The core Attractor spec defines a non-interactive coding agent optimized for batch operations. Unlike interactive agents that maintain conversational context, Attractor is designed for fire-and-forget tasks: receive a specification, generate code, write results, exit. This stateless design makes it suitable for containerized deployments and parallel execution.
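That fire-and-forget contract can be sketched as a minimal batch entrypoint. This is an illustration, not code from the repository — `run_task` and the file layout are hypothetical, and the call into the generated agent is stubbed out:

```python
import json
from pathlib import Path

def run_task(spec_path: str, result_path: str) -> int:
    """Illustrative fire-and-forget entrypoint: read one task spec, run once,
    write results, exit. No prompts, no review pauses, no session state."""
    task_spec = json.loads(Path(spec_path).read_text())
    # A real implementation would invoke the generated agent here, e.g.:
    #   artifacts = AttractorLoop(llm_client, workspace).execute(task_spec)
    artifacts = {"status": "ok", "task": task_spec["name"]}  # stand-in result
    Path(result_path).write_text(json.dumps(artifacts))
    return 0  # the exit code is the only signal a batch scheduler needs
```

Because nothing persists between invocations, a container orchestrator can treat each run as a disposable job and scale them horizontally.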

The coding agent loop spec orchestrates the interaction between user requests and LLM responses. Here’s what a typical implementation flow looks like based on the spec:

# Pseudocode based on Attractor's loop specification
class AttractorLoop:
    def __init__(self, llm_client, workspace):
        self.llm = llm_client
        self.workspace = workspace
        self.max_iterations = 10
    
    def execute(self, task_spec):
        context = self.workspace.read_relevant_files(task_spec)
        
        for iteration in range(self.max_iterations):
            # Generate code without human intervention
            response = self.llm.complete(
                system_prompt=ATTRACTOR_SYSTEM_PROMPT,
                user_context=context,
                task=task_spec
            )
            
            # Apply changes directly to workspace
            changes = self.parse_changes(response)
            self.workspace.apply(changes)
            
            # Validate without asking for confirmation
            validation = self.workspace.validate()
            
            if validation.passed:
                return validation.artifacts
            
            # Add validation errors to context for next iteration
            context.append_errors(validation.errors)
        
        raise MaxIterationsExceeded()

The key difference from interactive agents is the absence of human checkpoints. Each iteration automatically proceeds based on validation results, making the system suitable for scenarios where you’re generating hundreds of similar artifacts—think API clients for 50 microservices or localized versions of a codebase for different regulatory environments.

The unified LLM client spec addresses a practical problem: LLM APIs are inconsistent. Claude uses one request format, OpenAI another, and local models often have their own quirks. The spec describes an abstraction layer that normalizes these differences:

# Implementation concept from the LLM client spec
class UnifiedLLMClient:
    def complete(self, system_prompt, user_context, task, model_hint=None):
        provider = self.select_provider(model_hint)
        
        # Normalize request format
        normalized_request = {
            'system': system_prompt,
            'messages': self.format_context(user_context),
            'temperature': 0.2,  # Low temp for consistency
            'max_tokens': self.calculate_token_budget(task)
        }
        
        # Provider-specific transformation
        provider_request = provider.transform(normalized_request)
        raw_response = provider.call(provider_request)
        
        # Normalize response format
        return self.normalize_response(raw_response)
    
    def select_provider(self, hint):
        # Route based on task characteristics
        # Large refactors -> Claude (longer context)
        # Quick generations -> GPT-4 (faster)
        # Cost-sensitive -> local model
        pass
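To make the "provider-specific transformation" step concrete: Anthropic's Messages API takes the system prompt as a top-level field, while OpenAI-style chat APIs expect it as the first message in the list. A minimal sketch of two transform functions over the normalized request (field names follow the pseudocode above; this is an illustration, not the spec's required shape):

```python
def to_anthropic(req: dict) -> dict:
    # Anthropic-style: system prompt stays a top-level field
    return {
        "system": req["system"],
        "messages": req["messages"],
        "temperature": req["temperature"],
        "max_tokens": req["max_tokens"],
    }

def to_openai(req: dict) -> dict:
    # OpenAI-style: system prompt becomes the first chat message
    return {
        "messages": [{"role": "system", "content": req["system"]}] + req["messages"],
        "temperature": req["temperature"],
        "max_tokens": req["max_tokens"],
    }

TRANSFORMS = {"anthropic": to_anthropic, "openai": to_openai}
```

Keeping these transforms as small pure functions makes it cheap to add a third provider (an internal model gateway, say) without touching the loop.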

What makes Attractor unusual is that these examples don’t exist in the repository—you generate them. The specs describe the behavior, architectural constraints, and integration points in natural language. You then feed these specs to your existing coding agent, which produces an implementation in your language of choice, using your preferred frameworks. One team might get a Python implementation using asyncio and the Anthropic SDK. Another might get TypeScript with a custom LLM client that talks to their internal model API.

This approach shines in software factory scenarios. Imagine you need to generate CRUD APIs for 100 database tables every time your schema changes. You create a task specification describing the table and desired API shape, feed it to Attractor, and get generated code. Because Attractor is non-interactive, you can run these generations in parallel, containerized, as part of your CI/CD pipeline. The spec recommends implementing workspace isolation so multiple Attractor instances can run simultaneously without conflicts:

# Parallel execution pattern from the spec
import asyncio
from attractor import AttractorLoop, IsolatedWorkspace

async def generate_api(table_schema):
    workspace = IsolatedWorkspace(f"/tmp/gen_{table_schema.name}")
    loop = AttractorLoop(llm_client, workspace)
    
    task = f"""
    Generate a REST API for {table_schema.name}
    Columns: {table_schema.columns}
    Include: GET, POST, PUT, DELETE endpoints
    Use: FastAPI framework
    Add: Input validation and error handling
    """
    
    # execute() is synchronous in the loop sketch above, so run it off the
    # event loop to get real parallelism across workspaces
    return await asyncio.to_thread(loop.execute, task)

async def main():
    tables = load_database_schema()
    return await asyncio.gather(*[generate_api(t) for t in tables])

results = asyncio.run(main())

The natural language spec format also enables rapid customization. If you need Attractor to integrate with your company’s specific code review system or use your internal style guide, you modify the spec and regenerate rather than navigating an unfamiliar codebase.

Gotcha

The fundamental limitation is the bootstrapping paradox: you need a sophisticated coding agent to implement Attractor, which means you already have access to the kind of AI tooling that Attractor is designed to create. This creates a narrow use case. If you have Claude or Cursor, why not just use them directly? The answer is non-interactive batch operations, but that’s a specialized need.

The lack of a reference implementation creates validation problems. Natural language specifications are inherently ambiguous. Two developers using different coding agents might generate implementations that diverge in subtle but important ways. How do you know your generated Attractor matches the intended behavior? The project provides no test suite, no validation harness, no examples of expected output. You’re flying blind, trusting that your coding agent correctly interpreted the specs. This works reasonably well for straightforward interpretations, but edge cases become problematic. What happens when workspace validation fails after max iterations? How should the system handle LLM timeouts? The specs may describe these scenarios, but different implementations will handle them differently.
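One way to mitigate that divergence risk yourself is a small behavioral conformance suite that any generated implementation must pass, whatever language or coding agent produced it. A hedged sketch, reusing the names from the pseudocode above (the stub workspace and `run_loop` restatement are mine, not the project's):

```python
class MaxIterationsExceeded(Exception):
    """Raised when the loop gives up, mirroring the pseudocode above."""

class ValidationResult:
    def __init__(self, passed, errors=(), artifacts=None):
        self.passed = passed
        self.errors = list(errors)
        self.artifacts = artifacts

class AlwaysFailingWorkspace:
    """Stub whose validation never passes — exercises the give-up path."""
    def __init__(self):
        self.validations = 0
    def read_relevant_files(self, task):
        return []
    def apply(self, changes):
        pass
    def validate(self):
        self.validations += 1
        return ValidationResult(False, ["stub error"])

def run_loop(llm_complete, workspace, task, max_iterations=10):
    """Compact restatement of the loop contract, for conformance testing."""
    context = workspace.read_relevant_files(task)
    for _ in range(max_iterations):
        response = llm_complete(task, context)
        workspace.apply(response)
        result = workspace.validate()
        if result.passed:
            return result.artifacts
        context = context + result.errors
    raise MaxIterationsExceeded()
```

Asserting that an implementation attempts validation exactly ten times before raising is a small, language-portable way to pin down behavior the specs otherwise leave to interpretation.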

There’s also a maintenance consideration. When you generate code from specs, you typically want to treat that generated code as an artifact you don’t manually modify. But in practice, you’ll need to customize Attractor for your environment—integrating with your auth system, adding observability hooks, handling your specific file formats. Now you’ve got generated code with manual modifications. When StrongDM updates the specs, regenerating means either losing your customizations or carefully merging changes. This tension between generated and hand-modified code is solvable but requires thoughtful architecture from day one.
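One common mitigation is to keep hand-written customizations out of the generated files entirely: have the generated loop expose hook points and put your auth integration and observability in a module you own. A sketch of the pattern (nothing here is mandated by the specs):

```python
from typing import Callable, List

class Hooks:
    """Hand-written customizations live here, outside the generated code."""
    def __init__(self,
                 on_iteration: Callable[[int], None] = lambda i: None,
                 post_validate: Callable[[bool], None] = lambda ok: None):
        self.on_iteration = on_iteration
        self.post_validate = post_validate

# --- generated code below this line: regenerate freely, never hand-edit ---
def generated_loop(steps: int, hooks: Hooks) -> List[int]:
    completed = []
    for i in range(steps):
        hooks.on_iteration(i)      # e.g. emit metrics, refresh credentials
        completed.append(i)
        hooks.post_validate(True)  # e.g. notify your code review system
    return completed
```

When StrongDM updates the specs, you regenerate everything below the marker and your `Hooks` module carries over untouched.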

Verdict

Use Attractor if you’re building software factory infrastructure and need complete control over your AI coding pipeline implementation. It’s ideal for teams that have specific requirements—integrating with proprietary systems, using specialized LLMs, or needing unusual execution environments—that off-the-shelf coding agents can’t accommodate. The spec-based approach gives you maximum flexibility to generate an implementation that fits your exact stack. It’s also valuable if you’re experimenting with non-interactive AI development workflows and want a conceptual framework to build on rather than a black-box tool to integrate.

Skip Attractor if you need a production-ready solution today, lack access to capable coding agents for bootstrapping, or prefer traditional codebases where behavior is explicit in code rather than derived from specifications. Also skip it if your use case is primarily interactive development—just use Claude, Cursor, or Copilot directly. The complexity of implementing and maintaining a spec-derived system only pays off when you’re doing high-volume, automated code generation where customization and control matter more than time-to-first-result.
