Accomplish: The Local-First AI Agent That Actually Touches Your Files


Hook

Most AI assistants can only tell you how to rename 500 files. Accomplish actually does it, with your permission, on your machine, using your choice of 15+ model providers.

Context

The explosion of LLM-powered tools has created a curious gap: we have chatbots that can write entire applications, but they can’t move a file on your desktop. Tools like ChatGPT and Claude excel at conversation and code generation, but they live in browser tabs, disconnected from the actual work environment where developers spend their time. When you ask Claude to “organize my project files by date,” it gives you a bash script to copy-paste. It can’t just… do it.

This disconnect spawned a wave of “AI agent” frameworks—Auto-GPT, BabyAGI, and others that promised autonomous task execution. But these tools skewed heavily technical, requiring Python environments, API orchestration knowledge, and comfort with agents that could theoretically do anything (which is either exciting or terrifying, depending on your perspective). Accomplish, formerly known as Openwork, emerged to fill the middle ground: a polished desktop application that lets AI models interact with your local filesystem, documents, and browser—but only with explicit permission and transparent action approval.

Technical Insight

The auto-generated architecture diagram reduces to a single pipeline. User input (a natural-language goal) enters the Electron app's main process, which forwards the task description to the multi-provider LLM layer (OpenAI, Anthropic, etc.). A response parser turns the model's JSON action plans into a FileAction array, which lands in the action queue as pending operations. The app presents those actions in the user-approval review interface; approved actions pass through the permission system's folder access control for access verification, and the filesystem executor then runs sandboxed operations against permitted directories on the local filesystem, returning operation results to the app.

System architecture — auto-generated

Accomplish’s architecture is fundamentally about bridging two worlds: the cloud-based LLM APIs that provide intelligence, and the local filesystem operations that constitute actual work. Built on Electron with TypeScript, it’s essentially a sandboxed execution environment where AI-generated actions become filesystem operations after user approval.

The key architectural decision is the permission model. Unlike traditional automation tools that require broad system access, Accomplish uses a granular folder-level permission system. Users explicitly grant access to specific directories, and every file operation—read, write, delete—generates an action that must be approved before execution. Under the hood, this is implemented as a queuing system where the LLM generates a series of proposed operations in a structured format, likely JSON, that the Electron app parses and presents to the user.

Here’s a simplified example of what that action structure might look like internally:

interface FileAction {
  type: 'create' | 'modify' | 'delete' | 'move' | 'read';
  path: string;
  content?: string;
  destination?: string;
  reasoning: string;
}

interface ActionPlan {
  goal: string;
  actions: FileAction[];
  estimatedImpact: {
    filesAffected: number;
    reversible: boolean;
  };
}

// Example action plan from an LLM
const plan: ActionPlan = {
  goal: "Organize project screenshots by date",
  actions: [
    {
      type: 'create',
      path: '/Users/dev/Screenshots/2024-01',
      reasoning: 'Create directory for January 2024 screenshots'
    },
    {
      type: 'move',
      path: '/Users/dev/Screenshots/Screenshot 2024-01-15.png',
      destination: '/Users/dev/Screenshots/2024-01/',
      reasoning: 'Move January screenshot to new folder'
    }
  ],
  estimatedImpact: {
    filesAffected: 47,
    reversible: true
  }
};
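A plausible approval-and-execute loop over such a plan is simple: keep only the actions the user explicitly approved, then dispatch each one. This is a sketch, not Accomplish's actual internals; the types are redeclared so the snippet stands alone, and the helper names are invented.

```typescript
// Hypothetical approval step; mirrors the FileAction interface above,
// redeclared here so this snippet is self-contained.
type FileActionType = 'create' | 'modify' | 'delete' | 'move' | 'read';

interface FileAction {
  type: FileActionType;
  path: string;
  content?: string;
  destination?: string;
  reasoning: string;
}

// The user's decisions from the review interface, keyed by the
// action's index in the plan.
type ApprovalMap = Map<number, boolean>;

function selectApproved(actions: FileAction[], approvals: ApprovalMap): FileAction[] {
  // Only actions explicitly approved survive; rejected or unreviewed
  // actions are dropped before anything touches the filesystem.
  return actions.filter((_, i) => approvals.get(i) === true);
}

function describe(action: FileAction): string {
  // Human-readable line of the kind a review interface would show.
  return action.type === 'move'
    ? `${action.type}: ${action.path} -> ${action.destination}`
    : `${action.type}: ${action.path}`;
}
```

The important property is that execution only ever sees the filtered list, so a rejected delete can never run by accident.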

The multi-provider LLM integration is another architectural highlight. Rather than locking users into a single model provider, Accomplish implements a provider abstraction layer. It supports OpenAI, Anthropic, Google, xAI, Ollama, LM Studio, and others—over 15 providers total. This is achieved through a common interface that normalizes request and response formats across different APIs. The bring-your-own-API-key model means no subscription to Accomplish itself; users pay providers directly (or use local models via Ollama for zero API costs).
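The provider layer isn't documented publicly, but a normalization shim of roughly this shape is the common pattern. The interface and payload shapes below are assumptions, modeled loosely on the OpenAI and Anthropic chat APIs (which disagree on where the system prompt lives):

```typescript
// Hypothetical provider abstraction: one common request shape,
// normalized into each vendor's wire format.
interface ChatRequest {
  model: string;
  system: string;
  prompt: string;
  maxTokens: number;
}

interface Provider {
  name: string;
  toPayload(req: ChatRequest): Record<string, unknown>;
}

// OpenAI-style: the system prompt is a message with role "system".
const openAI: Provider = {
  name: 'openai',
  toPayload: (req) => ({
    model: req.model,
    max_tokens: req.maxTokens,
    messages: [
      { role: 'system', content: req.system },
      { role: 'user', content: req.prompt },
    ],
  }),
};

// Anthropic-style: the system prompt is a top-level field.
const anthropic: Provider = {
  name: 'anthropic',
  toPayload: (req) => ({
    model: req.model,
    max_tokens: req.maxTokens,
    system: req.system,
    messages: [{ role: 'user', content: req.prompt }],
  }),
};
```

With 15+ providers, the value of this indirection is that the action-planning code upstream never learns which vendor it is talking to.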

The “skills” system takes this further by allowing users to save and reuse workflows. A skill is essentially a parameterized prompt template combined with a set of expected actions. For instance, a “Weekly Report Generator” skill might scan specific folders, extract data from markdown files, and compile a formatted report. Once defined, skills become one-click operations that apply consistent logic across different contexts.
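A skill, as described, could be little more than a named prompt template with slots. This is a minimal sketch under that assumption; the names and the `{{param}}` template syntax are invented for illustration.

```typescript
// Hypothetical skill definition: a reusable, parameterized prompt.
interface Skill {
  name: string;
  template: string;          // slots written as {{param}}
  requiredParams: string[];
}

function renderSkill(skill: Skill, params: Record<string, string>): string {
  // Refuse to run with missing parameters rather than emit a
  // half-filled prompt to the LLM.
  for (const p of skill.requiredParams) {
    if (!(p in params)) throw new Error(`missing parameter: ${p}`);
  }
  // Substitute each {{param}} slot with its supplied value.
  return skill.template.replace(/\{\{(\w+)\}\}/g, (_, key) => params[key] ?? '');
}

const weeklyReport: Skill = {
  name: 'Weekly Report Generator',
  template:
    'Scan {{folder}}, extract data from markdown files, and compile a report for the week of {{week}}.',
  requiredParams: ['folder', 'week'],
};
```

Rendering the same skill against different folders is what makes it a one-click operation that applies consistent logic across contexts.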

Browser automation integration suggests Accomplish likely uses something like Playwright or Puppeteer under the hood, allowing the AI to navigate web pages, extract data, and fill forms. This transforms it from a filesystem tool into a genuine “coworker” that can bridge local files and web-based workflows—imagine an agent that reads your local CSV, navigates to a web dashboard, and inputs the data.
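Whatever driver sits underneath (Playwright and Puppeteer are guesses), browser steps would most naturally flow through the same propose-review-approve queue as file actions. A sketch of that idea, with invented step types:

```typescript
// Hypothetical: browser steps modeled as approvable actions, so web
// automation gets the same review treatment as file operations.
type BrowserStep =
  | { kind: 'navigate'; url: string }
  | { kind: 'fill'; selector: string; value: string }
  | { kind: 'click'; selector: string }
  | { kind: 'extract'; selector: string };

function summarize(steps: BrowserStep[]): string[] {
  // One review line per step, mirroring the file-action review UI.
  return steps.map((s) => {
    switch (s.kind) {
      case 'navigate': return `open ${s.url}`;
      case 'fill':     return `type into ${s.selector}`;
      case 'click':    return `click ${s.selector}`;
      case 'extract':  return `read ${s.selector}`;
    }
  });
}
```

The CSV-to-dashboard scenario above would then be a mixed plan: a `read` file action followed by `navigate` and `fill` browser steps, all approved in one pass.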

The Electron foundation is both practical and limiting. It provides a single codebase for macOS and Windows with native filesystem APIs and system tray integration, but it also means the app carries the typical Electron overhead. For an AI agent that might run continuously in the background, memory footprint matters. The TypeScript implementation suggests a focus on maintainability and type safety, critical for an app that performs potentially destructive file operations.

Gotcha

The platform limitation is immediate and significant: macOS (Apple Silicon only) and Windows 11. No Linux support, no Intel Macs. For a tool targeting developers—a demographic that skews heavily toward Linux and older Mac hardware—this is a substantial restriction. The Apple Silicon requirement in particular suggests the codebase may have architecture-specific dependencies or performance optimizations that aren’t easily portable.

The action-approval workflow, while essential for safety, creates friction that compounds with scale. Approving 5 file operations is reasonable; approving 200 becomes tedious. For users wanting to fully automate repetitive tasks, the requirement to review each action batch disrupts the “set it and forget it” promise of automation. There’s no apparent “trusted skills” mode where pre-approved workflows can run without intervention. Power users will likely find themselves wishing for more granular trust controls—perhaps approving a skill once and letting it run autonomously within its defined scope.

The bring-your-own-API-key model, while avoiding vendor lock-in, shifts complexity to users. You need to understand provider pricing, rate limits, token costs, and model capabilities. If you're using Claude for file operations and hit Anthropic's rate limits mid-task, your workflow stops. No built-in cost tracking or budget alerts are visible in the architecture, so users juggling multiple provider APIs must track spending externally. For developers, this is manageable; for the broader "knowledge worker" audience Accomplish seems to target, it's a barrier.

Verdict

Use Accomplish if: you value data sovereignty and want AI that works on your files without uploading them to proprietary platforms; you're comfortable managing multiple LLM provider API keys and understanding their cost structures; you run macOS on Apple Silicon or Windows 11; you need vendor-agnostic AI integration where you can switch between Claude, GPT-4, local Llama models, and others based on task requirements; you want repeatable automation through the skills system but are okay with approving action batches.

Skip it if: you need Linux support or run an Intel Mac; you want fully autonomous automation without approval steps; you prefer managed services where someone else handles API keys, billing, and model selection; you need an agent that runs on servers rather than desktop machines; you're looking for mobile access to your AI coworker.

Accomplish occupies a specific niche: privacy-conscious professionals who want AI assistance for local file and document work without sacrificing control or freedom. It's not trying to be everything—it's trying to be the AI coworker that respects your space while actually getting work done.
