SmartGPT: A Dual-Agent Architecture for Autonomous LLM Tasks in Rust

Hook

Most autonomous LLM frameworks treat tool execution as a prompt engineering problem. SmartGPT splits it into two specialized agents—one to think, one to execute—fundamentally changing how errors propagate through the system.

Context

The explosion of GPT-3.5 and GPT-4 capabilities created a new challenge: how do you let an LLM complete complex, multi-step tasks autonomously without constant human intervention? Early solutions like AutoGPT and BabyAGI emerged in Python, using prompt-based reasoning loops in which the LLM thinks, selects tools, and executes actions in a single step. This works, but it introduces a critical failure mode: when the LLM hallucinates tool arguments or missequences operations, the entire task derails.

SmartGPT takes a different architectural bet. Built in Rust by a developer willing to sacrifice stability for innovation, it implements a dual-agent system that separates the concerns of task decomposition and tool execution. Instead of asking one LLM instance to both decide what to do and precisely how to do it, SmartGPT delegates high-level reasoning to a Dynamic Agent and tool orchestration to a Static Agent. The project explicitly warns users it’s ‘incredibly experimental’ with no backwards compatibility guarantees—this is research-grade software exploring what happens when you rethink the autonomous agent stack from first principles.

Technical Insight

[Figure: System architecture (auto-generated). A user task enters an Auto of one of two types, Runner or Assistant. The Dynamic Agent runs a REACT loop that can brainstorm, dispatch an action, or return a final response. Dispatched actions go to the Static Agent, which runs a planning phase (producing a tool sequence) followed by an execution phase (filling arguments), invoking plugin tools (web, URL, etc.) and writing to asset storage. Task observations are serialized to a vector DB for long-term memory and context retrieval.]

SmartGPT’s architecture centers on the concept of Autos, which come in two flavors: Runners (given a single task to complete) and Assistants (conversational interfaces, though marked as highly experimental). Under the hood, each Auto coordinates two specialized agents.

The Dynamic Agent implements a REACT-style reasoning loop. It thinks through the problem, reasons about next steps, and makes one of three decisions: brainstorm more context, dispatch an action to the Static Agent, or return a final response to the user. This agent never directly executes tools—it only plans at a strategic level. When it decides an action is needed, it hands off control.
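The loop’s three-way decision can be pictured as a small enum. A minimal sketch, where the type and the toy decision rule are illustrative, not SmartGPT’s actual API:

```rust
// Illustrative sketch of the Dynamic Agent's decision space; the real
// SmartGPT types differ. Each REACT iteration ends in one of three moves.
enum Decision {
    // Gather more context before committing to an action.
    Brainstorm { thoughts: Vec<String> },
    // Hand a concrete subtask to the Static Agent for execution.
    DispatchAction { subtask: String },
    // The task is done; answer the user directly.
    FinalResponse { answer: String },
}

// A toy "reasoning" step: in SmartGPT this choice comes from an LLM call,
// here it is decided from how much context has been gathered.
fn decide(context_items: usize, task_done: bool) -> Decision {
    if task_done {
        Decision::FinalResponse { answer: "done".to_string() }
    } else if context_items < 2 {
        Decision::Brainstorm { thoughts: vec!["need more info".to_string()] }
    } else {
        Decision::DispatchAction { subtask: "search the web".to_string() }
    }
}

fn main() {
    // With little context, the agent keeps brainstorming.
    assert!(matches!(decide(0, false), Decision::Brainstorm { .. }));
    // With enough context, it dispatches to the Static Agent.
    assert!(matches!(decide(3, false), Decision::DispatchAction { .. }));
}
```

The key property is that executing a tool is never one of the variants; the Dynamic Agent can only delegate.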

The Static Agent receives subtasks from the Dynamic Agent and handles precise execution through a two-phase process:

  1. Planning Phase: Generate a sequential plan of exactly which tools are needed and in what order
  2. Execution Phase: Step through the plan, filling in arguments for each tool one at a time
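The two-phase flow above can be sketched as follows, with hypothetical types (SmartGPT’s real plan and tool structures aren’t documented in the README):

```rust
// Hypothetical sketch of the Static Agent's two-phase execution.
// Phase 1 fixes the tool *sequence*; phase 2 fills arguments one tool
// at a time, so each step can see the previous step's output.
struct PlannedStep {
    tool: String,
}

struct FilledStep {
    tool: String,
    args: Vec<(String, String)>,
}

// Phase 1: decide which tools run, and in what order.
fn plan(subtask: &str) -> Vec<PlannedStep> {
    // In SmartGPT this comes from an LLM call; here it's hard-coded.
    let _ = subtask;
    vec![
        PlannedStep { tool: "google_search".into() },
        PlannedStep { tool: "browse_url".into() },
    ]
}

// Phase 2: fill arguments for a single step, given the prior output.
fn fill_args(step: &PlannedStep, prior_output: Option<&str>) -> FilledStep {
    let value = prior_output.unwrap_or("rust llm agents").to_string();
    FilledStep { tool: step.tool.clone(), args: vec![("input".into(), value)] }
}

fn main() {
    let steps = plan("research SmartGPT");
    let mut last_output: Option<String> = None;
    for step in &steps {
        let filled = fill_args(step, last_output.as_deref());
        // A real executor would invoke the tool here; we simulate output.
        last_output = Some(format!("output of {}", filled.tool));
    }
    assert_eq!(last_output.as_deref(), Some("output of browse_url"));
}
```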

This separation has a subtle but important implication for error handling. In traditional prompt-based tool calling, if the LLM hallucinates a parameter value for tool three in a five-tool sequence, the entire chain often fails. SmartGPT’s Static Agent fills arguments step-by-step, which means it can potentially course-correct at each boundary. The Static Agent also manages assets—data artifacts that persist between subtasks, allowing the Dynamic Agent to reference previous outputs without re-executing expensive operations.
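The asset mechanism can be pictured as a keyed store shared across subtasks. A minimal sketch with hypothetical names (SmartGPT’s real asset types are not documented in the README):

```rust
use std::collections::HashMap;

// Hypothetical asset store: data artifacts persist between subtasks so
// the Dynamic Agent can reference earlier outputs without re-running
// expensive tools.
struct AssetStore {
    assets: HashMap<String, String>,
}

impl AssetStore {
    fn new() -> Self {
        AssetStore { assets: HashMap::new() }
    }

    // A subtask saves its output under a name...
    fn save(&mut self, name: &str, data: String) {
        self.assets.insert(name.to_string(), data);
    }

    // ...and a later subtask reads it back instead of re-executing.
    fn load(&self, name: &str) -> Option<&String> {
        self.assets.get(name)
    }
}

fn main() {
    let mut store = AssetStore::new();
    store.save("search_results", "10 links about Rust agents".to_string());
    // A later subtask reuses the cached artifact.
    assert_eq!(
        store.load("search_results").map(String::as_str),
        Some("10 links about Rust agents")
    );
}
```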

Memory is handled through vector database storage. After completing a task, the agent serializes observations into long-term memory. When starting a new task, it performs semantic search against this vector store to retrieve relevant context. This isn’t as sophisticated as AutoGPT’s memory systems, but it provides basic cross-session learning.
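A toy illustration of that retrieval pattern, with hand-written two-dimensional ‘embeddings’ standing in for a real embedding model and vector database:

```rust
// Sketch of vector-store recall: past observations are embedded, and a
// new task retrieves the most similar one. SmartGPT uses a real
// embedding model and vector DB; here embeddings are toy 2-d vectors.
fn cosine_similarity(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

// Return the stored observation most similar to the query embedding.
fn retrieve<'a>(memory: &'a [(Vec<f64>, String)], query: &[f64]) -> Option<&'a str> {
    memory
        .iter()
        .max_by(|(a, _), (b, _)| {
            cosine_similarity(a, query)
                .partial_cmp(&cosine_similarity(b, query))
                .unwrap()
        })
        .map(|(_, text)| text.as_str())
}

fn main() {
    // Serialized observations from past tasks, with toy embeddings.
    let memory = vec![
        (vec![1.0, 0.0], "searched for Rust crates".to_string()),
        (vec![0.0, 1.0], "summarized a news article".to_string()),
    ];
    // A new task semantically close to the first observation.
    let query = vec![0.9, 0.1];
    assert_eq!(retrieve(&memory, &query), Some("searched for Rust crates"));
}
```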

The plugin system is where extensibility lives. Plugins define tools (the README mentions google_search and browse_url as examples) that agents can invoke. The README describes first-class plugin support and modular Auto composition, though it doesn’t provide implementation details. To get started, install Cargo (Rust’s build tool and package manager), clone the repository with git clone https://github.com/Cormanz/smartgpt.git && cd smartgpt, and run cargo run --release, which auto-generates a config.yml file for configuration.
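The README doesn’t spell out the plugin API, but the shape is easy to picture: each plugin contributes named tools the Static Agent can invoke. A minimal sketch, assuming a hypothetical Tool trait (the names here are invented; SmartGPT’s actual definitions may differ):

```rust
// Hypothetical plugin shape: each plugin exposes named tools that the
// Static Agent invokes by name with string input. SmartGPT's actual
// trait definitions are not documented in the README.
trait Tool {
    fn name(&self) -> &str;
    fn invoke(&self, input: &str) -> String;
}

// A stand-in for a browsing tool like the README's browse_url example.
struct BrowseUrl;

impl Tool for BrowseUrl {
    fn name(&self) -> &str {
        "browse_url"
    }
    fn invoke(&self, input: &str) -> String {
        // A real plugin would fetch the page; this stub just echoes.
        format!("fetched: {}", input)
    }
}

// The agent looks tools up by name from whatever plugins are loaded.
fn find_tool<'a>(tools: &'a [Box<dyn Tool>], name: &str) -> Option<&'a dyn Tool> {
    tools.iter().find(|t| t.name() == name).map(|t| t.as_ref())
}

fn main() {
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(BrowseUrl)];
    let tool = find_tool(&tools, "browse_url").expect("tool registered");
    assert_eq!(tool.invoke("https://example.com"), "fetched: https://example.com");
}
```

Registering tools behind a trait object like this is also why writing your own plugins (as the Gotcha section notes you likely will) stays tractable: one type, two methods.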

The Rust implementation is a deliberate choice. Most autonomous agent frameworks default to Python for its LLM ecosystem integration, but Rust likely brings performance advantages for the reasoning loops. What makes SmartGPT’s approach interesting is the bet that tool execution consistency matters more than reasoning flexibility. By constraining the Static Agent to sequential planning and argument filling, you lose some of the creative chaos that makes LLMs powerful, but you gain predictability in how tools are actually called. Whether this tradeoff pays off depends entirely on your task domain—if you’re orchestrating API calls and data transformations, the Static Agent’s discipline helps. If you need creative problem-solving where the path forward is genuinely unclear, constraining execution might limit what’s possible.

Gotcha

The README is refreshingly honest: SmartGPT is ‘incredibly experimental’ and ‘backwards compatibility is a fever dream here.’ This isn’t marketing copy—it’s a warning. If you deploy this in anything resembling production, expect breaking changes between versions. The maintainer is a high school student funding development through Patreon and testing primarily with GPT-3.5 due to cost constraints, which means the codebase hasn’t been battle-tested at scale with GPT-4’s full capabilities.

The ecosystem gap is real. AutoGPT has thousands of contributors, extensive plugin libraries, and integrations with established memory systems. SmartGPT has a small Discord community and limited tooling. If you need to connect to multiple services out of the box, you’ll likely be writing those plugins yourself. The memory system, while functional, is described as ‘simple but limited’—don’t expect sophisticated episodic memory or hierarchical context management. Assistant mode (the conversational interface) is flagged as ‘highly experimental,’ with the README explicitly recommending Runners instead.

Verdict

Use SmartGPT if you’re researching novel agent architectures, want to contribute to cutting-edge autonomous LLM experimentation, or specifically need Rust for performance reasons in your agent orchestration layer. It’s a playground for developers who read research papers and want to implement unconventional ideas, not a drop-in solution for shipping features. The dual-agent separation is genuinely interesting and worth studying even if you don’t adopt the framework.

Skip it if you need production stability, a mature plugin ecosystem, or comprehensive memory management; AutoGPT remains the safer choice for real-world deployments. Also skip it if you’re not comfortable reading Rust source code when documentation is sparse (the README points to GitBook documentation for more details), or if breaking changes between versions would derail your project timeline. This is a tool for the experimental phase of a project, not the deployment phase.
