
AIx: ProjectDiscovery's Minimalist LLM CLI for Security Automation

Hook

While most LLM CLI tools try to recreate ChatGPT in your terminal with conversation history and fancy TUIs, ProjectDiscovery’s AIx deliberately strips everything away to become the perfect Unix pipeline component.

Context

ProjectDiscovery, known for building security tools, created AIx to solve a specific problem: existing LLM CLI tools were built for humans having conversations, not for machines processing data in automated workflows.

Most LLM CLIs in 2023 focused on replicating the ChatGPT experience locally—maintaining conversation state, rendering rich markdown, storing message history. But security automation pipelines need something fundamentally different: a stateless, stdin-consuming, JSON-emitting tool that can process prompts without maintaining session state. AIx exists to behave like grep, not like a chatbot.

Technical Insight

System architecture (summarized from the auto-generated diagram): CLI Input Handler (flags/stdin/file) → Config Manager (env vars & flags; model/temp/top-p) → Prompt Builder (system context + prompt) → go-openai Client (HTTP request) → OpenAI API (completion response) → Output Formatter (markdown/JSON/JSONLines) → stdout/file

AIx’s architecture is deliberately minimal. Built in Go and leveraging the sashabaranov/go-openai library (acknowledged in the README), the tool functions as a focused interface that maps CLI flags to OpenAI API parameters.

Input flexibility is where AIx shines for automation. You can pass prompts via flag (-p), stdin (for piping), or file reference. The tool also supports system context injection through -sc, letting you provide instructions separately from dynamic input. Here’s an example from the documentation:

echo "list top trending web technologies" | aix
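The -sc pattern is what makes AIx composable: static instructions travel as system context while dynamic data arrives on stdin. The sketch below uses a mock shell function in place of the real binary so the pipe shape runs offline; with AIx installed and a key exported, the same pipeline would simply call `aix -sc "Classify each host as dev, staging, or prod"`.

```shell
# Offline stand-in for the aix binary: ignores its flags and just tags
# each stdin line, so the -sc pipeline shape can be demonstrated without
# an API key. Replace this function with the real `aix` in practice.
aix() { sed 's/^/classified: /'; }

# Static instructions via -sc, dynamic hostnames via stdin.
printf '%s\n' "dev.example.com" "api.example.com" | aix -sc "Classify each host"
```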

This returns a formatted list directly to stdout. But the real power emerges with JSONLines output mode (-jsonl), which structures responses for downstream processing:

aix -p "What is the capital of France?" -jsonl -o output.txt | jq .
{
  "timestamp": "2023-03-26 17:55:42.707436 +0530 IST m=+1.512222751",
  "prompt": "What is the capital of France?",
  "completion": "Paris.",
  "model": "gpt-3.5-turbo"
}

This JSON structure includes prompt echoing and model tracking—useful for debugging automated workflows where you need to trace which model generated which response.
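A minimal sketch of that downstream tracing, assuming the field names shown in the documented example above; the echoed line stands in for one record of real `aix -jsonl` output:

```shell
# Extract (model, completion) pairs from AIx's JSONLines records with jq.
# The echo is a stand-in for `aix -p "..." -jsonl` output.
echo '{"prompt":"What is the capital of France?","completion":"Paris.","model":"gpt-3.5-turbo"}' \
  | jq -r '[.model, .completion] | @tsv'
```

Because each record echoes its prompt and model, a pipeline of many queries can be audited after the fact without any state kept by AIx itself.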

The model selection interface is pragmatic: -g3 for GPT-3.5 (default), -g4 for GPT-4, or -m for explicit model specification like gpt-4-0314. Temperature and top-p tuning are exposed as direct flags (-t, -tp), mapping to OpenAI’s API parameters. There’s no abstraction layer—AIx assumes you understand LLM parameters.

Streaming mode (-s) trades formatted output for immediate feedback. When enabled, responses stream to stdout in real-time, but markdown rendering gets disabled—useful when running expensive GPT-4 queries where you want to see progress:

aix -p "Explain Kubernetes architecture" -g4 -s

Authentication follows the standard environment variable pattern. You export OPENAI_API_KEY, and AIx reads it on every invocation. There’s no credential storage, session management, or configuration file—just environment variables. For automation, this means your API keys live in your CI/CD secrets manager, not in AIx’s own config system.

The tool’s statelessness is its defining architectural choice. Each invocation appears to be completely independent. There’s no conversation history database, no previous message context, no session files. If you need multi-turn conversations, you manually include previous exchanges in your prompt or system context. This makes AIx suitable for parallel execution—you can run multiple simultaneous AIx processes without coordination concerns.
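That independence maps directly onto standard fan-out tooling. In the sketch below, `echo` stands in for `aix -p "{}" -jsonl` so the pattern runs without an API key; the point is the shape, not the stand-in:

```shell
# Fan out independent prompts across parallel workers with xargs -P.
# Each worker would be a separate, stateless aix process in practice.
printf '%s\n' "summarise host-a findings" "summarise host-b findings" \
  | xargs -P 2 -I{} echo "done: {}"
```

Note that with `-P`, output ordering is not guaranteed; the JSONLines prompt-echoing shown earlier is what lets you match results back to inputs.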

Gotcha

Despite the README description mentioning ‘Large Language Models (LLM) APIs’ in general terms, AIx only supports OpenAI. The features list explicitly states ‘Query LLM APIs (OpenAI)’ and ‘Supports GPT-3.5 and GPT-4.0 models.’ There’s no Claude integration, no Google Gemini, no local model support. The tool is fundamentally an OpenAI CLI wrapper, not a generic LLM interface. If OpenAI changes their API or pricing becomes prohibitive, you have no fallback options without switching tools entirely.

The lack of conversation memory is both a feature and a limitation depending on your use case. For one-off queries in automation pipelines, statelessness is perfect. But for interactive development work where you’re iterating on prompts and building on previous responses, AIx becomes tedious. You can’t simply ask follow-up questions—every query must be self-contained or manually include conversation history. The repository’s star count (313) and examples showing version 0.0.1 suggest this is a relatively niche tool with potentially limited ongoing development.
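What "manually include conversation history" means in practice is folding the prior exchange into a single self-contained prompt string. A sketch of that construction (the resulting text would then be piped into aix):

```shell
# Build a self-contained follow-up prompt from a previous exchange,
# since AIx itself keeps no conversation state between invocations.
prev_q="What is the capital of France?"
prev_a="Paris."
printf 'Q: %s\nA: %s\nFollow-up: what is its population?\n' "$prev_q" "$prev_a"
```

This works for a turn or two, but the manual bookkeeping is exactly the tedium the paragraph above describes.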

Verdict

Use AIx if you’re building automation pipelines where you need to process LLM queries, especially if you value tools that follow the Unix philosophy. Its stateless design, JSONLines output, and stdin piping make it suitable for CI/CD integration, batch processing, and automated workflows. The compiled Go binary has no runtime dependencies, though building from source requires Go 1.19+. Skip AIx if you need multi-provider support for cost optimization or API resilience, want conversation history for interactive development work, or require model flexibility beyond OpenAI’s lineup. AIx is a specialist tool for automation, not a general-purpose LLM CLI.
