subjs: The Unix Philosophy Applied to JavaScript Reconnaissance
Hook
While most security tools try to do everything, subjs deliberately does almost nothing—and that’s exactly why 839 developers starred it.
Context
Modern web applications leak secrets through JavaScript files. API endpoints, authentication tokens, AWS credentials, internal domain names—they’re all sitting in plain sight inside the JS bundles that power single-page applications. Security researchers and bug bounty hunters know this, which is why JavaScript analysis has become a critical phase of reconnaissance.
The problem isn’t finding individual JavaScript files—it’s doing it at scale. When you’re testing hundreds or thousands of subdomains, you need to extract every .js file URL before feeding them into analysis tools like LinkFinder or SecretFinder. You could write a bash one-liner with curl and grep, but it’s painfully slow. You could use a full web crawler, but that’s overkill and wastes time following HTML links you don’t care about. What you need is a tool that sits between subdomain enumeration and JS analysis: fast, focused, and composable. That’s the exact niche subjs fills.
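To make the bottleneck concrete, here is a sketch of that curl-and-grep approach (the extract_js_refs helper and the HTML sample are illustrative; in a real run you would loop curl -s over every host, one request at a time):

```shell
# Naive JS extraction: grep out src attributes that reference .js files.
# Sequential curl calls are what makes this painfully slow at scale.
extract_js_refs() {
  grep -oE 'src="[^"]*\.js[^"]*"' | cut -d'"' -f2
}

# In practice: while read -r url; do curl -s "$url" | extract_js_refs; done
# Demonstrated here against an inline HTML sample instead of the network:
cat <<'HTML' | extract_js_refs
<script src="/static/app.js?v=2"></script>
<script src="https://cdn.example.com/vendor.js"></script>
HTML
```

This prints the two script URLs (/static/app.js?v=2 and https://cdn.example.com/vendor.js), but only after a blocking fetch per page; parallelizing it yourself is exactly the work subjs absorbs.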
Technical Insight
subjs is architecturally simple by design. It’s a single-purpose command-line tool written in Go that accepts URLs, makes HTTP requests, identifies JavaScript file references, and outputs their URLs.
The tool appears to leverage Go’s concurrency capabilities for performance. When you pipe a list of URLs into subjs, you can control the number of concurrent workers via the -c flag. This makes it viable for large-scale reconnaissance:
# Fetch URLs with gau and extract JavaScript files
cat domains.txt | gau | subjs
The tool’s stdin/stdout interface makes it composable with other security tools. The README recommends pairing it with gau for URL enumeration and LinkFinder for endpoint analysis, and chaining works in typical Unix fashion:
# Combine with gau, deduplicate, and save the JS URLs for later analysis
cat hosts.txt | gau | subjs | sort -u > js-urls.txt
Looking at the documented flags reveals the design priorities. The -c flag controls concurrency, letting you balance speed against rate-limiting concerns. The -t flag sets the request timeout (15 seconds by default), preventing hangs on slow servers. The -ua flag customizes the User-Agent header, and the -i flag reads input from a file instead of stdin:
# Process URLs from a file with custom concurrency and timeout
subjs -i urls.txt -c 40 -t 20
subjs deliberately limits itself to extracting JavaScript file URLs: it doesn’t parse JavaScript content, extract endpoints, or perform deeper analysis. This narrow scope appears to be what keeps it fast and maintainable.
The tool integrates naturally into automated reconnaissance pipelines through its standard input/output interface, allowing it to be wrapped in any scripting language or combined with other tools without complex configuration.
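As a sketch of that kind of wrapping, the whole stage can live in a small shell function (the function name, flag values, and file layout here are illustrative choices, not part of subjs itself):

```shell
# extract_js: run the gau | subjs stage for a list of hosts and save the
# deduplicated JavaScript URLs. Assumes gau and subjs are on PATH.
extract_js() {
  local hosts_file="$1" out_file="$2"
  # gau expands each host into known URLs; subjs keeps only the .js references
  cat "$hosts_file" | gau | subjs -c 20 | sort -u > "$out_file"
  echo "wrote $(wc -l < "$out_file") JS URLs to $out_file"
}
```

Called as extract_js hosts.txt js-urls.txt, it leaves a clean, deduplicated input file for whatever analysis stage comes next.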
Gotcha
subjs makes deliberate tradeoffs that come from its focused design. The tool provides a minimal set of configuration options—concurrency control, timeout settings, and user-agent customization—but beyond these documented flags, you’re working with a straightforward input-output pipeline.
The tool’s simplicity means you need to handle certain concerns yourself. For example, when processing large numbers of URLs, you’ll want to manage the concurrency settings carefully. Set -c too high against a small server and you could overwhelm it. The tool gives you the controls but expects you to use them responsibly.
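One way to encode that responsibility is to bake conservative settings into a wrapper of your own (the function name and the specific values below are illustrative choices, not subjs defaults):

```shell
# Low worker count and a generous timeout keep request pressure modest
# when many of the input URLs point at the same small origin.
gentle_subjs() {
  subjs -i "$1" -c 5 -t 30
}
```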
Because subjs follows the Unix philosophy of doing one thing, you’ll need to combine it with other tools for a complete workflow. The README explicitly recommends pairing it with gau for URL discovery and LinkFinder for JavaScript analysis—subjs itself only handles the extraction step in between.
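A complete workflow along those lines might be sketched as follows (the analyze_js helper is illustrative; the -i and -o cli flags follow LinkFinder's documented command-line usage, and linkfinder.py is assumed to sit in the working directory):

```shell
# URL discovery (gau) -> JS extraction (subjs) -> endpoint analysis (LinkFinder)
analyze_js() {
  local hosts_file="$1"
  cat "$hosts_file" | gau | subjs | sort -u | while read -r js; do
    python3 linkfinder.py -i "$js" -o cli
  done
}
```

Each stage reads the previous one's stdout, so swapping out any component (say, a different URL source in place of gau) doesn't disturb the rest of the pipeline.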
Verdict
Use subjs if you’re building reconnaissance pipelines for bug bounty hunting or security research and need a fast, lightweight component to extract JavaScript URLs at scale. It’s effective when you already have a list of URLs (from gau or similar tools) and need to filter down to just the JavaScript files before passing them to analysis tools. The stdin/stdout interface makes it ideal for shell scripts and automation, and the documented flags (concurrency, timeout, user-agent) provide the essential controls for making HTTP requests at scale. Skip subjs if you need an all-in-one solution that combines URL discovery, JavaScript extraction, and content analysis in a single tool. Also skip it if you’re processing small numbers of URLs where simpler approaches would suffice. This is a focused tool for people building Unix-style pipelines who want each component to do one thing well.