Second Order: Finding Subdomain Takeovers Hidden in Plain Sight
Hook
Most subdomain takeover scanners check DNS records directly. But what if the real vulnerability isn’t in your DNS—it’s in a forgotten CDN link buried three levels deep in your checkout flow?
Context
Traditional subdomain takeover detection tools work by enumerating DNS records and testing whether they point to claimable resources on platforms like AWS S3, GitHub Pages, or Heroku. This approach catches “first-order” takeovers—subdomains you directly control that are misconfigured. But there’s a more insidious class of vulnerability that DNS-based tools completely miss: second-order subdomain takeover.
Second-order takeover happens when your application references external resources—CDN scripts, third-party widgets, font files, analytics trackers—hosted on domains you don’t control. When those third-party services shut down or change ownership, the domains expire. An attacker can claim them and suddenly your production application is loading malicious JavaScript from what looks like a legitimate source. Because the vulnerable reference lives in your HTML, CSS, or JavaScript rather than your DNS configuration, traditional subdomain enumeration tools never see it. You need a crawler that actually visits your pages and analyzes what they load.
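To make the idea concrete, here is a minimal sketch of the kind of reference a page crawler catches but DNS enumeration never sees. The page markup and the dead domain are hypothetical, and this is the underlying idea rather than Second Order's actual implementation:

```python
from html.parser import HTMLParser

# Hypothetical checkout page: the site's own DNS zone is clean, but the
# page still loads a script from a long-dead third-party domain.
PAGE = """
<html><body>
  <script src="https://cdn.defunct-widgets.example/loader.js"></script>
  <img src="/static/logo.png">
</body></html>
"""

class RefExtractor(HTMLParser):
    """Collect external src/href references, the kind Second Order logs."""
    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # Only absolute URLs point at third-party infrastructure.
            if name in ("src", "href") and value and value.startswith("http"):
                self.refs.append((tag, value))

extractor = RefExtractor()
extractor.feed(PAGE)
print(extractor.refs)  # → [('script', 'https://cdn.defunct-widgets.example/loader.js')]
```

The relative `img` reference is skipped; only the absolute, third-party URL survives, and that is exactly the class of reference worth checking for expired domains.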
Technical Insight
Second Order is built in Go and operates as a configurable web crawler with a query-based extraction engine. The tool fetches HTML, parses it to extract data matching your queries, and can optionally flag extracted URLs that fail to return a 200 status code.
The core abstraction is the configuration file, which defines three types of extraction queries. Here’s a practical example for detecting subdomain takeover candidates:
{
  "LogQueries": {
    "link[href]": "href",
    "img[src]": "src"
  },
  "LogNon200Queries": {
    "script[src]": "src",
    "link[href]": "href",
    "img[src]": "src",
    "iframe[src]": "src"
  },
  "LogInline": [
    "script"
  ]
}
The LogQueries section extracts every matching attribute unconditionally—useful for building comprehensive maps of external dependencies. The LogNon200Queries section is where the takeover detection happens: it extracts the same attributes but only logs them if the URL returns a non-200 status code. A broken external script reference returning 404 is your smoking gun—that domain might be available for registration.
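The filtering behind LogNon200Queries can be sketched in a few lines. This is not the tool's source, just the idea; the status codes are stubbed in a dictionary so the example runs offline, whereas the real tool performs live HTTP requests:

```python
# Stubbed status codes for two hypothetical extracted URLs; the real
# crawler would issue HTTP requests to discover these.
STATUSES = {
    "https://cdn.example.com/app.js": 200,
    "https://cdn.defunct-widgets.example/loader.js": 404,  # dead host
}

def log_non_200(urls):
    """Keep only the URLs that fail to return 200, i.e. takeover candidates."""
    return [u for u in urls if STATUSES.get(u, 0) != 200]

candidates = log_non_200(list(STATUSES))
print(candidates)  # only the 404 reference survives the filter
```

Everything that answers 200 is dropped, leaving a short candidate list for manual review.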
The output structure maintains provenance, showing exactly which page contained which vulnerable reference:
{
  "https://example.com/checkout": {
    "script[src]": [
      "https://cdn.old-abandoned-service.com/analytics.js"
    ]
  }
}
This context is critical for triage. A broken tracking pixel on your 404 page is low severity. A broken JavaScript library loaded on your payment page is a potential code execution vector.
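If you are post-processing results yourself, the same page-first, query-second structure is easy to rebuild. A sketch, with a hypothetical finding (this mirrors the output shape, not the tool's internals):

```python
import json
from collections import defaultdict

# Findings keyed first by the page they were found on, then by the
# query that matched them, so every candidate keeps its provenance.
findings = defaultdict(lambda: defaultdict(list))

def record(page, query, url):
    findings[page][query].append(url)

# Hypothetical finding: a dead analytics script on the checkout page.
record("https://example.com/checkout", "script[src]",
       "https://cdn.old-abandoned-service.com/analytics.js")

print(json.dumps(findings, indent=2))
```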
The tool’s flexibility extends beyond its primary use case. The same query engine that finds subdomain takeover candidates can extract form input names for parameter fuzzing wordlists, collect all inline JavaScript for static analysis, or map CDN usage across your application. One config file might look like:
{
  "LogQueries": {
    "input[name]": "name",
    "input[id]": "id",
    "textarea[name]": "name"
  },
  "LogInline": []
}
Run this against a target and you’ll build a custom parameter wordlist tailored to that specific application’s naming conventions—far more effective than generic lists.
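The post-processing step from extracted attributes to a deduplicated wordlist looks roughly like this. The form markup is hypothetical; in practice the input comes from the crawl output:

```python
from html.parser import HTMLParser

# Hypothetical form markup standing in for a crawled page.
PAGE = """
<form>
  <input name="billing_zip"><input name="promo_code" id="promo_code">
  <textarea name="gift_message"></textarea>
</form>
"""

class ParamCollector(HTMLParser):
    """Collect input/textarea name and id attributes for a fuzzing wordlist."""
    def __init__(self):
        super().__init__()
        self.params = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "textarea"):
            for name, value in attrs:
                if name in ("name", "id") and value:
                    self.params.add(value)

collector = ParamCollector()
collector.feed(PAGE)
wordlist = sorted(collector.params)  # set dedups name/id collisions
print(wordlist)  # → ['billing_zip', 'gift_message', 'promo_code']
```

Note that `promo_code` appears twice in the markup (as both name and id) but only once in the wordlist.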
The crawler respects depth limits (default is 1, configurable via -depth) and uses concurrent threads (default 10, adjustable via -threads). For authenticated scanning, you can inject custom headers:
second-order -target https://app.example.com \
-config takeover.json \
-depth 3 \
-threads 20 \
-header "Authorization: Bearer <token>" \
-header "X-Custom-Header: value" \
-output ./results
Judging by its tag-attribute query system, the tool appears to rely on static HTML parsing, which keeps it fast and resource-efficient. For most applications, static parsing catches the majority of external references anyway, since script, link, and img tags are typically rendered server-side. The tradeoff is potentially missing dynamically loaded resources, but that's often acceptable when you're doing broad reconnaissance rather than aiming for comprehensive coverage.
Gotcha
Second Order’s approach to HTML parsing means modern single-page applications that render everything client-side may appear nearly empty to the crawler. If your target heavily uses React, Vue, or Angular with client-side routing, you’ll miss most references unless they’re in the initial server-rendered HTML.
The non-200 status checking also has nuances. The tool only flags URLs that return something other than 200—but a clever attacker-controlled domain could return 200 to avoid detection while still serving malicious content. The real vulnerability isn’t “does this return 404” but “can an attacker claim this domain.” Second Order gives you candidates; you still need to manually verify whether those domains are actually claimable on their respective platforms. A 404 from an active CDN isn’t vulnerable. A 404 from an expired domain that’s available for registration is.
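A useful first triage step is to collapse the candidate URLs down to unique hostnames before checking claimability by hand (whois lookups, platform-specific claim attempts). A sketch with hypothetical candidate URLs:

```python
from urllib.parse import urlsplit

# Candidates as Second Order might report them (hypothetical URLs).
candidates = [
    "https://cdn.defunct-widgets.example/loader.js",
    "https://cdn.defunct-widgets.example/tracker.js",
    "https://assets.gone-startup.example/fonts.css",
]

# Reduce to unique hostnames: claimability is a per-domain question,
# not a per-URL one. Second Order stops at candidates; this is on you.
hosts = sorted({urlsplit(u).hostname for u in candidates})
for h in hosts:
    print(h)
```

Two dead URLs on the same expired domain are one finding, not two, so deduplicating by hostname keeps the manual verification queue short.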
Finally, the default crawl depth of 1 keeps scans fast, but critical vulnerabilities might hide on pages that are two or three clicks deep into an application flow. Increasing depth helps coverage but can dramatically extend runtime on large applications with thousands of pages.
Verdict
Use Second Order if you’re performing security assessments on web applications and need to find second-order subdomain takeover vulnerabilities, especially on traditional server-rendered apps or sites with significant static HTML. The configurable extraction queries make it valuable for reconnaissance beyond just takeover detection—parameter harvesting, JavaScript collection, and CDN mapping are all practical applications. It’s particularly effective when you need to correlate findings with their source pages for accurate severity assessment. Skip it if your target is a modern SPA that renders everything client-side—you need a browser-based crawler instead. Also skip it if you want automated verification of takeover viability rather than just candidate identification, or if you’re only interested in first-order DNS-based takeovers. The tool fills a specific niche: finding vulnerabilities that live in application code rather than infrastructure configuration.