Bass: Why Extra DNS Resolvers Matter for Subdomain Enumeration at Scale
Hook
Your subdomain enumeration is probably querying the same 500 public resolvers as everyone else—leaving thousands of provider-specific authoritative nameservers completely untapped.
Context
Subdomain enumeration is the cornerstone of reconnaissance in security research, bug bounty hunting, and attack surface mapping. Tools like massdns and shuffledns excel at brute-forcing DNS queries against massive wordlists, but they’re only as good as the resolver lists you feed them. Most security practitioners use the same recycled public resolver lists—Google’s 8.8.8.8, Cloudflare’s 1.1.1.1, and maybe a few hundred from public-dns.info. This creates two problems: first, you’re hammering the same resolvers as thousands of other researchers, making rate limiting inevitable on large jobs; second, you’re missing a critical insight about how DNS providers actually work.
DNS providers like Dynect, UltraDNS, and NSOne don’t just run two or three nameservers. They operate entire ASN ranges filled with hundreds or thousands of nameservers that share zone files. These aren’t just backup servers—they’re fully functional authoritative nameservers for any domain hosted on that provider. If example.com uses Dynect’s DNS, you can query any of Dynect’s nameservers across their infrastructure, not just the two listed in the domain’s NS records. Bass exploits this architectural reality by detecting which DNS provider a target uses, then adding hundreds or thousands of that provider’s nameservers to your resolver pool. Combined with validated public resolvers, you can push your resolver count from 500 to over 6,000, dramatically improving both throughput and stealth.
Technical Insight
Bass’s design is deceptively simple but architecturally informed. The tool performs three core operations: provider detection, resolver aggregation, and deduplication. When you run bass against a target domain, it first queries the domain’s NS records to identify which DNS provider hosts it. This happens through standard DNS lookups that examine nameserver naming patterns—for example, ns1.p23.dynect.net immediately identifies Dynect, while dns1.p09.nsone.net signals NSOne.
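That pattern matching can be sketched as simple substring checks against the returned NS hostnames. The table and function below are illustrative stand-ins, not Bass’s actual code, and the pattern strings are assumptions based on common naming conventions for each provider:

```python
# Illustrative provider detection from NS hostnames. PROVIDER_PATTERNS
# and detect_provider() are hypothetical stand-ins for Bass's hardcoded
# pattern matching, not its real implementation.
PROVIDER_PATTERNS = {
    'dynect.net': 'Dynect',
    'ultradns.': 'UltraDNS',
    'nsone.net': 'NSOne',
    'cloudflare.com': 'Cloudflare',
    'awsdns': 'AWS Route53',
    'azure-dns.': 'Azure DNS',
}

def detect_provider(ns_hostnames):
    """Return the first provider whose pattern appears in an NS hostname."""
    for ns in ns_hostnames:
        for pattern, provider in PROVIDER_PATTERNS.items():
            if pattern in ns.lower():
                return provider
    return None

# In Bass the hostnames come from a dnspython NS lookup, roughly:
#   import dns.resolver
#   ns_hostnames = [str(r.target) for r in dns.resolver.resolve('example.com', 'NS')]
print(detect_provider(['ns1.p23.dynect.net.']))  # -> Dynect
```

Substring matching keeps the logic trivial, at the cost of a hardcoded table that must be updated whenever a provider changes its nameserver naming scheme.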
The real value comes from what happens next. Rather than performing runtime ASN enumeration (which is slow and can trigger network defenses), Bass maintains pre-collected, validated resolver lists for each major DNS provider in the resolvers/ directory. These lists were assembled by scanning provider ASN ranges and validating that each IP responds to DNS queries. When Bass identifies your target uses Dynect, it simply pulls resolvers/dynect_resolvers.txt, which contains hundreds of verified Dynect nameservers. Here’s what a typical execution looks like:
# Basic usage - detect provider and merge resolvers
python3 bass.py -d example.com -o resolvers.txt
# Output shows provider detection and resolver aggregation
[*] Querying nameservers for example.com
[+] Detected provider: Dynect
[+] Loading Dynect resolvers: 847 IPs
[+] Loading public resolvers: 3,521 IPs
[+] Total unique resolvers: 4,368
[+] Written to resolvers.txt
The resolver files themselves are straightforward text lists, one IP per line. But the curation matters. Each provider file represents hours of scanning and validation. For example, resolvers/ultradns_resolvers.txt contains IPs from UltraDNS’s ASN ranges (AS19905, AS5620) that respond correctly to DNS queries. The tool doesn’t just grab every IP in the ASN—it includes only validated resolvers that returned proper responses during collection.
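That collection-time validation can be sketched with nothing but the standard library: build a minimal DNS query packet and keep only IPs that answer it. The function names here are illustrative; Bass ships the results of this kind of scan, not the scanner itself:

```python
import socket
import struct

def build_query(domain, txid=0x1234):
    """Build a minimal DNS query packet (RFC 1035 wire format)."""
    # Header: id, flags (RD=1), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

def responds(ip, domain="example.com", timeout=2.0):
    """Return True if the IP answers a DNS query within the timeout."""
    query = build_query(domain)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        try:
            s.sendto(query, (ip, 53))
            reply, _ = s.recvfrom(512)
            # A valid reply echoes the transaction ID and has a full header
            return len(reply) >= 12 and reply[:2] == query[:2]
        except (socket.timeout, OSError):
            return False
```

A real collection pipeline would also check that the answer is correct (not a hijacked or wildcard response), which is exactly why tools like dnsvalidator exist.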
The deduplication logic ensures no resolver appears twice when merging provider-specific and public lists. Bass reads both files into Python sets, performs a union operation, and writes the result. This matters because some public resolver lists overlap with provider nameservers, and sending duplicate queries wastes bandwidth:
# Simplified deduplication logic (conceptual)
with open(f'resolvers/{provider}_resolvers.txt') as f:
    provider_resolvers = {line.strip() for line in f if line.strip()}

with open('resolvers/public.txt') as f:
    public_resolvers = {line.strip() for line in f if line.strip()}

# Set union removes any resolver that appears in both lists
combined = provider_resolvers | public_resolvers

with open(output_file, 'w') as f:
    f.write('\n'.join(sorted(combined)))
The integration with massdns is where this pays dividends. Massdns distributes queries round-robin across every resolver in your list, so with 4,000+ resolvers instead of 500 you’re spreading the same query load across 8x more infrastructure. If you’re enumerating 10 million potential subdomains, that’s the difference between 20,000 queries per resolver and 2,500. Lower per-resolver traffic means less chance of hitting rate limits, and using provider-specific nameservers means you’re querying authoritative sources that must answer for their zones—they can’t refuse or redirect queries for domains they host.
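As a sketch of that pipeline (the wordlist and output paths are placeholders, and you should check massdns --help for your build’s exact flags):

```shell
# 1. Detect the provider and build the merged resolver list
python3 bass.py -d example.com -o resolvers.txt

# 2. Expand a wordlist into candidate subdomains
sed 's/$/.example.com/' wordlist.txt > candidates.txt

# 3. Brute-force with massdns across the enlarged resolver pool
#    -t A: query A records, -o S: simple output format, -w: results file
massdns -r resolvers.txt -t A -o S -w results.txt candidates.txt
```

The only change from a standard massdns workflow is step 1: -r points at Bass’s merged list instead of a generic public resolver file.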
The provider detection itself uses dnspython under the hood for NS record queries, with fallback logic if the primary nameserver is unresponsive. Bass checks NS records against known patterns for each supported provider. This pattern matching is hardcoded but comprehensive, covering Dynect, UltraDNS, NSOne, Cloudflare, AWS Route53, Azure DNS, and others. The tool’s simplicity is intentional—it does one thing (resolver aggregation) exceptionally well rather than attempting full-featured DNS enumeration.
Gotcha
Bass’s Achilles heel is its dependency on pre-collected, static resolver lists. These files were curated at a specific point in time, and DNS infrastructure changes. Providers add new nameservers, decommission old ones, and reassign IPs. A resolver that worked perfectly six months ago might now be dead, unresponsive, or worse—repurposed for something else entirely. Bass provides no runtime validation mechanism. When you run it, you’re trusting that the resolvers in those text files still function correctly. For massdns users, this means some percentage of your resolver pool will timeout or fail, reducing your effective parallelism below the advertised count. The README acknowledges this, recommending you validate resolvers with dnsvalidator before critical jobs, but that defeats some of the convenience.
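That recommended validation step looks roughly like the following; the flag names follow dnsvalidator’s documented usage, but verify them against your installed version:

```shell
# Merge resolvers with Bass, then re-validate before a large job
python3 bass.py -d example.com -o candidates.txt

# dnsvalidator checks each candidate actually resolves correctly
# (-tL: input list, -threads: concurrency, -o: validated output)
dnsvalidator -tL candidates.txt -threads 25 -o resolvers-validated.txt
```

Validation takes extra minutes up front but prevents dead resolvers from silently eating a slice of your massdns parallelism.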
The second limitation is coverage. Bass only helps when your target uses one of the dozen or so supported DNS providers. If you’re researching a government agency running in-house DNS infrastructure, or a company using a regional provider not included in Bass’s resolver collection, you get zero provider-specific resolvers—just the public.txt list you could’ve downloaded anywhere. The tool also doesn’t handle complex scenarios where a domain uses multiple DNS providers (primary/secondary configurations) or CDN providers with anycast DNS. You’ll get resolvers for whichever provider Bass detects first, but you might miss additional infrastructure. For bug bounty hunters targeting Fortune 500 companies, this usually isn’t an issue since they overwhelmingly use major providers. For offensive security work against diverse targets, you’ll frequently find Bass adds no value beyond what you already have.
Verdict
Use if: You’re performing large-scale subdomain enumeration (1M+ queries) against targets hosted on major DNS providers (Dynect, UltraDNS, NSOne, Cloudflare, Route53) where resolver diversity directly impacts your success rate and stealth. This is particularly valuable for bug bounty programs with massive attack surfaces, red team engagements requiring low-and-slow reconnaissance, or security research mapping entire TLD subsets. Bass shines when you need to stay under rate-limit thresholds while maximizing throughput with massdns or similar tools.
Skip if: Your targets use in-house or unsupported DNS infrastructure, you’re doing small-scale enumeration where 500 public resolvers suffice, or you need guaranteed resolver validity (use dnsvalidator for real-time verification instead). Also skip if you’re already maintaining your own curated resolver lists through automated ASN enumeration pipelines—Bass’s static lists offer nothing over fresh scans. For most casual subdomain discovery work, the standard public resolver lists bundled with shuffledns or subfinder are adequate without the added complexity.