
Finding the Gaps: How Missing CVE Templates Give Bug Bounty Hunters an Edge


Hook

Out of 155,631 analyzed CVEs, 47,578 lack corresponding Nuclei templates. That gap represents uncharted territory where bug bounty hunters can find vulnerabilities before they become automated commodities.

Context

Nuclei, ProjectDiscovery’s vulnerability scanning engine, has revolutionized how security researchers find vulnerabilities at scale. Its power lies in community-contributed templates that codify detection logic for known vulnerabilities. But there’s an inherent problem: the moment a CVE gets a public template, its value for bug bounty hunting plummets. Everyone running Nuclei scans will find the same low-hanging fruit simultaneously.

This creates a strategic imperative for competitive researchers: identify CVEs that don’t yet have templates, write custom detection logic, and scan targets before the vulnerability becomes commoditized. The challenge is discovering which CVEs lack template coverage. Manually cross-referencing CVE databases against the nuclei-templates repository is tedious and error-prone. That’s the gap edoardottt’s missing-cve-nuclei-templates fills—it’s automated competitive intelligence for security researchers who need to stay ahead of the curve.

Technical Insight

System architecture (auto-generated diagram): a weekly GitHub Actions trigger fetches Trickest CVE data and clones the nuclei-templates repository; a set difference analysis identifies missing CVEs, which keyword pattern matching then categorizes by vulnerability type and by year into data/type/ and data/year/ output files.

The architecture is deliberately simple: a shell script orchestrated through GitHub Actions that runs weekly. The tool appears to fetch CVE data from Trickest’s CVE repository (a curated aggregation of CVE information), pull the latest nuclei-templates repository state, and perform set difference operations to identify missing CVEs. But the clever part is the categorization layer.
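The set difference step can be sketched with standard Unix tools. The filenames and sample CVE IDs below are illustrative, not the repository's actual intermediates:

```shell
# Sketch of the gap analysis, assuming two plain-text lists of CVE IDs:
# all_cves.txt (from the Trickest aggregation) and covered_cves.txt
# (IDs that already have a nuclei template). Build sample inputs here.
printf 'CVE-2024-0001\nCVE-2024-0002\nCVE-2024-0003\n' > all_cves.txt
printf 'CVE-2024-0002\n' > covered_cves.txt

# comm requires sorted input; -23 keeps lines unique to the first file,
# i.e. CVEs with no corresponding template.
sort all_cves.txt -o all_cves.txt
sort covered_cves.txt -o covered_cves.txt
comm -23 all_cves.txt covered_cves.txt > missing_cves.txt

cat missing_cves.txt
```

Running the sketch leaves CVE-2024-0001 and CVE-2024-0003 in missing_cves.txt, since only CVE-2024-0002 has template coverage.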

The script searches CVE descriptions for vulnerability-specific keywords to classify them into tracked vulnerability types. The README documents the exact search terms: for XSS it looks for “reflected,” “xss,” “Cross-Site Scripting,” and “Cross Site Scripting.” For RCE, it searches for “rce,” “remote code execution,” “remote command execution,” “command injection,” and “code injection.” The complete list includes keywords for SQL injection, Local File Inclusion, SSRF, Prototype Pollution, SSTI, XXE, Request Smuggling, Open Redirect, and Path Traversal. Each CVE can appear in multiple category files if its description matches multiple vulnerability types. The results are organized into two directory structures: data/type/ for vulnerability classification (xss.txt, rce.txt, sqli.txt) and data/year/ for temporal analysis (2024.txt, 2025.txt).
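The classification step amounts to case-insensitive grepping of CVE descriptions. In this sketch the input format (tab-separated ID and description) and the sample entries are assumptions; the XSS search terms are the ones the README documents:

```shell
# Sketch of the keyword classification step; input format is illustrative.
printf 'CVE-2024-1111\tReflected XSS in the search parameter of ExampleApp\n'  > cve_descriptions.txt
printf 'CVE-2024-2222\tHeap overflow leading to remote code execution\n'      >> cve_descriptions.txt
printf 'CVE-2024-3333\tCross-Site Scripting in the admin panel comment field\n' >> cve_descriptions.txt

# Case-insensitive match against the documented XSS search terms;
# a CVE matching several categories would land in several files.
grep -iE 'reflected|xss|cross-site scripting|cross site scripting' \
    cve_descriptions.txt | cut -f1 > xss.txt

cat xss.txt
```

Here xss.txt ends up containing CVE-2024-1111 and CVE-2024-3333; the RCE entry falls through to whichever other category files its keywords match.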

The conceptual workflow involves cloning the repository, examining the categorized CVE lists, researching specific CVEs through external sources like NVD or vendor advisories, determining exploitability, and then crafting custom Nuclei templates. The repository provides the gap analysis—which CVEs lack templates—but researchers must handle everything downstream: vulnerability research, exploitability assessment, and template development.
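That downstream triage might start with something as simple as filtering a category file by year. The directory layout below mirrors the repository's data/type/ structure, but the sample entries are fabricated for illustration:

```shell
# Sketch of the triage step: after cloning the repo, filter a category
# file for recent identifiers worth researching. Sample data only.
mkdir -p data/type
printf 'CVE-2019-9999\nCVE-2024-1234\nCVE-2025-0042\n' > data/type/xss.txt

# Keep only 2024/2025 CVEs as research candidates.
grep -E '^CVE-202[45]-' data/type/xss.txt > candidates.txt

cat candidates.txt
```

Each surviving candidate then gets manual research through NVD or vendor advisories before any template-writing effort.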

The data quality hinges entirely on keyword matching accuracy. The README explicitly acknowledges this limitation: “Why there can be errors in categorizing CVEs? Because when grepping for these words there can be false positives, meaning that an XXE vulnerability can be categorized as RCE because e.g. it says ‘in certain situations can be escalated to rce’.” A CVE description mentioning multiple vulnerability types would appear in multiple category files, creating both false positives (where secondary vulnerability types are categorized as primary) and potential false negatives (where CVEs use non-standard terminology).

The statistics reveal significant patterns: 22,572 missing XSS templates represent the largest gap, followed by 12,065 SQL injection CVEs. This reflects both the prevalence of these vulnerability types and the reality that many XSS and SQLi CVEs affect niche software that hasn’t attracted template authors. Meanwhile, only 80 Server-Side Template Injection CVEs lack templates—SSTI is rarer and more likely to receive immediate attention when discovered.

The temporal breakdown shows acceleration in recent years: 9,533 missing CVEs from 2024 and 6,897 from 2025 (the latter’s high count may reflect CVE database anomalies or reserved identifiers). Interestingly, there are even 212 entries for 2026, most plausibly identifiers reserved ahead of publication. This growing gap suggests CVE issuance may be outpacing template creation, making tools like this increasingly valuable for maintaining situational awareness.

Gotcha

The tool’s fully automated nature is both its strength and weakness. Keyword-based categorization produces unavoidable false positives, as the README acknowledges. A CVE describing a denial-of-service vulnerability that mentions “similar to SQL injection techniques” might incorrectly land in sqli.txt. More problematically, there’s no severity filtering—a CVE requiring physical access and local authentication gets equal billing with remotely exploitable critical vulnerabilities. You’ll need to manually vet each CVE before investing time in template development.

The tracked vulnerability types represent common web application weaknesses, but this leaves blind spots. CVEs related to authentication bypasses, privilege escalation, insecure deserialization, or cryptographic failures won’t appear unless their descriptions coincidentally mention the documented keywords. The README notes: “the tracked vuln types are just 10 (the most famous ones), but a lot of other types are reported as well (and they will be supported).” If you’re hunting for specific vulnerability classes outside these categories, this tool provides incomplete coverage.
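Until more types are supported, the same grep approach can be extended by hand against raw CVE descriptions. The deserialization keywords below are my own suggestions, not terms from the README, and the tab-separated input format is illustrative:

```shell
# Sketch of covering an untracked class (insecure deserialization)
# yourself. Keywords and input format are assumptions, not the tool's.
printf 'CVE-2024-4444\tInsecure deserialization of untrusted data in ExampleLib\n'  > cve_descriptions.txt
printf 'CVE-2024-5555\tReflected XSS in login page\n'                              >> cve_descriptions.txt

# Stem "deserializ" catches deserialization/deserialized; the other
# terms target common PHP/Python/Java deserialization sinks.
grep -iE 'deserializ|unserialize|pickle|ObjectInputStream' \
    cve_descriptions.txt | cut -f1 > deserialization.txt

cat deserialization.txt
```

The same false-positive caveat applies: a description merely mentioning deserialization in passing would still match.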

Additionally, the tool only identifies gaps—it doesn’t provide exploit details, affected version information, or proof-of-concept code. You’re getting a reading list, not a ready-to-scan arsenal. Each CVE still requires research through NVD, vendor advisories, or security blogs before you can write an effective template. The README also notes a quirk: “Why if I subtract the ‘CVEs missing’ from the ‘CVEs analyzed’ I don’t get the exact official nuclei templates count? Because as said before the tracked vuln types are just 10 (the most famous ones), but a lot of other types are reported as well.”

Verdict

Use this tool if you’re a bug bounty hunter or penetration tester who writes custom Nuclei templates for competitive advantage. It’s invaluable for discovering template-worthy CVEs before they’re commoditized, especially when focusing on recent vulnerabilities (2024-2025 files) or specific weakness types. It’s also useful for security teams conducting gap analysis on their internal scanning coverage—identifying which CVEs affecting your technology stack lack automated detection.

Skip this if you need ready-to-use scanning capabilities (you want nuclei-templates itself, not gap analysis), require high-precision categorization without manual verification, or are looking for CVE coverage beyond the tracked vulnerability types. This is reconnaissance infrastructure for template authors, not a turnkey scanning solution. The value proposition is simple: invest time weekly reviewing new entries to maintain a competitive edge, or ignore it and compete with everyone else using only public templates. As the README emphasizes, this is “mainly built for bug bounty, but useful for penetration tests and vulnerability assessments too.”
