vulnx: Query-Based Vulnerability Intelligence That Beats Scraping NVD

Hook

While most security teams still write Python scripts to scrape NVD’s sluggish web interface, vulnx delivers enriched CVE data with KEV status, EPSS scores, and Nuclei templates in milliseconds—all through an advanced search syntax with boolean logic and faceted filtering.

Context

The National Vulnerability Database has been the authoritative CVE source for two decades, but using it programmatically is painful. The official NVD API has rate limits and offers minimal filtering capabilities, returning raw JSON that requires extensive post-processing to extract actionable intelligence. Security engineers end up building brittle scrapers, maintaining local databases, or paying for commercial feeds just to answer questions like “show me critical Apache vulnerabilities from 2024 that have public exploits.”

ProjectDiscovery built vulnx (formerly cvemap) to solve this friction. Rather than giving you another NVD wrapper, they ingest CVE data into their cloud infrastructure and enrich it with signals that matter for prioritization: Known Exploited Vulnerabilities (KEV) status from CISA, Exploit Prediction Scoring System (EPSS) probabilities, availability of Nuclei scanning templates, references to HackerOne reports, and vendor metadata. The CLI exposes this enriched dataset through a query language that supports boolean logic, range operators, and faceted search. For teams already using Nuclei for vulnerability scanning, this creates a tightly integrated workflow from discovery to exploitation validation.

Technical Insight

System architecture (auto-generated diagram): CLI input is parsed into a structured query, a query builder applies boolean logic and filters, an authentication manager attaches the stored token, and an API client issues HTTPS requests to the ProjectDiscovery Cloud API; a result parser streams the returned vulnerability data into an output formatter that renders JSON, interactive, or file output.

Architecturally, vulnx is a thin query client that transforms user input into API requests against ProjectDiscovery’s backend. Written in Go, it handles authentication, request construction, result streaming, and output formatting while leaving data aggregation to the server. This design means you get fast searches without managing local databases, but it also means you’re fully dependent on their API.

The query syntax is where vulnx shines. Instead of constructing JSON payloads or URL parameters, you write search expressions that combine field filters with boolean operators. Want critical Apache vulnerabilities with remote exploitation vectors from the last 90 days? That’s a single query:

vulnx search "apache && severity:critical && is_remote:true && age_in_days:<90"

The tool supports an extensive set of searchable fields spanning CVE metadata, CVSS metrics, temporal data, and enrichment flags. Range queries use intuitive syntax: cvss_score:>8.0 for scores above 8, cve_created_at:>=2024-01-01 for date filtering. Boolean operators (&&, ||) combine conditions, and the field namespace uses dot notation for nested attributes like affected_products.vendor:microsoft.
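Putting those operators together, a larger query can be assembled from parts in a script. The field names below are the ones documented above; the final vulnx invocation is left commented out so the snippet runs without network access:

```shell
# Compose a vulnx query from the fields described above; the assembled
# string is plain text, so this sketch needs no API access to verify
VENDOR='affected_products.vendor:microsoft'
SCORE='cvss_score:>8.0'
SINCE='cve_created_at:>=2024-01-01'
QUERY="$VENDOR && $SCORE && $SINCE"
echo "$QUERY"
# vulnx search "$QUERY"   # uncomment to run the actual search
```

Building queries this way keeps individual filters reusable across scripts instead of baking one long string into each command.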

Faceted search enables aggregation analysis without writing custom code. If you need to understand the severity distribution across a result set, use term facets:

vulnx search "apache" --term-facets severity=10

This returns result counts bucketed by severity level—giving you statistical insight into your exposure surface. Range facets work similarly for numeric fields:

vulnx search "remote" --range-facets "numeric:cvss_score:high:8:10"

The filters command is underutilized but essential for power users. It returns machine-readable metadata about every searchable field: data type, whether it supports sorting or faceting, available enum values, and example query syntax. Run vulnx filters --json and you have a complete API reference for building programmatic queries.

Output handling is pragmatic. Default mode renders human-readable tables, but --json emits JSON suitable for piping to jq or ingestion tools. The --fields flag lets you project specific columns, reducing payload size when you only need CVE IDs and scores. The --detailed flag on search commands returns the same rich output as the id command, including full descriptions, references, and metadata—useful when you want comprehensive data without making per-CVE API calls.
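As a sketch of what downstream processing can look like, assume --fields cve_id,cvss_score --json yields flat objects like the hypothetical lines below (the real payload layout may differ, so check your own output first); with objects that flat, the IDs can be pulled out even without jq:

```shell
# Hypothetical sample of projected --json output, one flat object per line;
# the actual vulnx payload shape is an assumption here
cat > results.json <<'EOF'
{"cve_id":"CVE-2024-0001","cvss_score":9.8}
{"cve_id":"CVE-2024-0002","cvss_score":8.1}
EOF

# Extract just the CVE IDs; sed suffices because --fields keeps objects flat
sed -n 's/.*"cve_id":"\([^"]*\)".*/\1/p' results.json
```

For anything nested, jq is the safer tool; the point is that projected output is simple enough for ordinary text processing.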

Authentication is optional but recommended. Without an API key, you hit rate limits that make bulk analysis tedious. Running vulnx auth launches a browser flow to ProjectDiscovery Cloud where you get a free tier key. The tool stores credentials locally and injects them in request headers.

The integration with Nuclei templates is the killer feature for offensive security workflows. When vulnx shows is_template:true, you know ProjectDiscovery has a ready-to-run detection template. You can immediately pivot from vulnerability research to exploitation testing:

vulnx search "is_template:true && severity:critical" --fields cve_id,cvss_score --json | \
  jq -r '.cve_id' | \
  xargs -I {} nuclei -t ~/nuclei-templates -tags {}

This pipeline finds critical CVEs with Nuclei templates, extracts their IDs, and feeds them to Nuclei for scanning—closing the loop from intelligence to validation.

Gotcha

vulnx has no offline mode. Every query hits ProjectDiscovery’s API, which means no internet connection equals no functionality. If their infrastructure experiences downtime or you’re working in air-gapped environments, vulnx becomes unusable. The README explicitly warns that the older cvemap API retires August 1, 2025—a reminder that cloud dependencies come with deprecation risk. If your vulnerability management process is critical infrastructure, this single point of failure is concerning.

Data freshness and coverage are entirely at ProjectDiscovery’s discretion. You can’t supplement the dataset with internal vulnerability sources or proprietary feeds. If their ingestion pipeline lags NVD updates, you’re blind to new CVEs during that window. The enrichment data (KEV status, EPSS scores) is valuable, but you’re trusting their data quality and update cadence without visibility into the pipeline.

Rate limiting on the free tier isn’t documented clearly in the README, so you’ll discover constraints through trial and error. For organizations analyzing thousands of dependencies in CI/CD pipelines, you may need to evaluate limits carefully or risk throttling. The tool also lacks bulk operations—no “give me details for these 500 CVEs” endpoint—forcing you to loop through vulnx id calls or use search filters creatively.
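Until a bulk endpoint exists, the practical workaround is a throttled loop over vulnx id. This sketch echoes the commands it would run rather than executing them (drop the echo to query for real), and the one-second pause is a guess at a safe interval, since the free-tier limits are undocumented:

```shell
# List of CVE IDs to look up (hypothetical examples)
printf 'CVE-2024-0001\nCVE-2024-0002\n' > cves.txt

# Throttled per-CVE lookup; 'echo' makes this a dry run that prints each
# command instead of executing it -- remove it to query for real
while read -r cve; do
  echo vulnx id "$cve" --json
  sleep 1   # crude pacing; free-tier limits are undocumented, so go slow
done < cves.txt
```

For hundreds of CVEs this is slow by design; a search filter that captures the whole set in one query is always preferable when one exists.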

Verdict

Use vulnx if you’re doing security research, threat intelligence, or vulnerability prioritization where enriched CVE data beats raw NVD feeds. The advanced filtering and faceted search save hours compared to scripting against the NVD API, and the Nuclei integration is unmatched if you’re already in that ecosystem. It’s excellent for answering prioritization questions like “which recent critical vulnerabilities are actually being exploited in the wild?” (combining KEV and EPSS filters). Skip it if you need offline access, can’t tolerate third-party API dependencies for critical processes, or require vulnerability data sources beyond what ProjectDiscovery aggregates. For organizations with mature vulnerability management platforms (DefectDojo, Faraday), vulnx works better as an analyst’s research tool than as infrastructure. If you’re just looking up occasional CVE details, the NVD website is simpler than adding another CLI tool to your stack.
