DefaultCreds-cheat-sheet: Mining 3,711 Vendor Passwords Security Teams Forgot to Change
Hook
Out of 3,711 default credential entries analyzed, 814 use blank usernames and 479 use blank passwords. If you’re scanning infrastructure or hardening assets, you’re likely searching for the same credentials repeatedly across scattered databases.
Context
Default credentials represent a persistent security failure across enterprise infrastructure. Routers ship with admin/admin, databases deploy with oracle/oracle, and web servers default to tomcat/tomcat. For penetration testers, discovering these credentials means manually searching vendor documentation, GitHub repositories like changeme and routersploit, and security wordlists scattered across SecLists. For blue teams conducting asset audits, it means correlating discovered systems against multiple credential databases to identify hardening gaps.
DefaultCreds-cheat-sheet consolidates this fragmented landscape into a single CSV database with a Python CLI wrapper. Created by Iheb Khemissi, it aggregates 3,711 credential pairs from authoritative sources—changeme’s scanner database, routersploit’s exploit modules, SecLists password collections, betterdefaultpasslist, and ICS-specific credentials. The result is a unified dataset covering 1,398 unique products that serves both offensive reconnaissance and defensive security audits. Unlike active scanners that probe systems, this is a passive lookup tool designed to answer one question: what credentials should I test for this product?
Technical Insight
The architecture is deliberately minimal—a flat CSV file (DefaultCreds-Cheat-Sheet.csv) paired with a Python CLI script that performs search, update, and export operations. The CSV schema contains columns for product/vendor, username, and password. This simplicity enables both machine parsing and human readability, though it sacrifices relational structure for portability.
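Because the dataset is a flat CSV, consuming it directly takes only the standard library. A minimal sketch of parsing it into rows (the sample data and exact column names here are illustrative; check the header row of the real DefaultCreds-Cheat-Sheet.csv before relying on them):

```python
import csv
import io

# Hypothetical sample mirroring the flat three-column schema described above;
# the real file's header names may differ.
SAMPLE_CSV = """product,username,password
apache tomcat (web),tomcat,tomcat
apache tomcat (web),admin,admin
oracle (db),oracle,oracle
"""

def load_creds(text):
    """Parse the flat CSV into a list of dicts, one per credential pair."""
    return list(csv.DictReader(io.StringIO(text)))

rows = load_creds(SAMPLE_CSV)
```

This is exactly the portability win the flat-file design buys: any script, SIEM enrichment job, or audit pipeline can ingest the canonical dataset without the CLI.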
The CLI tool, installed via pip3 install defaultcreds-cheat-sheet, exposes three primary operations: search, export, and update. Search queries filter the CSV by product name using case-insensitive substring matching:
$ creds search tomcat
+---------------------+----------+----------+
| Product             | username | password |
+---------------------+----------+----------+
| apache tomcat (web) | tomcat   | tomcat   |
| apache tomcat (web) | admin    | admin    |
+---------------------+----------+----------+
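The matching behavior is simple enough to approximate in a few lines. This is a sketch of the technique (case-insensitive substring match on the product field), not the tool’s actual implementation:

```python
def search(rows, query):
    """Return rows whose product field contains the query, ignoring case."""
    q = query.lower()
    return [r for r in rows if q in r["product"].lower()]

# Illustrative in-memory catalog standing in for the parsed CSV.
catalog = [
    {"product": "apache tomcat (web)", "username": "tomcat", "password": "tomcat"},
    {"product": "apache tomcat (web)", "username": "admin", "password": "admin"},
    {"product": "oracle (db)", "username": "oracle", "password": "oracle"},
]
```

Substring matching is forgiving for reconnaissance: "tomcat" finds every Tomcat-related entry without the analyst knowing the exact product string used in the dataset.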
The export modifier writes results to separate username and password files, designed for ingestion by brute-force tools like Hydra or Burp Intruder:
$ creds search tomcat export
[+] Creds saved to /tmp/tomcat-usernames.txt, /tmp/tomcat-passwords.txt 📥
This generates two line-aligned wordlists: /tmp/tomcat-usernames.txt lists one username per line (tomcat, admin), and the password file mirrors it row for row. The intended workflow connects reconnaissance to exploitation: search for credentials, export to wordlists, feed the lists to automated attack tools.
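The export step amounts to two parallel file writes. A sketch under assumed names (the helper, paths, and sample data are illustrative, not the tool’s internals):

```python
import os
import tempfile

def export_wordlists(matches, user_path, pass_path):
    """Write one username and one password per line; line N of each file
    comes from the same CSV entry, so the files stay row-aligned."""
    with open(user_path, "w") as uf, open(pass_path, "w") as pf:
        for m in matches:
            uf.write(m["username"] + "\n")
            pf.write(m["password"] + "\n")

# Illustrative matches, as a search for "tomcat" might return them.
matches = [
    {"username": "tomcat", "password": "tomcat"},
    {"username": "admin", "password": "admin"},
]
tmp = tempfile.mkdtemp()
users_file = os.path.join(tmp, "tomcat-usernames.txt")
passes_file = os.path.join(tmp, "tomcat-passwords.txt")
export_wordlists(matches, users_file, passes_file)
```

Note that tools consuming separate user and password lists (e.g. Hydra’s -L/-P flags) typically try the cross-product of both files, so expect more attempts than the number of exported pairs.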
The update mechanism fetches the latest CSV from the GitHub repository, replacing the local copy. Proxy support, added in version 0.5.2, routes HTTP requests through intermediaries—critical for corporate environments or penetration tests requiring traffic obfuscation:
$ creds search oracle --proxy=http://localhost:8080
$ creds update --proxy=http://localhost:8080
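Under the hood, a proxied update is just an HTTP fetch routed through an intermediary. One way to sketch that with only the standard library (the real tool may use a different HTTP client entirely):

```python
import urllib.request

def build_opener(proxy=None):
    """Build a urllib opener; when a proxy URL is given, route both
    HTTP and HTTPS traffic through it."""
    handlers = []
    if proxy:
        handlers.append(
            urllib.request.ProxyHandler({"http": proxy, "https": proxy})
        )
    return urllib.request.build_opener(*handlers)

# Mirrors the --proxy=http://localhost:8080 usage shown above.
opener = build_opener("http://localhost:8080")
```

Routing through an interception proxy like Burp also lets a tester log exactly which requests a tool makes, which matters when documenting activity during an engagement.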
The implementation appears to use standard Python libraries for CSV parsing and console output formatting. At 3,711 entries, even a naive linear scan completes in well under a second on modern hardware, so search latency is a non-issue.
A notable architectural decision is the project’s dual distribution model. The raw CSV file serves as the canonical dataset, enabling integration with other tools. The noraj-maintained Pass Station library demonstrates this: it wraps the same CSV with advanced search features including regex matching, field-specific queries, and multiple output formats (JSON, YAML, CSV). Pass Station showcases capabilities beyond the core CLI tool, illustrating how the flat-file design enables ecosystem growth.
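The same flat file supports richer queries with nothing beyond the standard library. A sketch of field-specific regex matching in the spirit of Pass Station (Pass Station itself is a separate Ruby implementation; this is only an illustration of the idea):

```python
import re

def search_field(rows, field, pattern):
    """Case-insensitive regex search restricted to one named CSV column."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [r for r in rows if rx.search(r.get(field, ""))]

# Illustrative rows standing in for the parsed CSV.
catalog = [
    {"product": "apache tomcat (web)", "username": "tomcat", "password": "tomcat"},
    {"product": "oracle (db)", "username": "oracle", "password": "oracle"},
]
```

Field-specific queries matter in practice: searching only the password column for weak patterns, say, avoids false hits on product names.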
The dataset reveals interesting statistical patterns. Oracle products account for 235 entries (the most frequent vendor), while 814 credentials use blank usernames and 479 use blank passwords. The <blank> token explicitly represents empty fields, distinguishing between “no default exists” and “the default is empty”—a subtle but critical distinction when testing authentication systems that treat null and empty-string credentials differently.
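Anything consuming the dataset has to translate that token before testing credentials, or it will literally submit the string "<blank>" to login forms. A minimal normalization step (function name is illustrative):

```python
def normalize(value):
    """Map the dataset's explicit <blank> token to a real empty string,
    keeping it distinct from a missing value (None)."""
    if value is None:
        return None                      # no default documented at all
    if value.strip().lower() == "<blank>":
        return ""                        # a default exists and it is empty
    return value
```

Preserving the None-versus-empty distinction matters precisely because, as noted above, some authentication stacks treat null and empty-string credentials differently.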
Gotcha
The CSV structure imposes significant limitations for real-world security operations. There’s no versioning metadata—credentials for Apache Tomcat 7 appear identical to Tomcat 10, despite authentication mechanisms evolving between versions. Protocol context is absent, so you can’t distinguish between HTTP Basic Auth defaults, SSH credentials, or database connection strings without external research. This forces analysts to manually verify whether discovered credentials apply to their target’s specific configuration.
Database freshness depends entirely on manual community contributions and the maintainer’s aggregation cadence. Unlike automated scrapers that monitor vendor security advisories, this dataset won’t capture newly disclosed default credentials until someone submits a pull request. The update command fetches the latest CSV, but “latest” means “most recent GitHub commit,” not “verified current as of today.” You’re trusting crowd-sourced data without timestamps indicating when each credential was validated.
Legal and ethical risks deserve emphasis. While the README includes an educational-use disclaimer, possessing and using this tool straddles the line between security research and unauthorized access preparation. Exporting wordlists for brute-force attacks crosses into computer fraud territory without explicit authorization. The OWASP reference positions this as a blue-team hardening tool, but the export functionality clearly targets offensive operations. Organizations should establish clear rules of engagement before deploying this in production environments or client engagements.
Verdict
Use if you’re conducting authorized penetration tests where default credential discovery is in scope, performing asset hardening audits across heterogeneous infrastructure, or need a quick reference during incident response when identifying compromised systems. It excels at reconnaissance workflows where you’ve discovered a service (via Nmap or Shodan) and need probable credentials before investing time in exploitation frameworks. Blue teams should absolutely incorporate this into baseline security assessments—if your infrastructure matches any of these 3,711 entries, you have an immediate remediation item.
Skip if you need version-specific credential data, protocol-aware authentication details, or prefer active scanning tools that test credentials rather than suggesting them. Also skip if you’re operating without proper authorization, since the export functionality is designed for attack automation. This is a lookup table, not a vulnerability scanner—treat it as reconnaissance intelligence that requires validation, not ground truth you can act on blindly.