Building a Static Blog with Notion as Your CMS: A Deep Dive into notion-blog
Hook
Before Notion had a public API, developers were reverse-engineering browser cookies to turn it into a CMS. This 3,800+ star project shows exactly how they did it—and why you probably shouldn’t follow their lead today.
Context
In the early days of Next.js’s static site generation (SSG) capabilities, developers faced a common friction point: content management. Markdown files were developer-friendly but alienated content creators. Traditional CMSs like WordPress were too heavy. Headless CMSs like Contentful existed but added complexity and cost. Meanwhile, Notion was gaining traction as a collaborative workspace tool with a beautiful editor—but it had no official API.
The notion-blog project emerged as an experimental answer to this gap. It demonstrated that Notion could serve as both the content editing interface and the data source for a static blog, using Next.js’s then-experimental SSG features. The project treated Notion pages as blog posts, automatically transforming them into static HTML at build time. For teams already using Notion for documentation, this eliminated the need to duplicate content into a separate CMS. It was hacky, it was unofficial, but it worked—and it pointed toward a future where any collaborative tool could become a content backend.
Technical Insight
The architecture of notion-blog revolves around a clever exploitation of Notion’s private API during the build process. At its core, the system expects a specific Notion database structure: an inline table (not a full page table, as the README emphasizes) with fields for Page, Slug, Published (checkbox), Date, and Authors (person property). This table acts as your blog index, with each row representing a post.
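The expected row shape can be sketched as a plain JavaScript object. This is an illustration of the field names described above, not the project's internal representation — Notion stores these as opaque property IDs under the hood:

```javascript
// Sketch of one row in the blog-index table, using the field names
// from the README; Notion's internal block format differs.
const examplePost = {
  Page: 'My First Post',   // title of the Notion page / post
  Slug: 'my-first-post',   // URL path segment for the post
  Published: true,         // checkbox: only checked rows are built
  Date: '2020-01-15',      // publication date shown on the blog
  Authors: ['user-id-1'],  // person property (Notion user IDs)
};

// A build step would keep only rows marked Published:
const visible = [examplePost].filter((post) => post.Published);
console.log(visible.length); // 1
```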
The authentication mechanism is where things get interesting—and fragile. Instead of OAuth or API keys, the project uses Notion’s token_v2 cookie extracted directly from your browser session. You expose this as an environment variable alongside your blog index page ID:
```shell
export NOTION_TOKEN='your-token-v2-cookie-value'
export BLOG_INDEX_ID='S5qv1QbU-zM1w-xm3H-3SZR-Qkupi7XjXTul'
```
During Next.js’s build phase, the system makes authenticated requests to Notion’s private endpoints. The loadPageChunk API appears to return the full page content as structured blocks—headings, paragraphs, images, code blocks—which the blog then maps to React components, preserving formatting while generating static HTML.
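A build-time call to the private endpoint might be assembled roughly as follows. This is a hypothetical helper, not the project's own code — the endpoint and payload fields are reverse-engineered and could change at any time, which is precisely the fragility discussed below:

```javascript
// Hypothetical sketch of an authenticated request to Notion's private
// loadPageChunk endpoint, using the token_v2 cookie as credentials.
// Payload shape is an assumption based on observed private-API traffic.
const NOTION_API = 'https://www.notion.so/api/v3';

function buildLoadPageChunkRequest(pageId, token) {
  return {
    url: `${NOTION_API}/loadPageChunk`,
    options: {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        // The token_v2 browser cookie stands in for real authentication.
        cookie: `token_v2=${token}`,
      },
      body: JSON.stringify({ pageId, limit: 100, chunkNumber: 0 }),
    },
  };
}

const req = buildLoadPageChunkRequest(
  process.env.BLOG_INDEX_ID || 'page-id',
  process.env.NOTION_TOKEN || 'token'
);
console.log(req.url); // https://www.notion.so/api/v3/loadPageChunk
```

Because authentication is just a cookie header, any rotation or format change on Notion's side silently invalidates every build.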
The project includes a bootstrapping script that auto-creates the required table structure if it doesn’t exist. This happens on first visit to /blog or can be manually triggered:
```shell
NOTION_TOKEN='token' BLOG_INDEX_ID='new-page-id' node scripts/create-table.js
```
Content authoring follows a specific pattern: you write your preview content (under two paragraphs), add a divider block, then write the full post below it. The divider acts as a separator between preview and full content—a simple but effective convention that avoids complex metadata.
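The convention is simple enough to express in a few lines. A minimal sketch, assuming a simplified block shape (real Notion blocks carry far more data):

```javascript
// Everything before the first divider block is the preview;
// everything after it is the full post body.
function splitAtDivider(blocks) {
  const i = blocks.findIndex((b) => b.type === 'divider');
  if (i === -1) return { preview: blocks, body: [] };
  return { preview: blocks.slice(0, i), body: blocks.slice(i + 1) };
}

const blocks = [
  { type: 'text', value: 'A short teaser paragraph.' },
  { type: 'divider' },
  { type: 'header', value: 'The Full Post' },
  { type: 'text', value: 'All the remaining content.' },
];

const { preview, body } = splitAtDivider(blocks);
console.log(preview.length, body.length); // 1 2
```

The appeal of this convention is that authors never touch metadata: the split point lives in the document itself.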
The deployment model targets Vercel specifically, leveraging environment variables for secrets and Vercel’s edge network for global distribution. A critical caveat appears in the deployment notes: if you only edit content in Notion without changing code, you must use vc -f to force redeployment and bypass Vercel’s build deduplication. This reveals that content changes require full rebuilds—there’s no automatic webhook system for incremental updates.
The SSG implementation uses Next.js’s experimental canary-branch APIs. The project fetches all published posts at build time, generates routes for each slug, and outputs completely static HTML. No client-side API calls to Notion occur after deployment, making the result fast but entirely static until the next build.
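In today's stable Next.js vocabulary, that build-time flow would look roughly like this. Note the hedge: the project itself used earlier canary-era hooks with different names, and the post-fetching here is a stub standing in for the private-API calls:

```javascript
// Stand-in for the private-API call that reads the blog-index rows.
async function fetchPublishedPosts() {
  return [
    { Slug: 'hello-world', Published: true },
    { Slug: 'draft-post', Published: false },
  ];
}

// Enumerate one static route per published slug at build time.
async function getStaticPaths() {
  const posts = await fetchPublishedPosts();
  return {
    paths: posts
      .filter((p) => p.Published)
      .map((p) => ({ params: { slug: p.Slug } })),
    fallback: false, // unknown slugs 404; nothing renders on demand
  };
}

// Next.js calls this once per slug from getStaticPaths, at build time.
async function getStaticProps({ params }) {
  return { props: { slug: params.slug } };
}
```

With `fallback: false` and no revalidation, the output is frozen at build time — which is exactly why content edits in Notion demand a full redeploy.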
Gotcha
The most significant limitation is the reliance on Notion’s private API using cookie-based authentication. The README explicitly warns that it uses ‘a private API and experimental features’ and to ‘use at your own risk as these things could change at any moment.’ The token_v2 approach could break without warning whenever Notion updates its internal systems. Your blog could stop building with no recourse.
The project was built on experimental Next.js canary features that have since evolved. The README explicitly states: ‘This example uses the experimental SSG hooks only available in the Next.js canary branch! The APIs used within this example will change over time.’ The codebase represents patterns from an earlier era of Next.js development. Running this on current Next.js versions would likely require substantial rewrites.
Content workflow limitations are equally important. Every Notion edit requires a full site rebuild and redeployment. For a personal blog with weekly posts, this is manageable. For a documentation site with frequent updates, it becomes a bottleneck. The manual vc -f flag requirement for content-only changes (to bypass build deduplication) is a friction point that would frustrate editorial teams. Additionally, since the project relies on mapping Notion’s internal block format to React components, complex layouts or custom components may require workarounds that the basic mapping doesn’t handle.
Verdict
Use if: You’re studying the evolution of JAMstack architectures and want to understand how developers bridged gaps before official APIs existed. This project is a valuable historical artifact showing creative problem-solving during Next.js’s early SSG days. It’s also useful if you’re building a proof-of-concept where breaking changes are acceptable and you want a simple integration between Notion and a static site for a weekend project.
Skip if: You need anything resembling production reliability. The private API dependency and experimental feature usage make this unsuitable for sites you care about maintaining long-term. The README itself warns about the risks. Instead, consider using Notion’s official API (released after this project) with modern Next.js, or adopt other established solutions for Notion-to-web workflows. For professional projects, mature headless CMSs with proper APIs, webhooks for instant updates, and support options would be more appropriate. This repository’s real value is educational—understanding what developers built when official solutions didn’t exist helps you appreciate why modern integrations work the way they do.