
10K Users on Next.js: Jamstack + React Server Components

Patrich


Patrich is a senior software engineer with 15+ years of software and systems engineering experience.



Scaling a Next.js Site to 10K+ Daily Users With Minimal Ops

Here’s how our team took a content-heavy B2B marketing portal from 600 to 10,000+ daily users in six weeks without growing an SRE team. The stack centers on Jamstack architecture, Next.js (App Router), and Postgres optimized for fast reads. The goal: ship weekly, survive product launches, and keep ops simple.

Architecture at a Glance

  • Delivery: Next.js on Vercel with edge caching and on-demand ISR; assets on a global CDN.
  • Compute model: React Server Components implementation to move data work to the server and stream HTML to clients.
  • Data: Postgres with serverless connection pooling, Redis-compatible KV for hot fragments, S3-compatible object storage for media.
  • Observability: lightweight edge logs, Postgres pg_stat_statements, and OpenTelemetry traces sampled at 5%.

Why Jamstack for an Enterprise Portal

Jamstack lets you precompute what’s stable and compute just-in-time for what’s personalized. We split pages into three tiers: fully static marketing pages, ISR-driven category pages, and server-rendered dashboards keyed by session. This reduced p95 TTFB from 850ms to 180ms while keeping cache hit rates >92% during campaigns.
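The three-tier split can be made explicit in code. Below is a minimal sketch of a route classifier along those lines; the path prefixes and tier names are illustrative assumptions, not the project’s actual conventions.

```typescript
// Hypothetical classifier for the three-tier split: fully static pages,
// ISR-driven category pages, and session-keyed dynamic dashboards.
type Tier = "static" | "isr" | "dynamic";

function classifyRoute(path: string): Tier {
  // Session-keyed dashboards are always server-rendered per request.
  if (path.startsWith("/dashboard")) return "dynamic";
  // Category and blog pages change on publish: incremental static regeneration.
  if (path.startsWith("/category") || path.startsWith("/blog")) return "isr";
  // Everything else (marketing pages) is fully static at build time.
  return "static";
}
```

In Next.js App Router terms, each tier maps to a different `revalidate` setting: `false` for static, a finite window for ISR, and `0` (or cookie/session reads) for dynamic routes.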

React Server Components Done Deliberately

Our React Server Components implementation followed three rules: fetch at the boundary, stream early, and cache deliberately. Server components fetched product catalogs and campaign metadata using async functions co-located with the component tree. We streamed above-the-fold content first, then enhanced filters and charts with client components. Critical to cost control: we wrapped fetches in Next.js cache tags and revalidated only when upstream content changed, never on every request.

  • Shared loaders returned typed DTOs, preventing N+1 queries in child components.
  • We used React’s `cache()` to memoize expensive transforms within a single request.
  • Client components were reserved for interactivity (cart, search box, analytics beacons).
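The shared-loader pattern above is essentially request-scoped memoization: the first component to ask for a DTO triggers the query, and sibling components reuse the in-flight promise. Here is a small sketch approximating what React’s `cache()` provides inside a server render; the DTO shape and function names are illustrative assumptions.

```typescript
// Request-scoped loader cache: one fetch per key per request, shared by
// every child component that asks for the same key.
type ProductDTO = { id: string; name: string; price: number };

function requestMemo<R>(fn: (key: string) => Promise<R>) {
  const perRequest = new Map<string, Promise<R>>();
  return (key: string): Promise<R> => {
    // The first caller triggers the fetch; later callers reuse the same
    // promise, which is what prevents N+1 queries in child components.
    if (!perRequest.has(key)) perRequest.set(key, fn(key));
    return perRequest.get(key)!;
  };
}

let dbCalls = 0; // instrumentation for the usage example below
const loadProduct = requestMemo(async (id: string): Promise<ProductDTO> => {
  dbCalls++; // stands in for a real Postgres query
  return { id, name: `Product ${id}`, price: 100 };
});
```

Calling `loadProduct("p1")` from three components in the same render issues one query, not three; a fresh map per request keeps the cache from leaking across users.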

Database Design and Optimization

We modeled Postgres around read paths, not ERD purity. Core tables: accounts, products, campaigns, page_views. Each had composite keys shaped to the most common WHERE clauses. For example, giving page_views a composite primary key of (day, route, account_id, hash) let us aggregate quickly by day and route. The payoff: one index per hot query and zero table scans in the critical path.

  • Indexes: composite, covering indexes for dashboard queries; partial indexes for recent 30 days.
  • Partitioning: monthly partitions on page_views; retention policy auto-dropped cold partitions.
  • Materialization: nightly materialized views for “top content” and “campaign lift,” with incremental refresh.
  • Connection management: PgBouncer in transaction mode and Prisma’s connection_limit=1 in serverless functions.
  • Consistency: CQRS-lite, with writes via a narrow API and reads via denormalized tables and Redis keys.
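The serverless connection settings can live entirely in the database URL. Below is a hedged sketch of a helper that appends the pool parameters matching the PgBouncer setup above; the helper name and base URL are hypothetical, but `connection_limit` and `pgbouncer=true` are real Prisma connection-string options.

```typescript
// Appends serverless-friendly pool settings to a Postgres URL for Prisma.
function serverlessDbUrl(base: string): string {
  const url = new URL(base);
  // One connection per function instance; PgBouncer multiplexes beyond that.
  url.searchParams.set("connection_limit", "1");
  // Tells Prisma to avoid prepared statements, which PgBouncer's
  // transaction mode does not support.
  url.searchParams.set("pgbouncer", "true");
  return url.toString();
}
```

The resulting URL goes into the `datasource` block’s `url`, so every deployed function picks up the same pooling discipline without per-function config.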

We stored session state and personalization flags in a KV store to avoid hot table contention. Fan-out notifications to invalidate ISR paths were pushed through a durable queue. Migrating safely was non-negotiable: every schema change shipped with backward-compatible code and a “dark read” verifying results before switchover.
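The “dark read” idea is simple to sketch: serve from the proven read path while comparing its result against the new one in the background, logging mismatches instead of failing the request. The function below is an illustrative sketch, not the project’s actual code.

```typescript
// Serve the old read path; compare the new path off the hot path and
// report mismatches so a schema switchover can be verified before cutover.
async function darkRead<T>(
  oldPath: () => Promise<T>,
  newPath: () => Promise<T>,
  onMismatch: (oldVal: T, newVal: T) => void,
): Promise<T> {
  const oldVal = await oldPath();
  // Fire-and-forget comparison; the user always gets the proven result.
  newPath()
    .then((newVal) => {
      if (JSON.stringify(oldVal) !== JSON.stringify(newVal)) {
        onMismatch(oldVal, newVal);
      }
    })
    .catch(() => {
      /* errors in the new path are observability data, never user-facing */
    });
  return oldVal;
}
```

Once the mismatch rate holds at zero for a representative window, the new path can be promoted and the old one deleted.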


Caching and Revalidation Strategy

Cache keys matched user intent: public pages were keyed by route and locale, dashboards by account_id and feature flags. We tagged caches by “product,” “campaign,” and “settings.” When content editors published, a webhook called /api/revalidate with those tags, touching only the impacted pages. Edge middleware enforced Brotli/gzip compression and added a 24-hour stale-while-revalidate window, which kept the site fast during spikes while background refreshes amortized load.
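The key design decision is mapping a publish event to business-concept tags rather than URL lists. A minimal sketch of that mapping follows; the event shape and tag format are assumptions, and in the real webhook handler each returned tag would be passed to Next.js’s `revalidateTag()` from `next/cache`.

```typescript
// Maps a CMS publish event to the cache tags that should be revalidated.
type PublishEvent = { kind: "product" | "campaign" | "settings"; id: string };

function tagsFor(event: PublishEvent): string[] {
  // One tag per business entity, shared by every page that rendered it.
  const tags = [`${event.kind}:${event.id}`];
  // Settings changes affect every page, so also bust a sitewide tag.
  if (event.kind === "settings") tags.push("site:all");
  return tags;
}
```

Because pages register the same tags when they fetch, publishing a single product invalidates exactly the pages that rendered it, with no URL bookkeeping.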


Build, Deploy, and Minimal Ops

We kept ops lean by standardizing the path to prod. GitHub Actions ran type checks, unit tests, and telemetry smoke tests against a seeded database snapshot. Next.js builds emitted a manifest listing ISR routes and cache tags; the release job published that map for observability. Database migration steps were idempotent and timed, with alerts if they exceeded thresholds. The only manual step: review database query plans for new endpoints.
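The release-time manifest mentioned above can be a small, deterministic artifact: ISR routes paired with their revalidation windows and cache tags, serialized as JSON for observability tooling. The shapes below are illustrative assumptions.

```typescript
// Emits a stable JSON manifest of ISR routes and their cache tags so each
// release can be diffed against the last.
type RouteEntry = { route: string; revalidateSeconds: number; tags: string[] };

function buildManifest(entries: RouteEntry[]): string {
  // Sort by route for stable diffs between releases.
  const sorted = [...entries].sort((a, b) => a.route.localeCompare(b.route));
  return JSON.stringify({ routes: sorted }, null, 2);
}
```

Publishing this file with each deploy makes cache-tag drift visible in review: a new route without tags, or a tag rename, shows up as a diff rather than a production surprise.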

  • SLOs: 99.9% availability, p95 TTFB under 250ms, error rate under 0.3%.
  • Budgets: CDN egress under 1TB/day; Postgres under 5k QPS sustained.
  • Runbooks: one-pagers for cache stampede, queue backlog, and hot partition.

Results and Cost Profile

At 10K-18K daily users, we sustained 92-96% cache hit rates. Edge TTFB averaged 120ms for public pages and 210ms for dashboards. Database CPU held under 35% with under 50 active connections. Monthly infra cost: roughly $420 (Vercel Pro, managed Postgres with one read replica, KV, object storage, and logging). The site rode two global campaigns without a paging event.

Actionable Playbook

  • Adopt Jamstack patterns early: classify routes into static, ISR, and dynamic from the start.
  • Design the database around read paths; make writes boring and serialized.
  • Lean into RSC: push data work server-side, stream HTML, and keep client bundles tiny.
  • Tag everything for revalidation; invalidate by business concept, not by URL strings.
  • Instrument first: sample traces, expose cache hit ratio, and track query plans over time.
  • Keep ops minimal: one platform, one database, one queue; add complexity only with proof.

Need senior support or extra velocity? Partnering with slashdev.io gave us vetted remote engineers and software-agency capabilities on demand, ideal when deadlines loomed but headcount couldn’t budge.