Node.js 20 powers the ClickHouse query proxy, API middleware, and server-side Redis caching in the Next.js layer.
HOW WE USE IT
Node.js 20 runs inside every Next.js application as the server runtime for API routes and Server Components. The primary role is the query proxy layer: incoming requests from the browser hit a Next.js API route, which applies authentication (Clerk), rate limiting (Redis-backed per-tier), and then proxies the SQL query to ClickHouse over the HTTP interface.
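As a sketch of the proxy step, this is roughly how a query URL for ClickHouse's HTTP interface can be assembled; the helper name, base URL, and settings shown are illustrative, not our production code:

```typescript
// Sketch only: build a ClickHouse HTTP-interface query URL.
// "readonly" and "default_format" are standard ClickHouse HTTP settings.
function buildClickHouseUrl(base: string, sql: string): string {
  const url = new URL(base);
  url.searchParams.set("query", sql);
  // Enforce read-only at the database level so the proxy can never mutate data.
  url.searchParams.set("readonly", "1");
  url.searchParams.set("default_format", "JSON");
  return url.toString();
}

// Usage inside a route handler (after auth and rate limiting have passed):
//   const res = await fetch(buildClickHouseUrl(CLICKHOUSE_URL, sql), {
//     headers: { Authorization: `Basic ${credentials}` },
//   });
```

Pushing `readonly=1` into the URL means the restriction holds even if a bug elsewhere in the route lets an unexpected statement through.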
The ioredis 5.3 client handles Redis connections from the Node.js layer. It operates with a connection pool of 10 connections per Next.js instance. The caching pattern is straightforward: before hitting ClickHouse, the API route checks a Redis key with a TTL appropriate to the data freshness requirements (1 hour for daily bars, 5 minutes for regime state, 30 seconds for streaming analytics). On cache miss, the ClickHouse query runs and the result is stored with the correct TTL before returning.
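The cache-aside pattern above can be sketched as a small helper. The `KV` interface mirrors only the subset of ioredis calls involved, and the tier names and TTL map are illustrative, not the production code:

```typescript
// Minimal cache-aside sketch over a Redis-like client.
interface KV {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, mode: "EX", ttlSeconds: number): Promise<unknown>;
}

// TTLs from the freshness tiers described above (in seconds).
const TTL_SECONDS: Record<string, number> = {
  daily_bars: 3600,    // 1 hour
  regime_state: 300,   // 5 minutes
  streaming: 30,       // 30 seconds
};

async function cachedQuery<T>(
  redis: KV,
  key: string,
  tier: string,
  runQuery: () => Promise<T>, // the ClickHouse query, run only on a miss
): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const result = await runQuery();
  await redis.set(key, JSON.stringify(result), "EX", TTL_SECONDS[tier]);
  return result;
}
```

With ioredis, `redis.set(key, value, "EX", ttl)` sets the value and its expiry atomically, so a cached entry can never be written without a TTL.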
Example workflow: wiring rate limiting into a new API route for a client's metered product.
1. At the top of the route handler, read the Clerk session using auth() and extract the userId and plan tier from the session claims.
2. Construct a Redis key: ratelimit:{userId}:{routeName}:{windowMinute}, where windowMinute is Math.floor(Date.now() / 60000).
3. Run INCR on the key; if the returned count is 1 (first request in this window), EXPIRE the key after 60 seconds.
4. If the count exceeds the tier limit (e.g., 60 for Free, 600 for Pro), return a 429 with a Retry-After header.
5. On success, proceed with the ClickHouse query.
6. Log the INCR result to the usage_events ClickHouse table for month-end billing reconciliation.
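The steps above reduce to a small fixed-window limiter. This is a sketch against a generic INCR/EXPIRE-capable client (the `Counter` interface stands in for ioredis); the function and tier names are illustrative:

```typescript
// Fixed-window rate limiter sketch following the workflow above.
interface Counter {
  incr(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<unknown>;
}

const TIER_LIMITS: Record<string, number> = { free: 60, pro: 600 };

async function checkRateLimit(
  redis: Counter,
  userId: string,
  routeName: string,
  tier: string,
  nowMs: number = Date.now(),
): Promise<{ allowed: boolean; retryAfterSeconds?: number }> {
  const windowMinute = Math.floor(nowMs / 60_000);
  const key = `ratelimit:${userId}:${routeName}:${windowMinute}`;
  const count = await redis.incr(key);
  if (count === 1) await redis.expire(key, 60); // first request in this window
  if (count > (TIER_LIMITS[tier] ?? TIER_LIMITS.free)) {
    // Tell the client when the current minute window rolls over.
    return { allowed: false, retryAfterSeconds: 60 - Math.floor((nowMs % 60_000) / 1000) };
  }
  return { allowed: true };
}
```

Because the window minute is baked into the key, expired windows clean themselves up via the 60-second EXPIRE and no sweep job is needed.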
The query-proxy case study covers the detailed implementation. The key decision was to implement the proxy in the Next.js layer rather than as a separate service. A standalone Rust proxy was evaluated but eliminated because it would require a separate deployment pipeline and another network hop. The Next.js runtime handles sufficient concurrency for current load (200 concurrent users in load tests, p95 under 500ms) without a separate proxy process.
Node.js is not used for any data ingest, computation, or trading logic. Those paths run in Rust. The boundary is deliberate: Rust handles anything that must be correct and fast under load; Node.js handles anything that benefits from the JavaScript ecosystem (auth libraries, email SDKs, Stripe webhook parsing, Next.js server components).
Cloudflare proxies all traffic before it reaches the Vercel edge, so Node.js never handles TLS termination or DDoS mitigation directly.
Production numbers
- Node.js version: 20 LTS
- ioredis pool size: 10 connections
- p95 response time: <500ms
- Max concurrent users (tested): 200
We discovered 209,033 regime keys with no TTL and fixed them in a single SCAN pass, then cut the regime endpoint latency 13x by eliminating per-request key scans.
209,033 keys without TTL (found)
Read case study →
Infrastructure
We built a 63-line Node.js proxy that gives Vercel serverless functions read-only access to a private ClickHouse instance with zero database exposure.
12ms proxy overhead (end-to-end)
Read case study →
Infrastructure
We automated Avo site deployments with a GitHub Actions CI/CD pipeline that catches TypeScript errors in 35 seconds and deploys to Vercel production in 90 seconds.
90 sec frontend deploy time (was 40-60 min)
Read case study →
Start a project
Most projects ship in under two weeks. Start with a free 30-minute discovery call.
Start a project →