We launched a multi-tenant market intelligence SaaS serving computed signals from 425M rows, with all API routes under 500ms cold and unit economics positive from customer one.
425M+
ClickHouse rows at launch
13x
Regime endpoint speedup (1,048ms → 79ms)
$95/mo
Infrastructure cost (pre-revenue)
23
Rust binaries compiled
CHAPTER 01
The Argus consumer intelligence platform had to solve a problem most SaaS companies never face at launch: the data moat already existed before the product did. By April 2026, 425 million rows of market data lived on a Hetzner AX102 server covering crypto, US equities, global equities, forex, commodities, bonds, macroeconomic series, on-chain metrics, DeFi TVL, derivatives, and 50 factor series.
The product constraint was sharp: customers would never see raw data. The data was the moat. Customers got computed intelligence only: regime classifications, novelty scores, correlation shifts, signal summaries. This was not a data-as-a-service play. It was an intelligence-as-a-service play. The distinction forced every architectural decision that followed.
Three constraints shaped the build from day one. Cold query latency had to be below 500ms on intelligence endpoints. Multi-tenancy had to be implemented without row-level security overhead at the ClickHouse layer, because ClickHouse's columnar engine is not designed for per-row tenant checks. Total infrastructure cost had to stay below what a single $499 founding member subscription covered.
CHAPTER 02
Authentication used Clerk 6. Clerk handled JWT issuance, session management, OAuth social login, and webhook delivery on user creation and deletion events. The Clerk webhook fired a Next.js API handler that provisioned the user record in ClickHouse and created the Stripe customer object.
Multi-tenancy at the data layer used a thin Rust middleware pattern rather than database-level RLS. Every API route decoded the Clerk JWT on the server side, looked up the user tier from a 5-minute Redis cache keyed by Clerk user ID, and then selected the appropriate ClickHouse view. Founding tier users got intelligence summaries. Growth tier users got signal history going back 90 days. Enterprise tier got full regime history plus the contextual narrative layer.
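The tier lookup and view selection described above can be sketched as follows. This is an illustrative stand-in, not the production code: the view names are assumptions, and a synchronous in-process Map stands in for the 5-minute Redis cache keyed by Clerk user ID.

```typescript
// Hypothetical sketch of per-tier ClickHouse view selection.
// View names are assumptions; the tier names come from the case study.
type Tier = "founding" | "growth" | "enterprise";

const TIER_VIEWS: Record<Tier, string> = {
  founding: "argus.v_intel_summary",    // intelligence summaries only
  growth: "argus.v_signal_history_90d", // 90-day signal history
  enterprise: "argus.v_regime_full",    // full regime history + narrative layer
};

// In-process stand-in for the 5-minute Redis tier cache.
const tierCache = new Map<string, { tier: Tier; expiresAt: number }>();
const TTL_MS = 5 * 60 * 1000;

function resolveView(
  clerkUserId: string,
  fetchTier: (id: string) => Tier, // in production: an async ClickHouse lookup
  now: number = Date.now(),
): string {
  const cached = tierCache.get(clerkUserId);
  const tier =
    cached && cached.expiresAt > now ? cached.tier : fetchTier(clerkUserId);
  tierCache.set(clerkUserId, { tier, expiresAt: now + TTL_MS });
  return TIER_VIEWS[tier];
}
```

Because the tier never enters the ClickHouse query as a filter, each view's query plan stays identical regardless of who calls it, which is the determinism the section claims.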
This approach meant zero ClickHouse query plan contamination from tenant filters. Each view was pre-defined. Query performance was deterministic per tier. The tradeoff was that adding a new tier required a schema migration rather than a data row change. Acceptable given the pricing simplicity: three tiers, not fifty.
ARCHITECTURE OVERVIEW
PRESENTATION: Next.js 15, React 19
API LAYER: auth + rate limit + versioning
SERVICES: Clerk 6
DATABASE: ClickHouse 26.3
QUEUE: Redis 7
CHAPTER 03
The core computation loop ran on a 1-minute tick for intraday symbols and a post-market batch for daily symbols. argus-features computed the 1,400-feature vector per symbol using AVX2 SIMD for float64 batch arithmetic and wrote results to argus.feature_vectors. argus-regime consumed feature vectors and classified each symbol into one of five regimes across eight timeframes. argus-signals consumed regime classifications and feature vectors to produce directional signals with confidence scores. The full pipeline from bar ingestion to signal availability ran in under 90 seconds for the 2,378 intraday symbols.
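The features-to-regime-to-signals hand-off can be sketched as three pure stages. Everything here is an illustrative stand-in: the real argus-features math is SIMD Rust over 1,400 features, and the regime names and thresholds below are invented for the sketch.

```typescript
// Toy stand-in for the argus-features -> argus-regime -> argus-signals chain.
// Feature math, regime names, and thresholds are assumptions for illustration.
type Bar = { close: number };
type Features = { momentum: number; volatility: number };
type Regime = "trend_up" | "trend_down" | "range" | "volatile" | "quiet"; // five regimes, names assumed

function computeFeatures(bars: Bar[]): Features {
  const closes = bars.map((b) => b.close);
  const momentum = closes[closes.length - 1] - closes[0];
  const mean = closes.reduce((a, c) => a + c, 0) / closes.length;
  const volatility = Math.sqrt(
    closes.reduce((a, c) => a + (c - mean) ** 2, 0) / closes.length,
  );
  return { momentum, volatility };
}

function classifyRegime(f: Features): Regime {
  if (f.volatility > 5) return "volatile";
  if (f.volatility < 0.5) return "quiet";
  if (f.momentum > 1) return "trend_up";
  if (f.momentum < -1) return "trend_down";
  return "range";
}

function makeSignal(f: Features, r: Regime) {
  const direction = r === "trend_up" ? 1 : r === "trend_down" ? -1 : 0;
  const confidence = Math.min(1, Math.abs(f.momentum) / (f.volatility + 1e-9));
  return { direction, confidence };
}
```

The point of the staged shape is that each binary reads the previous stage's table and writes its own, so the 90-second end-to-end budget decomposes into three independently measurable steps.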
Stripe subscriptions used Checkout Sessions with the 2024 Stripe API. On subscription.created, the handler updated the user's tier in ClickHouse and invalidated the Redis tier cache for that Clerk user ID. On subscription.deleted, the handler downgraded the user to a read-only free state. Webhook processing was idempotent: each Stripe event ID was stored in argus.stripe_events and checked before processing to prevent double-tier-changes on retry.
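The event-ID deduplication described above is simple enough to sketch. A Set stands in for the argus.stripe_events table, and the handler signature is an assumption, not Stripe's or the project's actual API.

```typescript
// Sketch of idempotent webhook processing. In production the seen-set is the
// argus.stripe_events ClickHouse table; here a Set stands in for it.
const processedEvents = new Set<string>();

type WebhookEvent = { id: string; type: string; clerkUserId: string };

function handleStripeEvent(
  event: WebhookEvent,
  applyTierChange: (userId: string, eventType: string) => void,
): boolean {
  // Drop events we have already seen, e.g. during a Stripe retry storm.
  if (processedEvents.has(event.id)) return false;
  processedEvents.add(event.id); // production: INSERT into argus.stripe_events
  applyTierChange(event.clerkUserId, event.type);
  return true;
}
```

Checking the event ID before applying the tier change is what makes retries safe: the second delivery of the same event becomes a no-op instead of a double tier change.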
Rate limiting ran at two levels. The Vercel Edge Config stored per-plan request limits. A lightweight Edge Middleware checked requests against the Redis counter keyed by Clerk user ID before they hit any route handler.
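The Redis-backed counter can be sketched as a fixed-window limiter. The window length and in-memory Map are assumptions standing in for the Redis counter and the per-plan limits stored in Vercel Edge Config.

```typescript
// Illustrative fixed-window rate limiter keyed by Clerk user ID.
// A Map stands in for the Redis counter; 60s window is an assumption.
const windows = new Map<string, { windowStart: number; count: number }>();
const WINDOW_MS = 60_000;

function allowRequest(
  userId: string,
  limit: number, // per-plan limit, e.g. from Edge Config
  now: number = Date.now(),
): boolean {
  const w = windows.get(userId);
  if (!w || now - w.windowStart >= WINDOW_MS) {
    windows.set(userId, { windowStart: now, count: 1 }); // fresh window
    return true;
  }
  if (w.count >= limit) return false; // over limit for this window
  w.count++;
  return true;
}
```

Running this check in Edge Middleware means over-limit requests are rejected before any route handler, ClickHouse view, or tier lookup is touched.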
CHAPTER 04
425 million rows across all ClickHouse tables at launch readiness. 23 Rust binaries compiled and verified, all passing CI. All API routes under 500ms cold and under 250ms warm. The regime endpoint achieved a 13x latency improvement (1,048ms to 79ms) after collection caching. The macro endpoint achieved a 3x latency improvement (1,476ms to 465ms) after the Redis caching layer was added. Zero hardcoded secrets across all 18 Next.js sites and 75 Rust crates. Infrastructure cost of approximately $95 per month before paying customers.
CHAPTER 05
DECISION · 01
ClickHouse is the right call, but the migration cost is real. Moving from QuestDB required careful handling of the WAL append-only constraint. The migration rule became: never CTAS on a live WAL table. Two billion rows of intraday data migrated without data loss by treating the migration as a copy-and-verify operation.
DECISION · 02
Intelligence-only is a better product and a better moat. The decision to never expose raw data to customers simplified the permission model, eliminated the risk of customers building competing products on top of the data, and sharpened the product narrative from "we sell market data" to "we sell market intelligence." The pricing ceiling changed from $99/month to $4,999/month.
DECISION · 03
Webhook handlers need idempotency from day one. The first version of the Stripe webhook handler had no event deduplication. During load testing, a retry storm from a misconfigured webhook endpoint created 14 duplicate user tier records in a 3-minute window. Adding the stripe_events deduplication table cost 2 hours and prevented what would have been a painful production incident.