CASE STUDIES
Every system listed here is live in production. Real architecture, real constraints, hard numbers. 37+ systems across AI, data engineering, infrastructure, platforms, and real-time processing.
NEXT STEPS
Custom software, AI, and data infrastructure. Fixed scope, full IP transfer. Most projects ship in under two weeks.
We built a 723M-row market data pipeline ingesting 10 exchanges simultaneously at under 50ms tick-to-storage latency.
723M+ Total rows stored
Read case study →
We migrated 425M rows to ClickHouse and achieved 8x storage compression and 15x faster analytical scans versus our prior QuestDB setup.
723M+ Rows stored
Read case study →
We replaced a Python fan-in that dropped ticks under load with a Rust multi-task aggregator handling 80,000 ticks per second across 10 exchanges at 3.1% CPU.
80K tick/s Peak throughput
Read case study →
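A minimal sketch of the fan-in pattern, assuming tokio mpsc channels; the `Tick` type, exchange list, and batch size are illustrative, not the production code:

```rust
use tokio::sync::mpsc;

// Illustrative tick; the production schema carries more fields.
struct Tick {
    exchange: &'static str,
    price: f64,
}

const BATCH: usize = 1_000;

#[tokio::main]
async fn main() {
    // One bounded channel fans all producer tasks into a single
    // consumer, so backpressure is explicit instead of ticks being
    // silently dropped under load.
    let (tx, mut rx) = mpsc::channel::<Tick>(65_536);

    for exchange in ["binance", "coinbase", "kraken"] {
        let tx = tx.clone();
        tokio::spawn(async move {
            // Stand-in for a per-exchange websocket read loop.
            for i in 0..10_000u32 {
                let tick = Tick { exchange, price: f64::from(i) };
                if tx.send(tick).await.is_err() {
                    break; // consumer shut down
                }
            }
        });
    }
    drop(tx); // aggregator exits once every producer hangs up

    // Single aggregator task: batch ticks and hand them to storage.
    let mut batch: Vec<Tick> = Vec::with_capacity(BATCH);
    while let Some(tick) = rx.recv().await {
        batch.push(tick);
        if batch.len() >= BATCH {
            // write_batch(&batch).await;  // flush to storage here
            batch.clear();
        }
    }
    println!("drained; {} ticks left unflushed", batch.len());
}
```

A bounded channel is the key design choice: producers await when the consumer falls behind, rather than the fan-in quietly losing data.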
We migrated 425M rows across 43 tables from a CPU-saturating QuestDB deployment to ClickHouse in 6.5 days with zero data loss.
425M+ Rows migrated
Read case study →
We built a zero-cost downloader collecting 11,706 equity symbols across 19+ global exchanges, replacing $8,000 to $22,000 per month in vendor licensing.
11,706 Total symbols collected
Read case study →
We built a revision-aware FRED pipeline tracking 63 macro series with 90-day lookback windows, growing coverage from 32 to 63 series in one sprint.
63 FRED series tracked (from 32)
Read case study →
We built a shared Rust validation library that blocked 1,319 corrupt rows from entering ClickHouse and caught 4.35M corrupt records through nightly out-of-band audits.
723M+ Total rows validated
Read case study →
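A minimal sketch of the shared-validation idea; the `Bar` fields and the specific checks are assumptions, not the library's actual rules:

```rust
/// Illustrative OHLCV bar; the production schema has more fields.
pub struct Bar {
    pub open: f64,
    pub high: f64,
    pub low: f64,
    pub close: f64,
    pub volume: f64,
    pub ts_ms: i64,
}

/// Reject rows that can silently corrupt downstream aggregates.
/// Returning Err with a reason lets callers count and log rejects.
pub fn validate_bar(bar: &Bar, now_ms: i64) -> Result<(), &'static str> {
    let prices = [bar.open, bar.high, bar.low, bar.close];
    if prices.iter().any(|p| !p.is_finite() || *p <= 0.0) {
        return Err("non-positive or non-finite price");
    }
    if bar.high < bar.low {
        return Err("high below low");
    }
    if bar.open > bar.high || bar.open < bar.low
        || bar.close > bar.high || bar.close < bar.low
    {
        return Err("open/close outside high-low range");
    }
    if bar.volume < 0.0 || !bar.volume.is_finite() {
        return Err("negative or non-finite volume");
    }
    if bar.ts_ms <= 0 || bar.ts_ms > now_ms {
        return Err("timestamp unset or in the future");
    }
    Ok(())
}
```

Running one shared function both inline before inserts and again in the nightly out-of-band audit keeps both paths agreeing on what counts as corrupt.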
We discovered 209,033 regime keys with no TTL and fixed them in a single SCAN pass, then cut the regime endpoint latency 13x by eliminating per-request key scans.
209,033 Keys without TTL (found)
Read case study →
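A sketch of the single-pass fix using the `redis` crate's SCAN cursor; the `regime:*` pattern and 24-hour TTL are assumptions:

```rust
fn backfill_ttls(con: &mut redis::Connection) -> redis::RedisResult<u64> {
    let mut cursor: u64 = 0;
    let mut fixed: u64 = 0;
    loop {
        // SCAN iterates incrementally, so the fix never blocks Redis
        // the way a single KEYS call over 200k+ keys would.
        let (next, keys): (u64, Vec<String>) = redis::cmd("SCAN")
            .arg(cursor)
            .arg("MATCH").arg("regime:*") // assumed key pattern
            .arg("COUNT").arg(1000)
            .query(con)?;
        for key in keys {
            let ttl: i64 = redis::cmd("TTL").arg(&key).query(con)?;
            if ttl == -1 {
                // -1 means the key exists but carries no expiry.
                redis::cmd("EXPIRE").arg(&key).arg(86_400).query(con)?;
                fixed += 1;
            }
        }
        cursor = next;
        if cursor == 0 {
            return Ok(fixed);
        }
    }
}
```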
We built a 63-line Node.js proxy that gives Vercel serverless functions read-only access to a private ClickHouse instance with zero database exposure.
12ms Proxy overhead (end-to-end)
Read case study →
We added a lock-free AtomicUsize round-robin proxy pool to argus-common, giving all 23 downloader binaries IP rotation without duplication or mutex contention.
180/min Download throughput (proxy)
Read case study →
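A minimal sketch of the lock-free rotation; the `ProxyPool` name and API shape are illustrative, not argus-common's actual interface:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Shared, lock-free round-robin over a fixed proxy list.
/// One fetch_add per request; no mutex, no contention hot spot.
pub struct ProxyPool {
    proxies: Vec<String>,
    next: AtomicUsize,
}

impl ProxyPool {
    pub fn new(proxies: Vec<String>) -> Self {
        assert!(!proxies.is_empty());
        Self { proxies, next: AtomicUsize::new(0) }
    }

    /// Relaxed ordering is enough here: we only need a distinct
    /// counter value per call, not synchronization with any other
    /// memory operations.
    pub fn next_proxy(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed);
        &self.proxies[i % self.proxies.len()]
    }
}
```

Because `ProxyPool` is `Sync`, one instance behind an `Arc` can serve every downloader binary's tasks without duplicated state.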
We audited 168 running services consuming 33GB of RAM, culled the dead weight, and reduced the Argus footprint to 25 production services using 12GB.
168 Services before audit (33GB RAM)
Read case study →
We automated Avo site deployments with a GitHub Actions CI/CD pipeline that catches TypeScript errors in 35 seconds and deploys to Vercel production in 90 seconds.
90 sec Frontend deploy time (was 40-60 min)
Read case study →
We rebuilt the signal scoring pipeline from scratch, fixing look-ahead contamination and adding a top-decile filter that produced a 72.2% win rate on selected signals.
72.2% Win rate (top-decile signals)
Read case study →
We found a 50-percentage-point win rate spread between market regimes, fixed a regime classifier that was routing by symbol name instead of market structure, and built a live suppression system for anti-patterns.
62.1% Win rate in choppy regime
Read case study →
We built a Rust correlation engine processing 1,200 symbols with incremental sliding window updates at 340ms p95 per cycle, 14x faster than full recompute.
1,200 Symbols in correlation matrix
Read case study →
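A sketch of the incremental update for a single pair, assuming rolling sums over a fixed window; the real engine runs this across the full matrix:

```rust
use std::collections::VecDeque;

/// O(1) sliding-window Pearson correlation for one symbol pair.
/// Full recompute is O(window) per pair per update; maintaining
/// running sums makes each update constant time, which is where
/// the incremental engine beats full recompute.
pub struct RollingCorr {
    window: usize,
    xs: VecDeque<f64>,
    ys: VecDeque<f64>,
    sx: f64, sy: f64, sxx: f64, syy: f64, sxy: f64,
}

impl RollingCorr {
    pub fn new(window: usize) -> Self {
        Self {
            window,
            xs: VecDeque::with_capacity(window),
            ys: VecDeque::with_capacity(window),
            sx: 0.0, sy: 0.0, sxx: 0.0, syy: 0.0, sxy: 0.0,
        }
    }

    /// Push one (x, y) observation and return the current correlation.
    pub fn push(&mut self, x: f64, y: f64) -> Option<f64> {
        if self.xs.len() == self.window {
            // Evict the oldest pair by subtracting its contribution.
            let (ox, oy) = (self.xs.pop_front()?, self.ys.pop_front()?);
            self.sx -= ox; self.sy -= oy;
            self.sxx -= ox * ox; self.syy -= oy * oy; self.sxy -= ox * oy;
        }
        self.xs.push_back(x); self.ys.push_back(y);
        self.sx += x; self.sy += y;
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y;

        let n = self.xs.len() as f64;
        let cov = self.sxy - self.sx * self.sy / n;
        let vx = self.sxx - self.sx * self.sx / n;
        let vy = self.syy - self.sy * self.sy / n;
        (vx > 0.0 && vy > 0.0).then(|| cov / (vx * vy).sqrt())
    }
}
```

Running sums drift under floating point, so pairing O(1) updates with an occasional full recompute is a common guard.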
We built a five-layer parallel context engine that synthesizes macro, sector, correlation, historical, and catalyst data into a 2-sentence market narrative within 1.5 seconds of signal emission.
1-1.5 sec Synthesis latency (p95)
Read case study →
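A minimal sketch of the parallel layer fetch, assuming tokio; the five fetchers and the synthesis step are stubs standing in for ClickHouse, Redis, and LLM calls:

```rust
use tokio::join;

// Stand-ins for the five context layers; the real fetchers hit
// ClickHouse, Redis, and external APIs.
async fn macro_ctx() -> String { "macro".into() }
async fn sector_ctx() -> String { "sector".into() }
async fn corr_ctx() -> String { "correlation".into() }
async fn hist_ctx() -> String { "historical".into() }
async fn catalyst_ctx() -> String { "catalyst".into() }

async fn build_narrative() -> String {
    // join! runs all five layers concurrently, so total latency is
    // the slowest single layer, not the sum of all five.
    let (m, s, c, h, k) = join!(
        macro_ctx(), sector_ctx(), corr_ctx(), hist_ctx(), catalyst_ctx()
    );
    // Stand-in for the synthesis step that turns the five layers
    // into the 2-sentence market narrative.
    format!("{m} | {s} | {c} | {h} | {k}")
}

#[tokio::main]
async fn main() {
    println!("{}", build_narrative().await);
}
```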
We upgraded from a static 6-factor lead score to a three-tier behavioral composite integrating email engagement, AUM, and headcount, projecting a 5 to 10% conversion uplift.
3,889 Tier A leads (V1)
Read case study →
We generated 500 personalized cold email pitches using Claude Haiku and a Rust web scraper for $1.40 total, achieving 34% open rate versus 11% for category-level templates.
500 Leads processed
Read case study →
We replaced fixed 50/50 email A/B splits with Thompson sampling over Beta distributions, cutting sends-to-convergence from never observed to a median of 147 and raising the open rate from 13.7% to 19.2%.
19.2% Open rate (Thompson vs 13.7% fixed)
Read case study →
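A sketch of the selection step, assuming a Beta(1,1) prior and the `rand` / `rand_distr` crates; the variant state shape is illustrative:

```rust
use rand_distr::{Beta, Distribution};

/// Per-variant evidence: opens are successes, sends minus opens
/// are failures.
struct Variant {
    opens: u64,
    sends: u64,
}

/// Thompson sampling: draw one sample from each variant's
/// Beta(opens + 1, misses + 1) posterior and send the next email
/// with the variant that drew highest. Traffic shifts toward the
/// winner as evidence accumulates, unlike a fixed 50/50 split that
/// keeps allocating half of all sends to the loser.
fn pick_variant(variants: &[Variant]) -> usize {
    let mut rng = rand::thread_rng();
    let mut best = (0, f64::MIN);
    for (i, v) in variants.iter().enumerate() {
        let a = v.opens as f64 + 1.0;             // prior alpha = 1
        let b = (v.sends - v.opens) as f64 + 1.0; // prior beta = 1
        let draw = Beta::new(a, b).unwrap().sample(&mut rng);
        if draw > best.1 {
            best = (i, draw);
        }
    }
    best.0
}

fn main() {
    let variants = [
        Variant { opens: 30, sends: 200 },
        Variant { opens: 55, sends: 210 },
    ];
    println!("send variant {}", pick_variant(&variants));
}
```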
We rebuilt the backtesting engine with point-in-time cursors and separate ingestion timestamps, collapsing the backtest-to-live delta from 37 percentage points to 1.4 points.
37→1.4pp Backtest-to-live delta (biased → clean)
Read case study →
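A minimal sketch of the two-timestamp rule; field names are illustrative:

```rust
// Two timestamps per row: when the event happened, and when the
// pipeline actually ingested it. A backtest cursor may only see
// rows whose ingestion time is at or before the simulated clock;
// filtering on event time alone leaks data that did not exist yet.
struct Row {
    event_ts: i64,    // when the bar/print occurred
    ingested_ts: i64, // when it landed in storage
    value: f64,
}

fn visible_as_of(rows: &[Row], sim_clock: i64) -> impl Iterator<Item = &Row> + '_ {
    rows.iter().filter(move |r| r.ingested_ts <= sim_clock)
}

fn main() {
    let rows = [
        Row { event_ts: 100, ingested_ts: 101, value: 1.0 },
        // Late arrival: occurred at t=100, ingested only at t=250.
        Row { event_ts: 100, ingested_ts: 250, value: 2.0 },
    ];
    // At simulated time 120 only the first row is visible, even
    // though both rows carry an event time before the clock.
    assert_eq!(visible_as_of(&rows, 120).count(), 1);
}
```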
We replaced fixed 5% position sizing with calibrated half-Kelly plus drawdown scaling, improving Sharpe from 0.79 to 1.34 and cutting maximum drawdown from 18.3% to 9.7%.
1.34 Sharpe ratio, 4-week (was 0.79)
Read case study →
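A sketch of the sizing rule; the linear drawdown scaler and the example numbers are assumptions, since the case study does not spell out the exact scaling curve:

```rust
/// Kelly fraction for a binary outcome: f* = p - (1 - p) / b,
/// where p is the calibrated win probability and b is the
/// average-win / average-loss ratio. We bet half of that
/// (half-Kelly) and scale down further as drawdown deepens.
fn position_fraction(p: f64, b: f64, drawdown: f64, max_dd: f64) -> f64 {
    let kelly = p - (1.0 - p) / b;
    let half_kelly = (kelly / 2.0).max(0.0); // never size a negative edge
    // Assumed linear de-risking: full half-Kelly at zero drawdown,
    // approaching zero as drawdown nears the configured maximum.
    let dd_scale = (1.0 - drawdown / max_dd).clamp(0.0, 1.0);
    half_kelly * dd_scale
}

fn main() {
    // Illustrative: 55% win rate, wins 1.8x the size of losses,
    // currently 5% down against a 15% drawdown budget.
    let f = position_fraction(0.55, 1.8, 0.05, 0.15);
    println!("risk {:.1}% of equity on this trade", f * 100.0);
}
```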
We built a serverless PDF generator using @sparticuz/chromium and Puppeteer that fits in Vercel's 50MB function limit and generates 4-page proposals in 5.8 seconds at $0.004 per document.
5.8 sec p50 generation time
Read case study →
We launched a multi-tenant market intelligence SaaS serving computed signals from 425M rows, with all API routes under 500ms cold and unit economics positive from customer one.
425M+ ClickHouse rows at launch
Read case study →
We debugged 65 compounding bugs across seven subsystems of a live trading engine, fixed a score overflow that silently blocked all dark_matter_rs signals, and cut Redis memory from 11.8GB to 7.15GB.
65 Bugs fixed in one session
Read case study →
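A sketch of the overflow bug class and the defensive fix; the actual dark_matter_rs types and thresholds may differ:

```rust
/// In release builds an overflowing integer add wraps (and panics
/// in debug builds); a wrapped-negative score can fail every
/// downstream threshold check with no error surfacing anywhere.
/// Saturating arithmetic pins the value at the type's max instead.
fn bump_score(score: i32, delta: i32) -> i32 {
    score.saturating_add(delta)
}

fn main() {
    let near_max = i32::MAX - 10;
    // A wrapping add would produce a large negative number here;
    // saturating_add pins at i32::MAX and the signal still clears
    // its threshold.
    assert_eq!(bump_score(near_max, 100), i32::MAX);
}
```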
We built a retail investor dashboard serving live fund performance from a paper trading account, with compliance banners enforced as server-side dependencies and a JavaScript bundle under 120KB.
7 Pages built and deployed
Read case study →
We built a 90-inbox Google Workspace cold email system using Maildoso + Smartlead warmup, capable of 3,600 sends per day at 92 to 95% inbox placement for $369/month.
90 GWS inboxes
Read case study →
We built a campaign orchestration system managing 4-touch email, 3-touch LinkedIn, 2-touch Twitter, and social listening in a single PostgreSQL state machine, targeting 20-minute daily operator review.
1,000+ Daily touch capacity
Read case study →
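A minimal sketch of the transition function; the channel ordering and state names are assumptions, with PostgreSQL holding the single authoritative state per prospect:

```rust
/// Each prospect row carries one state; advancing is a pure
/// function over (channel, touch count), so the PostgreSQL row
/// update is the only side effect.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Touch {
    Email(u8),    // 4-touch sequence
    LinkedIn(u8), // 3-touch sequence
    Twitter(u8),  // 2-touch sequence
    Done,
}

fn advance(t: Touch) -> Touch {
    match t {
        Touch::Email(n) if n < 4 => Touch::Email(n + 1),
        Touch::Email(_) => Touch::LinkedIn(1),
        Touch::LinkedIn(n) if n < 3 => Touch::LinkedIn(n + 1),
        Touch::LinkedIn(_) => Touch::Twitter(1),
        Touch::Twitter(n) if n < 2 => Touch::Twitter(n + 1),
        Touch::Twitter(_) | Touch::Done => Touch::Done,
    }
}

fn main() {
    let mut s = Touch::Email(1);
    while s != Touch::Done {
        s = advance(s);
    }
    println!("sequence complete: {s:?}");
}
```

Keeping transitions pure makes the daily operator review cheap: every prospect's next action is derivable from one row.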
We built a zero-dependency booking engine that prevents race-condition double-bookings by re-validating slot availability at click time against live Google Calendar and ClickHouse state.
<800ms Slot generation time (cold)
Read case study →
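A sketch of the click-time re-check; both lookups are stubbed, and in practice the final calendar insert acts as the arbiter for any remaining race window:

```rust
#[derive(Debug)]
enum BookingError {
    SlotTaken,
}

struct Slot {
    start_ms: i64,
}

// Stand-ins for live Google Calendar and ClickHouse lookups.
fn calendar_still_free(_slot: &Slot) -> bool { true }
fn no_pending_booking(_slot: &Slot) -> bool { true }

/// Availability is validated twice: once when slots render, and
/// again here at click time. The second check is what closes the
/// race where two visitors see the same open slot.
fn confirm_booking(slot: &Slot) -> Result<(), BookingError> {
    if !calendar_still_free(slot) || !no_pending_booking(slot) {
        return Err(BookingError::SlotTaken);
    }
    // ...create the calendar event and persist the booking here...
    Ok(())
}

fn main() {
    let slot = Slot { start_ms: 1_700_000_000_000 };
    match confirm_booking(&slot) {
        Ok(()) => println!("booked"),
        Err(BookingError::SlotTaken) => println!("slot was just taken"),
    }
}
```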
We built a single-user operator analytics dashboard on ClickHouse that assembles 9 parallel queries in under 450ms, with SVG-native charts and no third-party analytics dependency.
280-450ms API response time (9 queries)
Read case study →
We built a 3-step onboarding wizard that shows live regime signals on users' selected assets during signup, eliminating the blank-dashboard activation gap and placing the upgrade prompt at peak intent.
<1.2 sec Intelligence fetch time (16 symbols)
Read case study →
We built a crypto screener that computes 52-week highs, 30-day returns, and volume rankings for 19,172 symbols from 87M rows in a single ClickHouse GROUP BY scan returning in under 420ms.
19,172 Symbols in screener
Read case study →
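A sketch of the single-scan shape using ClickHouse conditional aggregates; table and column names are assumptions:

```rust
/// One scan, one GROUP BY: conditional aggregates compute the
/// 52-week high, latest close, 30-day-ago close, and 30-day volume
/// per symbol in a single pass instead of per-metric subqueries.
const SCREENER_SQL: &str = r#"
SELECT
    symbol,
    max(close)                                         AS high_52w,
    argMax(close, ts)                                  AS last_close,
    argMinIf(close, ts, ts >= now() - INTERVAL 30 DAY) AS close_30d_ago,
    sumIf(volume, ts >= now() - INTERVAL 30 DAY)       AS volume_30d
FROM bars_1d
WHERE ts >= now() - INTERVAL 52 WEEK
GROUP BY symbol
ORDER BY volume_30d DESC
"#;

fn main() {
    // Handed to any ClickHouse client; the 30-day return is then
    // last_close / close_30d_ago - 1 per row.
    println!("{SCREENER_SQL}");
}
```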
We built an intelligence API that serves regime, novelty, and correlation data for any of 19,172 symbols in under 120ms using pipeline-batched Redis fetches and a two-round-trip pattern.
<120ms Cache hit latency
Read case study →
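A sketch of the pipeline-batched fetch with the `redis` crate; key names are assumptions. One reading of the two-round-trip pattern: the first trip resolves which keys exist for the symbol, the second pipelines all data fetches:

```rust
/// Pipeline-batched fetch: instead of one network round trip per
/// key, queue all GETs and flush them in a single round trip.
fn fetch_symbol_intel(
    con: &mut redis::Connection,
    symbol: &str,
) -> redis::RedisResult<(Option<String>, Option<String>, Option<String>)> {
    redis::pipe()
        .cmd("GET").arg(format!("regime:{symbol}"))
        .cmd("GET").arg(format!("novelty:{symbol}"))
        .cmd("GET").arg(format!("corr:{symbol}"))
        .query(con)
}
```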
Argus generates trading signals continuously across 6 source categories, including mean_reversion, novelty_anomaly, regime_caution, and trend_following.
100% Signal sources emitting (6 of 6)
Read case study →
Argus needed a unified market data foundation across the major crypto exchanges.
~186/min Sustained throughput (per exchange)
Read case study →
By late April 2026, the Argus data layer held over 800 million rows across bars_1m, bars_1d, and downstream tables.
<6 min Detection time (silent-dead)
Read case study →
Argus generated regime signals, novelty anomalies, and trend-following calls from a 1,400-feature engine built on AVX2 SIMD.
3.2x Macro endpoint speedup (1,476ms → 465ms warm)
Read case study →
Argus generated signals continuously across 6 source types.
927ms Signal emit to execution (end-to-end)
Read case study →