Redis 7.2 handles regime caching (209K keys, 24h TTL), signal pub/sub, job queues, and API rate limiting. Sub-millisecond p99 reads.
HOW WE USE IT
Redis 7.2 plays four distinct roles in the Avo stack: regime state cache, signal pub/sub transport, job queue for the outbound email system, and per-tier API rate limiting.
The regime cache holds 209,033 keys in the argus:regime:* namespace. Each key stores a hash of regime scores across 5 regime types and 8 timeframes for a given symbol. Keys are written by the argus-regime Rust binary after every computation cycle. A significant bug was discovered in April 2026: the EXPIRE call was missing after the HSET write, meaning keys accumulated indefinitely. If the regime pipeline stopped, stale data would silently persist and be served to Apex as current. The fix set a 24-hour TTL on all 209K keys via a Redis SCAN loop and added the EXPIRE call to the Rust writer. The 24h window was chosen because regime state changes on a daily cycle; any key older than 24h is stale by definition.
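A minimal sketch of that backfill in TypeScript with ioredis (the production fix ran as a SCAN loop plus an EXPIRE added to the Rust writer; the client setup, batch size, and NX flag here are assumptions):

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a local default connection
const TTL_SECONDS = 24 * 60 * 60; // 24h: regime state turns over daily

async function backfillRegimeTtls(): Promise<number> {
  let cursor = "0";
  let touched = 0;
  do {
    // SCAN is cursor-based and non-blocking, so it is safe on a live instance
    const [next, keys] = await redis.scan(
      cursor, "MATCH", "argus:regime:*", "COUNT", 1000
    );
    cursor = next;
    if (keys.length > 0) {
      const pipe = redis.pipeline();
      // NX (Redis >= 7.0) sets a TTL only on keys that do not already have one
      for (const key of keys) pipe.expire(key, TTL_SECONDS, "NX");
      await pipe.exec();
      touched += keys.length;
    }
  } while (cursor !== "0");
  return touched;
}
```

The same 86,400-second EXPIRE then sits next to the HSET in the writer, so new keys never start life without a TTL.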
Signal pub/sub uses Redis Streams with consumer groups. The Argus signal pipeline publishes computed signals to a Redis stream; Apex reads from the stream with at-least-once delivery guarantees. If Apex crashes mid-read, it resumes from its last acknowledged message ID. The stream is trimmed with MAXLEN, a count-based cap sized to roughly 7 days of history.
Example workflow: setting up a Redis Stream for signal delivery between two services (a consumer sketch follows these steps).
1. The producer (argus-signals Rust binary) calls XADD signals:live * signal_id {uuid} symbol AAPL direction long confidence 0.82 after each computation cycle.
2. The consumer (apex-executor Rust binary) calls XREADGROUP GROUP apex-group apex-consumer-1 COUNT 10 BLOCK 1000 STREAMS signals:live > on startup.
3. On successful execution, the consumer calls XACK signals:live apex-group {message_id} to confirm delivery.
4. If the consumer crashes, unacknowledged messages stay in the Pending Entries List. On restart, the consumer calls XAUTOCLAIM to reclaim messages idle for more than 30 seconds.
5. Add an XTRIM signals:live MAXLEN 50000 call in the producer after every XADD to cap stream size at approximately 7 days of history.
6. Monitor stream lag with XLEN signals:live and XPENDING signals:live apex-group; alert if the pending count exceeds 1,000 (the consumer is falling behind).
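The production consumer is a Rust binary; below is a hedged TypeScript/ioredis sketch of the same read, ack, and reclaim loop. The group bootstrap, error handling, and the handleSignal stub are illustrative assumptions, not the apex-executor implementation:

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a local default connection
const STREAM = "signals:live";
const GROUP = "apex-group";
const CONSUMER = "apex-consumer-1";

// Placeholder for the real execution logic.
async function handleSignal(fields: string[]): Promise<void> {
  // fields arrives as a flat [key1, value1, key2, value2, ...] array
}

async function ensureGroup(): Promise<void> {
  try {
    // "$" starts the group at the stream tail; MKSTREAM creates the stream if absent
    await redis.xgroup("CREATE", STREAM, GROUP, "$", "MKSTREAM");
  } catch (err) {
    // BUSYGROUP just means the group already exists
    if (!String(err).includes("BUSYGROUP")) throw err;
  }
}

async function consume(): Promise<void> {
  await ensureGroup();
  // Step 4: reclaim entries left pending >30s by a crashed consumer
  // (a fuller version would also process the entries XAUTOCLAIM returns)
  await redis.xautoclaim(STREAM, GROUP, CONSUMER, 30_000, "0");
  for (;;) {
    // Step 2: read up to 10 new entries, blocking up to 1s (null on timeout)
    const res = (await redis.xreadgroup(
      "GROUP", GROUP, CONSUMER,
      "COUNT", 10, "BLOCK", 1000,
      "STREAMS", STREAM, ">"
    )) as [string, [string, string[]][]][] | null;
    if (!res) continue;
    for (const [, entries] of res) {
      for (const [id, fields] of entries) {
        await handleSignal(fields);
        await redis.xack(STREAM, GROUP, id); // step 3: ack only after success
      }
    }
  }
}

consume().catch(console.error);
```

Acknowledging only after successful execution is what makes the delivery at-least-once: a crash between read and ack leaves the entry pending, to be reclaimed on restart.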
Job queues for the outbound email system (alien/cadence) use Redis Lists. Each outbound touch (email, DM, retarget post) is pushed onto a queue; a Rust worker pops and executes with configurable rate limits per inbox.
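A hedged sketch of that queue shape in TypeScript/ioredis. The production workers are Rust; the queue name, job fields, and the crude per-worker pacing below are illustrative assumptions (real per-inbox limits would need a counter keyed by inbox):

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a local default connection

interface Touch {
  kind: "email" | "dm" | "retarget";
  inbox: string;
  payload: Record<string, string>;
}

// Placeholder for the real delivery logic.
async function send(touch: Touch): Promise<void> {}

// Producer: LPUSH enqueues at the head of the list...
async function enqueue(touch: Touch): Promise<void> {
  await redis.lpush("cadence:outbound", JSON.stringify(touch));
}

// ...and BRPOP dequeues from the tail, giving FIFO order.
// A timeout of 0 blocks until a job arrives.
async function work(sendsPerMinute: number): Promise<void> {
  const gapMs = 60_000 / sendsPerMinute;
  for (;;) {
    const popped = await redis.brpop("cadence:outbound", 0);
    if (!popped) continue;
    await send(JSON.parse(popped[1]) as Touch);
    await new Promise((r) => setTimeout(r, gapMs)); // crude pacing between sends
  }
}
```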
API rate limiting uses Redis INCR with EXPIRE in a fixed-window counter pattern: each request increments a per-minute counter key that expires with the window. Each Clerk user tier (Free, Pro, Institutional) gets a different per-minute limit. The implementation lives in the Next.js API layer, using ioredis from Node.js.
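A minimal sketch of that per-tier counter, assuming ioredis inside a Next.js route handler as the section describes; the tier names match the text, but the numeric limits and key format are illustrative:

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a local default connection

const PER_MINUTE: Record<string, number> = {
  free: 60,
  pro: 600,
  institutional: 6000, // illustrative limits, not production values
};

async function allowRequest(userId: string, tier: string): Promise<boolean> {
  // One counter key per user per minute window
  const window = Math.floor(Date.now() / 60_000);
  const key = `ratelimit:${tier}:${userId}:${window}`;
  const count = await redis.incr(key);
  // First hit in the window: make the counter expire with the window
  if (count === 1) await redis.expire(key, 60);
  return count <= (PER_MINUTE[tier] ?? PER_MINUTE.free);
}
```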
The caching layer for the Avo marketing site API routes was added in April 2026 after a performance audit: the /api/macro route dropped from 1,476ms to 465ms with a Redis cache; the /api/intelligence/regime route dropped from 1,048ms to 79-90ms with a collection cache.
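The pattern behind those numbers is a standard cache-aside read. A minimal sketch, assuming JSON-serializable responses; the key name and TTL are illustrative, not production values:

```ts
import Redis from "ioredis";

const redis = new Redis(); // assumes a local default connection

async function cached<T>(
  key: string,
  ttlSeconds: number,
  load: () => Promise<T>
): Promise<T> {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // warm path: skip upstream work
  const fresh = await load();                    // cold path: compute once...
  await redis.set(key, JSON.stringify(fresh), "EX", ttlSeconds); // ...then populate
  return fresh;
}

// e.g. const macro = await cached("cache:api:macro", 300, fetchMacroData);
```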
Production numbers
209,033 Regime keys
24h Regime key TTL
465ms /api/macro (cold cache)
79ms Regime API (cold cache)
We built a 723M-row market data pipeline ingesting 10 exchanges simultaneously at under 50ms tick-to-storage latency.
723M+ Total rows stored
Read case study →
Data
We replaced a Python fan-in that dropped ticks under load with a Rust multi-task aggregator handling 80,000 ticks per second across 10 exchanges at 3.1% CPU.
80K tick/s Peak throughput
Read case study →
Data
We migrated 425M rows across 43 tables from a CPU-saturating QuestDB deployment to ClickHouse in 6.5 days with zero data loss.
425M+ Rows migrated
Read case study →
Data
We built a revision-aware FRED pipeline tracking 63 macro series with 90-day lookback windows, growing coverage from 32 to 63 series in one sprint.
63 FRED series tracked (from 32)
Read case study →
Infrastructure
We discovered 209,033 regime keys with no TTL and fixed them in a single SCAN pass, then cut the regime endpoint latency 13x by eliminating per-request key scans.
209,033 Keys without TTL (found)
Read case study →
Infrastructure
We built a 63-line Node.js proxy that gives Vercel serverless functions read-only access to a private ClickHouse instance with zero database exposure.
12ms Proxy overhead (end-to-end)
Read case study →
Start a project
Most projects ship in under two weeks. Start with a free 30-minute discovery call.
Start a project →