75+ Rust crates handle ingest, signal computation, and exchange execution with sub-100ms tick-to-signal latency.
HOW WE USE IT
Rust is the primary language for every latency-sensitive component in the Avo stack. The workspace currently contains 75+ Rust crates spanning market data ingest, SIMD feature computation, regime detection, backtesting, position sizing, and exchange execution against IBKR and OKX.
The decision to use Rust over Python for these layers came down to two factors: predictable tail latency and fearless concurrency. Python's GIL and garbage collector introduce p99 pauses that compound across a pipeline of async stages. In a tick-to-signal chain where each stage adds latency, those pauses accumulate into seconds-long gaps during market open. Rust's ownership model eliminated an entire class of race conditions without runtime cost.
The async runtime is Tokio 1.40 throughout. Ten exchange ingest daemons run as independent Tokio tasks, each managing WebSocket reconnect loops, deserialization, and ClickHouse inserts without sharing state. Median reconnect time after a dropped feed is 3.2 seconds. The tungstenite 0.21 library handles the wire layer; we chose it over async-tungstenite because the simpler API made error path testing faster.
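A minimal sketch of that task layout, with placeholder exchange names and a stubbed task body rather than the real daemon code:

```rust
// Illustrative only: one independent Tokio task per exchange feed, no shared state.
use std::time::Duration;
use tokio::task::JoinSet;

async fn run_feed(exchange: &'static str) {
    // Each daemon owns its own reconnect loop, parser, and ClickHouse writer,
    // so a stalled or flapping feed never blocks the other nine.
    eprintln!("starting feed daemon for {exchange}");
    loop {
        // connect -> read -> deserialize -> insert, then back off and reconnect;
        // the inner loop is sketched in the ingest workflow below.
        tokio::time::sleep(Duration::from_secs(1)).await; // placeholder body
    }
}

#[tokio::main]
async fn main() {
    let mut daemons = JoinSet::new();
    for exchange in ["okx", "mexc", "kraken"] {
        // illustrative subset of the ten feeds
        daemons.spawn(run_feed(exchange));
    }
    // Daemons run for the life of the process; join_next only returns on panic.
    while daemons.join_next().await.is_some() {}
}
```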
The SIMD feature engine (argus-features) uses AVX2 intrinsics via std::arch to compute 1,400 technical features over a rolling window. On Hetzner AX102 hardware this runs in under 4ms per symbol per minute bar, enabling real-time feature generation across 2,378 active symbols every minute.
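A toy example of the std::arch pattern involved, not an actual argus-features kernel: runtime feature detection guarding an AVX2 reduction over one window, with a scalar fallback.

```rust
// Sum one rolling window with AVX2 when available; fall back to scalar otherwise.
#[cfg(target_arch = "x86_64")]
fn window_sum(window: &[f32]) -> f32 {
    if is_x86_feature_detected!("avx2") {
        // SAFETY: guarded by the runtime feature check above.
        unsafe { sum_avx2(window) }
    } else {
        window.iter().sum()
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(data: &[f32]) -> f32 {
    use std::arch::x86_64::*;
    let mut acc = _mm256_setzero_ps();
    let chunks = data.chunks_exact(8);
    let tail = chunks.remainder();
    for chunk in chunks {
        // Unaligned load of 8 f32 lanes, then vertical add into the accumulator.
        let v = _mm256_loadu_ps(chunk.as_ptr());
        acc = _mm256_add_ps(acc, v);
    }
    // Horizontal reduction of the 8 lanes, then fold in the scalar tail.
    let mut lanes = [0.0f32; 8];
    _mm256_storeu_ps(lanes.as_mut_ptr(), acc);
    lanes.iter().sum::<f32>() + tail.iter().sum::<f32>()
}
```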
Example workflow: adding a new exchange feed to the ingest pipeline (a condensed sketch of steps 2 through 5 follows the list).
1. Create a new crate (argus-ingest-mexc) under the workspace.
2. Implement the WebSocket connect loop using tungstenite with exponential backoff (100ms, 200ms, 400ms, cap at 30s).
3. Deserialize the exchange-specific tick format into the shared Tick struct using serde_json.
4. Push deserialized ticks into a bounded tokio::sync::mpsc channel (capacity 10,000) feeding the ClickHouse writer task.
5. The ClickHouse writer accumulates rows into batches of 1,000 and inserts via the HTTP bulk insert endpoint.
6. Add the systemd unit file, wire TOKIO_WORKER_THREADS and the ClickHouse connection string from the environment, and enable the service.
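The condensed sketch of steps 2 through 5 referenced above; the Tick shape, the URL, and the insert_batch call are illustrative stand-ins rather than the real argus-ingest-mexc code, and serde's derive feature plus a tungstenite TLS feature are assumed.

```rust
use serde::Deserialize;
use std::time::Duration;
use tokio::sync::mpsc;

// Step 3: shared tick shape (fields illustrative).
#[derive(Debug, Deserialize)]
struct Tick {
    symbol: String,
    price: f64,
    qty: f64,
    ts: i64,
}

fn spawn_feed_reader(url: &'static str, tx: mpsc::Sender<Tick>) {
    // Blocking tungstenite runs on Tokio's blocking pool; the bounded channel
    // bridges into the async ClickHouse writer. Dropping the handle detaches
    // the task, which runs for the life of the process.
    let _handle = tokio::task::spawn_blocking(move || {
        let mut backoff = Duration::from_millis(100);
        loop {
            match tungstenite::connect(url) {
                Ok((mut socket, _response)) => {
                    backoff = Duration::from_millis(100); // reset after a good connect
                    while let Ok(msg) = socket.read() {
                        if let tungstenite::Message::Text(txt) = msg {
                            if let Ok(tick) = serde_json::from_str::<Tick>(&txt) {
                                // Step 4: bounded channel (capacity 10,000) to the writer.
                                if tx.blocking_send(tick).is_err() {
                                    return; // writer gone, stop this feed
                                }
                            }
                        }
                    }
                }
                Err(e) => eprintln!("connect failed: {e}"),
            }
            // Step 2: exponential backoff (100ms, 200ms, 400ms, ...), capped at 30s.
            std::thread::sleep(backoff);
            backoff = (backoff * 2).min(Duration::from_secs(30));
        }
    });
}

async fn run_writer(mut rx: mpsc::Receiver<Tick>) {
    // Step 5: accumulate 1,000 rows, then bulk-insert over HTTP.
    let mut batch: Vec<Tick> = Vec::with_capacity(1_000);
    while let Some(tick) = rx.recv().await {
        batch.push(tick);
        if batch.len() >= 1_000 {
            // insert_batch(&batch).await;  // hypothetical ClickHouse HTTP bulk insert
            batch.clear();
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(10_000);
    spawn_feed_reader("wss://example-exchange.test/ws", tx); // placeholder URL
    run_writer(rx).await;
}
```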
Initial pain points were all in tokio runtime tuning. The default thread pool size was undersized for 10 concurrent feed daemons plus background computation. Setting TOKIO_WORKER_THREADS to match physical core count and pinning IO-heavy tasks to a separate runtime resolved most tail latency spikes. A second class of bugs came from mishandling ClickHouse batch insert backpressure: when the database was slow, the insert queue grew unbounded. Adding a bounded channel with explicit drop-on-full policy brought memory usage from unbounded growth to a stable 200MB ceiling.
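A minimal sketch of both fixes, with illustrative names and defaults: a dedicated multi-thread runtime sized from TOKIO_WORKER_THREADS, and a try_send-based drop-on-full enqueue.

```rust
use tokio::sync::mpsc::{self, error::TrySendError};

fn build_io_runtime() -> tokio::runtime::Runtime {
    // IO-heavy tasks get their own runtime so feature computation on the main
    // runtime cannot starve the ingest daemons (and vice versa). The worker
    // count is wired from TOKIO_WORKER_THREADS; the fallback is illustrative.
    let workers = std::env::var("TOKIO_WORKER_THREADS")
        .ok()
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or(8);
    tokio::runtime::Builder::new_multi_thread()
        .worker_threads(workers)
        .thread_name("ingest-io")
        .enable_all()
        .build()
        .expect("failed to build IO runtime")
}

fn enqueue_row<T>(tx: &mpsc::Sender<T>, row: T) {
    // Explicit drop-on-full: if the ClickHouse writer is behind, shed the row
    // instead of letting the queue grow without bound.
    match tx.try_send(row) {
        Ok(()) => {}
        Err(TrySendError::Full(_dropped)) => { /* count the drop in metrics */ }
        Err(TrySendError::Closed(_)) => { /* writer task has shut down */ }
    }
}
```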
The full workspace builds and passes Clippy with no warnings. Cargo workspace-level builds take 4 minutes on the build server; incremental rebuilds of a single crate average 18 seconds.
Production numbers
75+
Rust crates
<100ms
Tick-to-signal latency
1,400
SIMD features / symbol
3.2 sec
Median reconnect time
We built a 723M-row market data pipeline ingesting 10 exchanges simultaneously at under 50ms tick-to-storage latency.
723M+ Total rows stored
Read case study →
We migrated 425M rows to ClickHouse and achieved 8x storage compression and 15x faster analytical scans versus our prior QuestDB setup.
723M+ Rows stored
Read case study →
We replaced a Python fan-in that dropped ticks under load with a Rust multi-task aggregator handling 80,000 ticks per second across 10 exchanges at 3.1% CPU.
80K tick/s Peak throughput
Read case study →
We migrated 425M rows across 43 tables from a CPU-saturating QuestDB deployment to ClickHouse in 6.5 days with zero data loss.
425M+ Rows migrated
Read case study →
We built a zero-cost downloader collecting 11,706 equity symbols across 19+ global exchanges, replacing $8,000 to $22,000 per month in vendor licensing.
11,706 Total symbols collected
Read case study →
We built a revision-aware FRED pipeline tracking 63 macro series with 90-day lookback windows, growing coverage from 32 to 63 series in one sprint.
63 FRED series tracked (from 32)
Read case study →
Start a project
Most projects ship in under two weeks. Start with a free 30-minute discovery call.
Start a project →