As I'm building Gextron, one of the most expensive operations is generating a full option chain.
I actually wrote about the bottleneck here:
Moving this work to Node workers helped a lot, but there was still another hidden cost: how the data was stored.
So I completely changed how Redis is used inside Gextron.
🧩 Before: Redis as a Key-Value JSON Cache
I was just storing option-chain data as raw JSON strings. [IMG 1]
Redis stored this as plain UTF-8 text, which came with heavy costs (see the sketch after this list):
- 🔸 Lots of heap allocation
- 🔸 CPU-heavy serialization (JSON.stringify / JSON.parse)
- 🔸 Huge Redis memory usage (3–5 MB per chain)
- 🔸 Slow iteration and high GC pressure
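For context, the old write/read path looked roughly like this. A minimal sketch, not the actual Gextron code: the ioredis client, the `chain:` key prefix, and the `OptionRow` shape are illustrative assumptions.

```ts
import Redis from "ioredis";

const redis = new Redis();

// Hypothetical row shape, for illustration only.
interface OptionRow {
  strike: number;
  bid: number;
  ask: number;
  openInterest: number;
}

// Write path: the whole chain becomes one big UTF-8 JSON string.
async function cacheChainAsJson(symbol: string, chain: OptionRow[]) {
  await redis.set(`chain:${symbol}`, JSON.stringify(chain), "EX", 60);
}

// Read path: every cache hit pays for JSON.parse and allocates one
// object per row (the heap and GC costs listed above).
async function readChainAsJson(symbol: string): Promise<OptionRow[] | null> {
  const raw = await redis.get(`chain:${symbol}`);
  return raw ? (JSON.parse(raw) as OptionRow[]) : null;
}
```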
⚡ Now: Redis as a Binary Data Store (Columnar Cache)
Now the data is serialized into typed arrays (Float32Array, Uint32Array) and packed into one binary block, then Brotli-compressed. [IMG 3]
Redis now stores a compact numeric binary snapshot, not text.
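Here is a minimal sketch of that write path, assuming the same four fields as above; the 4-byte row-count header and the `ChainColumns` name are my own framing, not Gextron's actual wire format.

```ts
import { brotliCompressSync } from "node:zlib";
import Redis from "ioredis";

const redis = new Redis();

// One typed array per field, instead of one object per row.
interface ChainColumns {
  strikes: Float32Array;
  bids: Float32Array;
  asks: Float32Array;
  openInterest: Uint32Array;
}

// Reinterpret a typed array's bytes as a Buffer, without copying.
function bytes(arr: Float32Array | Uint32Array): Buffer {
  return Buffer.from(arr.buffer, arr.byteOffset, arr.byteLength);
}

// Pack all columns into one binary block: a 4-byte row count,
// then each column's raw bytes back to back, then Brotli.
function packChain(cols: ChainColumns): Buffer {
  const header = Buffer.alloc(4);
  header.writeUInt32LE(cols.strikes.length, 0);
  const block = Buffer.concat([
    header,
    bytes(cols.strikes),
    bytes(cols.bids),
    bytes(cols.asks),
    bytes(cols.openInterest),
  ]);
  return brotliCompressSync(block);
}

async function cacheChainBinary(symbol: string, cols: ChainColumns) {
  // ioredis accepts Buffer values, so the snapshot stays binary end to end.
  await redis.set(`chain:${symbol}`, packChain(cols), "EX", 60);
}
```

Columns of the same type sit next to each other in the block, which gives the compressor long runs of similar bytes to work with.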
This change alone brought massive wins:
- ✔️ Pure numbers, no strings
- 💾 ~10× smaller payloads (≈300 KB vs 3 MB)
- 🚀 Fast transfers
- ⚡ Decompression in 2–4 ms vs 20–40 ms for JSON.parse() (read path sketched below)
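Continuing the sketch above, the read path is one decompress plus a few typed-array copies: no JSON.parse, no per-row object allocation. Note that `getBuffer` is the ioredis variant of `get` that returns raw bytes instead of a string.

```ts
import { brotliDecompressSync } from "node:zlib";

// Reverse of packChain: decompress once, then rebuild each column.
// Copying each column's bytes via ArrayBuffer.slice keeps the typed-array
// constructors alignment-safe wherever the Buffer sits in memory.
function unpackChain(compressed: Buffer): ChainColumns {
  const block = brotliDecompressSync(compressed);
  const n = block.readUInt32LE(0);
  let offset = 4;
  const column = (byteLength: number) => {
    const start = block.byteOffset + offset;
    offset += byteLength;
    return block.buffer.slice(start, start + byteLength);
  };
  return {
    strikes: new Float32Array(column(n * 4)),
    bids: new Float32Array(column(n * 4)),
    asks: new Float32Array(column(n * 4)),
    openInterest: new Uint32Array(column(n * 4)),
  };
}

async function readChainBinary(symbol: string): Promise<ChainColumns | null> {
  const compressed = await redis.getBuffer(`chain:${symbol}`);
  return compressed ? unpackChain(compressed) : null;
}
```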
🧠 Let's break this down with a simple analogy:
📚 Bookshelf
- JSON: Thousands of loose pages (repeated titles, redundant text)
- Columnar: One clean binder with tabbed sections
🧮 CPU Access
- JSON: Skims every page to find a number
- Columnar: Reads one tight stack of numbers
📦 Compression
- JSON: Near-random text (compresses poorly)
- Columnar: Predictable patterns (tiny)
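This part is easy to check yourself: Brotli-compress the same synthetic rows both ways and compare byte counts. Toy data, so the exact ratio will differ from the real chain numbers above.

```ts
import { brotliCompressSync } from "node:zlib";

// 10k synthetic rows with the kind of regular structure option chains have.
const rows = Array.from({ length: 10_000 }, (_, i) => ({
  strike: 100 + i * 0.5,
  bid: 1.23,
  ask: 1.25,
}));

// Text path: stringify everything, then compress the UTF-8 bytes.
const jsonBytes = brotliCompressSync(Buffer.from(JSON.stringify(rows)));

// Columnar path: three tight Float32 columns, then compress.
const columns = (["strike", "bid", "ask"] as const).map(
  (k) => new Float32Array(rows.map((r) => r[k]))
);
const binaryBytes = brotliCompressSync(
  Buffer.concat(columns.map((c) => Buffer.from(c.buffer)))
);

console.log({ json: jsonBytes.length, binary: binaryBytes.length });
```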
🧰 The Takeaway
By switching from JSON blobs to binary columnar storage, I turned Redis from a text cache into an in-memory analytics engine. These are the same principles behind systems like Parquet and ClickHouse.