Storing Data Efficiently with JS
As I'm building Gextron, one of the most expensive operations is generating a full option chain.
I actually wrote about the bottleneck here:
Moving this work to Node workers helped a lot, but there was still another hidden cost: how the data was stored.
So I completely changed the model of how Redis is used inside Gextron.
🧩 Before: Redis as a Key-Value JSON Cache
I was just storing option-chain data as raw JSON strings. [IMG 1]
Redis stored this as plain UTF-8 text, which came with heavy costs:
  • 🔸 Lots of heap allocation
  • 🔸 CPU-heavy serialization (JSON.stringify / JSON.parse)
  • 🔸 Huge Redis memory usage (3–5 MB per chain)
  • 🔸 Slow iteration and high GC pressure
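A rough sketch of that before model (the field names and row count here are illustrative, not Gextron's actual schema): every cache write paid for a full JSON.stringify, and every read for a JSON.parse, with the key names repeated once per row.

```javascript
// Illustrative option-chain rows; real chains carry many more fields.
const chain = Array.from({ length: 5000 }, (_, i) => ({
  strike: 100 + i * 0.5,
  bid: 1.23,
  ask: 1.25,
  volume: 42,
  openInterest: 1000,
}));

// What went into Redis: one big UTF-8 string, rebuilt on every write.
const json = JSON.stringify(chain);

// What came back out: a full parse, allocating 5000 fresh objects.
const roundTrip = JSON.parse(json);

// Key names like "openInterest" repeat once per row, inflating the payload.
console.log(`rows: ${roundTrip.length}, bytes: ${Buffer.byteLength(json)}`);
```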
⚡ Now: Redis as a Binary Data Store (Columnar Cache)
Now the data is serialized into typed arrays (Float32Array, Uint32Array) and packed into one binary block, then Brotli-compressed. [IMG 3]
Redis now stores a compact numeric binary snapshot, not text.
This change alone brought massive wins:
  • โš™๏ธ Pure numbers, no strings
  • ๐Ÿ’พ ~10ร— smaller payloads (โ‰ˆ 300 KB vs 3 MB)
  • ๐Ÿš€ Fast transfers
  • โšก Decompression 2โ€“4 ms vs 20โ€“40 ms for JSON.parse()
🧠 Let's break it down with a simple analogy:
📚 Bookshelf
  • JSON: Thousands of loose pages (repeated titles, redundant text)
  • Columnar: One clean binder with tabbed sections
🧮 CPU Access
  • JSON: Skims every page to find a number
  • Columnar: Reads one tight stack of numbers
📦 Compression
  • JSON: Random text
  • Columnar: Predictable patterns (tiny)
🧰 The Takeaway
By switching from JSON blobs to binary columnar storage, Redis went from being a text cache to an in-memory analytics engine. These are the same principles behind systems like Parquet and ClickHouse.
Ruben Leija