Shrink Data, Slash Costs: Using Data Compression in Redis
By integrating data compression into Redis, we reduced memory usage by 70%, cut network traffic, and saved significant infrastructure costs — all without altering the data structure or impacting performance.

In one of our projects, we encountered a pressing issue: our Redis instance in production was running out of memory. Redis, which we use as a fast key-value store, handles both data (e.g. products, categories, prices) and user sessions. However, the user sessions occupied the majority of the available memory, around 80-90%. This is because session objects in Redis contain the user's cart (with all product details) and their wishlist. In extreme cases, these objects can grow to up to 4MB per user and have a lifetime of up to 30 days.
To address this issue quickly, we first upgraded our AWS ElastiCache instance, effectively doubling our memory capacity (from cache.r5.xlarge with 26GB to cache.r5.2xlarge with 52GB). However, this also doubled the client's costs.
While solving the immediate problem, we wanted a more sustainable, cost-effective solution. Initially, we considered analyzing and restructuring the data stored in Redis to optimize memory usage. But then we had a better idea: compress the data before storing it in Redis and decompress it when retrieving it. This approach allowed us to drastically reduce memory usage without altering the data structure itself.
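Conceptually, the change is a thin wrapper around every read and write. Here is a minimal sketch of the idea using plain PHP zlib functions and a Predis client; the real integration (described below) hooks this into the Predis commands themselves, so application code stays untouched:

```php
<?php
// Conceptual sketch only: compress on write, decompress on read.
// Assumes a local Redis and the predis/predis package.
require 'vendor/autoload.php';

$redis = new Predis\Client(); // defaults to tcp://127.0.0.1:6379

$session = ['cart' => [['sku' => 'ABC-123', 'qty' => 2]], 'wishlist' => []];
$payload = serialize($session);

// Write: compress the serialized payload before storing it.
$redis->set('session:42', gzcompress($payload));

// Read: decompress after retrieval, then unserialize.
$restored = unserialize(gzuncompress($redis->get('session:42')));
```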
Redis compression for the win!
By implementing Redis compression, we achieved a significant reduction in memory usage. In our project, which is based on Spryker, we integrated compression for all data stored in Redis and reduced memory usage from 26GB to just 8GB — a reduction of nearly 70%! Depending on the size of the objects, compression rates varied between 50% and 90%.
This approach not only saved memory but also reduced network traffic, as the compressed data required less bandwidth when being transmitted to and from Redis. The results were so impactful that we were able to downgrade our Redis instance, saving substantial infrastructure costs.
Integration Guide
If you want to implement Redis compression in your project and you’re using Predis (which is the default for Spryker projects), here’s how you can do it.
Step 1: Use the Predis Compression Package
We used the b1rdex/predis-compressible Composer package, which extends the base Predis command classes with compression capabilities. For example, it provides extended commands like StringSet that handle compression seamlessly.
Step 2: Update Your Redis Configuration
To integrate compression, replace the default Predis commands with the extended ones. Here’s how we configured this in our Spryker project.
Add this to your configuration, e.g. in config/Shared/config_default.php.
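First, install the package via Composer (`composer require b1rdex/predis-compressible`). The exact wiring depends on your Spryker and Predis versions, so treat the following as a sketch: ConditionalCompressorWrapper and StringSet are named by the package, while the remaining class names and the way the compressor is attached to the client are assumptions to verify against the package README.

```php
<?php
// Sketch of the Predis wiring (Predis 1.x API). Command class names other
// than StringSet, and the GzipCompressor, are assumptions; check the
// b1rdex/predis-compressible README for your installed version.

use B1rdex\PredisCompressible\Command\StringGet;
use B1rdex\PredisCompressible\Command\StringSet;
use B1rdex\PredisCompressible\Compressor\GzipCompressor;
use B1rdex\PredisCompressible\ConditionalCompressorWrapper;

$client = new Predis\Client('tcp://127.0.0.1:6379');

// Replace the default commands with the compression-aware ones; repeat
// defineCommand() for SETEX, PSETEX and MSET with the package's
// corresponding classes.
$profile = $client->getProfile();
$profile->defineCommand('SET', StringSet::class);
$profile->defineCommand('GET', StringGet::class);

// Compress only values larger than 1024 bytes (see Step 3 below).
$compressor = new ConditionalCompressorWrapper(1024, new GzipCompressor());
// How the wrapper is handed to the extended commands is package-specific;
// follow the package README here.
```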
Step 3: Set a Compression Threshold
The ConditionalCompressorWrapper requires a threshold value (in bytes) to determine when to compress data. For our project, we used a threshold of 1024 bytes (1 kB). This ensures that smaller values, where compression overhead would outweigh the benefits, are not compressed.
You can experiment with different thresholds based on your data to find the best balance between memory savings and processing overhead.
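To get a feel for a good threshold, it can help to run a quick offline estimate against real payloads. A small, hypothetical helper (plain PHP, not part of the package) that counts how many of your sample values would be compressed and how many bytes you would save:

```php
<?php
// Hypothetical helper: estimate the effect of a threshold on real payloads.
function estimateSavings(array $payloads, int $threshold): array
{
    $compressedCount = 0;
    $bytesSaved = 0;
    foreach ($payloads as $payload) {
        if (strlen($payload) < $threshold) {
            continue; // below the threshold: would be stored uncompressed
        }
        $compressedCount++;
        $bytesSaved += strlen($payload) - strlen(gzcompress($payload));
    }

    return ['compressed' => $compressedCount, 'bytes_saved' => $bytesSaved];
}

// Example: compare two candidate thresholds on sampled session payloads.
$samples = [/* strings sampled from your Redis instance */];
print_r(estimateSavings($samples, 1024));
print_r(estimateSavings($samples, 4096));
```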
Note
Once compression is enabled, disabling it requires careful planning. First, stop compressing new data by removing the extended commands for the setters (SET, SETEX, PSETEX, MSET). Then either let existing compressed keys expire (e.g. session data) or recreate the data in an uncompressed format (in Spryker projects, for example, by publishing the data again).
Once all data is decompressed or recreated, you can remove the integration completely.
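In practice, the phase-out can mirror the integration: keep the compression-aware reader in place so existing compressed keys stay readable, and stop overriding the setters so new writes go in uncompressed. A sketch, continuing under the same assumptions as the Step 2 snippet:

```php
<?php
// Phase-out sketch: readers still decompress old values, writers no longer
// compress. $client and StringGet as in the Step 2 sketch. Assumes the
// package's GET passes uncompressed values through unchanged (it must,
// since conditional compression leaves small values uncompressed anyway).
$profile = $client->getProfile();
$profile->defineCommand('GET', StringGet::class); // keep the decompressing reader
// No overrides for SET/SETEX/PSETEX/MSET: Predis falls back to its built-in
// setters, so all new writes are stored uncompressed.
```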
Compression modules
There are several compression modules available for PHP. In our case, we used ZLIB, which supports multiple compression levels (0-9) that can be configured via the output-compression-level parameter.
We quickly compared the different levels, but didn’t see significant differences in the results between them. ZLIB provided excellent results without requiring further optimization.
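If you want to repeat that comparison on your own data, a few lines of plain PHP are enough; gzcompress() accepts the level as its second argument:

```php
<?php
// Compare zlib levels (0 = none, 9 = best compression) on a sample payload.
$payload = str_repeat(
    json_encode(['sku' => 'ABC-123', 'name' => 'Example product', 'qty' => 2]),
    500
);

foreach ([1, 6, 9] as $level) {
    $start = microtime(true);
    $compressed = gzcompress($payload, $level);
    printf(
        "level %d: %d -> %d bytes (%.1f%%), %.2f ms\n",
        $level,
        strlen($payload),
        strlen($compressed),
        100 * strlen($compressed) / strlen($payload),
        (microtime(true) - $start) * 1000
    );
}
```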
Impact on memory and traffic, not on performance
Before introducing this change, our main question was: How will this affect the performance of our application? Compression and decompression introduce some processing overhead when writing and reading data. Still, this is offset by the reduced data size being transmitted.
When introducing such a change, we always recommend running at least a basic performance/load test on your project before activating it on your production system. In our case, we ran a load test with compression active on a test environment closely matching our production setup. Here, we observed no significant change in performance, neither positive nor negative. Of course, our absolute best case would have included a slight performance gain; still, when introducing a change like this, maintaining current performance levels is a great success!
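For a first impression before any load test, a micro-benchmark of the raw compress/decompress round trip on a session-sized payload is quick to run (plain PHP; absolute numbers depend on your hardware and data):

```php
<?php
// Micro-benchmark: compress/decompress round trip on a session-sized payload.
$payload = str_repeat(json_encode(['sku' => 'ABC-123', 'qty' => 2]), 30000);

$start = microtime(true);
for ($i = 0; $i < 100; $i++) {
    gzuncompress(gzcompress($payload));
}
$perRoundTrip = (microtime(true) - $start) / 100 * 1000;

printf("payload: %d bytes, round trip: %.2f ms\n", strlen($payload), $perRoundTrip);
```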
Another very positive effect besides the huge reduction of memory usage was the reduced network traffic due to the smaller amounts of data that have to be transferred to/from Redis now. This can also save infrastructure costs if traffic is part of your billing.
Note
Always run at least a basic performance/load test on your project before activating it on your production system.
Saving infrastructure costs
The reduction in memory usage and network traffic has a direct impact on infrastructure costs. After implementing compression, we were able to downgrade our AWS ElastiCache instance not just to its original size but to an even smaller one. This saved our client nearly $600 per month in Redis costs.
For larger setups, the savings can be even more substantial. For example, we are currently rolling out Redis compression for a corporate client with infrastructure across 10 countries and multiple environments per country. The estimated cost savings for this project are around €100,000 per year.
Effort vs. impact
This improvement stands out as one of the most impactful optimizations we’ve implemented, especially when comparing the effort required to the results achieved. Our lean team of two developers tackled the entire process — from initial research through implementation and testing — in a remarkably efficient two-day sprint.
The implementation itself, once you know how, takes as little as 30 minutes. Whether you're dealing with growing data volumes or rising infrastructure costs, Redis compression is a simple yet powerful optimization that delivers incredible results. We hope this guide helps you implement Redis compression in your own projects. With minimal effort, you too can achieve massive savings and improved efficiency.
Let's compress
Get in touch to learn more about data compression in Redis.

- Bernd Alter
- Co-CTO
- bernd.alter@turbinekreuzberg.com