Why Redis or Memcached?
If the dataset stored in Redis is too big, the RDB file will take some time to be created, which has an impact on response time. On the other hand, an RDB snapshot is faster to load on boot-up compared to replaying the AOF log. The AOF log is the better choice if data loss is not acceptable at all, as it can be updated on every command (see the sketch below for how these options are tuned).
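As a minimal sketch of how these persistence options can be tuned at runtime, assuming the redis-py client and a server that permits CONFIG SET; the snapshot thresholds and fsync policy below are illustrative, not recommendations:

```python
import redis

r = redis.Redis(host="localhost", port=6379)  # assumed local instance

# RDB: snapshot to disk if at least 100 keys changed within 300 seconds
r.config_set("save", "300 100")

# AOF: log every write command, fsync once per second, a common trade-off
# between durability and throughput
r.config_set("appendonly", "yes")
r.config_set("appendfsync", "everysec")

# A snapshot or AOF rewrite can also be triggered manually in the background
r.bgsave()
r.bgrewriteaof()
```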
The AOF log also has no corruption issues, since it is an append-only file. However, it can grow much larger than an RDB snapshot.

Redis is surely more flexible and powerful, but Memcached does serve some purposes very well and in some cases achieves better performance. Being multi-threaded gives it an advantage, especially when working with big data, where the heaviest operations are plain gets and sets. In one project I was involved in, we had to choose between the two options. At first we went with Memcached for its simplicity, ease of use, and easy setup; we simply needed a cache, so persistence wasn't a requirement.
However, after some testing, we decided to switch to Redis because of the advantages of its data types. In this project, the data type operations were a good fit for the kind of data that was going to be stored. Redis also provides a command to search for keys that match a pattern, along with many other useful commands for dealing with keys.
This was really useful to us and a key point in deciding to migrate to Redis. The migration itself was very easy to perform, as Redis supports most of the commands that Memcached does.
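As an illustration of that pattern matching, here is a minimal sketch using SCAN, which, unlike KEYS, does not block the server on large keyspaces; it assumes the redis-py client, and the `cache:user:*` pattern is purely hypothetical:

```python
import redis

r = redis.Redis()  # assumed local instance

# Iterate over keys matching a pattern without blocking the server
for key in r.scan_iter(match="cache:user:*", count=500):
    print(key)

# The same pattern works for selectively invalidating a group of cached items
stale = list(r.scan_iter(match="cache:user:*"))
if stale:
    r.delete(*stale)
```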
If we had gone the other way and decided to migrate from Redis to Memcached, it would have been much harder, since Memcached has no data types. Each Redis data type command would have had to be translated into many commands, with some data processing in between to achieve the same result. When it comes to making a decision, we cannot really say that one is better than the other; it all depends on the project requirements.
However, based on our experience, it's important to weigh the pros and cons of each right from the beginning, to avoid changes and migrations in the middle of the project.
Redis also employs more sophisticated approaches to memory management and eviction candidate selection. It supports both lazy and active eviction, where data is evicted only when more space is needed, or proactively. Redis gives you much greater flexibility regarding the objects you can cache.
While Memcached limits key names to 250 bytes and works with plain strings only, Redis allows key names and values to be as large as 512 MB each, and they are binary safe. Plus, Redis has five primary data structures to choose from, opening up a world of possibilities to the application developer through intelligent caching and manipulation of cached data. Using Redis data structures can simplify and optimize several tasks, not only while caching, but even when you want the data to be persistent and always available.
A Redis Hash saves developers from having to fetch an entire string, deserialize it, update a value, reserialize the object, and replace the entire string in the cache for every trivial update; that means lower resource consumption and increased performance.
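A minimal sketch of that difference, assuming the redis-py client; the `user:1001` key and its fields are invented for illustration. The string approach round-trips the whole serialized object, while the hash updates a single field server-side:

```python
import json
import redis

r = redis.Redis()

# String approach: every trivial update round-trips the entire object
r.set("user:1001", json.dumps({"name": "Ada", "visits": 41}))
obj = json.loads(r.get("user:1001"))
obj["visits"] += 1
r.set("user:1001", json.dumps(obj))

# Hash approach: update one field in place on the server
r.hset("user:1001:h", mapping={"name": "Ada", "visits": 41})  # redis-py >= 3.5
r.hincrby("user:1001:h", "visits", 1)
print(r.hget("user:1001:h", "visits"))  # b'42'
```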
Other data structures offered by Redis, such as lists, sets, sorted sets, hyperloglogs, bitmaps, and geospatial indexes, can be used to implement even more complex scenarios. Using sorted sets for time-series data ingestion and analysis is another example of a Redis data structure that offers enormously reduced complexity and lower bandwidth consumption.
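A minimal sketch of the time-series idea, assuming the redis-py client; the sensor key, readings, and timestamps are made up. Using the Unix timestamp as the score turns "everything from the last five minutes" into a single ZRANGEBYSCORE call:

```python
import time
import redis

r = redis.Redis()

# Ingest samples with the timestamp as the score
now = int(time.time())
r.zadd("temps:sensor42", {"21.5C@%d" % now: now,
                          "21.9C@%d" % (now + 60): now + 60})

# Fetch everything recorded in the last five minutes, oldest first
for member, ts in r.zrangebyscore("temps:sensor42", now - 300, "+inf",
                                  withscores=True):
    print(member, int(ts))
```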
A considerable share of the commands available in Redis is devoted to data processing operations and to embedding logic in the data store itself via server-side Lua scripting.
These built-in commands and user scripts give you the flexibility of handling data processing tasks directly in Redis without having to ship data across the network to another system for processing.
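A minimal sketch of pushing a small processing task into the server with a Lua script, assuming the redis-py client; the counter keys and the "sum a few counters" task are invented for illustration:

```python
import redis

r = redis.Redis()
r.mset({"hits:page1": 10, "hits:page2": 32, "hits:page3": 7})

# Sum several counters server-side instead of fetching each one over the network
sum_script = r.register_script("""
local total = 0
for _, key in ipairs(KEYS) do
    total = total + tonumber(redis.call('GET', key) or '0')
end
return total
""")

print(sum_script(keys=["hits:page1", "hits:page2", "hits:page3"]))  # 49
```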
Redis offers optional and tunable data persistence designed to bootstrap the cache after a planned shutdown or an unplanned failure.
While we tend to regard the data in caches as volatile and transient, persisting data to disk can be quite valuable in caching scenarios. Redis can also replicate the data that it manages. Replication can be used for implementing a highly available cache setup that can withstand failures and provide uninterrupted service to the application. Last but not least, in terms of operational visibility, Redis provides a slew of metrics and a wealth of introspective commands with which to monitor and track usage and abnormal behavior.
Real-time statistics about every aspect of the database, the display of all commands being executed, the listing and managing of client connections—Redis has all that and more.
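A minimal sketch of that operational visibility from a client, assuming the redis-py client; which INFO fields, clients, or slowlog entries matter will depend on the deployment:

```python
import redis

r = redis.Redis()

# Server-wide statistics: memory use, cache hit rate, connected clients, and more
stats = r.info()
print(stats["used_memory_human"], stats["connected_clients"])
print("hits:", stats["keyspace_hits"], "misses:", stats["keyspace_misses"])

# Currently connected clients and the slowest recent commands
for client in r.client_list():
    print(client["addr"], client["cmd"])
for entry in r.slowlog_get(5):
    print(entry["id"], entry["duration"], entry["command"])
```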
When used in this manner, Redis can also be ideal for analytics use cases. Redis is an in-memory data structure store: all data is served from memory, while disk is used for persistent storage.
It offers a unique data model and high performance, supporting data structures such as strings, lists, sets, and hashes, and it can be used as a database, cache, or message broker. It is also called a Data Structure Server. Memcached, by contrast, is a simple, open-source, in-memory caching system that can be used as temporary in-memory data storage. Redis's sorted sets are also great for keeping track of the last time users visited and who is active in your application.
Storing values with the same score causes them to be ordered lexicographically (think alphabetically). This can be useful for things like auto-complete features. Many of the sorted set commands are similar to commands for sets, sometimes with an additional score parameter.
Also included are commands for managing scores and querying by score. Redis also has several commands for storing, retrieving, and measuring geographic data, including radius queries and measuring distances between points. Technically, geographic data in redis is stored within sorted sets, so this isn't a truly separate data type.
It is more of an extension on top of sorted sets. Bitmaps and HyperLogLogs are similar: like geo, they aren't completely separate data types, but commands that let you treat string data as if it were either a bitmap or a hyperloglog. Bitmaps are what the bit-level operators I referenced under Strings are for. HyperLogLog lets you use a constant, extremely small amount of space to count almost unlimited unique values with shocking accuracy.
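A minimal sketch of both ideas, assuming the redis-py client; the key names, dates, and user IDs are made up. The bitmap marks daily activity with one bit per user ID, and the HyperLogLog counts unique visitors in a few kilobytes of memory:

```python
import redis

r = redis.Redis()

# Bitmap: set the bit at offset <user_id> to mark that user as active today
for user_id in (3, 7, 42):
    r.setbit("active:2024-01-01", user_id, 1)
print(r.bitcount("active:2024-01-01"))   # 3 users active today
print(r.getbit("active:2024-01-01", 7))  # 1 -> user 7 was active

# HyperLogLog: approximate unique-visitor count in roughly 12 KB, regardless of volume
r.pfadd("visitors:2024-01-01", "alice", "bob", "carol", "alice")
print(r.pfcount("visitors:2024-01-01"))  # 3 (approximate)
```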
Commands in redis are atomic, meaning you can be sure that as soon as you write a value to redis, that value is visible to all clients connected to redis; there is no wait for the value to propagate. Technically memcached is atomic as well, but with redis adding all this functionality beyond memcached, it is worth noting, and somewhat impressive, that all these additional data types and features are also atomic.
Redis provides a feature called 'pipelining'. If you have many redis commands you want to execute you can use pipelining to send them to redis all-at-once instead of one-at-a-time. With pipelining, redis can buffer several commands and execute them all at once, responding with all of the responses to all of your commands in a single reply.
This can allow you to achieve even greater throughput on bulk imports or other actions that involve lots of commands. Redis also supports publish/subscribe messaging, which allows a single client to publish messages to many other clients subscribed to a channel.
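A minimal sketch of both features, assuming the redis-py client; the keys and the channel name are invented. The pipeline batches many commands into one round trip, and the pub/sub part shows the publishing side plus a polling subscriber:

```python
import time
import redis

r = redis.Redis()

# Pipelining: queue commands client-side and send them in a single round trip
with r.pipeline() as pipe:
    for i in range(1000):
        pipe.set("bulk:%d" % i, i)
    results = pipe.execute()  # one reply carrying all 1000 responses

# Pub/Sub: one client publishes, any number of subscribers receive
sub = r.pubsub()
sub.subscribe("news")
time.sleep(0.1)                    # give the subscription a moment to register
r.publish("news", "cache flushed")
sub.get_message(timeout=1)         # subscription confirmation
print(sub.get_message(timeout=1))  # the 'cache flushed' payload
```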
You can kind of think of lua scripts like redis's own SQL or stored procedures. It's both more and less than that, but the analogy mostly works. Maybe you have complex calculations you want redis to perform. Maybe you can't afford to have your transactions roll back and need guarantees every step of a complex process will happen atomically. These problems and many more can be solved with lua scripting. The entire script is executed atomically, so if you can fit your logic into a lua script you can often avoid messing with optimistic locking transactions.
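For contrast, here is a minimal sketch of the optimistic-locking transaction pattern that such a script can often replace, assuming the redis-py client; the "debit a balance only if funds remain" logic and the key name are hypothetical:

```python
import redis

r = redis.Redis()
r.set("balance:42", 100)

def debit(amount):
    # WATCH/MULTI/EXEC: retry if another client changes the key mid-transaction
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("balance:42")
                balance = int(pipe.get("balance:42"))
                if balance < amount:
                    pipe.unwatch()
                    return False
                pipe.multi()
                pipe.set("balance:42", balance - amount)
                pipe.execute()
                return True
            except redis.WatchError:
                continue  # the value changed under us; re-read and retry

print(debit(30), r.get("balance:42"))  # True b'70'
```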
As mentioned above, redis includes built-in support for clustering and is bundled with its own high-availability tool called redis-sentinel. Without hesitation I would recommend redis over memcached for any new project, or for existing projects that don't already use memcached. The above may sound like I don't like memcached. On the contrary: it is a powerful, simple, stable, mature, and hardened tool.
There are even some use cases where it's a little faster than redis. I love memcached. I just don't think it makes much sense for future development.
Redis does everything memcached does, often better. Any performance advantage for memcached is minor and workload specific. There are also workloads for which redis will be faster, and many more workloads that redis can do which memcached simply can't.
The tiny performance differences seem minor in the face of the giant gulf in functionality and the fact that both tools are so fast and efficient they may very well be the last piece of your infrastructure you'll ever have to worry about scaling. There is only one scenario where memcached makes more sense: where memcached is already in use as a cache.
If you are already caching with memcached, then keep using it if it meets your needs. It is likely not worth the effort to move to redis, and if you are going to use redis just for caching, it may not offer enough benefit to be worth your time. If memcached isn't meeting your needs, then you should probably move to redis. This is true whether you need to scale beyond memcached or you need additional functionality.

Two redis capabilities stand out here. The first is the ability to query keys of a particular type: you can use this to invalidate certain types of cached items selectively, for example fragment caches, page caches, or only the AR objects of a given type. The second is persistence: you will need this too, unless you are okay with your cache having to warm up after every restart, and it is especially valuable for objects that seldom change.

Redis has lots of features and is very fast, but it is limited to a single core, as it is based on an event loop. We use both: memcached for caching objects, primarily to reduce read load on the databases, and redis for things like sorted sets, which are handy for rolling up time-series data.

One thing also to consider is whether you expect to have a hard upper memory limit on your cache instance.
Since redis is a NoSQL database with tons of features, and caching is only one of the things it can be used for, it allocates memory as it needs it: the more objects you put in it, the more memory it uses. The maxmemory option does not strictly enforce an upper limit on memory usage.
As you work with the cache, keys are evicted and expired; chances are your keys are not all the same size, so internal memory fragmentation occurs.
By default, redis uses the jemalloc memory allocator, which tries its best to be both memory-compact and fast, but it is a general-purpose allocator and cannot keep up with lots of allocations and object purges occurring at a high rate. Because of this, under some load patterns the redis process can appear to leak memory due to internal fragmentation.
For example, if you have a server with 7 GB of RAM and you want to use redis as a non-persistent LRU cache, you may find that a redis process with maxmemory set to 5 GB uses more and more memory over time, eventually hitting the total RAM limit until the out-of-memory killer interferes. Memcached, by contrast, tries hard to keep internal fragmentation low, as it uses a per-slab LRU algorithm in which evictions take object size into account.
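For reference, a minimal sketch of the non-persistent LRU cache setup described above, assuming the redis-py client and a server that permits CONFIG SET; the 5 GB limit mirrors the example and is not a recommendation:

```python
import redis

r = redis.Redis()

# Cap the dataset at 5 GB and evict the least recently used keys when full
r.config_set("maxmemory", "5gb")
r.config_set("maxmemory-policy", "allkeys-lru")

# maxmemory bounds the dataset, not the process: comparing resident memory with
# the logical dataset size gives a rough view of internal fragmentation
mem = r.info("memory")
print("dataset:", mem["used_memory_human"],
      "resident bytes:", mem["used_memory_rss"],
      "fragmentation ratio:", mem["mem_fragmentation_ratio"])
```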
We've tried to use the latest stable redis 2.x under this kind of load pattern. Redis is also really good for session storage. Beyond that, it really depends on what you are going to be putting in there. My understanding is that in terms of performance they are pretty even. And no benchmark link is complete without confusing things a bit, so also check out some conflicting benchmarks at Dormondo's LiveJournal and the Antirez Weblog.
Edit: as Antirez points out, the Systoilet analysis is rather ill-conceived. Even beyond the single-threading shortfall, much of the performance disparity in those benchmarks can be attributed to the client libraries rather than to server throughput. The benchmarks at the Antirez Weblog do indeed present a much more apples-to-apples comparison.
I got the opportunity to use both memcached and redis together in a caching proxy I worked on; let me share where exactly I used each and the reasoning behind it. I had more than a billion keys spread over redis clusters, and redis response times were quite low and stable. In terms of overall experience, redis comes out well ahead, as it is easy to configure and much more flexible, with stable and robust features. Further, there are benchmarking results available at this link; below are a few highlights from them.
Run some simple benchmarks. For a long while I considered myself an old school rhino since I used mostly memcached and considered Redis the new kid.
At my current company, Redis was used as the main cache. When I dug into some performance stats and simply started testing, Redis was, in terms of performance, comparable to or only minimally slower than MySQL. Memcached, though simplistic, totally blew Redis out of the water.