Congratulations and a few thoughts #15
Description
First of all I want to congratulate you on this fantastic piece of code! Finally a conceptual approach that makes sense.
My thoughts:
I think today's systems need more than classic caches, and your approach goes a lot further. I don't need a cache, I need a distributed memory manager. The problem at the moment is that the cache is always seen as an intermediate layer in front of data sources (DB, API), so a "cost / benefit" decision must always be made.
I think it should be easier: the cache should basically be the primary place to store data, and then, depending on the configuration, reduce latency and computing power in a distributed environment.
What is missing?
1. Multi-tier
- The cache should basically be multi-tier, though I would not overdo it (creating X adapters etc.)
- However, it must be possible to use an in-memory cache.
- Local memory is just FAST, and today's servers really have enough of it.
- If the cache is to be the foundation, then you should also be able to use it for development or for small projects where there is no Redis (yet).
- An in-memory cache with a size limit and hit tracking allows optimal garbage collection.
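A minimal sketch of what such an in-memory tier could look like, assuming a simple LRU policy driven by the hit tracking described above. The class name and API are hypothetical, not part of this project:

```python
from collections import OrderedDict

class InMemoryCache:
    """Hypothetical in-memory tier: bounded size, hit/miss counters,
    least-recently-used eviction (a sketch, not this project's API)."""

    def __init__(self, max_entries=1024):
        self._max_entries = max_entries
        self._data = OrderedDict()  # key -> value, ordered by recency
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._data:
            self.hits += 1
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        self.misses += 1
        return None

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self._max_entries:
            self._data.popitem(last=False)  # evict least recently used
```

The hit/miss counters double as the signal for sizing the tier: a low hit rate on a full cache suggests the size limit is too small for the working set.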
2. Invalidation - instead of TTL
- Our microservices receive events via a message broker, and based on this information you can delete outdated data.
- Right now I see no direct delete methods, and they should also work based on tags, not be limited to a single key.
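Tag-based deletion can be sketched as a secondary index from tags to keys; on a broker event, the handler drops every entry carrying the affected tag. All names here are illustrative, not this project's API:

```python
from collections import defaultdict

class TaggedCache:
    """Hypothetical cache with tag-based invalidation (a sketch)."""

    def __init__(self):
        self._data = {}
        self._tag_index = defaultdict(set)  # tag -> set of keys

    def set(self, key, value, tags=()):
        self._data[key] = value
        for tag in tags:
            self._tag_index[tag].add(key)

    def get(self, key):
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)

    def delete_by_tag(self, tag):
        """Drop every entry carrying this tag, e.g. when an
        invalidation event arrives from the message broker."""
        for key in self._tag_index.pop(tag, set()):
            self._data.pop(key, None)
```

An event handler for a hypothetical `user.updated` message would then just call `delete_by_tag("user:" + user_id)` and every cached view of that user disappears at once, regardless of its key.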
3. Versioning - short round trip instead of TTL
- Internally you already work with versions.
- It should be possible to ask Redis for a newer version (based on the version held in the local in-memory cache).
- I already built such a thing with a small Lua script injected into Redis, which only returns the data if a newer version exists.
- With that you need just one request and a 5-byte response if your local data is still up to date.
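The server-side check can be simulated in a few lines; in the real setup this logic would live in a small Lua script evaluated inside Redis, so the data only crosses the wire when the caller's copy is stale. The class and method names are hypothetical:

```python
class VersionedStore:
    """Stand-in for the server side of the version round trip
    (in practice a small Lua script running inside Redis)."""

    def __init__(self):
        self._entries = {}  # key -> (version, data)

    def set(self, key, version, data):
        self._entries[key] = (version, data)

    def fetch_if_newer(self, key, local_version):
        """Return (version, data) only if the stored version is newer
        than the caller's; otherwise return None, so the response stays
        a few bytes when the local copy is still current."""
        version, data = self._entries[key]
        if version > local_version:
            return version, data
        return None
```

The client sends the version from its in-memory tier with the request; either it gets fresh data plus the new version, or a tiny "still current" reply, which is exactly one round trip either way.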
4. Pre-warm the in-memory cache
- In combination with a message broker (or even Redis pub/sub) it is also possible to cache data in memory before it gets accessed.
- This logic is application-side, but a good interface would help.
- Refresh-ahead is nice, but I want control over my data, not just statistical optimizations.
Right now we have many backends with some kind of "session stickiness", just to have a little information already there to improve latency. That sucks, and with the possibilities described here it would be easy to solve.
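The pre-warm idea can be sketched with an in-process stand-in for the broker: each node subscribes to update events and writes the payload into its local tier before any request arrives, which is what makes session stickiness unnecessary. Broker, channel name, and event shape are all assumptions for illustration:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process stand-in for a message broker / Redis pub/sub."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, handler):
        self._subscribers[channel].append(handler)

    def publish(self, channel, message):
        for handler in self._subscribers[channel]:
            handler(message)

local_cache = {}  # stand-in for this node's in-memory tier

def on_data_updated(event):
    # Pre-warm: store the value before any request hits this node.
    local_cache[event["key"]] = event["value"]

broker = Broker()
broker.subscribe("data.updated", on_data_updated)
broker.publish("data.updated", {"key": "user:42", "value": {"name": "Nic"}})
```

Because every node subscribes, the first request on any backend already finds the data locally; the cache library would only need to expose a hook like `on_data_updated` so the application can wire its own events in.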
Maybe you have similar ideas? I would appreciate feedback, and if you have a roadmap for the direction the project should take, I would be very interested!
Thanks a lot!
Nic