Distributed Caching in Node.js with Redis for Better Performance
Recently, I needed to implement server-side caching in an Express.js application. Coming from frameworks like Next.js, where caching helpers are readily available, I wondered how to achieve similar functionality in a plain Express.js setup. As I started researching, I realized that creating a universal caching function, similar to what Next.js provides, was surprisingly straightforward. While caching values directly in the memory of the Express app was an option, it could become problematic once multiple instances of the application were running. To address this, I opted for Redis.
Redis offers several additional advantages that are beneficial in this case, including:
- Automatic expiration of cached values
- Distributed architecture
- Scalability
My goal was to make caching as simple as a regular function call. This was the developer experience I aimed for:
const users = await cache(async () => await getUsers(), ['users'], { expiresIn: 3600 });
On the first call, the getUsers function fetches the users and stores the result in the cache. On subsequent calls, the function checks the cache for existing data and returns it, avoiding the need to call getUsers again. The cache key ['users'] is used to identify the data in the cache.
Why Redis?
Because we are using Redis, we can leverage its built-in EX expiration option to handle cache expiration automatically. Redis is an in-memory data structure store, often used as a cache to speed up web applications by reducing the need to repeatedly query databases.
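To illustrate, here is a minimal ioredis snippet; the key name and payload are just examples and not part of the implementation below:

import { Redis } from 'ioredis';

const kv = new Redis(); // connects to localhost:6379 by default

// Store a value that Redis deletes automatically after 3600 seconds.
await kv.set('cache:users', JSON.stringify([{ id: 1, name: 'max' }]), 'EX', 3600);

// TTL reports the remaining lifetime of the key in seconds (-2 once it has expired).
console.log(await kv.ttl('cache:users'));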
By caching the results of frequently called functions, Redis significantly reduces the load on your database, speeds up response times, and helps your app scale efficiently. This is especially useful when dealing with large amounts of traffic or complex database queries.
The Cache Function
Below is the first implementation of the cache function, which allows us to achieve the behavior described above:
import { Redis } from 'ioredis';

interface CacheOptions {
  expiresIn?: number;
}

/**
 * This function caches the result of a function in Redis.
 * @param fn The function whose result will be cached.
 * @param key Key to identify the cache entry.
 * @param opt Options for cache configuration.
 * @returns The result of the function.
 */
export default async function cache<T>(
  fn: () => Promise<T>,
  key: string[],
  opt: Partial<CacheOptions> = {},
): Promise<T> {
  const kv = new Redis({
    host: process.env.REDIS_HOST,
    port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
    password: process.env.REDIS_PASSWORD,
  });

  const cacheKey = `cache:${key.join(':')}`;

  if (!kv) {
    return fn();
  }

  const cachedResult = await kv.get(cacheKey);
  if (cachedResult) {
    return JSON.parse(cachedResult) as T;
  }

  const result = await fn();
  await kv.set(cacheKey, JSON.stringify(result), 'EX', opt.expiresIn ?? 3600);
  return result;
}
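To show how this fits into an Express app, here is a minimal sketch of a route that uses the cache function; the getUsers import, its path, and the route itself are assumptions for illustration:

import express from 'express';
import cache from './cache';
import { getUsers } from './db/users'; // hypothetical data-access helper

const app = express();

app.get('/users', async (_req, res) => {
  // The first request hits the database; later requests are served from Redis
  // until the entry expires or is invalidated.
  const users = await cache(async () => await getUsers(), ['users'], { expiresIn: 3600 });
  res.json(users);
});

app.listen(3000);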
Handling Cache Invalidation
Now, let's address a potential issue: what happens when a new user is added and we need to update the users list immediately? This is where cache invalidation comes in.
Here’s a simple implementation for cache invalidation:
import { Redis } from 'ioredis';

const kv = new Redis({
  host: process.env.REDIS_HOST,
  port: parseInt(process.env.REDIS_PORT ?? '6379', 10),
  password: process.env.REDIS_PASSWORD,
});

/**
 * This function revalidates the cache by deleting the existing cache entry.
 * @param key Key to identify the cache entry.
 */
export default async function revalidateTag(key: string[]): Promise<void> {
  const cacheKey = `cache:${key.join(':')}`;

  if (!kv) {
    return;
  }

  await kv.del(cacheKey);
}
The revalidateTag function allows you to invalidate the cache whenever necessary, forcing a fresh fetch the next time the function is called.
Here’s how you can use it:
await addUser('max', 'muster');
await revalidateTag(['users']);
With this approach, the next time getUsers is called, the new user max will be included in the results.
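Wired into an Express route, the whole flow could look like this sketch, where addUser is a hypothetical database helper:

import express from 'express';
import revalidateTag from './revalidateTag';
import { addUser } from './db/users'; // hypothetical data-access helper

const app = express();
app.use(express.json());

app.post('/users', async (req, res) => {
  await addUser(req.body.firstName, req.body.lastName);

  // Drop the cached user list so the next read fetches fresh data.
  await revalidateTag(['users']);

  res.status(201).end();
});

app.listen(3000);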
Best Practices for Cache Management
When implementing caching, there are several best practices to keep in mind to ensure it’s both efficient and scalable.
Cache Key Naming
A solid naming convention for cache keys is essential to avoid conflicts and ensure efficient cache management. Using a prefix like cache: helps distinguish cache data from other types of data stored in Redis. Additionally, incorporating versioning in your cache key can help prevent issues when the structure of the cached data changes over time.
To make working with cache keys easier, I’ve implemented a simple helper that generates a clean and consistent cache key from an array of strings:
const cacheKey = `cache:${key.join(':')}`;
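If you also want the versioning mentioned above, a hypothetical variant could add a version segment to the key; CACHE_VERSION is an assumption for illustration, not part of the helper shown earlier:

// Bumping CACHE_VERSION effectively invalidates every previously cached entry,
// which helps when the shape of the cached data changes.
const CACHE_VERSION = 'v1';
const cacheKey = `cache:${CACHE_VERSION}:${key.join(':')}`;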
Error Handling
It’s important to add robust error handling when working with Redis. If the Redis server is down or unreachable, your application should have a fallback mechanism that handles the error gracefully. In that case, the cache function should simply behave as if there were no cached value and call the original function directly.
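One possible sketch, assuming the cache function from above (kv, cacheKey, fn, opt, and T refer to the variables defined there), wraps the Redis calls in a try/catch and falls back to calling the function directly:

try {
  const cachedResult = await kv.get(cacheKey);
  if (cachedResult) {
    return JSON.parse(cachedResult) as T;
  }

  const result = await fn();
  await kv.set(cacheKey, JSON.stringify(result), 'EX', opt.expiresIn ?? 3600);
  return result;
} catch (error) {
  // If Redis is unreachable (or the cached value cannot be parsed),
  // behave as if there were no cached value and hit the data source directly.
  console.error('Cache unavailable, falling back to direct call:', error);
  return fn();
}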
Conclusion
In this post, we set up a simple yet powerful caching solution for a Node.js app using Redis. By leveraging Redis' built-in expiration feature, we were able to cache data efficiently and reduce the load on the server. We also tackled cache invalidation with the revalidateTag function, making sure that the data stays up-to-date when things change, like adding a new user.
This setup gives us a solid foundation for improving app performance and scalability, and it’s a great starting point for handling large amounts of traffic without sacrificing speed.