The go-to resource for upgrading Ruby, Rails, and your dependencies.

Implementing Redis Caching to Alleviate Database Load in Legacy Rails Apps


In the late 19th century, before the advent of modern filing cabinets, large commercial enterprises frequently relied on “pigeonhole” desks and complex, bound ledgers. When a clerk needed to reference a specific customer’s account balance, they had to physically retrieve the heavy ledger, scan the index, and locate the exact page. This process was perfectly accurate, but it was incredibly slow. To solve this, businesses began keeping small, specialized “tickler” files or index cards on the desk for the most frequently accessed accounts. They traded the comprehensive certainty of the main ledger for the immediate availability of a temporary, localized copy.

Software architecture requires the same calculation. When we maintain a legacy Ruby on Rails application, the relational database — usually PostgreSQL or MySQL — acts as our definitive ledger. It provides ACID guarantees, complex relational querying, and durable storage. However, as an application grows, querying that database for every single request becomes an architectural bottleneck.

We frequently observe this when analyzing p95 response times. A page that requires complex aggregations or suffers from deeply nested N+1 queries will force the database to work continuously, consuming CPU cycles and memory. Eventually, the database reaches its capacity. The immediate instinct is often to provision larger cloud infrastructure, which quickly escalates hosting costs. A more sustainable, programmatic approach is to implement a caching layer, keeping frequently accessed data readily available in memory.

Why Redis for Caching?

First, let’s discuss a few possibilities besides Redis. The oldest and most traditional caching system for Rails applications is Memcached. Memcached is exceptionally fast, multithreaded, and designed strictly for simple key-value string storage. If your only requirement is basic fragment caching, Memcached is a proven, reliable choice.

Redis, though, is a data structure server. It supports strings, hashes, lists, and sets, and it offers optional persistence. More importantly, in the context of a modern Rails ecosystem, Redis is almost certainly already present in your infrastructure to back background job processors like Sidekiq or Resque.

Generally speaking, using the existing Redis instance for caching is simpler from an infrastructure perspective: it avoids the need to maintain, monitor, and pay for an entirely separate Memcached cluster. Because of this consolidation benefit and its rich feature set, Redis is the caching store we focus on here. We won't cover every possible cache store in this article; Redis's prevalence in the modern Rails ecosystem makes it the natural choice.

Configuring the Redis Cache Store

Before Rails 5.2, using Redis as a cache store required third-party gems like redis-rails. Modern Rails, however, ships with a built-in :redis_cache_store that needs only the redis gem.

To configure a legacy application to use Redis, first ensure the redis gem is present in the Gemfile:

gem 'redis', '~> 4.0'

Next, we configure the production environment. We will edit config/environments/production.rb to set the cache_store:

Rails.application.configure do
  # ...snip...
  
  config.cache_store = :redis_cache_store, {
    url: ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" },
    connect_timeout:    30,  # Defaults to 20 seconds
    read_timeout:       0.2, # Defaults to 1 second
    write_timeout:      0.2, # Defaults to 1 second
    reconnect_attempts: 1,   # Defaults to 0
  }
  
  # ...snip...
end

A few things stand out in this configuration. First, the url targets a specific Redis database index (here, /1), which is good practice for separating cache data from Sidekiq data (often stored on /0). We also explicitly set short read and write timeouts. This is a critical safety measure: if the Redis server becomes unresponsive, we want the application to treat the failure as a cache miss and fall back to the database quickly, rather than hanging indefinitely and exhausting the web worker pool. Conveniently, the built-in Redis cache store already rescues connection errors and treats them as misses by default.
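The store also accepts an error_handler option, which Rails invokes whenever a Redis command raises. Since the store already degrades gracefully on its own, a handler like this sketch is mainly useful for visibility into how often the fallback path is being hit:

```ruby
config.cache_store = :redis_cache_store, {
  url: ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" },
  read_timeout:  0.2,
  write_timeout: 0.2,
  # Invoked on any Redis failure; `returning` is the value the store
  # will hand back to the caller (typically nil, i.e. a cache miss).
  error_handler: ->(method:, returning:, exception:) {
    Rails.logger.warn("Redis cache #{method} failed: #{exception.message}")
  },
}
```

Wiring this into your error-tracking service instead of the log is a common variation.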

Tip: In this article, I am using a single Redis instance for simplicity. However, for high-traffic environments, it is often recommended to use a separate Redis server entirely for caching, distinct from the server processing Sidekiq jobs. This ensures a sudden spike in caching memory does not evict critical background jobs.
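If you do split the cache onto its own server, the change stays confined to the store configuration. In this sketch, REDIS_CACHE_URL is an assumed environment variable pointing at the cache-only instance, while Sidekiq continues reading REDIS_URL:

```ruby
# config/environments/production.rb
config.cache_store = :redis_cache_store, {
  # REDIS_CACHE_URL is an assumed env var for the dedicated cache server;
  # Sidekiq keeps using REDIS_URL for its own connection pool.
  url: ENV.fetch("REDIS_CACHE_URL") { "redis://localhost:6379/1" },
}
```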

Low-Level Caching with Rails.cache

With the store configured, we can begin alleviating database load. The most direct approach is low-level caching, which allows us to store the results of expensive queries or calculations.

Consider a dashboard that displays complex sales statistics. Calculating these metrics might take several seconds of database time:

class DashboardController < ApplicationController
  def index
    # This query might join multiple tables and aggregate thousands of rows
    @monthly_revenue = RevenueCalculator.compute_for_month(Time.current)
  end
end

We can wrap this expensive operation in a Rails.cache.fetch block:

class DashboardController < ApplicationController
  def index
    @monthly_revenue = Rails.cache.fetch("revenue_#{Time.current.strftime('%Y_%m')}", expires_in: 12.hours) do
      RevenueCalculator.compute_for_month(Time.current)
    end
  end
end

When this code executes, Rails first checks Redis for the key matching our current month. If it exists, it returns the cached value instantly, bypassing the database entirely. If it does not exist (a “cache miss”), it executes the block, stores the result in Redis with a 12-hour expiration, and then returns the value.
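The read-through pattern itself is simple enough to sketch in plain Ruby. This is not Rails' implementation — Rails layers expiration, serialization, and the Redis transport on top — but it shows the hit/miss mechanics that Rails.cache.fetch provides:

```ruby
# A minimal pure-Ruby sketch of read-through ("fetch") caching.
class MiniCache
  def initialize
    @store = {}
  end

  # Return the cached value if present; otherwise run the block,
  # store its result, and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

cache = MiniCache.new
calls = 0
expensive = -> { calls += 1; 42 }

cache.fetch("answer") { expensive.call }  # cache miss: block runs
cache.fetch("answer") { expensive.call }  # cache hit: block skipped
puts calls  # => 1
```

However many times the same key is fetched, the expensive computation runs once.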

Of course, this approach relies on time-based expiration. For data that changes unpredictably, we need a more robust invalidation strategy.

Fragment Caching and Invalidation

While low-level caching is useful for specific calculations, fragment caching is typically where we see the most significant reduction in database load. Fragment caching stores the rendered HTML of a view snippet. For example, if you had a blog application to which users could post comments, you might have a view like this:

<% cache @article do %>
  <article>
    <h1><%= @article.title %></h1>
    <p><%= @article.body %></p>
    
    <div class="comments">
      <% @article.comments.each do |comment| %>
        <%= render comment %>
      <% end %>
    </div>
  </article>
<% end %>

The challenge with any caching strategy is invalidation — knowing when the underlying data has changed and the cache must be discarded. Rails handles this elegantly through cache key generation. When we pass an ActiveRecord object like @article to the cache helper, Rails generates a key that includes the model’s class name, its id, and its updated_at timestamp.

If we update the article, its updated_at timestamp changes, resulting in a completely new cache key. The old cached HTML is naturally ignored and eventually evicted by Redis.
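To make the key format concrete, here is a simplified pure-Ruby sketch of the idea. Rails' real implementation lives in ActiveRecord's cache_key_with_version, derives the prefix from the model name, and includes fractional seconds, but the shape is the same:

```ruby
# Simplified sketch (not Rails' exact implementation) of deriving a
# versioned cache key from a record's class, id, and updated_at.
Article = Struct.new(:id, :updated_at)

def cache_key_for(record)
  # Rails pluralizes and underscores the model name; hardcoded here.
  "articles/#{record.id}-#{record.updated_at.utc.strftime('%Y%m%d%H%M%S')}"
end

article = Article.new(42, Time.utc(2024, 1, 15, 10, 30, 0))
puts cache_key_for(article)  # articles/42-20240115103000

# Updating the record bumps updated_at, producing a brand-new key,
# so the stale fragment is simply never read again:
article.updated_at = Time.utc(2024, 1, 15, 11, 0, 0)
puts cache_key_for(article)  # articles/42-20240115110000
```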

One may wonder: if we cache the article, what happens when a user adds a new comment? By default, creating a comment does not update the parent article’s updated_at timestamp, meaning our cache will serve stale HTML.

The answer is straightforward. We use the touch option on the belongs_to association:

class Comment < ApplicationRecord
  belongs_to :article, touch: true
end

When a comment is created, updated, or destroyed, it will “touch” the parent article, updating the article’s updated_at timestamp and successfully invalidating the fragment cache. This pattern, often extended into nested structures, is known as Russian Doll caching.
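Applied to the earlier view, Russian Doll caching nests a per-comment cache inside the article's cache. This sketch assumes a standard _comment.html.erb partial; when a new comment touches the article and invalidates the outer fragment, the unchanged comments' inner fragments are still read straight from Redis rather than re-rendered:

```erb
<% cache @article do %>
  <article>
    <h1><%= @article.title %></h1>
    <p><%= @article.body %></p>

    <div class="comments">
      <% @article.comments.each do |comment| %>
        <% cache comment do %>
          <%= render comment %>
        <% end %>
      <% end %>
    </div>
  </article>
<% end %>
```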

Safety and Limitations

Caching is a powerful tool, but it introduces state and complexity to your application architecture. Therefore, before heavily caching your legacy application, we must acknowledge a few constraints.

Strictly speaking, caching is not a replacement for proper database indexing. If a query is catastrophically slow because it lacks a necessary index, you should fix the database schema before applying a cache. Caching masks structural performance problems; it does not cure them.

Warning: Be extremely cautious about caching personalized information. If you cache a fragment that includes a “Welcome, David!” header based on the current user, you risk serving that personalized greeting to every subsequent visitor. Always ensure that globally cached fragments contain only globally applicable data.
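When a fragment genuinely must be personalized, include the user in the cache key rather than abandoning caching altogether. The cache helper accepts an array of key parts, so a sketch like this (assuming a Devise-style current_user helper with a name attribute) stores one copy per user per article:

```erb
<% cache [current_user, @article] do %>
  <p>Welcome, <%= current_user.name %>!</p>
  <%= render @article %>
<% end %>
```

The trade-off is memory: per-user fragments multiply quickly, so reserve this for fragments that are both expensive and genuinely personal.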

Conclusion

Implementing Redis caching in a legacy Rails application is a highly effective way to extend the lifespan of your current infrastructure. By deferring expensive queries and serving pre-rendered HTML, organizations can significantly reduce database CPU utilization and dramatically improve response times. It requires careful thinking about cache invalidation and data boundaries, but the resulting performance gains and infrastructure headroom are well worth the engineering investment.

Sponsored by Durable Programming

Need help maintaining or upgrading your Ruby on Rails application? Durable Programming specializes in keeping Rails apps secure, performant, and up-to-date.

Hire Durable Programming