The go-to resource for upgrading Ruby, Rails, and your dependencies.

Updating Your Redis Infrastructure When Migrating Off Heroku


When engineering teams migrate large and complex applications off Heroku, the relational database often receives the most attention. Moving from Heroku Postgres to a solution like AWS RDS requires careful planning, but migrating your Redis infrastructure carries its own set of critical risks. Heroku Redis abstracts away connection management, memory eviction policies, and persistence settings. When you move to a provider like AWS ElastiCache or Render, you must configure these parameters explicitly.

Before we get into that, though, we must address how Rails applications typically use Redis. A standard Heroku deployment frequently uses a single Redis add-on to handle background jobs via Sidekiq, page caching, and ActionCable WebSockets. While this monolithic approach works for early-stage applications, migrating your infrastructure presents a unique opportunity to separate these workloads. Doing so prevents a sudden spike in cached data from evicting critical background jobs and improves overall application performance.

1. Auditing Your Existing Redis Workloads

Before provisioning new infrastructure, you must understand exactly what your application stores in Redis. If you migrate cache data alongside Sidekiq queues without distinguishing between the two, you risk significant downtime.

You should connect to your Heroku Redis instance using the CLI and run the INFO command to inspect your current memory usage and key count. This helps determine the appropriate instance size for your new environment.

$ redis-cli -h your-heroku-redis-host -a your-password INFO memory

Look specifically for the used_memory_human and evicted_keys metrics. If your application frequently evicts keys, your current Heroku instance is likely undersized. You may also notice a high mem_fragmentation_ratio, which indicates that Redis is struggling to manage memory efficiently. This data will inform whether you need to provision a larger instance or split your workloads during the migration.
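Beyond the memory metrics, it helps to know which workload owns the keys. One low-impact approach is to sample key names with redis-cli --scan (which iterates without blocking the server) and group them by namespace prefix. The key names below are hypothetical; substitute the prefixes your application actually uses:

```ruby
# Hypothetical key names, e.g. sampled with: redis-cli --scan | head -n 1000
sampled_keys = [
  "cache:views/products/42",
  "cache:views/products/43",
  "queue:default",
  "sidekiq:stat:processed",
  "cable:broadcast"
]

# Group keys by their first namespace segment to see which workload dominates.
workloads = sampled_keys.group_by { |key| key.split(":").first }
workloads.each { |prefix, keys| puts "#{prefix}: #{keys.length} keys" }
```

A clear breakdown here tells you how much memory each new instance needs and whether any keys lack an obvious namespace.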

2. Splitting Workloads Across Multiple Instances

When migrating off Heroku, we recommend provisioning at least two separate Redis instances — one for ephemeral caching and one for persistent data like Sidekiq queues.

Caching requires an allkeys-lru or volatile-lru eviction policy. When memory fills up, Redis drops the least recently used cache keys to make room for new ones. Conversely, Sidekiq requires a noeviction policy. If Redis drops a Sidekiq job because the cache consumed all available memory, you will lose customer data or fail to process critical background tasks.

If you are moving to AWS, you can provision two ElastiCache instances. For the cache, you might select a smaller node type, while the Sidekiq instance requires enough memory to handle peak queue depth during outages.
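On a self-managed Redis, this split maps to two different maxmemory-policy settings; on ElastiCache, the same parameter lives in each instance's parameter group. The maxmemory values below are illustrative only:

```
# redis.conf for the cache instance — evict least recently used keys when full
maxmemory 1gb
maxmemory-policy allkeys-lru

# redis.conf for the Sidekiq instance — reject writes rather than drop jobs
maxmemory 2gb
maxmemory-policy noeviction
```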

3. Configuring Connection Pooling and Timeouts

Heroku manages network routing internally, frequently masking connection latency. When you migrate to a new cloud provider, network topology changes. You must configure your Rails application to handle connection timeouts and pooling appropriately.

In your config/initializers/redis.rb file, explicitly define connection pool limits based on your server concurrency. If you run Puma with 3 workers and 5 threads each, every worker process can hold up to 5 concurrent Redis connections, and a single server can consume up to 15.
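To make that arithmetic concrete, and to show the total you should check against your new provider's connection limits:

```ruby
# Each Puma worker keeps its own connection pool, one connection per busy thread.
puma_workers = 3
threads_per_worker = 5

pool_size_per_process = threads_per_worker
total_connections = puma_workers * threads_per_worker

puts pool_size_per_process  # => 5
puts total_connections      # => 15
```

Multiply the total by your number of application servers when checking instance limits, since small Redis tiers often cap concurrent connections.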

# config/initializers/redis.rb
require "redis"
require "connection_pool"

redis_url = ENV.fetch("REDIS_URL", "redis://localhost:6379/0")

# Size the pool to match the per-process thread count.
REDIS_POOL = ConnectionPool.new(size: ENV.fetch("RAILS_MAX_THREADS", "5").to_i, timeout: 5) do
  Redis.new(
    url: redis_url,
    connect_timeout: 1.0,
    read_timeout: 1.0,
    write_timeout: 1.0
  )
end

# Check out a connection with: REDIS_POOL.with { |conn| conn.get("some_key") }

By setting strict timeout values — such as 1.0 seconds — you prevent your Ruby application from hanging indefinitely if the new Redis infrastructure experiences a network partition.

4. Managing Sidekiq Queues During the Transition

Migrating Sidekiq data requires precision. You cannot lose jobs that are currently enqueued or scheduled for the future. We can approach this migration using a dual-infrastructure strategy.

First, provision the new Redis instance and update your staging environment to verify the connection. When you are ready for the production cutover, the safest method involves a brief pause in job processing.

  1. Quiet the Sidekiq workers on the Heroku application (for example, by sending the TSTP signal) so they stop picking up new jobs, while the web dynos continue enqueuing.

  2. Wait for the active jobs to finish executing.

  3. Export the data from Heroku Redis and import it into the new infrastructure.

  4. Update the REDIS_URL environment variable on your application servers to point to the new instance.

  5. Restart the Sidekiq processes on the new infrastructure.

While Heroku does not provide root access for a standard BGSAVE and scp transfer, you can use a migration script or a tool like redis-dump to extract the data. For large databases, consider setting up the new Redis instance as a replica of the Heroku instance, though Heroku’s network restrictions sometimes prevent direct external replication.
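Step 2 above — waiting for in-flight jobs to drain — can be scripted. The helper below is a hypothetical sketch: it polls a busy-worker counter you supply until it reaches zero. In a real console session the counter would be Sidekiq::Workers.new.size (from "sidekiq/api"); here a simulated counter stands in so the logic is visible:

```ruby
# Poll a busy-worker counter until it hits zero, or give up after max_attempts.
def wait_for_drain(max_attempts: 30, interval: 1)
  max_attempts.times do
    return true if yield.zero?
    sleep interval
  end
  false
end

# Simulated counter: reports 2 busy workers, then 1, then 0.
counts = [2, 1, 0]
drained = wait_for_drain(interval: 0) { counts.shift }
puts drained  # => true
```

If the helper returns false, investigate stuck jobs before exporting, rather than extending the maintenance window blindly.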

5. Handling Cache Invalidation and ActionCable

Unlike Sidekiq data, cached data is inherently temporary. You generally do not need to migrate it. Instead, point your application's cache store at the new cache Redis instance and let the application rebuild the cache naturally.
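Pointing the Rails cache at the new instance is a small change in the environment config. CACHE_REDIS_URL is an assumed variable name here; fall back to REDIS_URL if you have not split the workloads yet:

```ruby
# config/environments/production.rb
# CACHE_REDIS_URL is an assumed name for the new cache instance's URL.
config.cache_store = :redis_cache_store, {
  url: ENV.fetch("CACHE_REDIS_URL") { ENV.fetch("REDIS_URL") }
}
```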

ActionCable presents a different challenge. WebSockets rely on Redis pub/sub channels to broadcast messages. When you switch the REDIS_URL for ActionCable, existing WebSocket connections will drop. Your front-end JavaScript — typically using the standard @rails/actioncable library — will automatically attempt to reconnect.

You should update your config/cable.yml to utilize the new infrastructure:

# config/cable.yml
production:
  adapter: redis
  url: <%= ENV.fetch("ACTIONCABLE_REDIS_URL") { ENV.fetch("REDIS_URL") } %>
  channel_prefix: your_app_production

Ensure that your load balancer or ingress controller is configured to handle the sudden spike in reconnection attempts when the cutover occurs.

6. Post-Migration Monitoring and Long-Term Maintainability

After updating your Redis infrastructure, rigorous monitoring is mandatory. Rely heavily on Application Performance Monitoring (APM) tools to track connection errors, queue latency, and memory consumption.

Pay close attention to Sidekiq processing times. If your new Redis instance is located in a different availability zone than your application servers, network latency can increase, which degrades overall throughput.

Moving off Heroku forces you to take ownership of your database infrastructure. By explicitly separating workloads, defining strict eviction policies, and configuring connection pools, you establish a resilient foundation capable of supporting significant application growth.

Sponsored by Durable Programming

Need help maintaining or upgrading your Ruby on Rails application? Durable Programming specializes in keeping Rails apps secure, performant, and up-to-date.

Hire Durable Programming