How to Use Sentry and Honeybadger to Monitor Errors During a Rails Upgrade
In 1968, Volkswagen introduced the first consumer on-board diagnostics (OBD) computer system. Before the advent of OBD systems, mechanics diagnosing a car problem had to rely on a combination of driver reports, manual inspection, and intuition. If the engine sounded wrong, they had to take it apart to find out why. Today, of course, a mechanic can plug a scanner into a port under the dashboard and immediately retrieve standardized error codes that pinpoint the exact failing component.
This historical shift from manual inspection to automated instrumentation parallels the evolution of software operations. When upgrading a large Ruby on Rails application, we are essentially changing the engine while the car is running. Even with comprehensive test coverage, local environments and CI/CD pipelines rarely capture every edge case present in a production environment. As we migrate an application to a new version of Rails, deprecated behaviors break, gem incompatibilities emerge, and unpredictable user data exposes new code paths.
To manage this risk, we must rely on robust exception tracking. Tools like Sentry and Honeybadger act as our application’s OBD system. When configured correctly for an upgrade, they provide the visibility needed to identify, triage, and remediate technical debt before it impacts the critical business path.
One may wonder: if we can log exceptions to a local file, why do we need a dedicated error monitoring service? The answer is straightforward. An upgrade generates a high volume of noise. While a log file captures everything, it does not aggregate, group, or provide analytical tools to differentiate a critical regression from a minor warning.
Before we get into the specifics of configuring these tools, though, let’s establish some foundational practices.
Establishing a Pre-Upgrade Error Baseline
Strictly speaking, a staging environment will not catch all of your upgrade regressions — at least not without an exact clone of your production data and traffic patterns. Therefore, before deploying any upgraded code to production, we need a clear understanding of the application’s current technical health. An upgrade will inevitably generate noise; if your error tracker is already flooded with unresolved exceptions, isolating new regressions becomes significantly harder.
As a memory aid, you can think of your error tracker during an upgrade as a signal-to-noise problem: we want to maximize the signal (new upgrade-related errors) and minimize the noise (existing technical debt).
Take the time to establish an error baseline:
- Resolve or Mute Existing Noise: Review the highest-volume errors in your current production environment. Fix the ones you can, and use your tool’s ignore or snooze features to silence known, low-priority issues.
- Audit Ignored Exceptions: Check your configuration files for suppressed error classes. Ensure you are not dropping exceptions that might indicate a severe migration failure.
- Ensure Context is Logged: Verify that your current setup captures essential context, such as user IDs, request parameters, and background job arguments.
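As a concrete illustration of the second point, suppressed error classes in a Sentry setup typically live in the initializer's excluded_exceptions list (the specific entry below is illustrative, not a recommendation):

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV['SENTRY_DSN']

  # Audit this list before the upgrade. An entry such as
  # ActiveRecord::StatementInvalid here would silently hide a real
  # migration failure; routing noise, by contrast, is usually safe to drop.
  config.excluded_exceptions += ['ActionController::RoutingError']
end
```

Honeybadger keeps an equivalent ignore list under the exceptions key in config/honeybadger.yml, which deserves the same audit.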
Choosing a Diagnostic Instrument
There are a number of different error monitoring services available. For our purposes, we will look at two of the most popular in the Ruby ecosystem: Sentry and Honeybadger.
Depending on your particular circumstances, one may be more useful than the other. I would characterize the Sentry approach as aiming for comprehensive, full-stack observability: it provides granular control, release tracking, and deep performance monitoring. By contrast, I would characterize Honeybadger as focusing on developer ergonomics and simplicity; it is designed to be a straightforward solution for uptime and error monitoring.
Generally speaking, if you already have a complex infrastructure and need deep cross-service tracing, Sentry is often the better choice. Honeybadger, though, will often make more sense if your primary goal is rapid triage and straightforward error grouping.
Configuring Sentry for a Rails Upgrade
Sentry provides granular control over exception tracking. Its environment tagging features are particularly useful for isolating regressions during an upgrade, especially if you are employing a dual-boot strategy.
Tagging the Rails Version
If you are dual-booting — testing the application against both the current and target Rails versions using a tool like bootboot or next_rails — you must know which Rails version generated a specific error.
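For reference, a next_rails-style dual boot typically switches the framework version in the Gemfile based on which Gemfile Bundler loaded (the version constraints here are illustrative):

```ruby
# Gemfile
# next_rails convention: Gemfile.next symlinks back to this file,
# so __FILE__ tells us which boot mode is active.
def next?
  File.basename(__FILE__) == 'Gemfile.next'
end

if next?
  gem 'rails', '~> 7.1.0' # target version under test
else
  gem 'rails', '~> 7.0.0' # current production version
end
```

With both versions bootable from one branch, the tagging below tells you which one produced a given exception.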
We can tag exceptions with the active framework version globally within our Sentry initializer. Let’s look at how we might configure this in config/initializers/sentry.rb:
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV['SENTRY_DSN']

  # Tag exceptions with the active Rails version
  config.before_send = lambda do |event, hint|
    event.tags ||= {}
    event.tags[:rails_version] = Rails.version
    event
  end
end
This is significant because before_send allows us to mutate the event payload before it is transmitted. By injecting Rails.version into the tags, we can instantly filter Sentry issues in the web interface to see only errors occurring on our target Rails branch.
Utilizing Release Tracking
Sentry’s release tracking ties errors to specific deployments or Git commits. When rolling out a major Rails version, associate the deployment with a distinct release tag. This allows Sentry to automatically identify which exceptions were introduced during the upgrade deployment.
You can configure the release version in your initializer:
Sentry.init do |config|
  # ...snip...
  config.release = ENV.fetch('COMMIT_SHA', 'development')
end
The rest of the configuration is elided above for brevity.
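The COMMIT_SHA variable consumed above has to be populated somewhere in your deployment pipeline. A typical CI step might look like the following sketch (the variable name and fallback mirror the initializer; adapt them to your pipeline):

```shell
# Illustrative CI/deploy step: expose the current commit so the Sentry
# initializer can report it as the release. Falls back to "development"
# when run outside a Git checkout.
COMMIT_SHA="$(git rev-parse HEAD 2>/dev/null || echo development)"
export COMMIT_SHA
echo "Deploying release ${COMMIT_SHA}"
```

Many CI providers also expose the commit hash as a built-in environment variable, which you can map to COMMIT_SHA instead of shelling out to Git.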
Configuring Honeybadger for a Rails Upgrade
Honeybadger, on the other hand, is known for its straightforward configuration. Its simplicity makes it an excellent tool for rapid triage during the immediate post-upgrade support phase.
Custom Context and Insights
During an upgrade, you might need to track specific application states that are unique to the migration process. For example, if you are migrating from Paperclip to ActiveStorage, you might want to know if an error occurred while processing an old or new attachment format.
We can add custom context to Honeybadger within specific controllers or background jobs. For instance:
class LegacyAttachmentMigrator
  def perform(record_id)
    @record = Record.find(record_id)

    # Add context for this specific execution
    Honeybadger.context(
      migration_phase: 'active_storage_transition',
      legacy_attachment_id: @record.legacy_id
    )

    # ...snip...
  end
end
You may also notice that we do not need to clear this context manually at the end of the request or job; Honeybadger automatically clears context between requests in a Rails environment. Again, the actual migration logic is abbreviated to keep the example focused on the instrumentation.
Leveraging Deployment Tracking
Like Sentry, Honeybadger supports deployment tracking. You should configure your deployment scripts to notify Honeybadger whenever the upgraded branch is deployed.
You can typically notify Honeybadger of a deployment via a CLI command:
$ bundle exec honeybadger deploy --environment=production --revision=$(git rev-parse HEAD)
If the deployment notification is successful, Honeybadger will output something like this:
Deploying to production...
Revision: 3a9b1c2
Deploy recorded successfully.
This command notifies Honeybadger that a deployment has occurred, and Honeybadger uses that data to group errors by deployment. If an exception spikes immediately following the upgrade release, Honeybadger will flag it as a likely regression. Note the use of bundle exec: invoking the honeybadger executable directly will not necessarily resolve the version of the gem pinned for your environment. The revision in your output will, of course, reflect your own Git commit hash.
Triage and Remediation Workflows
The true value of these tools during a Rails upgrade lies in how we use them. A sudden influx of errors can be overwhelming. Establishing a clear triage workflow before the deployment is essential.
- Prioritize by Frequency and Impact: Focus on exceptions affecting the highest number of users or interrupting critical user flows.
- Filter by Release/Deployment: Immediately isolate errors introduced in the upgrade release. These are your primary regressions.
- Rollback if Necessary: If the error volume exceeds your acceptable threshold or impacts core functionality, use the monitoring data to justify a rollback.
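The prioritization above can be sketched in plain Ruby. The ErrorGroup shape here is hypothetical; in practice this data would come from your tracker's API or a CSV export:

```ruby
# Hypothetical error-group summary as it might come out of a tracker export.
ErrorGroup = Struct.new(:title, :count, :users_affected, :release)

# Keep only errors from the upgrade release, then rank by user impact
# first and raw event frequency second.
def triage(groups, upgrade_release)
  groups.select { |g| g.release == upgrade_release }
        .sort_by { |g| [-g.users_affected, -g.count] }
end

groups = [
  ErrorGroup.new('Timeout in ReportJob',      900, 3,   'v7.1-upgrade'),
  ErrorGroup.new('NoMethodError in Checkout',  40, 35,  'v7.1-upgrade'),
  ErrorGroup.new('Legacy 404 noise',         5000, 100, 'v6.1-baseline')
]

triage(groups, 'v7.1-upgrade').first.title
# => "NoMethodError in Checkout" (fewer events, but more users affected)
```

The point of the ranking is that raw event counts mislead during an upgrade: a noisy background-job timeout can generate far more events than a checkout regression that actually blocks users.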
Deploying an upgraded branch is certainly a reasonable action, but any such deployment should be accompanied by a clear rollback plan; assume there will be regressions. The goal of an upgrade deployment is not to force the new version into production at all costs; the goal is a stable application.
Conclusion
Upgrading a Ruby on Rails application is a significant undertaking. By establishing an error baseline and deliberately configuring Sentry or Honeybadger to track releases and framework versions, we transform these tools into targeted diagnostic instruments. This approach minimizes the impact of unexpected regressions and provides the confidence needed to finalize complex framework modernizations.
Sponsored by Durable Programming
Need help maintaining or upgrading your Ruby on Rails application? Durable Programming specializes in keeping Rails apps secure, performant, and up-to-date.
Hire Durable Programming