
Optimize Database Performance in Ruby on Rails and ActiveRecord

Daniel Lempesis


In Rails, we're more likely to use SQL databases than in many other frameworks. Unlike NoSQL databases, which can be scaled horizontally with relative ease, SQL databases like PostgreSQL and MySQL are much less amenable to easy scaling.

As a result, our database usually becomes the primary bottleneck as our business grows. Although SQL databases are very efficient, as our growing customer base puts an increasing load on our servers, we begin scaling our instance counts, workers, etc. But we can't just make copies of our database for each new server we spin up. This makes optimizing database performance critical for any serious Rails project.

In this post, we'll explore strategies for optimizing performance to minimize the load on our database. We'll start with some more basic topics like eager loading and the N+1 query problem, database indexing, select, pluck, and immediate loading. Then, we'll move on to more involved topics like performance profiling, background jobs, read replicas, and scaling via techniques like database sharding.

Let's get going!

Getting ActiveRecord N+1s Out of the Way

No discussion of database performance involving ActiveRecord (or any ORM) is complete without addressing the infamous N+1 problem. While most readers are likely familiar with it, it's still worth revisiting, as N+1 queries can (and do) creep into our projects over time, especially as our codebase evolves and we add or update features.

The N+1 problem generally occurs when iterating through a (potentially large) group of retrieved records. Because we haven't loaded their associations, each iteration triggers an additional query for every associated model we access inside the loop. This can quickly spiral out of control, resulting in a staggering number of queries and causing serious consequences such as crashed pages, an exhausted database connection pool, and memory running out, ultimately grinding our site to a halt.
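As a quick refresher, here's what a typical N+1 looks like and how eager loading fixes it. This is a minimal sketch; the Post model and its belongs_to :author association are hypothetical:

```ruby
# N+1: one query for the posts, plus one query per post for its author.
posts = Post.limit(50)
posts.each { |post| puts post.author.name } # 50 extra queries

# Fixed: eager load the association up front with includes, so all the
# authors are fetched in a single additional query.
posts = Post.includes(:author).limit(50)
posts.each { |post| puts post.author.name } # 2 queries total
```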

There are a few useful tools at your disposal to help identify and resolve N+1 problems. The first is simply looking at your server output; generally, this works pretty well, as N+1s are easy to spot. For a more assisted approach, the Bullet gem is a popular tool that automatically detects N+1s in applications and suggests ways to fix them. Another, arguably better choice is prosopite, a less well-known gem that generally produces better results, with fewer false positives (and false negatives).

Eager loading should be used with care, however; while you're probably safe loading has_one associations, has_many can sometimes be dangerous: what happens when we've got a query of multiple joined, many-to-many tables, and eager load one or more of the many-to-many associations?

We can end up loading way too many records into memory. This can crash our app just as surely as a bad N+1. If you're at this point, and both the N+1 and eager loading approaches are causing you problems, it may be time to reevaluate the query itself. Maybe you're loading associations you only want to COUNT (something you might be able to build into the query itself), maybe you can tighten your pagination limits, or maybe you can offload the query to a background job if possible (which we'll cover later).
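For example, when all we actually need is a count per record, we can often push the counting into the database rather than loading the association at all. A sketch, with hypothetical Author and Post models:

```ruby
# Instead of eager loading every post just to count them in Ruby...
authors = Author.includes(:posts).limit(20)
counts  = authors.map { |author| author.posts.size }

# ...let the database do the counting with a single grouped query:
post_counts = Author.left_joins(:posts)
                    .group("authors.id")
                    .count("posts.id")
# post_counts is a hash of author id => post count.
```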

Database Performance in Rails: Considerations and Optimization

Sometimes, breaking queries down into a couple of steps can both improve performance and simplify a query. Ever find yourself trying to get a subset of records to complete a mind-bending query? In certain cases, just finding the IDs of the subset you want and then feeding them into a second, less complex query can help reduce the need for complex joins, subqueries, etc.
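A sketch of the two-step approach, using hypothetical Post and Author models: first a cheap query that plucks only the IDs we care about, then a simple second query that avoids the joins and subqueries entirely.

```ruby
# Step 1: a lightweight query that only fetches the IDs of the subset.
author_ids = Post.where(published: true)
                 .where("created_at > ?", 1.week.ago)
                 .distinct
                 .pluck(:author_id)

# Step 2: a second, much simpler query fed by those IDs.
recent_authors = Author.where(id: author_ids).order(:name)
```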

Have you ever fallen victim to Rails' (usually beneficial) default lazy-loading and evaluation, and wished you could load something immediately rather than the first time it's used? Sometimes we need not only the results of a query, but another aspect of it too, like the number of records we've retrieved. Our seemingly harmless code looks like this:

Ruby
@users = User.where("email ILIKE ?", search)
@total_users = @users.size

Normally, this is fine, but what if we need to count users before we do anything with them (maybe we're displaying the number of users returned in our live search at the top of the page, followed by the users themselves)? We might add COUNT to our query, but then we need to retrieve the records again.

We can get around this by explicitly calling load on our query — for example:

Ruby
@users = User.where("email ILIKE ?", search).load
@total_users = @users.size

This simple change can save us a query. With our users already loaded, our display of @total_users above the user list will no longer trigger an expensive COUNT followed by another query to get users.

In the other direction, we have load_async with Rails 7. This allows us to asynchronously query the database with multiple requests at once, rather than being locked to one synchronous query at a time. This can be a total game-changer if you're in a situation that can benefit from it. Be careful though, as this (probably unsurprisingly) uses Ruby threads under the hood, so comes with all the same issues you'd expect from threads. That's not to mention the fact that it (potentially) opens our database up to a greatly increased number of connections, and can exhaust our connection pool if we aren't careful.
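A minimal sketch of load_async in action (assuming Rails 7+ with config.active_record.async_query_executor configured; the models are hypothetical):

```ruby
# Both queries are dispatched to background threads immediately...
posts    = Post.where(published: true).order(created_at: :desc).load_async
comments = Comment.where(flagged: true).load_async

# ...and we only block here, at the point the results are actually used.
@posts    = posts.to_a
@comments = comments.to_a
```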

A large part of database performance in Rails comes down to how you utilize ActiveRecord. While AR is an invaluable tool, it can also lead to performance issues if not used with care. As you're probably aware, it loads every column of every table involved in a query (SELECT *) by default, regardless of how much of that data we actually use. It's pretty easy to forget this though, especially as our queries evolve, and we can end up loading a lot of unnecessary data. It's important to consider whether or not you're really using all of the columns you're loading, or if a select (or even a pluck) statement might be a better fit for your use case.
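For example (the column names here are hypothetical):

```ruby
# Default: SELECT * loads every column as full User objects.
User.where(active: true)

# select: loads only the columns we use, still as (lighter) User objects.
User.where(active: true).select(:id, :email)

# pluck: skips model instantiation entirely and returns raw values,
# ideal when we don't need ActiveRecord objects at all.
User.where(active: true).pluck(:email)
```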

You're probably familiar with eager loading in Rails, but one thing that's often overlooked is that Rails makes it easy to introduce optional eager loading (and WHERE clauses, and SELECTs). If you've ever had slightly different requirements for two very similar queries, you might feel like you're left with two options: write two separate queries, or write one query covering both use cases, retrieving more than either one needs. This might happen in a before_action, where you typically want the same thing for each action in a particular controller, but one action might benefit from eager loading or more careful column selection (say, a search results page that just filters the usual index based on some criteria).

There's another way, though:

Ruby
class User < ApplicationRecord
  def search_user_posts(before_date: nil, optional_selects: [], optional_eager_loads: [], exclude_inactive: false, search: false)
    optional_before_date_constraint = ["created_at < ?", before_date] if before_date
    optional_joins = { user_posts: [posts: :comments] } if search

    self.posts
        .includes(optional_eager_loads)
        .joins(:shared_joins)
        .joins(optional_joins)
        .select(:columns_shared_by_queries)
        .select(optional_selects)
        .where("your shared WHERE constraints")
        .where(optional_before_date_constraint)
  end
end

Rails is smart enough to know not to execute anything here if there's nothing passed in; where(nil) will behave as though it doesn't exist, as will includes([]), etc. This can really clean up your code and help you tailor performance to your needs.

Finally, rethinking your JOINs and WHEREs can sometimes be beneficial. Are you querying a very large dataset where using a JOIN could significantly reduce the initial result set? Try it out, and don't be afraid to run #explain on your query. It can help you determine if switching to a JOIN might work better and identify any missing indexes on critical queries.
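Running #explain is as simple as calling it on a relation (a sketch; the exact plan output depends on your database):

```ruby
# Prints the query plan the database would use for this query.
puts User.where(active: true).joins(:posts).explain
```

On Rails 7.1+, you can also pass options through to the database, e.g. `.explain(:analyze)` on PostgreSQL to actually execute the query and report real timings.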

Database Indexing

One of our most important (and nearly universally necessary) tools for database performance is indexing. As you're probably aware, indexes are special data structures (typically B-trees) that a database uses to quickly find records, improving retrieval times from O(n) to O(log(n)). However, it's important to add indexes carefully, as indexing columns unnecessarily can actually harm performance.

The trick is to identify the columns and associations that your application needs to query frequently, and then create indexes on those. This allows the database to quickly locate the relevant data without having to scan the entire table.
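For instance, if we frequently look up users by email, a standard migration sketch might look like this (the table and column names are illustrative):

```ruby
class AddIndexToUsersOnEmail < ActiveRecord::Migration[7.1]
  def change
    # Speeds up lookups like User.find_by(email: ...);
    # unique: true also enforces uniqueness at the database level.
    add_index :users, :email, unique: true
  end
end
```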

You can also use partial indexing. Partial indexes allow you to index a subset of a table's rows based on a specified condition. This is particularly useful when your queries frequently target only a specific portion of the data.

For example, maybe your app has a core of frequent visitors, the majority of whom have their own accounts. They make up the lion's share of your traffic, but represent a small minority of your total users. You could define a partial index on the users table like this:

Ruby
class AddPartialIndexToUsersOnGuest < ActiveRecord::Migration[7.1]
  def change
    # The `where:` condition makes this a partial index.
    add_index :users, :guest, where: "(guest = false)"
  end
end

Here, we've specifically targeted rows where guest is false. This optimizes the performance of any query searching for non-guest users by reducing the total rows covered by the index.

It's important to revisit your indexing profile and database schema periodically. The specific indexing needs of your application may change over time as the user base, codebase, and feature set evolve. Doing so often means you end up reviewing your indexes during comfortable periods where you have time to analyze and think about your database, rather than in a panic when an influx of traffic crashes your app.

All of this brings us to our next topic: performance profiling.

Performance Profiling for Your Ruby on Rails App

So, we've identified some major categories of performance optimization; what now? Do we go through our entire app from top to bottom, optimizing line by line? Probably not.

It's pretty unlikely that you have evenly distributed traffic across your whole site. It's even less likely that you can afford to go through your entire codebase; premature optimization may not actually be the root of all evil, but you also don't have time for it. So how do we know what to focus on?

This is where profiling tools come in. While your local server output can provide valuable performance insights into response times, memory use, and query performance, you are only one user. Your local database likely doesn't reflect the scale or patterns of your production database. Valuable as it is for spotting potential performance issues, your local environment doesn't provide the context you need to make decisions about what genuinely needs your attention.

Application Performance Monitoring tools (APMs) like AppSignal for Ruby are essential for monitoring and maintaining our site's performance. They provide comprehensive insights not just at the database layer, but across the rest of the stack, allowing us to pinpoint specific areas (pages, endpoints, and even specific queries) that are causing headaches. With a good APM tool, we can track the performance of any request over various timescales, monitor background jobs, spot memory leaks, set up custom error and performance alerts, and more.

APMs provide visualization tools that make it much easier to spot and understand complex performance trends, leading to quicker, more informed decisions. They help prioritize optimization efforts by highlighting not only the slowest queries and most resource-intensive parts of an application, but also the endpoints that consume the most total time (total request count * average response time). This means we can focus on what's actually important. Remember that page we were worried about that takes a couple of seconds to load? Turns out, its total impact on our site amounts to practically nothing. Our products page though, clocking in at a reasonable 150ms? That's taking up 15% of our total server time.

To find out more about measuring database performance using AppSignal, check out our post Monitor the Performance of Your Ruby on Rails Application Using AppSignal.

Background Jobs

It isn't always an option, but running queries in background jobs is a powerful tool for improving performance.

You might be asking yourself how this could be, given that we only have one database; what difference does it make if we're merely calling on it from another thread or process? But there are a few main ways background jobs can come to our database's rescue.

First, because our background jobs run asynchronously, we're pretty free to run relatively resource-intensive tasks, like setting up bulk inserts or upserts (note: neither validations nor callbacks are triggered for bulk operations). We can also distribute large numbers of queries, especially writes, across a period of time, possibly with the help of find_each.

You're likely used to using a job to process a CSV or spreadsheet; those tasks tend to lend themselves to asynchronous processing as well as bulk insertions. If you're doing this on your server's main thread, you're likely doing something wrong. But sometimes other, less obvious things we're currently doing on our main thread can be moved to a background job to improve performance on the main thread and reduce load on the database.

Consider a scenario where your site allows people to make potentially large purchases of relatively inexpensive items. A school, for example, might order tens of thousands of pencils at once, and maybe your system creates a PurchaseItem for every single item purchased. Instead of creating or updating thousands of rows one at a time, we can set up a bulk insert in a background job, create all the individual hashes inside a loop, and use insert_all to get the job done — taking advantage of our background worker to set up our bulk inserts, and then writing to the database once, instead of locking it with thousands of writes.
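A sketch of the row-building step. The loop itself is plain Ruby; the PurchaseItem model and the actual insert_all call are the hypothetical Rails parts, shown commented out:

```ruby
# Build one attributes hash per purchased item, ready for a bulk insert.
# insert_all skips validations and callbacks, so we set timestamps ourselves.
def build_purchase_rows(purchase_id, item_count, timestamp)
  item_count.times.map do
    { purchase_id: purchase_id, created_at: timestamp, updated_at: timestamp }
  end
end

rows = build_purchase_rows(42, 10_000, Time.now)
# In the background job: one atomic write instead of 10,000 INSERTs.
# PurchaseItem.insert_all(rows)
```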

Read Replicas

Background jobs are great, but eventually, even if we're spacing out those large, numerous database queries, we hit a point where we're asking too much of our database. We're probably okay in terms of write performance (we tend to read from our database far more than we write to it, after all), but we're constantly bombarding it with queries. Our site's response times are steadily rising, and user experience is suffering.

At this point, the next logical step may be to introduce read replicas to offload work from the primary database. Read replicas allow you to distribute the read query load across one or more replicas of your database. This can improve performance across the board, as it greatly reduces the number of reads (and thus the total I/O) of your primary database, leaving it less swamped by requests (most of which are likely reads in the first place).
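Rails supports this out of the box via its multiple databases API. A minimal sketch, assuming primary and primary_replica entries already exist in database.yml:

```ruby
# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  primary_abstract_class

  # Reads can go to the replica; writes always go to the primary.
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Explicitly route a block of read-only work to the replica:
ActiveRecord::Base.connected_to(role: :reading) do
  # Heavy reporting reads here hit the replica, not the primary.
  User.where(active: true).count
end
```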

Ideally, large database operations are performed in background jobs reading from replicas, setting up a bulk insert or upsert, and then making one atomic write to your main database. This means you can potentially raise not only the read performance of your site, but can also boost write performance to some degree, as your primary database is consistently freer to accept write requests.

While read replicas are not painless to set up, they can provide the extra performance our site needs.

Database Sharding at a High Level

Remember how we said SQL databases don't scale horizontally? Technically, they can scale horizontally through database sharding. Sharding involves splitting up your database into multiple databases, which are then distributed across multiple servers (and often across geographical locations).

There are several approaches for doing this, including:

  • Geographical sharding (based on user locations).
  • Vertical sharding (splitting into different tables or groups of tables based on access patterns, with each shard storing a different subset of the database’s tables).
  • Functional sharding (splitting based on business function; like separating out billing from user content).

And more. Each approach has its use cases, and some naturally lend themselves better to some businesses than others.
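At the Rails level, horizontal sharding uses the same connects_to API as read replicas. A sketch; the shard names and their database.yml entries are hypothetical:

```ruby
class ApplicationRecord < ActiveRecord::Base
  primary_abstract_class

  # Each shard points to its own database configuration.
  connects_to shards: {
    default:   { writing: :primary },
    shard_one: { writing: :primary_shard_one }
  }
end

# Route queries to a specific shard:
ActiveRecord::Base.connected_to(role: :writing, shard: :shard_one) do
  User.find(user_id)
end
```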

As you've doubtlessly realized by now, sharding is generally difficult, and depending on your business, sometimes not even possible. It adds complexity and development and maintenance overhead to our apps, potentially a lot of it. The good news is that if you don't have the resources to cover this, you probably don't need it in the first place. The bad news is that even if you do, it still complicates things and means increased costs (presumably balanced by the potential benefits).

But if you've done all you can with query optimization, read replicas, and caching, sharding might just be the next necessary step — and one that can greatly increase your I/O capability.

Words of Caution and Final Words: Database Optimization in Ruby on Rails

We need to be conscientious in how we approach our optimizations. It's possible to over-index (or index the wrong things, which hurts write performance). In the same way that N+1s can get out of control if we're not careful, so can eager loading, and it's not always immediately obvious when we're developing locally (hence the importance of at least approximating your real-world site locally).

Keep an eye on things via an APM, as traffic and user patterns can change and new ones can emerge, not just over time but potentially week to week and season to season, depending on your business.

There's no one-size-fits-all approach, so we have to be mindful of how we tailor our optimizations to our own site. Remember not to waste too much time trying to optimize things until you have an idea of the impact they'll have on your site.

That doesn't mean you shouldn't try to write generally efficient code, just that there's generally no reason to agonize over tweaking things to be absolutely optimal or wasting time on micro-optimizations. Try to keep the time-to-benefit ratio in mind, and how what you're working on will likely fit into your existing site as it is now, and as it evolves.

If you're interested in diving deeper into aspects of database optimization like sharding, you can learn more from the Rails guides. Also, check out Paweł Urbanek's great article on load_async.

Happy optimizing!

P.S. If you'd like to read Ruby Magic posts as soon as they get off the press, subscribe to our Ruby Magic newsletter and never miss a single post!

Daniel Lempesis


Our guest author Daniel is a software engineer passionate about Ruby, Rails and software development in general. Most days he can be found squashing bugs or working on building out a new feature.
