In an Elixir application, you might need to access certain data frequently, which can be costly. The access time involved in retrieving data at every step can cause high latency, or even make the application crash (due to an increased workload on the database).
Caching lets you store the most frequently accessed data and minimize database retrievals, improving the overall performance of your application.
In this post, you'll learn how to use the Nebulex caching toolkit to:
- cache data locally in your Elixir applications
- work with the different caching solutions/strategies it supports
- decide whether Nebulex is a good fit for your caching needs
Let's get going!
What Is Nebulex?
You've likely already come across some caching options for your existing Elixir application.
Here, I will introduce you to the Nebulex caching toolkit. Nebulex is an in-memory and distributed caching framework for Elixir that supports temporarily storing and accessing data in an Elixir application.
Why Use Nebulex in Your Elixir Application?
Caching can be implemented in many different ways depending on your application use case. You may need a local cache that stores data directly in your application or a distributed cache. Having a cache framework that supports a vast number of caching solutions is a plus. This is where we can leverage the Nebulex caching toolkit.
Nebulex has a flexible and pluggable architecture based on adapters. This makes it possible to integrate different caching options and provides an easy way to work with various caching topologies, such as:
- Partitioned caching: distribution of data between multiple nodes
- Replicated caching: involving multiple servers, where each server has the same copy of data
- Local caching: a single-server cache that resides on a single node
So Nebulex makes it easy for developers to scale their applications beyond a single node (with minimal impact on the code). You only need to choose the adapter that best suits your needs.
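To illustrate, here's a minimal sketch of what choosing a topology looks like in practice. The module names below are hypothetical, and the partitioned and replicated adapters assume your nodes are connected; only the :adapter option changes between them:

# Hypothetical cache modules: the :adapter option selects the topology
defmodule MyApp.LocalCache do
  # single-node cache stored in the local VM
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Local
end

defmodule MyApp.PartitionedCache do
  # data is partitioned (sharded) across the connected nodes
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Partitioned
end

defmodule MyApp.ReplicatedCache do
  # every connected node keeps a full copy of the data
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Replicated
end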
Check out this in-depth guide on supported caches and their adapters in Nebulex.
Now let's look at a real-world example where Nebulex proved useful.
The Problem: Fetching and Storing Data from an External API
A while ago, I was working on a payment integration system that involved fetching a transaction from an external API. The fetched transaction was then stored in the database.
We also had to confirm whether each transaction eventually completed, so we scheduled a job using a GenServer. The GenServer sent a request to the external API after a set period to check whether the transaction was complete, then updated the stored transaction's status.
This approach had the following shortcomings:
- After every specified period, a request was sent to the external API to confirm whether the transactions were complete. This happened even when all the fetched transactions stored in the database were already complete.
- After every specified period, a query ran to check whether all transactions were complete. This meant unnecessary trips to the database and didn't scale well when handling many transactions.
The Solution: Caching Locally with Nebulex
The application was being deployed to a single instance and would frequently access loaded transactions. So to solve the problem stated above, we used Nebulex to locally cache the fetched transactions.
This approach helped to overcome the following:
- Making unnecessary trips to the external API after every specified period. Requests were only made if the cache contained transactions, and only incomplete transactions were cached.
- Making many trips to the database. Trips to the database were made only when inserting a fetched transaction and updating its status.
As mentioned earlier, Nebulex supports different caching solutions, one of which is local caching.
We'll now implement local caching with Nebulex in an Elixir application.
Let's begin!
Add Nebulex to an Elixir Application
First, open the mix.exs file and add the nebulex dependency as shown:
defp deps do
  [
    {:nebulex, "~> 2.4"}
    # ...
  ]
end
Install the Nebulex dependency:
mix deps.get
After installation, go ahead and generate your cache:
mix nbx.gen.cache -c Payment.Cache
This step involves:
- Defining a cache module within the application
- Setting up configurations
We'll use the name Payment.Cache to identify our cache (you can name yours as you wish). The command above will generate the Payment.Cache module defined in lib/payment/cache.ex.
#lib/payment/cache.ex
defmodule Payment.Cache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Local
end
The command also generates the configuration Nebulex will use in config/config.exs:
#config/config.exs
config :my_app, Payment.Cache,
  # When using :shards as the backend
  # backend: :shards,
  # GC interval for pushing a new generation: 12 hrs
  gc_interval: :timer.hours(12),
  # Max 1 million entries in the cache
  max_size: 1_000_000,
  # Max 2 GB of memory
  allocated_memory: 2_000_000_000,
  # GC min timeout: 10 sec
  gc_cleanup_min_timeout: :timer.seconds(10),
  # GC max timeout: 10 min
  gc_cleanup_max_timeout: :timer.minutes(10)
Finally, open lib/my_app/application.ex and add Payment.Cache to the application's supervision tree:
#lib/my_app/application.ex
use Application

@impl true
def start(_type, _args) do
  children = [
    Payment.Cache
    # ...
  ]
Setting Payment.Cache in the supervision tree starts the Nebulex cache process when the application starts up.
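If you want a quick sanity check that the cache is up and running, you can exercise its basic callbacks in iex. This is just a hypothetical smoke test (the :health_check key is made up for illustration), not part of the payment flow:

iex> Payment.Cache.put(:health_check, :ok)
:ok
iex> Payment.Cache.get(:health_check)
:ok
iex> Payment.Cache.delete(:health_check)
:ok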
Nebulex.Cache Configuration in Elixir
The generated Payment.Cache module uses Nebulex.Cache, a cache abstraction layer controlled by adapters. Nebulex.Cache expects :otp_app and :adapter as options.
- :otp_app - points to the Elixir application where nebulex can find the cache configuration. In our case, :my_app is specified.
- :adapter - the desired adapter is configured here. In our case, Nebulex.Adapters.Local has been specified. It implements a local generational cache: cached entries are grouped into generations based on their age, and the garbage collector periodically creates a new generation and deletes the oldest one.
The configuration specified in config/config.exs is specific to Nebulex.Adapters.Local, the adapter passed in the :adapter option.
These are some of the options supported by Nebulex.Adapters.Local via the cache configuration:
- :backend - the storage used by the adapter. The supported backends are :ets and :shards; the default is :ets.
- :gc_interval - interval in milliseconds at which garbage collection runs, deleting the oldest generation and creating a new one. Expects an integer > 0.
- :max_size - the maximum number of entries allowed in the cache. Expects an integer > 0.
- :allocated_memory - maximum size in bytes of the memory allocated for a cache generation. Expects an integer > 0.
- :gc_cleanup_min_timeout - minimum timeout in milliseconds for triggering the next clean-up and memory check. Defaults to 10_000 (10 seconds). Expects an integer > 0.
- :gc_cleanup_max_timeout - maximum timeout in milliseconds for triggering the next clean-up and memory check. Defaults to 600_000 (10 minutes). Expects an integer > 0.
Read more about the supported module options for Nebulex.Adapters.Local.
Query with Nebulex.Cache Callbacks
The Nebulex.Cache module has callbacks that can be leveraged to perform queries. Let's cache transactions fetched from an external API and manipulate the cached entries using these callbacks.
The implementation for manipulating cache entries is in lib/my_app/payment_cache.ex:
#lib/my_app/payment_cache.ex
defmodule MyApp.PaymentCache do
  @moduledoc """
  Context for manipulating cache entries. It involves:

  - Inserting fetched (incomplete) transactions into the cache
  - Querying all cached transactions
  - Deleting cached transactions
  """

  alias Payment.Cache

  # Insert a single fetched transaction, keyed by its id
  def insert_transaction(transaction) do
    Cache.put(transaction.id, transaction)
  end

  # Insert a list of transactions in one call
  def insert_all_transactions(transactions) do
    transactions
    |> Enum.map(fn transaction ->
      {transaction.id, %{id: transaction.id, status: transaction.status}}
    end)
    |> Cache.put_all()
  end

  def get_transaction(id) do
    Cache.get(id)
  end

  # Returns the keys of all cached entries
  def all_cached_transactions do
    Cache.all()
  end

  def delete_transactions(transaction_ids) when is_list(transaction_ids) do
    Enum.each(transaction_ids, fn id -> delete_transaction(id) end)
  end

  def delete_transaction(id) do
    Cache.delete(id)
  end
end
Here, we create a context for manipulating cache entries related to payments. We alias Payment.Cache (the module generated above in lib/payment/cache.ex), which gives us access to the Nebulex.Cache callbacks.
The following functions are defined:
- insert_transaction/1 - takes a transaction fetched from the external API and inserts it into the cache by invoking Payment.Cache.put/3.
- insert_all_transactions/1 - takes a list of transactions and inserts them into the cache by invoking Payment.Cache.put_all/2.
- all_cached_transactions/0 - fetches the keys of all cached entries by invoking Payment.Cache.all/2.
- delete_transaction/1 - deletes a cached entry by invoking Payment.Cache.delete/2.
- delete_transactions/1 - deletes a list of cached entries by calling delete_transaction/1 for each key.
- get_transaction/1 - fetches a cached transaction by its key, invoking Payment.Cache.get/2.
Now, open an interactive shell in your terminal, and put the functionality into use:
iex> alias MyApp.PaymentCache
iex> fetched_transaction = %{id: 1, status: "pending"}
%{id: 1, status: "pending"}
iex> PaymentCache.insert_transaction(fetched_transaction)
:ok
iex> PaymentCache.get_transaction(1)
%{id: 1, status: "pending"}
iex> another_fetched_transaction = %{id: 2, status: "pending"}
%{id: 2, status: "pending"}
iex> PaymentCache.insert_transaction(another_fetched_transaction)
:ok
iex> PaymentCache.all_cached_transactions() # returns the ids of the cached transactions
[1, 2]
iex> PaymentCache.delete_transaction(1)
:ok
iex> PaymentCache.all_cached_transactions()
[2]
iex> PaymentCache.get_transaction(1)
nil
Schedule a Job in Elixir Using Cached Entries
Caching fetched transactions comes in handy when scheduling jobs.
#lib/my_app/payment_scheduler.ex
defmodule MyApp.PaymentScheduler do
  use GenServer

  require Logger

  alias MyApp.PaymentCache

  def start_link(opts) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(opts) do
    {:ok, opts, {:continue, :cache_incomplete_transactions}}
  end

  @impl true
  def handle_continue(:cache_incomplete_transactions, state) do
    insert_all_transactions()
    schedule_work()
    {:noreply, state}
  end

  @impl true
  def handle_info(:update, state) do
    update_pending_transactions()
    schedule_work()
    {:noreply, state}
  end

  defp schedule_work do
    Process.send_after(self(), :update, 10_000)
  end

  defp update_pending_transactions do
    case PaymentCache.all_cached_transactions() do
      [] ->
        Logger.info("All transactions are complete")

      cached_transaction_ids ->
        complete_transaction_ids = complete_transactions(cached_transaction_ids)

        # update the completed transactions in the database (Payments is the app's database context)
        Payments.update_incomplete_transactions(complete_transaction_ids)

        # delete cached transactions after updating them in the database
        PaymentCache.delete_transactions(complete_transaction_ids)
    end
  end

  defp complete_transactions(cached_transaction_ids) do
    # fetch all complete transactions from the external API (Payment is the API client)
    Payment.confirmed_transactions()
    |> Enum.filter(fn transaction -> transaction.id in cached_transaction_ids end)
    |> Enum.map(& &1.id)
  end

  defp insert_all_transactions do
    case Payments.all_pending_transactions() do
      [] ->
        Logger.info("There are no incomplete transactions in the database")

      pending_transactions ->
        PaymentCache.insert_all_transactions(pending_transactions)
    end
  end
end
Let's run through what's going on here.
When the application starts, init/1 will be invoked. Our init/1 returns {:ok, opts, {:continue, :cache_incomplete_transactions}}.
This immediately triggers handle_continue(:cache_incomplete_transactions, state). Here, all incomplete transactions are fetched from the database and cached. By invoking schedule_work/0, an update job is scheduled to run every 10 seconds.
The handle_continue/2 callback allows us to perform a time-consuming job asynchronously, avoiding race conditions during startup. For more about handle_continue/2, I suggest Sophie DeBenedetto's excellent Elixir School article, 'TIL GenServer’s handle_continue/2'.
In schedule_work/0, the GenServer schedules an :update message to be sent to itself, signalling that there is an update to perform. The :update message is then handled in the handle_info/2 callback. At this point, we check whether there are incomplete transactions in the cache (keeping in mind that we only cache incomplete transactions).
When the cache is empty, there are no incomplete transactions, so we skip both sending a request to the external API to confirm whether the transactions have completed and updating their statuses in the database.
Wrapping Up
In this post, we explored how to use Nebulex for caching data locally in an Elixir application.
Implementing caching in an Elixir application depends purely on your business use case. When choosing a cache toolkit, make sure it meets your needs.
The Nebulex caching toolkit supports a vast number of caching solutions and allows you to implement different topologies with minimal impact on your code.
Visit Nebulex's guide to learn more.
Happy caching!
P.S. If you'd like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!