In this post, we'll configure a Phoenix LiveView application to use a structured logger. We'll then use AppSignal to correlate log events with other telemetry signals, like exception reports and traces.
Along the way, you'll learn about the benefits of structured logging, and you'll see how to configure a distinct framework and application logger in your Phoenix app.
Let's get started!
What is Structured Logging?
Structured logs conform to a specific structure in which an agreed-upon set of key/value pairs describe common log attributes. Without structured logging, you might get a log line that looks something like this:
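Here's a representative example (assuming the default `$time $metadata[$level] $message` console format; your values will differ):

```
10:02:33.074 request_id=F8ZfCQwr9fFsPcIAAAKj [info] GET /
10:02:33.076 request_id=F8ZfCQwr9fFsPcIAAAKj [info] Sent 200 in 2ms
```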
A few bits of information make up each log statement:
- a timestamp
- a level (in this case, `info`)
- a log message
- a bit of metadata (the `request_id`)
In this unstructured format, all the data is included in one string, so it's difficult to search and analyze log statements based on each data point.
Structured logging solves this problem for us. It breaks each piece of information into its own key/value pair. Here's an example of these same log statements structured using the logfmt format:
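The exact keys depend on your formatter, but the output looks something like this:

```
timestamp="10:02:33.074" level=info message="GET /" request_id=F8ZfCQwr9fFsPcIAAAKj
timestamp="10:02:33.076" level=info message="Sent 200 in 2ms" request_id=F8ZfCQwr9fFsPcIAAAKj
```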
Now you can search and analyze log statements based on the `timestamp`, `level`, `message`, and `request_id` fields. For example, you could search all log statements with a shared `request_id`, filter log lines by their `level` key, or search logs within a range of timestamps.
Now that we can see the benefits of using structured logs, we can configure our Phoenix LiveView application to use the logfmt format in development and production. But first, we'll take a closer look at how Phoenix apps configure their loggers out of the box.
Logger Configuration in Phoenix
By default, your Phoenix application will configure its logger to output unstructured logs to the console. Open up your `config.exs` file, and you'll see something like this:
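In a freshly generated Phoenix app, that configuration looks roughly like this (your generator version may differ slightly):

```elixir
# config/config.exs
config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: [:request_id]
```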
This means the logger will use the console backend to print log messages to the console using the specified (unstructured) format. This configuration says that, of all the metadata attached to each log event, only the `request_id` should be included in the output. Elixir's `Logger` adds a certain set of metadata to all log statements, and you can find the full list here. The `request_id` metadata, however, is not added by default by the Elixir `Logger`. Instead, it is added by the `Plug.RequestId` plug that you'll see in your application's `endpoint.ex` file, like this:
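The relevant line sits alongside the other plugs in the generated endpoint module:

```elixir
# lib/my_app_web/endpoint.ex
plug Plug.RequestId
```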
Let's take a quick peek at the rest of the default metadata by setting `metadata: :all` in `config.exs`:
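That change looks like this:

```elixir
# config/config.exs
config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: :all
```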
Now, when we fire up our app and visit the root path, we'll see something like this:
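The exact output depends on your Elixir and Phoenix versions, but it will be roughly this dense (keys and values approximate):

```
10:02:33.074 application=phoenix domain=elixir erl_level=info file=lib/phoenix/logger.ex line=231 mfa=Phoenix.Logger.phoenix_endpoint_start/4 pid=<0.591.0> request_id=F8ZfCQwr9fFsPcIAAAKj time=1697020953074000 [info] GET /
```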
That's a lot of information! Too much information for our development environment. These logs are so information-dense that they're difficult to parse with our human eyes. For that reason, you'll notice that the `dev.exs` file overrides this configuration to simplify it:
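In a generated app, the simplified config is just:

```elixir
# config/dev.exs
config :logger, :console, format: "[$level] $message\n"
```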
Here, the timestamp and metadata are excluded entirely; the format string keeps only the `$level` and the `$message`.
Now that we've seen where and how your Phoenix app configures the logger out-of-the-box, let's configure the logger to emit structured logs that conform to logfmt.
Using Logfmt in Phoenix Development
First up, we'll configure our development environment to use the `console` logger backend with a logfmt formatter. For this, we need to add the LogfmtEx dependency to our application. Open up your `mix.exs` file and add it like this:
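Something like the following (check hex.pm for the current version of `logfmt_ex`):

```elixir
# mix.exs
defp deps do
  [
    # ...your existing dependencies...
    {:logfmt_ex, "~> 0.4"}
  ]
end
```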
Go ahead and run `mix deps.get` to install the new dependency.
Next up, we'll configure our application's console logger to use this formatter. In `dev.exs`, add the following:
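A minimal sketch, assuming LogfmtEx's formatter is wired in through the console backend's `:format` option (consult the LogfmtEx docs for the exact configuration and any formatter options):

```elixir
# config/dev.exs
config :logger, :console, format: {LogfmtEx, :format}
```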
Now, when you start up your application, you should see structured logs like this:
Notice that we haven't overwritten the `:metadata` key from the console backend config in `config.exs`, so we're still only logging the `request_id` data point from the metadata.
Now we've successfully emitted structured logs to the console, but this isn't good enough for production. To operate a Phoenix application in production, we need to emit our logs to a third-party service that allows us to search, aggregate, and analyze them at scale. This is where AppSignal comes in.
In the next section, we'll incorporate the AppSignal logger into our Phoenix application and configure it to use logfmt.
Using Logfmt with the AppSignal Logger in Production
If you don't already have AppSignal installed and configured in your application, you can follow the instructions here. Assuming you have an AppSignal account and organization configured, we're ready to set up the AppSignal logger backend by following the steps in AppSignal's Logging from Elixir docs. We'll use the Elixir integration rather than set up a custom source.
In our `runtime.exs` file, we'll specify the `Appsignal.Logger.Backend`, like this:
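A sketch of that configuration, assuming a `"phoenix"` group and a logfmt format option (the exact option names are in AppSignal's Logging from Elixir docs):

```elixir
# config/runtime.exs
config :logger,
  backends: [{Appsignal.Logger.Backend, [group: "phoenix", format: :logfmt]}]
```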
Also in `runtime.exs`, you'll want to set the application's log level to `:info`, to ensure we send all logs at the info level and above to AppSignal:
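That's a one-liner:

```elixir
# config/runtime.exs
config :logger, level: :info
```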
This will forward our application's logs to our AppSignal account and group them under the `"phoenix"` group designation. It will treat the format of the log statements as logfmt and allow us to search and analyze our logs based on their attributes.
If you log into your AppSignal account, select your organization, and select "Logging" -> "Application", you'll see something like this:
Note that the log line includes the timestamp, hostname, level, and group (in this case, `"phoenix"`). Since we've specified `"logfmt"` for the `Appsignal.Logger.Backend`, each of these attributes is indexed and searchable. All we have to do is double-click any of these attributes on the log line itself, and the search filter at the top of the page will update and execute:
Let's take a closer look at one of these log lines now. The `"hello from the page controller"` log line is emitted explicitly in application code by the following line added to the `PagesController`:
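Something along these lines (the `home/2` action name is an assumption; the `Logger.info/1` call is the relevant line):

```elixir
# lib/my_app_web/controllers/pages_controller.ex
defmodule MyAppWeb.PagesController do
  use MyAppWeb, :controller

  require Logger

  def home(conn, _params) do
    Logger.info("hello from the page controller")
    render(conn, :home)
  end
end
```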
If we click the `"+"` icon next to the log line, we see all of the log line's metadata:
You can see that the `Appsignal.Logger.Backend` emits all of the default metadata, along with the `request_id`. Since we're using logfmt, each data point is searchable. So, just like we did with the log attributes displayed in the log line itself, we can click on any metadata attribute to add it to the search. If you click on the `request_id_string` attribute, you'll see the search query update to look like this:
In fact, we can add additional metadata when we emit log lines (using calls to `Logger`), like this:
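For example (how you fetch the current user's ID is up to your app; `conn.assigns.current_user` is just a stand-in):

```elixir
Logger.info("hello from the page controller", user_id: conn.assigns.current_user.id)
```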
With that, the `user_id` attribute is displayed in the details of each log line and searchable just like the `request_id` metadata.
So far, we've:
- Configured our Phoenix application's logger to use the `Appsignal.Logger.Backend`.
- Taught that backend to use logfmt.
- Seen logs emitted and consumed into AppSignal, and gotten a feel for the basic logging search functionality there.
Before we continue exploring the AppSignal logging UI and some of its more useful features, we need to address a problem that's cropped up.
Framework Vs. Application Logs
Taking another look at the log statements our Phoenix app has emitted to AppSignal so far, we see something like this:
There is a lot of noise in here, mixed with some useful bits of information. You can see one error log that will very likely be helpful in debugging, as well as one instance of our `"hello from the page controller"` log statement. Those two log lines are very hard to spot amidst the many log lines indicating that the server is up, the WebSocket channel is connected, certain web requests were served, and so on.
These noisy logs are emitted not from our application code, but from the Phoenix framework itself. They don't tell us anything interesting, and they are not an essential part of the overall observability picture of our application. Instead, they create noise and contribute to high log volume, making it harder to find and identify the log statements that we intend to emit from application code.
For this reason, framework logs should only be emitted at the FATAL level, while application logs, which are emitted when exceptions are raised or by explicit calls to `Logger`, should be logged at the INFO level and above. Phoenix doesn't expose an easy way for us to do that, but we can build our own custom logger backend to solve this problem. Let's do that now.
Build A Custom Logger Backend
A logger backend is really nothing more than a `:gen_event` event handler, and it must conform to the following API:

- Implement an `init/1` function
- Implement a `handle_event/2` function
Here's the skeleton for our backend:
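A minimal skeleton might look like this (the module name is a placeholder; note that Logger passes the backend's `{module, opts}` spec to `init/1`):

```elixir
defmodule MyApp.AppsignalLoggerBackend do
  @behaviour :gen_event

  @impl true
  def init({__MODULE__, opts}) do
    # Whatever we configure in runtime.exs becomes the backend's state.
    {:ok, opts}
  end

  @impl true
  def handle_event({_level, _group_leader, {Logger, _message, _timestamp, _metadata}}, state) do
    # We'll fill this in shortly.
    {:ok, state}
  end

  # Logger flushes its backends periodically; nothing to do here.
  def handle_event(:flush, state), do: {:ok, state}

  @impl true
  def handle_call(_request, state), do: {:ok, :ok, state}
end
```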
The `init/1` function will be called with whatever config we pass to the call that configures the logger backend in `runtime.exs`. It returns an `{:ok, state}` tuple.
The `handle_event/2` function will be called with all the data we need to emit a log statement with the `Appsignal.Logger`, including the:

- Level
- Log message
- Metadata
- Internal state of the logger backend process that we established in `init/1`
Whenever the Phoenix framework itself calls the Phoenix application's logger, the `metadata` variable passed to `handle_event/2` will include an `:application` key pointing to a value of `:phoenix`. Where logs are emitted by an explicit call to `Logger` or by an exception raised in our application, the `metadata` keyword list will not include an `:application` key. We want to be able to write some code that says: "Only log statements coming from Phoenix if they are at the configured log level or above". We can do exactly that by adding some logic that looks for `metadata` with `application: :phoenix`.
Before we write that code, let's take a look at how we will configure this custom logger backend in our `runtime.exs` file:
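A sketch of that configuration (the module name and level values are placeholders; Elixir's `Logger` has no `:fatal` level, so `:emergency` stands in as the closest built-in level):

```elixir
# config/runtime.exs
config :logger,
  backends: [
    {MyApp.AppsignalLoggerBackend,
     [
       application_config: [
         # Only emit framework logs at the most severe level.
         phoenix: [level_lower_than: :emergency],
         # Emit everything else at :info and above.
         default: [level_lower_than: :info]
       ]
     ]}
  ]
```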
Here, we include a key, `:application_config`, which points to another keyword list. That keyword list contains configs for the `:phoenix` app along with the `:default` app. This data will be passed to the `init/1` function and stored in our backend's state. So, when `handle_event/2` is called later on, we can do the following:
- Check if the `metadata` contains `application: :phoenix`.
- If so, check the provided `level` and only log if it meets the severity threshold stored in `application_config: [phoenix: [level_lower_than: level]]` in state.
Here's what our function will look like:
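Here's a hedged sketch of that function. It uses `Logger.compare_levels/2` in the helper rather than a literal guard clause (log-level ordering isn't available in guards), and the `Appsignal.Logger.log` call follows the level/group/message/metadata order described below; check the AppSignal package docs for its exact signature:

```elixir
@impl true
def handle_event({level, _group_leader, {Logger, message, _timestamp, metadata}}, state) do
  # Framework logs carry `application: :phoenix`; everything else uses the :default config.
  application = if metadata[:application] == :phoenix, do: :phoenix, else: :default

  config =
    state
    |> Keyword.get(:application_config, [])
    |> Keyword.get(application, [])

  log_event(level, config, message, metadata)

  {:ok, state}
end

defp log_event(level, config, message, metadata) do
  threshold = Keyword.get(config, :level_lower_than, :info)

  # No-op unless the log is at least as severe as the configured threshold.
  if Logger.compare_levels(level, threshold) != :lt do
    # Metadata may contain values (pids, mfa tuples) you'll want to drop or stringify first.
    Appsignal.Logger.log(level, "phoenix", IO.chardata_to_string(message), Map.new(metadata))
  end
end
```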
We pull the application config out of state and pass that, along with the provided level, to the `log_event` helper function. That function no-ops if the provided level is less severe than the level configured in `runtime.exs` for the given application. If we should in fact log, we do so by calling the `Appsignal.Logger.log` function directly. This allows us to emit logs to AppSignal at a specified level, for a given group, with a message and any metadata.
With this, we will suppress Phoenix framework logs below the FATAL level, but ensure that logs emitted by application code are emitted at INFO and above.
If you're hesitant to suppress framework logs in this way, remember that logs are just one piece of the observability puzzle. Rather than recording every instance of a `GET /` request, we are more likely to care about trends in serving such requests over time. We want the ability to drill down into the details of some such requests, especially where they are anomalous (for example, if a request experiences a high degree of latency).
This is where traces come in, along with other telemetry data, like metrics and exceptions.
In the next section, we'll look at how you can use AppSignal to correlate logs with traces and other types of telemetry, giving you a full picture of how your application is performing.
Correlating Log Events and Other Telemetry with AppSignal
Application logs are just one part of your overall observability posture. They should be used to record specific events of interest in your application's lifecycle and provide additional details and context in the event of an error or exception.
You should leverage application logs alongside other telemetry data like metrics, traces, and exception reports to fully observe your application in production.
Here's a very high-level overview of how you can view each type of telemetry data that your application emits:
- Logs record that an event occurred (for example, a failed user log-in attempt).
- Metrics record trends in event occurrences over time. For example, you can record the distribution of request latency to a particular endpoint, or monitor the capacity utilization of your system.
- Traces record details of the lifecycle for a particular piece of code flow. For example, drill down into the trace of a specific request to understand possible causes of latency.
- Exception reports show when an unexpected or "exceptional" error occurred. If your code encounters an exception, you can see the details of that exception, including the backtrace and full error message.
These telemetry signals become especially powerful when they can be correlated. For example, when viewing a specific exception report, you may want to gain more insight into the performance of your overall system at the time the exception occurred. In this case, the ability to view logs and/or traces from that same timeframe can provide you with the extra information you need to remedy bugs and improve overall application performance.
Monitoring Your Logs Using AppSignal
Correlating telemetry data can be a particularly hard problem to solve in the observability domain, especially when using different third-party services for things like logging, tracing, and exception reporting. AppSignal provides functionality for managing all of these types of telemetry and empowers you to correlate different types of telemetry data to get the most out of various signals. Let's take a look at an example now.
We'll use the "Errors" -> "Issue list" as the entry point for this exercise. I've selected an exception report from this list, displayed here:
We can correlate this exception report with some traces emitted by the live view that encountered this error. Note that I've configured our application to emit LiveView traces to AppSignal by following the instructions here. By clicking on "samples" and selecting an exception sample to examine, we can see the following exception details:
Now, in addition to the original exception report, we can see other useful information like the event that the live view was trying to handle when the exception occurred and the payload of that event, in addition to the full exception backtrace.
Finally, we can use the Time Detective feature to explore the full range of telemetry data, including traces, that our application emitted around the time of this exception. If we click on the "Time Detective" action here:
We'll see something like this:
The "Performance" view brings up some trace samples from the time period that we can drill down into. Here, we can see a list of traces that occurred around the time of the incident, including traces for the various LiveView lifecycle callbacks. From the details in the exception sample, we know this error occurred when an event handler in our live view tried to handle the "update"
message. So, let's select the handle_event
trace. We'll see something like this:
You can also select the "Logs" tab to view logs from this time period.
The Time Detective tool can be accessed via the logging UI as well. Just expand a given log line and select "Time Detective" from the bottom of the expanded view:
Then you'll have access to all of the telemetry data that can be correlated to your log statement based on timeframe. This could be especially useful if you see a high volume of a particular request being logged, or note an interesting error log. For any interesting or informative log statements, a deep dive into correlated telemetry data is just a click away.
Logging in Phoenix: A Few Final Tips
You're now ready to leverage logging as one part of the overall observability story in your Phoenix application. Just a few guiding rules to keep in mind when you do:
- Prefer structured over unstructured logs so that you can query and analyze your application's logs with tools like AppSignal.
- Suppress framework logs to the FATAL level and emit application logs at INFO.
- Treat logs as just one part of your observability posture. Logs are most useful when correlated with other telemetry data, like exceptions and traces.
Wrap Up
We've covered a lot of ground in this post, so let's wrap up before you go.
We discussed the benefit of structured over unstructured logging, and configured our Phoenix application to use logfmt with the console logger backend. We then saw the benefits of structured logging in action in AppSignal.
Finally, we took our log analysis skills to the next level when we suppressed framework logs and explored how our meaningful application logs correlated with other telemetry data in AppSignal.
Now take on the world of Phoenix logging. Happy coding!
P.S. If you'd like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!