Best Practices for Logging in Node.js

Ayooluwa Isaiah

Good logging practices are crucial for monitoring and troubleshooting your Node.js servers. They help you track errors in the application, discover performance optimization opportunities, and carry out different kinds of analysis on the system (such as in the case of outages or security issues) to make critical product decisions.

Even though logging is an essential aspect of building robust web applications, it’s often ignored or glossed over in discussions about development best practices. Knowing what and how to log is tricky because it’s often difficult to understand what information you’ll need during troubleshooting.

Since we understand how essential logging is, we need to practice good logging habits. We should accumulate logs that make it easier to diagnose the root cause of problems and solve potential issues before they impact end-users.

This article will outline some best practices to follow when writing logs in a Node.js application.

1. Use a Node.js Logging Library

Node.js developers tend to rely on the runtime’s console methods (such as console.log()) to log events, since they are built into the runtime and offer a familiar API similar to the console mechanism provided by web browsers.

While console.log() has its uses, it is not an adequate solution for implementing logging in a production application. It lacks support for features and configuration options that are considered essential to a good logging setup. For example, console methods do not support log levels: although console.warn(), console.error(), and console.debug() exist, they are simply functions that print to the standard output or standard error without indicating log severity.
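To see this concretely, consider the following plain Node.js calls. Each method only selects an output stream; none of them attaches a machine-readable severity to the message:

```javascript
// console.log() writes to stdout; console.warn() and console.error() write to
// stderr. None of them attach a severity marker a log processor could filter on.
console.log('server started');        // stdout, no level field
console.warn('disk space low');       // stderr, but no "warn" label in the output
console.error('connection refused');  // stderr, but no "error" label in the output
```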

A good logging library provides a robust feature set that makes it easier to centralize, format, and distribute the logs to fit your needs. For example, a typical logging framework will provide various options for where to output log data (such as the terminal, filesystem, or a database), while also supporting the ability to send the logs over HTTP if you want to transport your log entries to a log management service.

There are three major concerns for choosing a suitable logging library: recording, formatting, and storing messages. You need to make sure that your library of choice addresses all three concerns in a satisfactory manner. Another critical consideration for selecting a logging library is performance. Since the logger will be used a lot throughout the codebase, it can harm your application’s runtime performance. Therefore, you should also investigate the performance characteristics of a library, and see how it compares to alternatives.

In the Node.js ecosystem, there are several popular options to consider, such as winston, pino, bunyan, and log4js. Most of them offer similar features, but they also have their differences, so it’s vital that you try them out yourself and see which one caters to your use case the best.

For the rest of the article, we’ll use winston to demonstrate the different concepts. This is not necessarily an endorsement of winston; it’s only being used here because it’s the most popular logging framework for Node.js. You can follow along by installing winston in your project through npm:

npm install winston

2. Use the Correct Log Levels

If you’ve spent a reasonable amount of time in the software industry, especially supporting production applications, you may have observed the use of log levels, at least in passing. They provide a way to differentiate between the types of events in a system and add context to how important each event is. If you correctly utilize log levels in your application, it will be easy to distinguish between critical events that need to be immediately addressed versus purely informative events.

Although logging systems give different names to severity levels, the concepts remain largely the same. Here are the most common log levels that you are likely to encounter, regardless of the logging framework you choose (in decreasing order of severity):

- FATAL: an unrecoverable error that forces the application to shut down.
- ERROR: a failure that prevented an operation from completing, although the application can keep running.
- WARN: something unexpected that does not stop the application but may need attention.
- INFO: notable events during normal operation, such as startup or completed tasks.
- DEBUG: diagnostic detail that is useful when troubleshooting.
- TRACE: the most granular detail about the application’s execution path.

The winston library uses the following log levels by default — with error being the most severe and silly being the least:

{
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6
}

If the defaults do not suit your needs, you can change them while initializing a custom logger. For example, you can instead use the log levels discussed above.

const { createLogger, format, transports } = require('winston');

const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = createLogger({
  levels: logLevels,
  transports: [new transports.Console()],
});

When you want to log a message, you can reference the desired level directly on the custom logger, as shown below:

logger.info('System launch'); // {"message":"System launch","level":"info"}
logger.fatal('A critical failure!'); // {"message":"A critical failure!","level":"fatal"}

Winston also allows you to define a level property on the logger and on each transport, which specifies the maximum level of messages that should be logged. For example, you can run your programs at the info level by default, then switch to the debug or trace level when you need to troubleshoot an issue or deploy to a testing environment. You should control this setting through an environment variable.

const logger = createLogger({
  level: 'warn',
  levels: logLevels,
  transports: [
    new transports.Console({ level: 'info' }),
  ]
});
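To clarify how a maximum-level threshold works, here's a dependency-free sketch that mirrors the custom logLevels map above (lower numbers are more severe) and reads the threshold from a LOG_LEVEL environment variable:

```javascript
// Minimal sketch (no library) of maximum-level filtering.
// Lower numbers are more severe, matching the custom logLevels map above.
const logLevels = { fatal: 0, error: 1, warn: 2, info: 3, debug: 4, trace: 5 };

// Control the threshold through an environment variable, defaulting to 'info'
const threshold = process.env.LOG_LEVEL || 'info';

function shouldLog(level) {
  // A message passes if it is at least as severe as the threshold
  return logLevels[level] <= logLevels[threshold];
}

function log(level, message) {
  if (!shouldLog(level)) return;
  console.log(JSON.stringify({ level, message }));
}

log('error', 'disk full');  // passes at the default 'info' threshold
log('debug', 'cache miss'); // suppressed at the default 'info' threshold
```

Switching LOG_LEVEL to debug or trace at deploy time changes what gets through without touching any code, which is exactly the workflow described above.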

3. Use Structured Logging

When defining how your log messages look, the priority should be to make your log entries easy to read for both humans and machines.

One of the primary goals of logging is to enable post-mortem debugging, and this will involve reading log entries to reconstruct the steps that led to an event in the system. Having human-readable log entries will make this thankless task much easier for developers and sysadmins. It’s also important to use a structured format that is easy to parse by machines. This allows for some automated processing on the logs (such as for alerting or auditing purposes).

JSON is a universal favorite for structured log entries because it is ubiquitous and easily readable by humans. It is also highly machine-readable and easily converted to other formats, even when working with other programming languages. When logging in JSON, it’s necessary to use a standard schema so that the semantics of each field is clearly defined. This also makes it easy to find what you’re looking for when analyzing log entries.

Winston outputs a JSON string by default with two fields: message and level. The former contains the text that is being logged, while the latter signifies the log level. Customizing the output is straightforward through winston.format, which uses logform to implement its different formats. For example, if you wanted to add a timestamp to each log entry, you can do so by combining the timestamp and json formats as shown below:

const { createLogger, format, transports } = require('winston');

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console({})],
});

This will produce log entries in the following format:

{"message":"Connected to DB!","level":"info","timestamp":"2021-07-28T22:35:27.758Z"}
{"message":"Payment received","level":"info","timestamp":"2021-07-28T22:45:27.758Z"}

Ensure you read the logform docs to learn about the predefined formats and how to create your own custom formats.
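To make the schema idea concrete, here's a dependency-free sketch of building log lines with a fixed schema, where every entry is guaranteed to carry level, message, and timestamp fields (the logEntry helper is hypothetical, not part of winston):

```javascript
// Hand-rolled structured log line with a fixed schema: every entry carries
// level, message, and timestamp, so downstream parsers can rely on those fields.
function logEntry(level, message, extra = {}) {
  return JSON.stringify({
    level,
    message,
    timestamp: new Date().toISOString(),
    ...extra,
  });
}

const line = logEntry('info', 'Connected to DB!');
// Any consumer, human or machine, can turn the entry back into an object
const parsed = JSON.parse(line);
```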

4. Write Descriptive Messages

Log entries should adequately describe the events that they represent. Each message should be unique to the situation and should clearly explain the event that occurred at that point. In the event of an emergency, your log entries may be the only source of information to help you understand what happened, so it’s important to get this aspect of logging right!

Here’s an example of an inadequate log entry being used to communicate a request failure:

Request failed, will retry.

The message above doesn’t provide any insights into:

- which request failed and where it was sent,
- why the request failed,
- and when the request will be retried.

We can probably find out the answers to some of these questions by looking at other places, such as other log entries or even the code itself. However, it is better to make the log entry more valuable on its own, through a more descriptive message:

"POST" request to "https://example.com/api" failed. Response code: "429", response message: "too many requests". Retrying after "60" seconds.

This second message is much better because it provides adequate information about the failed request, including the status code and response message, and also notes that the request will be retried in 60 seconds. If all your messages are just as descriptive, you’ll have a more pleasant time when attempting to make sense of your logs. Other examples of good log messages include the following:

Status of task id "1234" changed from "IN_PROGRESS" to "COMPLETED".
SomeMethod() processed "100" records in "35ms".
User registration failed: field "email" is not a valid email address; field "password" is below the minimum 8 characters.

When composing log messages, include all the relevant details pertaining to the event without being unnecessarily verbose. This will prevent other log readers (which could include your future self) from being overwhelmed with too much information to sift through. Your log messages should also be able to stand on their own; don’t rely on a previous message’s content to provide the context for a later entry.
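One way to keep such messages both descriptive and consistent is to build them from the same structured fields you would log anyway, so the text and the metadata never disagree. Here's a small sketch (the describeRetry helper is hypothetical) that produces the retry message shown earlier:

```javascript
// Hypothetical helper: composes a self-contained, descriptive retry message
// from structured fields describing the failed request.
function describeRetry({ method, url, status, statusText, retryAfterSec }) {
  return `"${method}" request to "${url}" failed. ` +
         `Response code: "${status}", response message: "${statusText}". ` +
         `Retrying after "${retryAfterSec}" seconds.`;
}

const msg = describeRetry({
  method: 'POST',
  url: 'https://example.com/api',
  status: 429,
  statusText: 'too many requests',
  retryAfterSec: 60,
});
```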

5. Add the Right Amount of Context to Your Logs

Besides writing a descriptive log message, you also need to include the right amount of context in the log entry. Context makes it possible to quickly reconstruct the actions leading up to an event. Add basic information to the log, such as the timestamp of the event and the method where it occurred (or a stack trace, in the case of errors). You should also add data points relevant to the flow of the operation that triggered the event. These data points may be generated at different parts of the operation flow and aggregated at the point of logging.

In the context of a billing service, log entries generated by the system could include several data points, including:

- session identifiers,
- user names and IDs,
- product or transaction identifiers,
- and the current step in the checkout flow.

You can use each of the above data points to track a user’s flow through an entire checkout process. If an important event occurs, the available data will be automatically appended to the log output, making it possible to identify the user involved, the product or transaction concerned, and the point in the checkout flow where the event occurred.

These data points also let you filter log entries based on a common identifier such as the user id or product id.

Winston provides the ability to add global metadata (such as the component or service where an event occurred) to every generated log entry. In a complex application, this information in your logs is helpful for troubleshooting issues because it immediately directs you to the point of failure.

You can configure this when creating the logger for the component or service:

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  defaultMeta: {
    service: 'billing-service',
  },
  transports: [new transports.Console({})],
});

A service field will be included in all logs created by the logger object:

{"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T10:56:14.651Z"}

To add metadata to individual entries, you need to create a context or metadata object that you can pass around throughout the flow of an operation so that the data is accessible at logging points. You can also utilize the concept of child loggers to add metadata at logging points:

const ctx = {
  userId: '090121',
  productId: 'creme-de-la-creme',
};

logger.child({ context: ctx }).info('Order "1234" was processed successfully');
// {"context":{"userId":"090121","productId":"creme-de-la-creme"},"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T12:20:13.249Z"}
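To illustrate what a child logger does under the hood, here's a minimal, library-free sketch of the pattern: a child merges extra metadata into every entry while inheriting the parent's defaults (makeLogger is a toy function, not winston's API):

```javascript
// Toy illustration of the child-logger pattern. makeLogger is NOT winston;
// it only shows how a child merges its metadata into every entry it emits.
function makeLogger(meta = {}) {
  return {
    // child() returns a new logger whose metadata extends the parent's
    child(extra) {
      return makeLogger({ ...meta, ...extra });
    },
    // info() returns the entry object instead of printing, for clarity
    info(message) {
      return { ...meta, message, level: 'info' };
    },
  };
}

const base = makeLogger({ service: 'billing-service' });
const child = base.child({ context: { userId: '090121', productId: 'creme-de-la-creme' } });
const entry = child.info('Order "1234" was processed successfully');
// entry carries the service field from the parent and the context from the child
```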

6. Avoid Logging Sensitive Information

Regardless of whether you’re in an industry with strict rules around compliance (such as healthcare or finance) or not, it’s important to avoid including sensitive information in your logs.

Sensitive information includes social security numbers, addresses, passwords, credit card details, access tokens, and similar data types. Since log messages are often stored in plain text, such data will be exposed if the logs fall into the wrong hands. You also have to ensure you are not falling afoul of regulations that apply to countries where your product is operational (such as GDPR) by recording certain pieces of information.

You can avoid accidentally leaking sensitive data in your logs by minimizing which parts of the system work with that data. For instance, credit card details should only be seen by the billing component of your system, and sensitive data should be kept out of URLs — redacted where possible.

Although this isn’t a foolproof solution, you can also use a blocklist to prevent specific fields from ever making it into the logs.
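As an illustration, a blocklist-based redaction step applied to metadata before logging might look like the following sketch (the field names are just examples; adapt them to your own schema):

```javascript
// Sketch of a blocklist applied to a metadata object before it is logged.
const BLOCKLIST = ['password', 'creditCardNumber', 'ssn', 'accessToken'];

function redact(fields) {
  const safe = {};
  for (const [key, value] of Object.entries(fields)) {
    // Replace blocklisted values so they never reach the log output
    safe[key] = BLOCKLIST.includes(key) ? '[REDACTED]' : value;
  }
  return safe;
}

const entry = redact({ userId: '090121', password: 'hunter2' });
// entry.password is now '[REDACTED]'; entry.userId is untouched
```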

7. Log for Auditing and Profiling Reasons

We primarily use logs to diagnose issues and find the root cause of bugs. However, logs can also prove invaluable when auditing or profiling a system, or perhaps to generate interesting statistics about system behavior.

For example, you can log details of what users are doing on the system (like user sign-ins, the resources they created or accessed, etc.). In the absence of specialized tools, you can also use logging to profile your code by reporting how long an operation took or how many times a function was executed. The insights gleaned from this data can help you improve the performance of your application’s hotspots.

Winston provides a simple profiling mechanism that you can take advantage of:

// profile('test') starts a timer; a second profile('test') call
// stops it and logs the elapsed time as a durationMs field
logger.profile('test', { level: 'info' });

// Alternatively, startTimer() returns an object corresponding to a specific timing
const profiler = logger.startTimer();

setTimeout(() => {
  // End the timer and log the duration
  profiler.done({ message: 'Logging message' });
}, 1000);

A durationMs field will be included in the output:

{"message":"Logging message","level":"info","durationMs":1001,"timestamp":"2021-07-29T14:17:55.097Z"}
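Under the hood, such a timer only needs to capture a start time and compute the difference when it finishes. Here's a dependency-free sketch of that mechanism (startTimer here is a toy function, not winston's API):

```javascript
// Toy version of the timer: capture a start time, and let done() attach the
// elapsed milliseconds to whatever metadata is being logged.
function startTimer() {
  const start = Date.now();
  return {
    done(meta = {}) {
      return { ...meta, level: meta.level || 'info', durationMs: Date.now() - start };
    },
  };
}

const profiler = startTimer();
const entry = profiler.done({ message: 'Logging message' });
// entry now includes a durationMs field alongside message and level
```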

8. Automatically Log Uncaught Exceptions and Unhandled Promise Rejections

When you encounter an uncaught exception or unhandled promise rejection, it is always considered good practice to crash the program. Use a process manager like PM2 to automatically restart the process and restore the program to a clean state.

To understand why such an event has occurred, it’s also necessary to log the details of the exception or promise rejection before exiting. Winston provides handlers for both situations, which may be configured on a logger instance:

const logger = createLogger({
  transports: [
    new transports.File({ filename: 'file.log' })
  ],
  exceptionHandlers: [
    new transports.File({ filename: 'exceptions.log' })
  ],
  rejectionHandlers: [
    new transports.File({ filename: 'rejections.log' })
  ]
});

In the example above, uncaught exceptions will be logged into an exceptions.log file, while unhandled rejections will be logged into a rejections.log file. Entries will automatically include the full stack trace as well as information about process arguments and memory usage associated with the exception, giving you all the details you need to find the root of the problem.
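For reference, the same idea can be expressed without winston by listening to Node's process-level events; this sketch logs the failure with its stack trace and then exits, so a process manager can restart the app in a clean state:

```javascript
// Dependency-free equivalent using Node's process events: log the failure,
// then exit so a process manager (e.g. PM2) restarts the app in a clean state.
process.on('uncaughtException', (err) => {
  console.error(JSON.stringify({ level: 'fatal', message: err.message, stack: err.stack }));
  process.exit(1);
});

process.on('unhandledRejection', (reason) => {
  console.error(JSON.stringify({ level: 'error', message: String(reason) }));
  process.exit(1);
});
```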

Learn how to configure Winston’s unhandled exception handler.

9. Centralize and Monitor Your Logs

Once you have implemented logging in your Node.js server, the system will rapidly create new log entries. Depending on how much traffic your application receives, this can yield gigabytes of data in a relatively short space of time.

You can prevent your log files from getting too big by employing a log rotation solution that also handles cleaning up old logs. Even with this measure in place, it may be challenging to sift through the logs when you need to troubleshoot an issue (due to the sheer number of log files and entries and the distributed nature of most applications).

A more modern approach to this problem is to utilize a log management solution that lets you centralize, filter, and gather insights from all your logs in one place and parse or visualize them in various ways. With a centralized logging solution, you’ll be able to configure alerts for critical events or when certain thresholds are surpassed so that you can address critical problems promptly. Winston provides several transport options that enable the easy transmission of log data from your application to an external system.

Node.js Logging Tips: Next Steps

In this article, we’ve covered some invaluable practical tips to help you write more useful log entries. Next, you should add logging to your Node.js server and configure it while keeping the above best practices in mind.

If you have any additional tips regarding logging in Node.js that were not discussed in this post, feel free to share them with me on Twitter.

Thanks for reading, and happy coding!

P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.

P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.

Our guest author Ayo is a Software Developer by trade. He enjoys writing about diverse technologies in web development, mainly in Go and JavaScript/TypeScript. You can learn more about him through his blog.
