How to Dockerize an Existing Node.js Application

Ayooluwa Isaiah


This post was updated on 9 August 2023 to use Fastify v4 (instead of v3) and to include changes to some Docker Compose commands and settings.

Docker is a software platform that enables packaging an application into containers. These containers represent isolated environments that provide everything necessary to run the application. Dockerizing an application refers to packaging it in a Docker image to run in one or more containers.

Dockerizing an application involves specifying everything needed to run the application in a Dockerfile and then using the file to build a specialized Docker image that can be shared to multiple machines. A Docker image is a reproducible environment for the application that guarantees portability across machines.

In this tutorial, you'll learn the process of Dockerizing an existing Node.js application from scratch. We'll cover topics such as:

  • What the Dockerfile represents
  • Sharing Docker images to multiple machines
  • The basics of Docker Compose for orchestrating multi-container applications

After reading this article, you should be armed with enough knowledge to Dockerize your own applications, even if they're built with some other technology.

Setting Up a Demo Node.js Application

To demonstrate the concepts discussed in this article, we'll use a demo Node.js application that provides an endpoint for retrieving Covid-19 statistics, fetched from a free public API. You can clone its GitHub repository to your computer using the command below:

$ git clone

Once downloaded, cd into the project folder and run yarn to install its dependencies. Afterward, open up the app.js file in your text editor. You should see the following content:

import Fastify from "fastify";
import got from "got";
import NodeCache from "node-cache";

const fastify = Fastify({
  logger: true,
});

const appCache = new NodeCache();

fastify.get("/covid", async function (req, res) {
  try {
    let covidAllStats = appCache.get("covidAllStats");

    if (covidAllStats == null) {
      const response = await got("");
      covidAllStats = response.body;
      appCache.set("covidAllStats", covidAllStats, 600);
    }

    res
      .header("Content-Type", "application/json; charset=utf-8")
      .send(covidAllStats);
  } catch (err) {
    fastify.log.error(err);
    res.code(err.response.code).send(err.response.body);
  }
});

fastify.listen({ port: 4000, host: "0.0.0.0" }, (err, address) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }

  fastify.log.info(`server listening on ${address}`);
});

This application provides a single endpoint (/covid) that returns the aggregated global Covid-19 totals to date. Once retrieved from the API, the data is subsequently cached in memory for 10 minutes.
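The 10-minute expiry can be illustrated with a minimal TTL cache sketch — a hypothetical stand-in for what node-cache does under the hood (not the real library), using only a Map and timestamps:

```javascript
// A minimal TTL cache sketch (a hypothetical stand-in for node-cache):
// values expire after a per-entry time-to-live.
class TtlCache {
  constructor() {
    this.store = new Map();
  }

  set(key, value, ttlSeconds) {
    // Record when this entry should stop being served.
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      // Entry has outlived its TTL; evict it.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TtlCache();
cache.set("covidAllStats", '{"cases":1}', 600); // keep for 10 minutes
console.log(cache.get("covidAllStats")); // logs the cached JSON string
```

Until the TTL elapses, repeated requests are served from memory instead of hitting the upstream API again.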

Specifying '0.0.0.0' as the host address is essential when deploying to Docker, because a server that listens only on localhost inside a container is not reachable through the container's mapped ports. If this address is missing, your application might be inaccessible despite starting successfully in the container.

Go ahead and start the server with yarn dev, then make a GET request to the /covid endpoint with curl or some other tool. You should see a JSON response similar to the output shown below:

$ curl http://localhost:4000/covid
{"updated":1691045997451,"cases":692576573,"todayCases":3,"deaths":6903976,"todayDeaths":0,"recovered":664687106,"todayRecovered":29215,"active":20985491,"critical":37138,"casesPerOneMillion":88851,"deathsPerOneMillion":885.7,"tests":6998914822,"testsPerOneMillion":880927.88,"population":7944935131,"oneCasePerPeople":0,"oneDeathPerPeople":0,"oneTestPerPeople":0,"activePerOneMillion":2641.37,"recoveredPerOneMillion":83661.74,"criticalPerOneMillion":4.67,"affectedCountries":231}


Although this is a very simple application, it will suffice to demonstrate the concepts of Docker covered in this tutorial.

In the next section, we'll take a look at how to set up the Docker Engine locally on your machine.

Installing Docker

Before you can Dockerize an application, you need to install the Docker Engine. The official Docker manual provides a guide for installing the software on a variety of operating systems, most notably on macOS, Windows, and a variety of Linux distributions. Ensure you install the latest stable release — v24.0.5 at the time of writing. You can confirm the installed version with the command below:

$ docker -v
Docker version 24.0.5, build ced0996

Setting Up a Dockerfile

Once the Docker Engine has been installed, the next step is to set up a Dockerfile to build a Docker image for your application. An image represents an immutable snapshot of an environment that contains all the source code, dependencies, and other files needed for an application to run. Once a Docker image is created, it can be transported to another machine and executed there without compatibility issues.

Docker images are assembled through a Dockerfile. It is a text file that contains a set of instructions executed in succession. These instructions are executed on a parent image, and each step in the file contributes to creating an entirely custom image for your application.

Let's go ahead and create a Dockerfile for our demo application at the root of the project directory:

$ touch Dockerfile

Open up the Dockerfile in your text editor and add the following line to the file:

FROM node:18.17.0-alpine

The above specifies the base image to be the official Node.js Alpine Linux image. Alpine Linux is used here because of its small size, which helps a lot when transporting images from one machine to another.

The next line in the Dockerfile is shown below:

WORKDIR /app

The WORKDIR instruction sets the working directory to /app. This directory will be created if it doesn't exist.

Use the following lines to copy the dependency manifests and install your application's dependencies. Copying package.json and yarn.lock before the rest of the source lets Docker cache the installed dependencies in their own layer, so they are only reinstalled when those files change. Note that the lines that start with # denote a comment.

# Copy and download dependencies
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile

# Copy the source files into the image
COPY . .
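Since COPY . . copies everything in the project directory into the build context and image, it's worth adding a .dockerignore file next to the Dockerfile so local artifacts don't bloat the image — particularly node_modules, which RUN yarn already recreates inside the image. A minimal example (the exact entries depend on your project):

```
node_modules
.git
npm-debug.log
```

Each line names a path that Docker should skip when copying files into the image.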

Next, we need to expose the port that the application will run on through the EXPOSE instruction:

EXPOSE 4000
Finally, specify the command for starting the application:

CMD yarn start
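As a side note, CMD also has an exec form that runs the command directly instead of wrapping it in /bin/sh. This form is generally recommended because signals such as SIGTERM reach the process itself, allowing the container to shut down cleanly:

```
CMD ["yarn", "start"]
```

Either form works for this demo; the shell form above is kept for simplicity.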

You can see the entire Dockerfile below:

FROM node:18.17.0-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn --frozen-lockfile
COPY . .
EXPOSE 4000
CMD yarn start

Build the Docker Image

Now that the Dockerfile is complete, it's time to build the Docker image according to the instructions in the file. This is achieved through the docker build command. You need to pass in the directory where the Dockerfile exists and your preferred name for the image:

$ docker build . -t covid

If you get a "permission denied" error, prefix the command with sudo or add your user to the docker group so you can run Docker without root privileges.

If all goes well and the build succeeds, you will see the messages below at the end of the command's output:

[+] Building 37.0s (10/10) FINISHED
...
 => exporting to image
 => => exporting layers
 => => writing image sha256:16398372978d5...
 => => naming to

You can run docker images to view some basic info about the created image:

$ docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
covid        latest    16398372978d   5 minutes ago   209MB

Run the Docker Image in a Container

Use the docker run command to run your newly minted Docker image inside of a container. Since the application has been built into the image, it has everything it needs to work. It can be launched directly in an isolated process. Before you can access your running image inside the container, you must expose its port to the outside world through the --publish or -p flag. This lets you bind the port in the container to a port outside the container.

$ docker run -p 4000:4000 covid

The command above starts the covid image inside of a container and exposes port 4000 inside the container to port 4000 outside the container. You can subsequently access the routes on your server through http://localhost:4000.
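The two sides of the -p flag don't have to match. For example, to serve the same container port 4000 on host port 8080 instead (an arbitrary choice for illustration):

```
$ docker run -p 8080:4000 covid
```

The application would then be reachable at http://localhost:8080, while the server inside the container still listens on port 4000.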

Screenshot of docker run command

Sharing Docker Images

You can transfer Docker images from one machine to another in a variety of ways. The most popular method involves using the docker push command to push the image to the official Docker registry and retrieving it through the docker pull command.

You need to sign up for a free account at Docker Hub first. After the signup process is complete, head over to the Repositories page, and create a new repository. Give it a name and set its visibility to "Public" or "Private". Note that free accounts have access to a limited number of private repos.

Screenshot of Docker hub create repo page

Once you've created a repository, enter the docker login command on your terminal to log in to Docker Hub on your machine.

Screenshot of Docker login command

Before you push the image to Docker Hub, you need to update the image tag to match your repository namespace: <your docker username>/<repo name>. This is because the docker push command expects an argument in this format.

Enter the command below to tag your covid image with a new name. Ensure you replace <your docker username> with your actual docker username.

$ docker tag covid <your docker username>/covid

Finally, push the image to Docker Hub using the docker push command, as shown below:

$ docker push <your docker username>/covid

Once the image is pushed successfully to the registry, it will be reflected in your repository dashboard:

Screenshot of Docker hub repository

You can pull the image on any machine with docker installed through the command below. If the repository is private, you'll need to log in first through the docker login command. Keep in mind that the speed of downloading an image from the registry depends on the image size and the speed of your internet connection. This is one of the reasons why smaller Docker images are preferred in general.

$ docker pull <your docker username>/covid

Note that you can also choose to share Docker images through registries provided by other cloud services such as GitLab, Google Cloud, RedHat, and others. You can even set up your own private registry on a dedicated server for use within an organization.

Share Docker Images without Using a Registry

An alternative way to share a Docker image with others is to export it as a .tar file and transfer it to a different machine through any preferred transfer method. This helps you transfer the Docker images between machines in cases when using a Docker registry is not desirable or possible, for whatever reason. The docker save command is what you need to use for exporting a Docker image:

$ docker save covid > covid.tar

The above command will export the covid image to a covid.tar file in the current directory. This file may then be transferred to a remote machine and loaded into the machine's local registry through the docker load command:

$ docker load < covid.tar
Loaded image: covid:latest
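Since docker save writes an uncompressed archive, you can pipe it through gzip to shrink the file before transferring it — docker load accepts gzipped input directly. A sketch of the workflow (the remote host below is a placeholder):

```
$ docker save covid | gzip > covid.tar.gz
$ scp covid.tar.gz user@remote-host:~
$ docker load < covid.tar.gz
```

Compression can reduce transfer time considerably, especially for larger images.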

Deploy Your Dockerized Node.js Application to Production

The easiest way to deploy a Dockerized application on a remote server is to transfer the application's image with docker pull and then use docker run. This runs the application in a container similar to how you'd do it in your development environment. However, such a strategy is suboptimal for a truly production-ready application.

Unlike our demo application, a real-world product will likely be composed of several different services that depend on each other for the application as a whole to properly work. Deploying to production usually means starting all the component services in the right order to ensure a smooth operation. You also need a strategy for other tasks, such as restarting a service in case of failures, aggregating logs, and performing health checks. All these concerns — and more — can be handled through Docker Compose.

Docker Compose coordinates multi-container Docker applications through a single command. It relies on a Compose file that provides a set of instructions to configure all the containers that should be spawned. Here's what the Compose file (compose.yml) for our demo application looks like:

services:
  web:
    image: covid
    ports:
      - "4000:4000"
    environment:
      NODE_ENV: production

The above Compose file uses the Compose Specification, which is the latest and recommended version of the Compose file format. It defines a single service called web that uses the covid image we previously set up. If you leave out the image property, a Docker image will be built from the Dockerfile in the current directory and used for the service. The ports property maps a host port to the exposed container port, and the environment property sets up any necessary environment variables.
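The restart and health-check concerns mentioned earlier can also be expressed directly in the Compose file. The sketch below extends the service with illustrative values — the /covid endpoint is the demo app's own route, and the intervals are just examples to adjust for your workload:

```yaml
services:
  web:
    image: covid
    ports:
      - "4000:4000"
    environment:
      NODE_ENV: production
    restart: unless-stopped  # restart the container if it crashes
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:4000/covid || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
```

wget is used here because it ships with the BusyBox userland in Alpine-based images.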

Once you have a compose.yml file, you can start the defined services with the docker compose up command. Make sure the docker-compose-plugin is installed before running the command; otherwise, see how to install Docker Compose on your operating system.

$ docker compose up
Attaching to covid-node-web-1
covid-node-web-1  | yarn run v1.22.19
covid-node-web-1  | $ node app.js
covid-node-web-1  | {"level":30,"time":1691101533196,"pid":29,"hostname":"4cdd1f88efe9","msg":"Server listening at"}
covid-node-web-1  | {"level":30,"time":1691101533196,"pid":29,"hostname":"4cdd1f88efe9","msg":"server listening on"}

This command will launch the containers for the defined services, and they will be accessible on the specified ports. Note that if you exit this command (such as by pressing Ctrl-C), every spawned container will stop immediately. To prevent this from happening, append the --detach flag so that the containers start in the background and keep running.

$ docker compose up --detach

We've only scratched the surface of the workflows a Compose file can achieve. Be sure to check out the full documentation to learn more about all the available options. The docker compose CLI also provides several other important commands you should know about to get the most out of it. You can examine each of them through the --help flag or the CLI reference page.
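A few of those day-to-day commands, for reference (output omitted):

```
$ docker compose ps        # list the services and their status
$ docker compose logs web  # view logs for the web service
$ docker compose restart   # restart the services
$ docker compose down      # stop and remove containers and networks
```

Running docker compose down is the usual counterpart to docker compose up --detach when you're done with a stack.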

Wrap-up and Further Docker Reading

In this article, we covered the process of Dockerizing an existing Node.js application, building containers, and deploying to production through Docker Compose.

Keep in mind that there's a lot more to Docker than can be covered in one article. Refer to the official documentation to learn more about best practices for writing a Dockerfile, securing a Docker container, logging, and other important topics to use Docker effectively in your application workflow.

Thanks for reading, and happy coding!

P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.

P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.

Ayooluwa Isaiah


Ayo is a Software Developer by trade. He enjoys writing about diverse technologies in web development, mainly in Go and JavaScript/TypeScript.

All articles by Ayooluwa Isaiah


AppSignal monitors your apps

AppSignal provides insights for Ruby, Rails, Elixir, Phoenix, Node.js, Express and many other frameworks and libraries. We are located in beautiful Amsterdam. We love stroopwafels. If you do too, let us know. We might send you some!
