
As your application scales to serve hundreds, thousands, or even millions of users, understanding its performance becomes essential.
Performance monitoring helps you make informed decisions based on data instead of guesswork or user complaints. Imagine users reporting that your app feels "slow". Without proper instrumentation and monitoring, you're left troubleshooting blindly.
However, with the right tools in place, you're far more likely to pinpoint the issue: maybe database queries are the bottleneck, or the server is hitting high CPU usage during peak hours.
When monitoring Express applications, the goal isn't to track every metric, but to focus on the ones that matter. By identifying and understanding these key metrics, you can drive meaningful performance improvements.
Let's get started!
Understanding Express.js Performance Metrics in Node Applications
To monitor an Express application effectively, it's important to know which metrics matter and what they reveal about your app's health and performance. Let's look at some key metrics to track: response time, throughput, error rate, and CPU and memory usage.
Response Time
Response time measures how long it takes your server to process a request and return a response. It's one of the most critical metrics, as it directly affects user experience.
Percentiles like the 95th and 99th help highlight slow requests. For example, a 95th percentile of 500ms means 95% of requests complete faster than 500ms, while the slowest 5% take longer.
These percentile metrics give a fuller picture than averages, which can obscure performance issues caused by outliers.
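To make this concrete, here is a small sketch (plain Node.js, no external dependencies) of computing percentiles from a batch of recorded response times. The nearest-rank method shown is just one common convention; monitoring tools may use interpolation instead:

```javascript
// Nearest-rank percentile: the value below which roughly p% of samples fall.
function percentile(durations, p) {
  const sorted = [...durations].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

// Ten hypothetical response times in milliseconds:
const samples = [120, 95, 110, 480, 105, 98, 130, 115, 102, 940];

console.log(`p50: ${percentile(samples, 50)}ms`); // typical request
console.log(`p95: ${percentile(samples, 95)}ms`); // tail latency
```

Note how the mean of these samples (~230ms) sits far from both the median and the p95 — exactly the distortion that makes percentiles more informative than averages.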
Throughput
Throughput measures how many requests your application handles per unit of time (typically requests per second). It reflects your app's capacity and how it performs under load.
Throughput and response time are closely linked: as throughput rises, response time often increases. If an app reaches its saturation point, response times may spike, or the app might begin to fail altogether.
Error Rate
The error rate represents the percentage of requests resulting in server errors (5xx status codes).
Errors and performance issues often go hand in hand. Bottlenecks can lead to timeouts and failed requests, while high error rates can degrade performance by consuming resources on retries or error handling.
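As a rough illustration (a hand-rolled sketch, not how a monitoring service computes it), the error rate boils down to two counters updated as responses finish:

```javascript
// Counters for one process; a real setup would window or reset these.
function recordResponse(stats, statusCode) {
  stats.total += 1;
  if (statusCode >= 500) stats.errors += 1;
}

function errorRate(stats) {
  return stats.total === 0 ? 0 : (stats.errors / stats.total) * 100;
}

// Express-style middleware that updates the counters as responses finish:
function errorRateMiddleware(stats) {
  return (req, res, next) => {
    res.on("finish", () => recordResponse(stats, res.statusCode));
    next();
  };
}

// Usage sketch: app.use(errorRateMiddleware({ total: 0, errors: 0 }));
```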
CPU and Memory Usage
Since Node.js runs primarily on a single thread, CPU utilization is a crucial metric. Sustained high CPU usage can signal that your app is under stress.
Memory usage is just as important. Node apps are prone to memory leaks, especially when running for long periods. Look for steadily increasing memory that doesn't stabilize—this often points to a leak. As memory usage nears system limits, performance usually drops sharply before an app crashes.
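One low-tech way to spot that pattern is to sample `process.memoryUsage()` periodically — a minimal sketch:

```javascript
// Log the current process memory figures, in megabytes.
function logMemoryUsage() {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  const toMb = (bytes) => Math.round(bytes / 1024 / 1024);
  console.log(
    `rss=${toMb(rss)}MB heapUsed=${toMb(heapUsed)}MB heapTotal=${toMb(heapTotal)}MB`
  );
  return { rss, heapUsed, heapTotal };
}

// Sample every 30 seconds; a heapUsed figure that only climbs across
// samples under steady traffic is a classic leak signal.
// setInterval(logMemoryUsage, 30_000);
```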
Collecting Performance Metrics
Now that you know which performance metrics to track, the next step is to collect them.
This process is called instrumentation. It means adding code that measures and records specific aspects of your application's behavior during execution.
Effective instrumentation focuses on critical areas that offer actionable insights. Over-instrumenting can actually harm performance, as measuring and logging can also consume significant resources.
Instrumentation can be:
- Manual, where you explicitly insert measurement code into key parts of your app.
- Automatic, using libraries or agents that inject measurement logic for you.
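The manual approach can be as simple as a hand-written Express-style middleware that times each request (a sketch — the log format and function name are arbitrary):

```javascript
// Times each request using the monotonic high-resolution clock and logs
// the duration once the response has been sent.
function responseTimeLogger(req, res, next) {
  const start = process.hrtime.bigint();
  res.on("finish", () => {
    const durationMs = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${req.method} ${req.originalUrl ?? req.url}: ${durationMs.toFixed(1)}ms`);
  });
  next();
}

// Usage sketch (app is your Express instance): app.use(responseTimeLogger);
```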
Node provides a native way to
measure performance through the perf_hooks module,
which allows high-precision timing of operations:
```javascript
import { performance, PerformanceObserver } from "node:perf_hooks";

const obs = new PerformanceObserver((items) => {
  const measurements = items.getEntries();
  measurements.forEach((measurement) => {
    console.log(`${measurement.name}: ${measurement.duration}ms`);
    // This is where you'll send the metric to some external service
    // for monitoring and alerting
  });
});
obs.observe({ entryTypes: ["measure"] });

app.get("/products", async (req, res) => {
  // Measure database query time
  performance.mark("db-query-start");
  const products = await database.getProducts();
  performance.mark("db-query-end");
  performance.measure("Database Query", "db-query-start", "db-query-end");

  res.json(products);
});
```
For more general-purpose metric collection, open-source solutions like Prometheus and OpenTelemetry can collect all kinds of metrics from Node which can later be ingested into visualization tools and transformed into performance dashboards.
For production environments, dedicated APM solutions offer the most comprehensive approach to performance monitoring. These services typically require installing an agent in your application that automatically instruments your code to collect detailed metrics, while allowing you to bring your own custom metrics as well.
In this guide, you'll learn how to instrument and monitor your Express application with AppSignal.
Setting Up the Demo Express App
If you’d like to experiment with the examples in this guide, you can start by cloning the demo repository:
```sh
git clone https://github.com/damilolaolatunji/express-perf-demo && cd express-perf-demo
npm install
```
The demo is a basic Express server with several endpoints designed for performance demonstrations. It requires a PostgreSQL database, which you can quickly spin up using Docker:
```sh
docker run \
  --rm \
  --name postgres \
  --env POSTGRES_PASSWORD=admin \
  --env POSTGRES_DB=chinook \
  --volume pg-data:/var/lib/postgresql/data \
  --publish 5432:5432 \
  postgres:bookworm
```
Then download the Chinook sample database file, and load it into your PostgreSQL instance:
```sh
docker exec -i postgres psql -U postgres -d chinook < chinook.sql
```
Finally, ensure the knex instance in your application is configured with the
correct database credentials:
```javascript
// server.js
const knex = Knex({
  client: "pg",
  connection: {
    host: "localhost",
    port: 5432,
    user: "postgres",
    password: "admin",
    database: "chinook",
  },
});
```
You may now start the application on port 3000 by running:
```sh
npm start
```
Once the application launches, you'll be able to access the configured routes. For example, you can access all the albums in the database by running:
```sh
curl http://localhost:3000/albums
```
This should yield the following output:
```json
[
  {
    "AlbumId": 1,
    "Title": "For Those About To Rock We Salute You",
    "ArtistId": 1
  },
  . . .
]
```
Now that you've set up your application, let's integrate AppSignal to collect real-time performance metrics.
Integrating AppSignal in Your Express App
To start monitoring your Express app with AppSignal, sign up for a free account and create a new Node.js application.
You'll be provided with an APPSIGNAL_PUSH_API_KEY which you'll need to copy
and store in a .env file:

```sh
# .env
APPSIGNAL_PUSH_API_KEY=<your_push_api_key>
```
This value will be automatically loaded into process.env via the
dotenv library.
Then create an appsignal.js file with the following content:
```javascript
// appsignal.js
import { Appsignal } from "@appsignal/nodejs";

new Appsignal({
  active: true,
  name: "<YOUR_APP_NAME>", // replace this with the name of the AppSignal app
});
```
In your entry file, import appsignal.js after loading environment
variables, but before any other application logic:
```javascript
// server.js
import "dotenv/config";
import "./appsignal.js"; // Import it here
import express from "express";
import Knex from "knex";
. . .
```
That's all you need to get started with basic performance monitoring, since Express is automatically instrumented by the AppSignal package.
To ensure that Express errors are also tracked in AppSignal, set up the error handler middleware, as shown below:
```javascript
// server.js
. . .
import { expressErrorHandler } from "@appsignal/nodejs";
. . .

app.use(expressErrorHandler());
```
To start generating metrics, simulate traffic with a tool like autocannon:
```sh
npx autocannon -c 2 -d 300s http://localhost:3000/ http://localhost:3000/posts
```
With the command running, return to your AppSignal dashboard and ensure you're in the Overview section. You will start seeing your application's key metrics such as error rate, response time, and throughput, along with a list of the most recent errors and where they occurred:

The Response time graph in particular includes the mean, 90th, and 95th percentile response times, which give insight into both average performance and tail latency (worst-case scenarios):
- Mean: Represents the average response time across all requests.
- 90th percentile: 90% of requests were faster than this value.
- 95th percentile: 95% of requests were faster than this value, highlighting outliers.

You can also check the Performance > Issue list to see the performance data categorized by the HTTP routes:

This section shows average response times, throughput, and Impact, a score indicating how much a route contributes to overall performance degradation.
For example, in the above screenshot, GET /posts has a mean of 1.56 seconds and is
responsible for 80.95% of the total performance impact, while GET / is much
faster (448 ms) and has only 19.05% impact.
So Impact here tells you which endpoints hurt performance the most, helping you prioritize where to optimize first.
If you click on a specific Action in the list and head over to the Graphs tab, you'll see the response time and throughput graph for the specific route:

You can also check out the Samples tab and click a specific sample in the list to see a breakdown of what operations are contributing to the observed response times.
In the GET /posts entry, the fetch request is (predictably) taking up
all the time, as seen in the Sample breakdown and also in the Event
Timeline:

Under the hood, AppSignal uses OpenTelemetry's tracer objects to track how long
various operations take in the application, and it supports many popular
Node.js frameworks and libraries out of the box, such as Node.js core APIs,
Express, PostgreSQL (pg), Next.js, and many more.
If your routes include function calls from supported libraries, you'll see them in the Event Timeline, letting you know exactly where to focus your debugging efforts.
For example, sending requests to the /albums-with-tracks route will yield the
following timeline, due to automatic instrumentation for the
knex and pg
libraries:

System-level metrics like CPU and memory usage are also tracked automatically under Host monitoring > Host metrics:

Finding Slow Database Queries and Network Requests
Thanks to AppSignal's automatic instrumentation for many popular Node.js libraries, you can easily identify performance bottlenecks in your database and external API calls.
AppSignal tracks these automatically and displays them in the Slow queries and Slow API requests sections:


Both pages list queries and requests by Impact, allowing you to quickly identify and prioritize the operations that are slowing down your app the most.
To get a broader view, head over to the Slow events page, where slow queries, API calls, and other bottlenecks are aggregated into a single view:

This consolidated list helps you pinpoint the most expensive operations across your application, so you can focus your optimization efforts on where they'll have the biggest impact.
Customizing Your AppSignal Integration
Getting started with AppSignal is quite straightforward, but the real power lies in how you tailor it to your app's needs. Here are two useful ways to customize your setup.
Customizing the Action Name

By default, AppSignal names actions using the HTTP method and route (e.g.,
GET /posts). You can override this to provide more meaningful labels:
```javascript
import { setRootName } from "@appsignal/nodejs";

app.get("/posts/", async (req, res) => {
  setRootName("Retrieve Posts from JSONPlaceholder");
  // . . .
});
```
Once set, the custom action name will appear in your AppSignal dashboard:

Using Deploy Markers
Deploy markers link performance data to specific application versions, making it easier to detect regressions introduced by new deployments.
To set this up automatically using Git, modify your appsignal.js file as
follows:
```javascript
// appsignal.js
import childProcess from "node:child_process";
import { Appsignal } from "@appsignal/nodejs";

// trim() strips the trailing newline from the git output
const REVISION = childProcess
  .execSync("git rev-parse --short HEAD")
  .toString()
  .trim();

new Appsignal({
  active: true,
  name: "<Your App Name>",
  revision: REVISION, // sets the Git revision
});
```
Once enabled, AppSignal allows you to filter issues by deployment and compare performance across versions, making it easier to discover when a performance issue is introduced:

You can find other customization options in the AppSignal for Node configuration docs.
Sending Custom Performance Measurements to AppSignal
While AppSignal's automatic instrumentation covers a lot, you'll sometimes need to track the performance of specific business logic that isn't captured out of the box.
For instance, suppose you want to measure how long it takes to enrich customer
data in the /customers/top-spenders route:
```javascript
// server.js
app.get("/customers/top-spenders", async (req, res) => {
  try {
    const rawData = await knex("Customer")
      .leftJoin("Invoice", "Customer.CustomerId", "Invoice.CustomerId")
      .groupBy("Customer.CustomerId", "Customer.FirstName", "Customer.LastName")
      .select(
        "Customer.CustomerId",
        "Customer.FirstName",
        "Customer.LastName",
        knex.raw('COALESCE(SUM("Invoice"."Total"), 0) AS total_spent'),
        knex.raw('COUNT("Invoice"."InvoiceId") AS total_invoices')
      )
      .orderBy("total_spent", "desc")
      .limit(10);

    // How long does this take to run?
    const enrichedData = rawData.map((c) => ({
      id: c.CustomerId,
      name: `${c.FirstName} ${c.LastName}`,
      totalSpent: parseFloat(c.total_spent),
      totalInvoices: parseInt(c.total_invoices),
      averagePerInvoice:
        c.total_invoices > 0 ? parseFloat(c.total_spent) / c.total_invoices : 0,
    }));

    res.json({
      total: enrichedData.length,
      customers: enrichedData,
    });
  } catch (err) {
    res.status(500).json({ error: "Failed to calculate top spenders" });
  }
});
```
Without custom instrumentation, the Event Timeline in AppSignal will only show database activity (e.g., Knex/PostgreSQL):

To track the performance of custom operations, you can instrument them using OpenTelemetry. AppSignal supports this seamlessly under the hood.
First, you'll need to install the OpenTelemetry API:
```sh
npm install @opentelemetry/api
```
Next, get a tracer instance as follows:
```javascript
// server.js
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("example-app");
```
Then wrap your custom logic inside an active span:
```javascript
app.get("/customers/top-spenders", async (req, res) => {
  try {
    const rawData = await knex("Customer")
      .leftJoin("Invoice", "Customer.CustomerId", "Invoice.CustomerId")
      .groupBy("Customer.CustomerId", "Customer.FirstName", "Customer.LastName")
      .select(
        "Customer.CustomerId",
        "Customer.FirstName",
        "Customer.LastName",
        knex.raw('COALESCE(SUM("Invoice"."Total"), 0) AS total_spent'),
        knex.raw('COUNT("Invoice"."InvoiceId") AS total_invoices')
      )
      .orderBy("total_spent", "desc")
      .limit(10);

    tracer.startActiveSpan("enriching data", (span) => {
      const enrichedData = rawData.map((c) => ({
        id: c.CustomerId,
        name: `${c.FirstName} ${c.LastName}`,
        totalSpent: Number.parseFloat(c.total_spent),
        totalInvoices: Number.parseInt(c.total_invoices),
        averagePerInvoice:
          c.total_invoices > 0
            ? Number.parseFloat(c.total_spent) / c.total_invoices
            : 0,
      }));

      span.end(); // Don't forget to end the span here

      res.json({
        total: enrichedData.length,
        customers: enrichedData,
      });
    });
  } catch (err) {
    res.status(500).json({ error: "Failed to calculate top spenders" });
  }
});
```
Once this is in place, you'll see your custom event show up in the event timeline:

Getting Alerted to Performance Issues
Once you've established how your application should behave, you can set up alerts to detect and quickly address unusual performance patterns.
For example, let's say you'd like to be notified if the /posts route takes more
than a certain amount of time to complete. You can do this through the
Settings:

Once the configured Threshold and Alerting conditions are reached, you'll get an email notification that looks like this:

And that's it for our whistle-stop tour of measuring performance in an Express app using AppSignal!
Wrapping Up
Monitoring performance is essential for keeping an Express application responsive and reliable, especially as it scales.
With AppSignal, you get powerful insights out of the box, from response time and throughput to database bottlenecks and custom spans for business logic.
By leveraging automatic and manual instrumentation, you can stay ahead of performance issues, optimize critical paths, and deliver a smooth experience to your users.
Thanks for reading, and happy monitoring!
Damilola Olatunji
Damilola is a freelance technical writer and software developer based in Lagos, Nigeria. He specializes in JavaScript and Node.js, and aims to deliver concise and practical articles for developers. When not writing or coding, he enjoys reading, playing games, and traveling.