
Slow Express routes rarely look broken in logs. They just feel sluggish to users. With AppSignal, though, you can quickly identify which endpoints are the slowest, gain insight into each request, and find out if the latency is related to any errors or slow queries.
In this guide, you'll set up a mock Express application, create a load, and use AppSignal to analyze a route's performance as if you were working through a live incident.
Why Express Route Performance Is Hard to Debug
A request may complete successfully and return a 200 response, yet the user experience still suffers if a query is slow, a call blocks the event loop, or the work becomes expensive under production load.
Production Makes Things Harder
A route that works well on your machine may exhibit performance issues due to factors like real user concurrency, larger data payloads, the speed of the database query, and the latency of calls to external APIs.
To help with this, Express recommends the following production-specific performance practices:
- Caching where appropriate
- Using asynchronous functions
- Choosing the right deployment architecture
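As a sketch of the first point, a tiny in-memory cache middleware for idempotent GET routes might look like this (the `cacheFor` helper is hypothetical, and a real cache would also need invalidation and a size bound):

```javascript
// Hypothetical cache middleware: serves a stored response for a URL
// if one was recorded within the last `ttlMs` milliseconds.
const cache = new Map();

function cacheFor(ttlMs) {
  return (req, res, next) => {
    const hit = cache.get(req.originalUrl);
    if (hit && Date.now() - hit.at < ttlMs) {
      return res.send(hit.body); // cache hit: skip the route handler entirely
    }
    // Cache miss: intercept res.send so the handler's response gets stored.
    const originalSend = res.send.bind(res);
    res.send = (body) => {
      cache.set(req.originalUrl, { body, at: Date.now() });
      return originalSend(body);
    };
    next();
  };
}

// Usage in an Express app (assumed route name):
// app.get("/report", cacheFor(60000), reportHandler);
```

The middleware only touches `req.originalUrl` and `res.send`, so you can mount it per route or app-wide; either way, repeated hits within the TTL never reach the handler.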
Node.js Adds Another Wrinkle
JavaScript runs from a single event loop. This means that one blocking request will hold up other unrelated requests, even though they may be in entirely different routes.
So, a route may appear slow due to issues within its own code, in the event loop, or even as a result of a combination of factors, such as bottlenecks in other route handlers.
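You can see this with plain Node, no Express required. In this sketch, a synchronous busy loop keeps the event loop occupied, so a timer that should fire after 10 ms can't run until the blocking work finishes:

```javascript
// Synchronously occupy the event loop for roughly `ms` milliseconds.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // nothing else — timers, I/O, other requests — can run
}

const scheduled = Date.now();
setTimeout(() => {
  // Scheduled for 10ms, but it only fires once the event loop is free.
  console.log(`timer fired after ${Date.now() - scheduled}ms`);
}, 10);

blockFor(200); // the timer above is stuck until this returns
```

In an Express app, the same mechanism means one CPU-heavy or synchronously blocking handler delays every other in-flight request, regardless of which route they hit.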
The Pattern Problem
A single slow request is generally insufficient reason to take action. A number of issues could cause this, such as a transitory spike in network traffic or a locking issue in the database.
What matters is repeated behavior: one route that is consistently slower than the others, one endpoint that fails more frequently, or one request that repeatedly causes high latency.
That's why route monitoring should focus on recurring slowness rather than isolated incidents. It also explains why you can draw better conclusions from traces and aggregated request metrics than from raw logs alone.
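To make that concrete, here's a toy example (invented data, not AppSignal's implementation) of why aggregation beats reading raw log lines: grouping requests by route immediately surfaces the consistently slow one.

```javascript
// Toy request log: one entry per request, as raw logs would record them.
const requests = [
  { route: "GET /", ms: 12 },
  { route: "GET /slow", ms: 3010 },
  { route: "GET /", ms: 15 },
  { route: "GET /slow", ms: 2990 },
  { route: "GET /", ms: 14 },
];

// Aggregate per route: request count and mean duration.
function aggregate(reqs) {
  const byRoute = {};
  for (const { route, ms } of reqs) {
    if (!byRoute[route]) byRoute[route] = { count: 0, totalMs: 0 };
    byRoute[route].count++;
    byRoute[route].totalMs += ms;
  }
  for (const route of Object.keys(byRoute)) {
    byRoute[route].meanMs = byRoute[route].totalMs / byRoute[route].count;
  }
  return byRoute;
}

console.log(aggregate(requests));
// GET /slow averages ~3000ms across every request — a pattern, not a one-off.
```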
What AppSignal Shows You
You do not have to manually instrument each route to see which ones are the fastest or slowest. AppSignal handles that for you. Its Node.js integration will do this automatically for Express applications, so the moment the application is set up, you can start seeing request behaviors.
When doing route analysis, the first signals to look at are:
- Which endpoints are the slowest
- How frequently they are slow
- If this slowness is isolated or recurring
AppSignal also shows you what happens inside each request. It models work as spans, letting you break a route's latency down into the operations underneath it, such as middleware, database calls, or other instrumented work. That way, you can move from "this route is slow" to "this part of the request is slow."
Errors show up as part of the same investigative flow. This mirrors real-life situations because slow paths and broken paths tend to be connected. Instead of switching between one tool for latency and another for exceptions, you can analyze both in a single place.
For Express apps, that translates to quicker triage and fewer shots in the dark. You can identify the problematic route and analyze its impact on users (in terms of timing, spans, and associated errors) to decide what to prioritize.
Build a Small Express App to Monitor
There's a reason to keep the demo app small: you don't need a production-level service to understand route monitoring. A few routes that behave differently under load are all it takes.
This guide uses three routes:
- GET / gives you a normal response and sets a baseline
- GET /error generates an error so you can inspect failed requests
- GET /slow waits three seconds before responding so you can surface obvious latency
Here is the app:
```javascript
// app.js
const express = require("express");

const port = process.env.PORT || 3000;
const app = express();

app.use(express.urlencoded({ extended: true }));

app.get("/", (_req, res) => {
  res.send("GET query received!");
});

app.get("/error", (_req, _res) => {
  throw new Error("Expected test error");
});

app.get("/slow", async (_req, res) => {
  await new Promise((resolve) => setTimeout(resolve, 3000));
  res.send("Well, that took forever!");
});

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`);
});
```
There’s no need to overwhelm the metrics with your app’s intricacies. A simple setup is sufficient to ensure AppSignal is effective.
Set Up AppSignal for Express
Log in to AppSignal and click Add app:

Select Node.js as your language. The installer will guide you through the rest of the setup:

Install the Node.js package:
```shell
npm install @appsignal/nodejs
```
Give your app a name:

Select Express as your framework and hit Install:

Include the AppSignal error handler in your demo app. At the top, import the package @appsignal/nodejs that you’ve just installed:
```javascript
const { expressErrorHandler } = require("@appsignal/nodejs");
```
After declaring all your routes (but before adding any custom error handlers), include AppSignal’s error handler for Express:
```javascript
app.use(expressErrorHandler());
```
Next, create a .env file in the root of your project and save your APPSIGNAL_PUSH_API_KEY, which you'll receive in the next step of the installation:
```shell
APPSIGNAL_PUSH_API_KEY=your-appsignal-push-api-key
```
During installation, AppSignal prompts you with the organization-level push API key. In some cases, you may prefer to use an application-specific API key instead. You can find this after completing the installation under the Push & deploy menu in the app settings of the specific app.
Create an appsignal.cjs file in your project root:
```javascript
// appsignal.cjs
const { Appsignal } = require("@appsignal/nodejs");

const appsignal = new Appsignal({
  active: true,
  name: "express-routes-monitoring",
  logLevel: "debug", // Sets logLevel to debug for the Node.js environment
});

module.exports = { appsignal };
```
You can set more options, but the minimum AppSignal requires is name and active: true.
The environment is determined by NODE_ENV by default. Alternatively, you can set APPSIGNAL_APP_ENV to something different. This means your .env file, or your deployment environment, will need something like:
```shell
APPSIGNAL_PUSH_API_KEY=your-appsignal-push-api-key
APPSIGNAL_APP_NAME=express-routes-monitoring
APPSIGNAL_APP_ENV=development
APPSIGNAL_ACTIVE=true
NODE_ENV=development
```
Then, make sure Node loads AppSignal before your server entry point. Change your package.json script like this:
```json
{
  "scripts": {
    "start": "node --require ./appsignal.cjs app.js"
  }
}
```
That --require flag ensures AppSignal is set up early enough to instrument Express automatically.
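If you prefer, the same preload can be written in a few equivalent ways (paths assume the files from this guide):

```shell
node --require ./appsignal.cjs app.js                 # as in the start script
node -r ./appsignal.cjs app.js                        # -r is shorthand for --require
NODE_OPTIONS="--require ./appsignal.cjs" node app.js  # when you can't change the command itself
```

The NODE_OPTIONS form is handy in deployment environments where the start command is fixed but environment variables are configurable.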
You can run your Node.js app like you always do, by typing npm start. Then, you can send demo data provided by AppSignal:
```shell
npx @appsignal/cli demo
```
You will have to wait a few minutes for AppSignal to load data. Afterward, you can continue with the installation. The installation wizard should be able to detect the demo data and verify that everything has been set up correctly:

Generate Traffic and Create Test Data
To find useful patterns in AppSignal, you’ll need sufficient traffic across both healthy and unhealthy endpoints.
autocannon is a popular tool for generating this type of traffic. As an HTTP benchmarking tool, it can send a consistent load to each endpoint and provide you with quick stats on response times.
Execute these commands against the demo application:
```shell
npx autocannon -d 120 http://localhost:3000/
npx autocannon -d 90 http://localhost:3000/slow
npx autocannon -d 10 http://localhost:3000/error
```
This provides enough variation for AppSignal to identify the routes that are consistently slow and those that fail every time.
How to Investigate Express Route Performance in AppSignal
When requests begin to arrive in AppSignal, the first step is to determine which route should be investigated.
In this example, GET / receives the most traffic, GET /slow gets a moderate amount, and GET /error the least. This gives you three patterns to analyze: a healthy baseline, a consistently slow route, and a route that fails every time.
Start With Route-Level Performance
Go to Performance and then Actions.
This is the best starting view: it's organized by endpoint, so you can compare routes without getting lost in individual traces.

For starters, focus on two indicators:
- Mean response time, which is the measurement of the time it takes for a route to respond
- Throughput, which is the measurement of how often the route is accessed
A slow route with little traffic may not matter much. A route with modest latency can still matter if it’s heavily used or fails often. The first route to investigate is usually the one with the highest user impact.
In this example, GET /slow should stand out immediately. It takes about three seconds to respond to each request.
Look Beyond the Mean
The mean gives you the average response time across all requests, while percentiles such as the 90th and 95th show how slow the tail end of your traffic gets. If the 90th percentile is much higher than the mean, that indicates that a few requests are much longer than the average response time.
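A quick, hand-rolled illustration of the difference (toy numbers, and a simple nearest-rank percentile rather than whatever interpolation AppSignal uses):

```javascript
// Toy latencies (ms): mostly fast, with a slow tail.
const latencies = [12, 14, 15, 13, 16, 14, 12, 15, 480, 510];

const mean = latencies.reduce((sum, ms) => sum + ms, 0) / latencies.length;

// Nearest-rank percentile: the value below which roughly p% of requests fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil((p / 100) * sorted.length) - 1];
}

console.log(mean.toFixed(1));           // "110.1" — dragged up by two outliers
console.log(percentile(latencies, 90)); // 480 — the tail your slowest users see
```

The mean looks almost healthy; the 90th percentile tells the real story.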
Click on GET /slow in the Issue list and navigate to the Charts tab. Since this route is slow by design, the mean and higher percentiles should both be high:

Use Throughput to Add Context
High throughput alone does not indicate an issue. If GET / stays fast, it’s doing its job well even under more traffic.
GET /slow receives less traffic, but requests are more time-consuming. This is a strong candidate for investigation since it takes more time per request and becomes more painful as the traffic increases:

Check for Failing Routes
GET /error is a failing route, and thus should be the least used one in the test. However, it’s still worth checking out because it fails every time. Any route with low throughput that also breaks a real user path should be considered a problem.
Drill Into a Specific Route
When a route stands out to you, open it for a closer look.
If you click on GET /slow, the route-level graphs should confirm what you saw in the Actions view. Response time should be consistently high, while throughput should be lower than GET /:

Next, check the Samples tab for a few requests. This is the transition point from “this route is slow” to “what is it inside this request that’s slow?”

In this example, the HTTP request itself is quick, but handling the request in Express takes about three seconds. This delay reflects the behavior you’ve built into the route.
Read the Dashboard in Layers
The workflow is simple:
- Start with the grouped route performance
- Confirm the slowness is consistent
- Check whether the route is also failing
- Open the samples and timelines to inspect the request lifecycle
This top-down approach helps you avoid noise. If you try to analyze requests too early, it can lead you to focus on some one-off anomalies, whereas starting at the route level lets you spot broader patterns.
In this case, we can draw a couple of clear conclusions:
- Latency investigation should start with GET /slow
- GET /error should be the first place to check for errors
- GET / provides the baseline for healthy request behavior
Use Slow Events to Find the Bottleneck
When a route is confirmed to be slow, check the operations that it’s performing.
Within an Express app, route latency is typically caused by subsequent operations rather than the request handler itself. Some common culprits are queries to the database, calls to external APIs, or anything else that is instrumented.
These operations are viewed in AppSignal under the Slow queries, Slow API requests, and Slow events categories.
In this demo, the slowest event is the Express request handler for GET /slow:

Conclusion
AppSignal provides a clear view of each route’s performance and lets you explore the details of every trace, span, and associated error to identify performance issues more quickly.
The workflow is straightforward: instrument first, then generate traffic, and patterns will emerge that you can investigate top-down. Start from the route, drill into the trace, and then look at the specific operation causing the delay.
This method helps you prioritize issues and verify that your adjustments actually boost route performance in ways that users experience.