
Setting Up AppSignal for a Node.js App Running on Kubernetes

Dejan Lukić


Monitoring in Kubernetes can seem like opening an airplane's black box. Everything happens silently, behind the scenes, hidden away. This can be a lot of trouble, as you don’t really want to dig through a bunch of logs at 3 a.m. after a call letting you know that a certain feature is broken. You want something direct, concise, and helpful.

Well, tools like AppSignal, which enhance observability and monitoring, are the solution when you’re trying to understand what happens between pods, across the network, and inside the complex computation running in these containers.

In this guide, I will show you how to set up AppSignal for a Node.js application running inside Kubernetes. I will keep it simple and short, as that will allow you to adapt AppSignal to your exact use case. Let's get started.

Prerequisites

Prior to setting up the app, make sure you have:

  • Node.js and npm installed
  • An AppSignal account (a free trial works)
  • Docker installed
  • minikube and kubectl installed, for running a local Kubernetes cluster

Setting up AppSignal in a Node.js App

For this example, you will use Express as the backend powerhouse. AppSignal's Node.js package ships with dedicated instrumentation for Express.

Setting up the Environment

Start by creating a new directory with a new Node project in your terminal.

Shell
mkdir appsignal-js-demo-k8s && cd appsignal-js-demo-k8s && npm init -y

This will create the appsignal-js-demo-k8s directory, as well as scaffold the Node.js application.

Next, install the dependencies. For this example, you will also use the express library, as it will make working with AppSignal a piece of cake.

Shell
# appsignal-js-demo-k8s
npm i express @appsignal/nodejs

Now, let's grab the API key from the AppSignal dashboard. Navigate to AppSignal and log in. Then, select your application from the Applications section.

On the left-hand side, go to App settings > Push & deploy. From the Push key section, copy the organization-level or app-specific Push key.

Push keys shown in the AppSignal dashboard

Back in your terminal, export the copied Push key as the APPSIGNAL_PUSH_API_KEY environment variable.

Shell
# appsignal-js-demo-k8s
export APPSIGNAL_PUSH_API_KEY="<YOUR API KEY>"

Lastly, open the package.json file in your editor and change the type key's value from commonjs to module:

JSON
// appsignal-js-demo-k8s/package.json
{
  "name": "appsignal-js-demo-k8s",
  "version": "1.0.0",
  "description": "Node.js AppSignal demo on Kubernetes",
  "homepage": "https://github.com/<YOUR GITHUB USERNAME>/appsignal-js-demo-k8s#readme",
  "bugs": {
    "url": "https://github.com/<YOUR GITHUB USERNAME>/appsignal-js-demo-k8s/issues"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/<YOUR GITHUB USERNAME>/appsignal-js-demo-k8s.git"
  },
  "license": "ISC",
  "author": "<YOUR GITHUB USERNAME>",
  "type": "module",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "@appsignal/nodejs": "^3.7.4",
    "express": "^5.2.1"
  }
}

Great! The app is pretty much set up. Now, we’ll move on to creating the actual app and configuring Docker.

Setting up the App

In the appsignal-js-demo-k8s directory, create a new subdirectory src containing two files: appsignal.cjs and server.js.

Shell
# appsignal-js-demo-k8s
mkdir src && cd src && touch appsignal.cjs server.js

Open the appsignal.cjs file and paste the following content.

JavaScript
// appsignal-js-demo-k8s/src/appsignal.cjs
const { Appsignal } = require("@appsignal/nodejs");

new Appsignal({
  active: true,
  name: "appsignal-js-demo-k8s",
});

At startup, the app requires the appsignal.cjs file, which configures the AppSignal client. The client automatically picks up the environment variable you set earlier.

Head to server.js next. Start by importing the dependencies.

JavaScript
// appsignal-js-demo-k8s/src/server.js
import { expressErrorHandler } from "@appsignal/nodejs";
import express from "express";

Then you will initialize Express and define a port to use.

JavaScript
// appsignal-js-demo-k8s/src/server.js
const app = express();
const PORT = process.env.PORT || 3000;
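To see what that fallback does, here is a small sketch (the portFor helper is hypothetical, not part of the app): an unset PORT falls back to 3000, while a set one wins.

```javascript
// Hypothetical helper mirroring the fallback above: use PORT from the
// environment when set, otherwise default to 3000.
const portFor = (env) => Number(env.PORT || 3000);

console.log(portFor({})); // 3000 — no PORT set, the fallback applies
console.log(portFor({ PORT: "8080" })); // 8080 — taken from the environment
```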

With this out of the way, let's add a few routes.

JavaScript
// appsignal-js-demo-k8s/src/server.js
app.get("/health", (req, res) => {
  res.status(200).json({ status: "ok" });
});

app.get("/", (req, res) => {
  res.json({ message: "Hello from appsignal-k8s-demo!" });
});

app.get("/error", (req, res, next) => {
  try {
    throw new Error("Test error: AppSignal is catching this!");
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});

You have added:

  • /health: This is the endpoint that Kubernetes will use to see if the server is alive.
  • /: Just to say, “Hi!” when running the server.
  • /error: You will use this to trigger an error, which AppSignal will catch.

To wrap things up, add the server listener and the AppSignal middleware.

JavaScript
// appsignal-js-demo-k8s/src/server.js
app.use(expressErrorHandler());

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

You should add the expressErrorHandler() middleware after all routes, but before any other error handlers.
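To see why that ordering matters, here is a toy model of an error-handling chain (this is not the real Express, and the reporter below only stands in for expressErrorHandler()): error middleware runs in registration order, so a reporter registered first records the error and then forwards it with next(err) to your own handler.

```javascript
// Toy model of an Express-style error-handling chain. Each error
// middleware receives (err, next) and runs in registration order.
const errorMiddlewares = [];
const useError = (fn) => errorMiddlewares.push(fn);

const reported = [];
let response;

// Stand-in for expressErrorHandler(): record the error, then forward it.
useError((err, next) => {
  reported.push(err.message);
  next(err);
});

// Your own final error handler: turn the error into a response.
useError((err, next) => {
  response = { status: 500, error: err.message };
});

// Walk the chain, passing the error along whenever next(err) is called.
function dispatch(err, i = 0) {
  const fn = errorMiddlewares[i];
  if (fn) fn(err, (e) => dispatch(e, i + 1));
}

dispatch(new Error("Test error"));
console.log(reported, response); // the reporter saw the error before the final handler
```

Register the middleware in the opposite order and the reporter never sees the error, which is exactly the pitfall the note above warns about.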

That's it for the simple Express app. Nice work! Up next, you need to set up a container for the app and run this in a Kubernetes pod.

Containerizing the App for Kubernetes

First, you will create a Dockerfile in the root directory.

Shell
# appsignal-js-demo-k8s
touch Dockerfile

Paste the following content in the Dockerfile.

Dockerfile
ARG NODE_VERSION=24

FROM node:${NODE_VERSION} AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:${NODE_VERSION}
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
# package.json must be present at runtime so Node applies "type": "module"
COPY package*.json ./
COPY src/ ./
EXPOSE 3000
CMD ["node", "--require", "./appsignal.cjs", "server.js"]

In short, this setup:

  • Pulls the Node.js 24 base image
  • Creates a working directory, copies the package manifests, and installs the dependencies
  • Copies the src directory, exposes port 3000, and runs the server with appsignal.cjs preloaded

The application must use the --require flag to load AppSignal's instrumentation before other libraries.

Next, in the root directory, create a subdirectory k8s to store all the Kubernetes manifests, and add three manifest files:

  • deployment.yaml
  • secret.yaml
  • service.yaml
Shell
# appsignal-js-demo-k8s
mkdir k8s && touch k8s/deployment.yaml k8s/secret.yaml k8s/service.yaml

Inside the deployment.yaml file, paste the following configuration. This manifest lets Kubernetes know how to run this application, which image to use, how many pod replicas to maintain, what environment variables to inject, and how to perform health checks for each container.

YAML
# appsignal-js-demo-k8s/k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: appsignal-k8s-demo
  labels:
    app: appsignal-k8s-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: appsignal-k8s-demo
  template:
    metadata:
      labels:
        app: appsignal-k8s-demo
    spec:
      containers:
        - name: appsignal-k8s-demo
          image: appsignal-k8s-demo:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
          env:
            - name: APPSIGNAL_PUSH_API_KEY
              valueFrom:
                secretKeyRef:
                  name: appsignal-secret
                  key: APPSIGNAL_PUSH_API_KEY
            - name: PORT
              value: "3000"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"

Let's briefly cover what this configuration includes:

  • The imagePullPolicy: Never property ensures that only the locally-built image is used.
  • The env block sets environment variables inside the container. APPSIGNAL_PUSH_API_KEY is pulled from the Kubernetes Secret you will create next.
  • The livenessProbe tells Kubernetes whether the container is still alive. It fetches GET /health every 15 seconds, waiting 10 seconds after startup before the first check. If it fails, Kubernetes restarts the pod.
  • The readinessProbe tells Kubernetes whether the container is ready to receive traffic. It uses the same GET /health endpoint but starts checking sooner (after 5 seconds) and more frequently (every 10 seconds). Pods that fail this probe are temporarily removed from the load balancer until they recover.
  • The resources block sets resource boundaries per container.
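To make those CPU numbers concrete: Kubernetes expresses CPU in millicores, where 1000m equals one full core. A hypothetical cpuToCores helper (not part of the app) shows the conversion:

```javascript
// Hypothetical helper: convert a Kubernetes CPU quantity ("100m", "1")
// into a fraction of a core. 1000m == 1 full core.
function cpuToCores(quantity) {
  return quantity.endsWith("m")
    ? Number(quantity.slice(0, -1)) / 1000
    : Number(quantity);
}

console.log(cpuToCores("100m")); // 0.1 — the request: a tenth of a core
console.log(cpuToCores("250m")); // 0.25 — the limit: a quarter of a core
```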

Next, paste this configuration inside the secret.yaml manifest.

Never commit real secrets to version control. Consider using Sealed Secrets, External Secrets Operator, or HashiCorp Vault in production environments.

YAML
# appsignal-js-demo-k8s/k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: appsignal-secret
type: Opaque
data:
  APPSIGNAL_PUSH_API_KEY: <YOUR API PUSH KEY IN BASE64>

The manifest stores sensitive values, in this case the AppSignal Push API key, as a Base64-encoded Kubernetes Secret. This keeps credentials out of your application code and the deployment manifest.

The APPSIGNAL_PUSH_API_KEY value must be encoded in Base64.

Encode the key with the following command.

Shell
echo -n "YOUR API PUSH KEY IN PLAIN TEXT" | base64
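If you'd rather stay in Node.js, Buffer can produce the same encoding (the key below is a placeholder, not a real Push key):

```javascript
// Base64-encode a value the same way `base64` does on the command line.
const plain = "my-push-api-key"; // placeholder value
const encoded = Buffer.from(plain, "utf8").toString("base64");
console.log(encoded); // bXktcHVzaC1hcGkta2V5
```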

Finish it off with the service.yaml. Copy-paste the following:

YAML
# appsignal-js-demo-k8s/k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: appsignal-k8s-demo
spec:
  selector:
    app: appsignal-k8s-demo
  ports:
    - protocol: TCP
      port: 80 # External port
      targetPort: 3000 # Container port
  type: LoadBalancer

This final manifest exposes your pods to network traffic. Since pods are ephemeral and their IPs change, the service provides a consistent address that routes requests to whichever pods are currently healthy and ready.

Got it?

With all this in place, you can apply these manifests and deploy the app.

Deploying to Kubernetes

Deploying the app to Kubernetes will take a few steps:

  • First, you will start a local cluster with minikube.
  • Then, you will build out a Docker image within minikube's Docker context.
  • Next, you will apply all the created manifests.
  • And, as a cherry on top, you will access the service through minikube's LoadBalancer tunnel.

Okay, now let's kick things off by starting a local cluster.

Shell
# appsignal-js-demo-k8s
minikube start

Next, point your shell at minikube's Docker daemon so that your image is available inside the cluster without pushing to Docker Hub.

Shell
# appsignal-js-demo-k8s
eval $(minikube docker-env)

Build the image inside minikube's Docker context.

Shell
# appsignal-js-demo-k8s
docker build -t appsignal-k8s-demo:latest .

You should get an output like this:

Shell
# appsignal-js-demo-k8s
=> [internal] load build definition from Dockerfile  0.1s
=> => transferring dockerfile: 341B  0.0s
=> [internal] load metadata for docker.io/library/node:24.11.1-alpine  2.8s
=> [auth] library/node:pull token for registry-1.docker.io  0.0s
=> [internal] load .dockerignore  0.1s
=> => transferring context: 2B  0.0s
=> [internal] load build context  0.5s
=> => transferring context: 231.51kB  0.3s
=> [deps 1/4] FROM docker.io/library/node:24.11.1-alpine@sha256:682368d8253e0c3364b803956085c456a612d738bd635926d73fa24db3ce53d7  18.4s
=> => resolve docker.io/library/node:24.11.1-alpine@sha256:682368d8253e0c3364b803956085c456a612d738bd635926d73fa24db3ce53d7  0.1s
=> => sha256:014e56e613968f73cce0858124ca5fbc601d7888099969a4eea69f31dcd71a53 3.86MB / 3.86MB  4.0s
=> => sha256:de13c80ec1f1c2868b7daab769f3d7cb1bfb2bb16142918a157fd6324bc1ee59 1.26MB / 1.26MB  1.1s
=> => sha256:05f196bed839b234bc062fab82e6ee8c6f7cdc22a6ceb81180a4ceb4b08bb6cb 445B / 445B  0.7s
=> => sha256:ea370b117ea66438d0bc154a347a2f5de4816e445e2b061001161310b4f00d58 50.94MB / 50.94MB  16.8s
=> => extracting sha256:014e56e613968f73cce0858124ca5fbc601d7888099969a4eea69f31dcd71a53  0.1s
=> => extracting sha256:ea370b117ea66438d0bc154a347a2f5de4816e445e2b061001161310b4f00d58  0.8s
=> => extracting sha256:de13c80ec1f1c2868b7daab769f3d7cb1bfb2bb16142918a157fd6324bc1ee59  0.1s
=> => extracting sha256:05f196bed839b234bc062fab82e6ee8c6f7cdc22a6ceb81180a4ceb4b08bb6cb  0.1s
=> [deps 2/4] WORKDIR /app  0.4s
=> [deps 3/4] COPY package*.json ./  0.2s
=> [deps 4/4] RUN npm ci --omit=dev  17.3s
=> [stage-1 3/4] COPY --from=deps /app/node_modules ./node_modules  5.6s
=> [stage-1 4/4] COPY src/ ./  0.4s

After the image is done building, you will apply all the manifests.

Shell
# appsignal-js-demo-k8s
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

Kudos! Now that the hard part is behind us, let’s test the deployment.

Testing the AppSignal Integration

Before we test anything, let's check if the pods are up and running.

Shell
# appsignal-js-demo-k8s
kubectl get pods -w

You should see them go from Pending -> ContainerCreating -> Running.

Shell
# appsignal-js-demo-k8s
NAME                                  READY   STATUS    RESTARTS   AGE
appsignal-k8s-demo-86cf4466cf-jwn2d   1/1     Running   0          69m
appsignal-k8s-demo-86cf4466cf-phjdp   1/1     Running   0          69m

Once the pods' status is Running, you can open a tunnel to the service.

Shell
# appsignal-js-demo-k8s
minikube service appsignal-k8s-demo

After running this, here’s the output you should get:

Shell
# appsignal-js-demo-k8s
┌───────────┬────────────────────┬─────────────┬───────────────────────────┐
│ NAMESPACE │ NAME               │ TARGET PORT │ URL                       │
├───────────┼────────────────────┼─────────────┼───────────────────────────┤
│ default   │ appsignal-k8s-demo │ 80          │ http://192.168.49.2:31427 │
└───────────┴────────────────────┴─────────────┴───────────────────────────┘
🎉 Opening service default/appsignal-k8s-demo in default browser...

minikube will open the service in your default browser at the URL specified in the table.

Deployed application shown in the browser

To test how AppSignal catches errors (and a bunch of other things, too), go to the http://<YOUR URL>/error address. You will get this output:

A previously specified error shown, caught by AppSignal

You can select this error over in the AppSignal dashboard, under Errors > Issue List, to see more details.

Caught error shown in the AppSignal dashboard

Next Steps and Resources

With AppSignal, observability in Kubernetes and Node.js isn’t as complicated as it may seem. The next step is putting that observability data to work. Those collected metrics can be used to catch issues early and improve your Kubernetes deployment over time.

Here are a few things you can start doing right away with AppSignal:

  • Monitor function performance: Identify slow routes, background jobs, or database queries in your Node.js app.
  • Set up error alerts: Get notified when new exceptions occur, instead of digging them out of logs later.
  • Watch resource usage: Keep an eye on memory and CPU trends to detect scaling issues.

Though this article has covered all the basics, AppSignal has a ton of additional features and configuration options. Feel free to explore the Node.js specifics, which include custom instrumentation and additional integrations.

Sound interesting? Get started with a free trial! And don’t worry, there will be no surprise charges in the end.

Frequently Asked Questions (FAQ)

1. How do I apply changes when modifying manifest files?

Run kubectl apply -f k8s/<manifest>.yaml for the specific file you’ve changed. For deployment changes, trigger a rollout with kubectl rollout restart deployment/appsignal-k8s-demo to recycle the pods with the new configuration.

2. How do I update the AppSignal API key after deploying?

Encode the new key in Base64, and update secret.yaml with the new value. Then re-apply the secret, and restart the deployment so that the pods pick up the new environment variable.

Shell
kubectl apply -f k8s/secret.yaml
kubectl rollout restart deployment/appsignal-k8s-demo

3. How do I view logs from a specific pod?

List the running pods with kubectl get pods, then view the logs from a specific one. If you want logs from all pods in the deployment at once, use the --selector flag.

Shell
kubectl logs -f <pod-name>

# or all pods at once
kubectl logs -f --selector app=appsignal-k8s-demo


Our guest author Dejan is an electronics and backend engineer, who is pursuing entrepreneurship with SaaS and service-based agencies and is passionate about content creation.
