
Advanced Use Cases of the Node.js Native Test Runner

Damilola Olatunji


Welcome back to our exploration of Node.js' built-in test runner! In the previous article, we laid the groundwork for writing and running basic tests in your projects by exploring a few simple examples.

In this installment, we'll dive into practical examples of how to use mocking, code coverage analysis, test hooks, and strategies for testing HTTP servers. These tools will help you write more comprehensive and reliable tests, ultimately leading to more robust Node.js applications.

Let's get started!

Setting Up and Tearing Down with Test Hooks

In testing, "setup" refers to the actions performed before a test, such as initializing variables or establishing database connections, to prepare an environment. Conversely, "teardown" refers to cleaning up after a test, like resetting data or closing connections, ensuring a pristine state for subsequent tests.

The Node.js test runner simplifies this process with built-in functions called hooks:

  • before(): Executes once before all tests within a test suite.
  • after(): Executes once after all tests within a test suite.
  • beforeEach(): Executes before each individual test in the current suite.
  • afterEach(): Executes after each individual test in the current suite.

These hooks promote test independence by ensuring each test starts with a clean slate unaffected by previous results. They also neatly encapsulate resource-intensive tasks, preventing unnecessary repetition.

Here's an illustrative example:

```javascript
import { describe, it, beforeEach, before, after } from "node:test";
import assert from "node:assert/strict";
import sqlite3 from "sqlite3";

const db = new sqlite3.Database(":memory:"); // In-memory database for testing

describe("SQLite Database Operations", () => {
  before(() => {
    db.serialize(() => {
      db.run(
        "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)"
      );
    });
  });

  beforeEach(() => {
    db.run("DELETE FROM users");
  });

  it("should insert a user", () => {
    db.run("INSERT INTO users (name) VALUES (?)", ["Alice"], function (err) {
      if (err) throw err; // Handle potential errors
      db.get("SELECT * FROM users WHERE name = 'Alice'", (err, row) => {
        if (err) throw err;
        assert.strictEqual(row.name, "Alice");
      });
    });
  });

  after(() => {
    db.close();
  });
});
```

This test imports the sqlite3 module and uses an in-memory database to test SQLite operations. Here are the key aspects to note:

  • The before() hook creates the users table before any tests start.
  • Before each test, all existing records are deleted from the users table through the beforeEach() hook, ensuring a clean state.
  • The test inserts a user named 'Alice' into the database and retrieves the user by name to assert that it was indeed inserted.
  • The after() hook closes the database connection once all tests have completed.

Once you install the sqlite3 module and run the tests, you will see that the test passes successfully:

```shell
▶ SQLite Database Operations
  ✔ should insert a user (0.630894ms)
▶ SQLite Database Operations (1.880435ms)
. . .
```

Mocking APIs

Mocking is a powerful testing technique where you swap out real dependencies with controlled substitutes (mocks). This allows you to isolate the code you're testing from external factors, so your tests are reliable and focus solely on the logic you're verifying.

Mocking allows you to focus solely on your function's core logic by isolating it from its dependencies, ensuring predictable and consistent responses to prevent flaky tests caused by external factors. It also allows you to verify the correct interaction between your function and its dependencies, such as checking for proper function calls with the right arguments and order.

The node:test module provides a versatile mock object to help you implement mocks in your tests. Let's delve into some practical scenarios where you can leverage mocking to enhance your testing arsenal.

Mocking Async Functions

You can use mocking to simulate the behavior of asynchronous functions in a controlled environment. Using mock.method(), you can replace the actual implementation with a mock function that returns a resolved promise to simulate success, or a rejected promise to simulate an error.

For instance, imagine an email service that you want to mock within your tests. You can use mock.method() to make it always return true (success):

```javascript
import assert from "node:assert/strict";
import { describe, it, mock } from "node:test";

// Service to be mocked
const emailService = {
  sendEmail: (to, subject, body) => {
    /* implementation */
  },
};

describe("Mocking email sending", () => {
  it("should successfully send a mocked email", async () => {
    const to = "test@example.com";
    const subject = "Hello";
    const body = "This is a test email.";

    // Mock the sendEmail function to return a resolved promise
    mock.method(emailService, "sendEmail", async () => Promise.resolve(true));

    // Call the function with test data
    const result = await emailService.sendEmail(to, subject, body);

    // Assert that the email was "sent" successfully (based on our mock)
    assert.equal(result, true);
  });
});
```

Conversely, to simulate a failure scenario, you can make the mock function return a rejected promise:

```javascript
describe("Mocking email sending", () => {
  it("should fail to send an email", async () => {
    const to = "test@example.com";
    const subject = "Hello";
    const body = "This is a test email.";

    mock.method(emailService, "sendEmail", async () =>
      Promise.reject({ error: "network error" })
    );

    await assert.rejects(
      async () => await emailService.sendEmail(to, subject, body)
    );
  });
});
```

Mocking Built-in APIs

You can also easily mock built-in APIs to ensure they behave predictably in your tests. Here's how to mock the fetch API, for example:

```javascript
// module.js
async function fetchDataFromAPI() {
  // This makes a real network request to an API
  const response = await fetch("https://jsonplaceholder.typicode.com/todos/1");
  const data = await response.json();
  return data;
}

export { fetchDataFromAPI };
```
```javascript
// module.test.js
import { describe, it, mock } from "node:test";
import assert from "node:assert/strict";
import { fetchDataFromAPI } from "./module.js";

describe("fetchDataFromAPI", { only: true }, () => {
  it("should return mocked data", async () => {
    const expectedResponse = {
      userId: 1,
      id: 1,
      title: "Learn JavaScript",
      completed: true,
    };

    // fetch is mocked to always return the `expectedResponse`
    mock.method(global, "fetch", async () => {
      return {
        json: async () => expectedResponse,
      };
    });

    assert.deepStrictEqual(await fetchDataFromAPI(), expectedResponse);
    assert.strictEqual(global.fetch.mock.calls.length, 1);

    mock.reset();
  });
});
```

Here, mock.method() overrides the global fetch function with a mock that returns a Promise resolving to an object with a json() method (which in turn resolves to the desired data).

The test confirms that calling fetchDataFromAPI() returns the mocked response, and that the mock was called exactly once. This allows you to seamlessly test code that relies on fetch without making actual network calls.

This technique can be applied to other built-in APIs as well, such as the fs module. You can also simulate and control the behavior of timers like setInterval() and setTimeout() without actually waiting for the specified time intervals. Refer to the Node.js documentation on timers for details.

Providing Multiple Implementations

In scenarios where a mocked function is called multiple times within a single test, the mockImplementationOnce() method allows you to customize each invocation's behavior. Let's see how this works:

```javascript
. . .
describe("fetchDataFromAPI", { only: true }, () => {
  it("should return mocked data", async () => {
    const expectedResponse = {
      userId: 1,
      id: 1,
      title: "Learn JavaScript",
      completed: true,
    };

    const expectedResponse2 = {
      userId: 2,
      id: 2,
      title: "Learn Go",
      completed: false,
    };

    // fetch is mocked to always return the `expectedResponse`...
    const fetchMock = mock.method(global, "fetch", async () => {
      return {
        json: async () => expectedResponse,
      };
    });

    // ...except on the second call, where `expectedResponse2` is used instead
    fetchMock.mock.mockImplementationOnce(async () => {
      return {
        json: async () => expectedResponse2,
      };
    }, 1); // 0 is the first call, 1 is the second, etc

    assert.deepStrictEqual(await fetchDataFromAPI(), expectedResponse);
    assert.deepStrictEqual(await fetchDataFromAPI(), expectedResponse2);
    assert.deepStrictEqual(await fetchDataFromAPI(), expectedResponse);
    assert.strictEqual(global.fetch.mock.calls.length, 3);
  });
});
```

Here, the mockImplementationOnce() method overrides the default behavior of the fetchMock only in its second call (indicated by the second argument 1). This means that:

  1. The first call to fetchDataFromAPI() will return expectedResponse, as set by the original mock.
  2. The second call will return expectedResponse2, due to the temporary override.
  3. Subsequent calls will revert back to the original mock's behavior, returning expectedResponse.

This flexibility allows you to simulate a sequence of varying responses from the mocked function, making your tests mirror the complexity of real-world interactions with your dependencies.

Obtaining Code Coverage Reports

Code coverage is a metric in software testing that measures how much of your source code is executed during the course of running your test suite. Often expressed as a percentage, it reveals the extent to which your tests exercise different parts of your codebase.

By pinpointing areas of code untouched by tests, code coverage analysis allows you to evaluate the effectiveness of your existing tests and strategically prioritize future testing efforts. While not an absolute guarantee of bug-free code, higher coverage typically signifies a higher level of confidence in your software's quality and reliability.

The Node.js test runner comes equipped with built-in code coverage reporting. To activate it, include the --experimental-test-coverage flag when executing your tests:

```shell
node --test --experimental-test-coverage
```

A coverage report will be generated that looks like this:

```shell
. . .
ℹ start of coverage report
ℹ -----------------------------------------------------------------------------------
ℹ file                       | line % | branch % | funcs % | uncovered lines
ℹ -----------------------------------------------------------------------------------
ℹ list_manager.js            |  71.05 |    83.33 |   57.14 | 18-19 23-27 30-31 34-35
ℹ tests/list_manager.test.js | 100.00 |   100.00 |  100.00 |
ℹ tests/main.test.js         | 100.00 |   100.00 |  100.00 |
ℹ -----------------------------------------------------------------------------------
ℹ all files                  |  90.83 |    91.67 |   72.73 |
ℹ -----------------------------------------------------------------------------------
ℹ end of coverage report
```

This report shows you:

  • Branch coverage: The percentage of decision points (e.g., if statements, loops) exercised during testing.
  • Function coverage: The percentage of functions that were called.
  • Line coverage: The percentage of lines of code that were executed.
  • Uncovered lines: Specific line numbers in each file that aren't covered by any tests.

For example, you'll notice that lines 34-35 of the list_manager.js file are currently not covered by any tests. These correspond to the getAllItems() method:

```javascript
// list_manager.js
class ListManager {
  . . .

  getAllItems() {
    return this.items;
  }
}
```

To address this gap, you could add a test specifically targeting this method:

```javascript
import { describe, it } from "node:test";
import { ListManager } from "../list_manager.js";
import assert from "node:assert/strict";

describe("ListManager", () => {
  . . .

  it("should return all items", () => {
    const fruits = new ListManager(5);
    fruits.addItem("apple");
    fruits.addItem("orange");
    fruits.addItem("banana");

    assert.deepStrictEqual(fruits.getAllItems(), ["apple", "orange", "banana"]);
  });
});
```

Rerunning the coverage report would then showcase improved metrics across lines, branches, and functions:

```shell
node --test --experimental-test-coverage
```

```shell
. . .
ℹ start of coverage report
ℹ -----------------------------------------------------------------------------
ℹ file                       | line % | branch % | funcs % | uncovered lines
ℹ -----------------------------------------------------------------------------
ℹ list_manager.js            |  76.32 |    85.71 |   71.43 | 18-19 23-27 30-31
ℹ tests/list_manager.test.js | 100.00 |   100.00 |  100.00 |
ℹ tests/main.test.js         | 100.00 |   100.00 |  100.00 |
ℹ -----------------------------------------------------------------------------
ℹ all files                  |  92.97 |    92.86 |   83.33 |
ℹ -----------------------------------------------------------------------------
ℹ end of coverage report
```

While striving for high code coverage is generally beneficial, it's not the sole determinant of a good testing strategy. Well-designed tests that thoroughly check for correct behavior and handle edge cases remain paramount, even if achieving 100% coverage isn't always feasible or necessary.

Customizing Test Reports

Test reporters in Node.js are specialized modules that format the results of your test runs. They provide a bridge between your raw test data and a presentation format that best suits your needs. This is invaluable for integrating test results into CI/CD workflows, enhancing readability in large test suites, and catering to specific reporting needs.

The Node.js test runner defaults to the spec reporter, which offers a colorized, hierarchical view ideal for terminals. However, you can tailor the output format using the --test-reporter flag, choosing from the following built-in reporters:

  • spec: The default, providing a structured, colorized overview in the terminal.
  • tap: Generates output in the Test Anything Protocol (TAP) format, ideal for non-terminal environments and further processing by other tools.
  • dot: Offers a minimalistic representation with dots for passed tests and 'X' for failures.
  • junit: Produces results in the JUnit XML format, commonly used in CI/CD pipelines and reporting tools.
  • lcov: Outputs code coverage data in the LCOV format when combined with the --experimental-test-coverage flag.

To illustrate this, let's generate a TAP report using:

```shell
node --test --test-reporter=tap
```

This produces:

```shell
TAP version 13
# Subtest: ListManager
    # Subtest: should be initialized to 0 when a maximum capacity is not provided
    ok 1 - should be initialized to 0 when a maximum capacity is not provided
      ---
      duration_ms: 0.569852
      ...
    # Subtest: should have a capacity of 5
    ok 2 - should have a capacity of 5
      ---
      duration_ms: 0.091852
      ...
    # Subtest: should reduce capacity from 5 to 4 when an item is added
    ok 3 - should reduce capacity from 5 to 4 when an item is added
      ---
      duration_ms: 0.127539
      ...
    # Subtest: should return all items in the list
    ok 4 - should return all items in the list
      ---
      duration_ms: 0.417246
      ...
    1..4
ok 1 - ListManager
  ---
  duration_ms: 2.013476
  type: 'suite'
  ...
# Subtest: tests/main.test.js
ok 2 - tests/main.test.js
  ---
  duration_ms: 26.023819
  ...
1..2
# tests 5
# suites 1
# pass 5
# fail 0
# cancelled 0
# skipped 0
# todo 0
# duration_ms 55.145833
```

Leverage the --test-reporter-destination flag (or, for TAP format, simple shell redirection) to save the output to a file:

```shell
node --test --test-reporter=tap --test-reporter-destination=report.txt
```

```shell
node --test > report.txt
```

Either command produces a report.txt file in the current directory with the TAP formatted output. As mentioned earlier, tap is the default reporter for non-terminal output, so you don't need to explicitly use the --test-reporter and --test-reporter-destination flags if that's the desired format. You can just use shell output redirection to save the test report to a file as shown above.

You can also combine multiple reporters, specifying a destination for each:

```shell
node --test --test-reporter=spec --test-reporter-destination=stdout --test-reporter=tap --test-reporter-destination=report.txt
```

This outputs the spec report to the console (stdout) and the tap report to report.txt.

For advanced customization, you can create and utilize custom reporters by tapping into the events emitted by the TestsStream object. Refer to the official Node.js documentation on TestsStream for more details.
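To sketch what that can look like (the file name and output format here are invented for illustration), a custom reporter may be written as an async generator that consumes the runner's event stream and yields formatted strings:

```javascript
// custom-reporter.mjs: a hypothetical reporter; pass it to the runner
// with `node --test --test-reporter=./custom-reporter.mjs`
export default async function* customReporter(source) {
  // `source` is the TestsStream: an async iterable of event objects
  for await (const event of source) {
    switch (event.type) {
      case "test:pass":
        yield `PASS ${event.data.name}\n`;
        break;
      case "test:fail":
        yield `FAIL ${event.data.name}\n`;
        break;
      // All other events (test:start, test:diagnostic, etc.) are ignored
    }
  }
}
```

Because the reporter is just a transform over an async iterable, it is easy to unit test by feeding it a hand-built sequence of events.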

Testing HTTP Services

Now that we've explored the core features of Node.js's built-in test runner, let's apply our knowledge to testing an HTTP server. Consider a simplified bookstore API, offering endpoints to list books (GET /books), place orders (POST /orders), and view existing orders (GET /orders).

We'll be using Fastify in this example, but the concepts also apply to other Node.js web frameworks like Express, Koa, and others.

Clone the GitHub repository to your machine with:

```shell
git clone https://github.com/damilolaolatunji/fastify-node-testing
```

Change into the directory, and install the required dependencies with:

```shell
cd fastify-node-testing
npm install
```

For testing purposes, we've separated the application logic (app.js) from the server setup (server.js), making it easier to isolate and test the API endpoints. Note that we've omitted some validations and security measures to keep the example concise.

You may now start the server with:

```shell
npm start
```

```json
{"level":30,"time":1715814630918,"pid":682099,"hostname":"fedora","msg":"Server listening at http://[::1]:3000"}
{"level":30,"time":1715814630919,"pid":682099,"hostname":"fedora","msg":"Server listening at http://127.0.0.1:3000"}
{"level":30,"time":1715814630919,"pid":682099,"hostname":"fedora","msg":"Fastify is listening on port: http://[::1]:3000"}
```

Once the server is ready, make a few requests to confirm that everything works as expected.

Start by fetching all the available books on the shelf with:

```shell
curl http://localhost:3000/books
```

This produces:

```json
[
  { "id": 1, "title": "The Hitchhiker's Guide to the Galaxy", "price": 15 },
  { "id": 2, "title": "Pride and Prejudice", "price": 10 },
  { "id": 3, "title": "The Lord of the Rings", "price": 25 }
]
```

To see the list of orders (currently empty), use:

```shell
curl http://localhost:3000/orders
```

```shell
[]
```

Finally, to create an order for a book, use:

```shell
curl -X POST http://localhost:3000/orders \
  -H "Content-Type: application/json" \
  -d '{"bookId": 1, "quantity": 4}'
```

This yields a total of $60:

```json
{
  "bookId": 1,
  "quantity": 4,
  "total": 60
}
```

To see the new list of orders, use:

```shell
curl http://localhost:3000/orders
```

```json
[
  {
    "bookId": 1,
    "quantity": 4,
    "total": 60
  }
]
```

Once confirmed, open the tests/app.test.js file in your text editor to read the tests for the service.

The beforeEach() hook initializes a fresh instance of the application before each test to avoid any potential side effects from previous tests, while the afterEach() hook closes the server after each test case completes.

The code contains three describe blocks, each focusing on a specific endpoint (/books and /orders with GET and POST methods):

  1. GET /books
  • This tests if the endpoint returns a list of 3 books with the correct details and JSON content type.
  2. GET /orders
  • Checks if the endpoint returns an empty list of orders initially.
  • Creates an order using a POST request, then checks if the GET endpoint returns the newly created order.
  3. POST /orders
  • Tests successful order creation with the correct response data.
  • Tests for a "bad request" error when trying to order a non-existent book.

In each test, Fastify's inject method provides an easy way to simulate HTTP requests to the application without actually running a server. However, you can also test the application after starting the server if you prefer (see the Fastify documentation on testing).

You can execute the test using the Node test runner:

```shell
npm test
```

The following output confirms that the tests pass successfully:

```shell
▶ Application endpoints
  ▶ GET /books
    ✔ should return a list of books (17.612687ms)
  ▶ GET /books (18.132371ms)
  ▶ GET /orders
    ✔ should return an empty list of orders (4.607706ms)
    ✔ should return a list of existing orders (5.17094ms)
  ▶ GET /orders (10.086249ms)
  ▶ POST /orders
    ✔ should create a new order successfully (3.049189ms)
    ✔ should return a bad request for a non-existent book (3.595889ms)
  ▶ POST /orders (6.865771ms)
▶ Application endpoints (35.69981ms)
. . .
```

And that's it!

Wrapping Up

In this two-part exploration of Node.js's built-in test runner, we've journeyed from basic to advanced testing concepts. You should now be equipped to create comprehensive and reliable tests for your applications without reaching for a third-party library.

Real-world experience from the community suggests that while the Node.js test runner is a promising tool, it does have some growing pains. For example, integrating it with TypeScript might require some creative workarounds, and while it's nice to use a built-in tool, some developers find themselves still reaching for familiar assertion libraries like expect.

However, the potential of the test runner is clear. As it matures, it could become a powerful and seamless part of your Node.js toolkit.

Thanks for reading!

P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.

P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.

Damilola Olatunji

Damilola is a freelance technical writer and software developer based in Lagos, Nigeria. He specializes in JavaScript and Node.js, and aims to deliver concise and practical articles for developers. When not writing or coding, he enjoys reading, playing games, and traveling.
