I struggled with two aspects of software development as a junior engineer: structuring large codebases and writing testable code. Test-driven development is such a common technique that it's often taken for granted, but it's not always clear how to make code fully testable.
I remember reading examples where an author would cleanly unit test a function, and in principle, it made sense. But real code doesn't look like those examples. No matter how thoughtfully it is written, real code has some level of complexity.
Ultimately, a lot of that complexity comes down to managing dependencies. This is arguably one of the chief challenges of software engineering; to quote the famous poem, "no man is an island entire of itself."
This article shares a few powerful tools to help you write testable code that grows into neat, manageable codebases.
But first, we need to ask: what are dependencies?
What Is A Dependency?
A dependency is any external resource a program needs to work. These can be external libraries the code literally depends on or services the program functionally needs, like internet APIs and databases.
The tools we use to manage these dependencies are different, but the problems are ultimately the same. A unit of code depends on other units of code, which themselves often have dependencies. For the program to work, all dependencies must be resolved recursively.
If you're not familiar with how package managers work, you might be surprised at the complexity of this problem. However, if you've written and attempted to test a webserver that relies on a database, you're probably familiar with another version of the same problem. Luckily for us, this is a well-studied problem.
Let's take a quick look at how you can use SOLID principles to improve the maintainability and stability of your code.
SOLID Principles
Robert Martin's SOLID principles are excellent guidelines for writing object-oriented code. I argue that two of these principles — the Single Responsibility principle and Dependency Inversion principle — can be critically important outside of OO design, as well.
Single Responsibility Principle
The Single Responsibility principle states that a class or function should have one — and only one — purpose, and thus only one reason to change. This resembles the UNIX philosophy — in essence, do one thing, and do it well. Keep your units simple and reliable, and achieve complex solutions by composing simple pieces.
For example, an Express handler function might sanitize and validate a request, perform some business logic, and store the result in a database. That's a lot of jobs for one function. If we redesign it to follow the Single Responsibility principle, we move input validation, business logic, and database interactions into three separate functions that can be composed to handle a request. The handler itself then does only what its name implies: handle an HTTP request.
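Sketched very loosely, that split might look like this; the helper names and types here are illustrative placeholders, not the code we'll refactor later in this article:

// hypothetical single-purpose units, declared here only to show the shape
type PostInput = { jobTitle: string; description: string; salary: number; workType: string };
declare function validatePostInput(body: unknown): PostInput;
declare function createPost(input: PostInput, userId: string): Promise<{ id: number }>;

// the handler's only job is translating HTTP into calls to those units
app.post("/", async (req, res) => {
  const input = validatePostInput(req.body);
  const post = await createPost(input, req.get("user-id") ?? "");
  res.status(200).json(post);
});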
Dependency Inversion Principle
The Dependency Inversion principle encourages us to depend on abstractions instead of concretions. This, too, has to do with separation of concerns.
To return to our Express handler example, if the handler function directly depends on a database connection, this introduces a host of potential problems. Say we notice our site is underperforming and decide to add caching; now we'll need to manage two different database connections in our handler function, potentially repeating cache checking logic over and over throughout the codebase and increasing the likelihood of bugs.
What's more, the business logic in the handler typically won't care about the details of the cache solution; all it needs is the data. If we instead depend on an abstraction of our database, we can keep changes in persistence logic contained and reduce the risk that a small change will force us to rewrite a ton of code.
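As a minimal sketch of what such an abstraction might look like (the PostRepository and CachedPostRepository names are illustrative, not part of the code we'll refactor below), the business logic depends only on the interface, and caching becomes a drop-in wrapper:

// an abstraction over "where posts come from"; callers don't care how it's implemented
interface PostRepository {
  findById(id: number): Promise<Post | null>; // Post is the job-post type used later in the article
}

// caching can be layered on without touching any business logic
class CachedPostRepository implements PostRepository {
  constructor(
    private inner: PostRepository,               // e.g. a SQL-backed repository
    private cache: Map<number, Post> = new Map() // stand-in for a real cache such as Redis
  ) {}

  async findById(id: number): Promise<Post | null> {
    const cached = this.cache.get(id);
    if (cached) return cached;
    const post = await this.inner.findById(id);
    if (post) this.cache.set(id, post);
    return post;
  }
}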
The problem I've found with these principles is often in their presentation; it's difficult to present them on a general level without a fair bit of hand-waving.
I want to explain them concretely. Let's look at how to break a large, difficult-to-test handler function into small, testable units using these two principles.
Example: An Overwhelmed Express Handler for Node.js
Our example is an Express handler function that accepts a POST request and creates a listing on a job board for Node.js developers. It validates the input and stores the listing. If the user is an approved employer, the post is made public immediately; otherwise, it is marked for moderation.
import express from "express";
// assuming mysql2/promise as the database client, based on the query API used below
import { Connection } from "mysql2/promise";

const app = express();
app.use(express.json());

let db: Connection;

const title = { min: 10, max: 100 };
const description = { min: 250, max: 10000 };
const salary = { min: 30000, max: 500000 };
const workTypes = ["remote", "on-site"];

app.post("/", async (req, res) => {
  // validate input
  const input = req.body?.input;
  try {
    const errors: Record<string, string> = {};
    if (
      input.jobTitle.length < title.min ||
      input.jobTitle.length > title.max
    ) {
      errors.jobTitle = `must be between ${title.min} and ${title.max} characters`;
    }
    if (
      input.description.length < description.min ||
      input.description.length > description.max
    ) {
      errors.description = `must be between ${description.min} and ${description.max} characters`;
    }
    if (Number.isNaN(Number(input.salary))) {
      errors.salary = `salary must be a number`;
    } else if (input.salary < salary.min || input.salary > salary.max) {
      errors.salary = `salary must be between ${salary.min} and ${salary.max}`;
    }
    if (!workTypes.includes(input.workType.toLowerCase())) {
      errors.workType = `must be one of ${workTypes.join("|")}`;
    }
    if (Object.keys(errors).length > 0) {
      res.status(400);
      return res.json(errors);
    }
  } catch (error) {
    res.status(400);
    return res.json({ error });
  }

  const userId = req.get("user-id");
  try {
    // retrieve the posting user and check privileges
    const [[user]]: any = await db.query(
      "SELECT id, username, is_approved FROM user WHERE id = ?",
      [userId]
    );
    const postApprovedAt = Boolean(user.is_approved) ? new Date() : null;

    const [result]: any = await db.query(
      "INSERT INTO post (job_title, description, poster_id, salary, work_type, approved_at) VALUES (?, ?, ?, ?, ?, ?)",
      [
        input.jobTitle,
        input.description,
        user.id,
        input.salary,
        input.workType,
        postApprovedAt,
      ]
    );

    res.status(200);
    res.json({
      ok: true,
      postId: result.insertId,
    });
  } catch (error) {
    res.status(500);
    res.json({ error });
  }
});
This function has a lot of problems:
1. It does too many jobs to be practically testable.
We can't test that validation works without being connected to a functioning database, and we can't test storing and retrieving posts from the database without building fully-fledged HTTP requests.
2. It depends on a global variable.
Maybe we don't want tests polluting our development database. How can we instruct the function to use a different database connection (or even a mock) when the database connection is hard-coded as global?
3. It's repetitive.
Any other handler that needs to retrieve a user from their ID will essentially duplicate code from this handler.
Layered Architecture for Separation of Concerns in JavaScript
If each function or class performs only one action, then one function needs to handle the user interaction, another needs to perform the desired business logic, and another needs to interact with the database.
A common visual metaphor for this that you're likely familiar with is a layered architecture. A layered architecture is often depicted as four layers stacked on top of one another, with the database at the bottom and the API interface at the top.
When thinking about injecting dependencies, though, I find it more useful to think of these layers like the layers of an onion. Each layer must contain all of its dependencies to function, and only the layer that immediately touches another layer may interact with it directly.
The presentation layer, for example, should not interact directly with the persistence layer; the business logic should be in the business layer, which may then call the persistence layer.
It may not be immediately clear why this is beneficial — it certainly can sound like we are just making rules for ourselves to make things harder. And it actually may take longer to write code this way, but we are investing time in making the code readable, maintainable, and testable down the road.
Separation of Concerns: An Example
Here's what actually happens when we start separating concerns. We'll start with classes to manage the data stored in the database (part of the persistence layer):
// Class for managing users stored in the database
class UserStore {
  private db: Connection;

  constructor(db: Connection) {
    this.db = db;
  }

  async findById(id: number): Promise<User> {
    const [[user]]: any = await this.db.query(
      "SELECT id, username, is_approved FROM user WHERE id = ?",
      [id]
    );
    // map the raw row onto our User type so callers never see database column names
    return {
      id: user.id,
      username: user.username,
      approved: Boolean(user.is_approved),
    };
  }
}
// Class for managing job listings stored in the database
class PostStore {
  private db: Connection;

  constructor(db: Connection) {
    this.db = db;
  }

  async store(
    jobTitle: string,
    description: string,
    salary: number,
    workType: WorkType,
    posterId: number,
    approvedAt?: Date
  ): Promise<Post> {
    const [result]: any = await this.db.query(
      "INSERT INTO post (job_title, description, poster_id, salary, work_type, approved_at) VALUES (?, ?, ?, ?, ?, ?)",
      // pass NULL rather than undefined for posts awaiting moderation
      [jobTitle, description, posterId, salary, workType, approvedAt ?? null]
    );
    return {
      id: result.insertId,
      jobTitle,
      description,
      salary,
      workType,
      posterId,
      approvedAt,
    };
  }
}
Notice these classes are incredibly simple — in fact, they're simple enough to not need to be classes at all. You could write a function returning plain-old JavaScript objects or even "function factories" to inject dependencies into your functional units. Personally, I like to use classes, as they make it very easy to associate a set of methods with their dependencies in a logical unit.
But JavaScript was not born as an object-oriented language, and many JS and TS developers prefer a more functional or procedural style. Easy! Let's use a function that returns a plain object to achieve the same goal:
// Service object for managing business logic surrounding posts
export function PostService(userStore: UserStore, postStore: PostStore) {
  return {
    store: async (
      jobTitle: string,
      description: string,
      salary: number,
      workType: WorkType,
      posterId: number
    ) => {
      const user = await userStore.findById(posterId);

      // if the posting user is trusted, make the job available immediately
      const approvedAt = user.approved ? new Date() : undefined;

      const post = await postStore.store(
        jobTitle,
        description,
        salary,
        workType,
        posterId,
        approvedAt
      );

      return post;
    },
  };
}
One disadvantage of this approach is that there isn't a well-defined type for the service object that's returned. We need to explicitly write one and mark it as the return type of the function, or use TypeScript utility types elsewhere to derive the type.
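For instance, a quick sketch of both options (the type names here are my own, not from the code above):

// option 1: derive the type from the factory itself
type PostServiceType = ReturnType<typeof PostService>;

// option 2: declare the contract explicitly and use it as the factory's return type
interface IPostService {
  store(
    jobTitle: string,
    description: string,
    salary: number,
    workType: WorkType,
    posterId: number
  ): Promise<Post>;
}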
We're already starting to see the benefits of separation of concerns here. Our business logic now depends on the abstractions of the persistence layer rather than the concrete database connection. We can assume the persistence layer will work as expected from inside the post service. The only job of the business layer is to enforce business logic, then pass persistence duty off to the store classes.
Before testing the new code, we can rewrite our handler function with injected dependencies using a very simple function factory pattern. Now, this function's only job is to validate an incoming request and pass it off to the application's business logic layer. I'll spare you the boredom of the input validation since we should be using a well-tested third-party library for this anyway.
export const StorePostHandlerFactory =
  (postService: ReturnType<typeof PostService>) =>
  async (req: Request, res: Response) => {
    const input = req.body.input;

    // validate input fields ...

    try {
      const post = await postService.store(
        input.jobTitle,
        input.description,
        input.salary,
        input.workType,
        Number(req.get("user-id"))
      );
      res.status(200);
      res.json(post);
    } catch (error: any) {
      // fall back to a 500 for errors that don't carry an HTTP status
      res.status(error.httpStatus ?? 500);
      res.json({ error });
    }
  };
This function returns an Express handler function with all contained dependencies. We call the factory with the required dependencies and register it with Express, just like our previous inline solution.
app.post("/", StorePostHandlerFactory(postService));
I feel pretty comfortable saying the structure of this code is more logical now. We have atomic units, be they classes or functions, that can be tested independently and re-used when needed. But have we measurably improved the testability of the code? Let's try writing some tests and find out.
Testing Our New Units
Observing the Single Responsibility principle means that we only unit test the one purpose a unit of code fulfills.
An ideal unit test for our persistence layer does not need to check that primary keys increment correctly. We can take the behavior of lower layers for granted or even replace them entirely with hard-coded implementations. In theory, if all our units behave correctly on their own, they will behave correctly when they compose (though this is obviously not always true — it's the reason we write integration tests).
Another goal we mentioned is that unit tests shouldn't have side effects.
For persistence layer unit tests, this means that our development database is not affected by the unit tests we run. We can accomplish this by mocking the database, but I would argue that containers and virtualization are so cheap today that we may as well just use a real, but different, database for testing.
In our original example, this would be impossible without altering the app's global configuration or mutating a global connection variable in each test. Now that we're injecting dependencies, though, it's actually really easy:
describe("PostStore", () => { let testDb: Connection; const testUserId: number = 1; beforeAll(async () => { testDb = await createConnection("mysql://test_database_url"); }); it("should store a post", async () => { const post = await postStore.store( "Senior Node.js Engineer", "Lorem ipsum dolet...", 78500, WorkType.REMOTE, testUserId, undefined ); expect(post.id).toBeDefined(); expect(post.approvedAt).toBeFalsy(); expect(post.jobTitle).toEqual("Senior Node.js Engineer"); expect(post.salary).toEqual(78500); }); });
With only a few lines of setup code, we're now able to test our persistence code against a separate, isolated test database.
Mocking on the Fly with Jest
But what if we want to test a unit in a "higher" layer, such as a business layer class? Let's look at the following scenario:
Given the job listing data from a user who is not pre-approved for immediate publishing, the post service should store a post with a null approved_at timestamp.
Because we're only testing business logic, we don't need to test the process of storing or pre-approving an application user. We don't even need to test that the job posting is actually stored in an on-disk database.
Thanks to the magic of runtime reflection and the underlying dynamic nature of JavaScript, our testing framework will likely let us replace those components with hard-coded "mocks" on the fly. Jest, a popular JavaScript testing library, comes with this functionality baked in, and many other libraries provide it as well (such as SinonJS).
Let's write a test for this scenario, isolating it from any actual persistence or database logic using some simple mocks.
describe("PostService", () => { let service: ReturnType<typeof PostService>; let postStore: PostStore; let userStore: UserStore; const testUserId = 1; beforeAll(async () => { const db = await createConnection("mysql://test_database_url"); postStore = new PostStore(db); userStore = new UserStore(db); service = PostService(userStore, postStore); }); it("should require moderation for new posts from unapproved users", async () => { // for this test case, the user store should return an unapproved user jest .spyOn(userStore, "findById") .mockImplementationOnce(async (id: number) => ({ id, username: "test-user", approved: false, })); // mocking the post store allows us to validate the data being stored, without actually storing it jest .spyOn(postStore, "store") .mockImplementationOnce( async ( jobTitle: string, description: string, salary: number, workType: WorkType, posterId: number, approvedAt?: Date | undefined ) => { expect(approvedAt).toBeUndefined(); return { id: 1, jobTitle, description, salary, workType, posterId, approvedAt, }; } ); const post = await service.store( "Junior Node.js Developer", "Lorem ipsum dolet...", 47000, WorkType.REMOTE, testUserId ); expect(post.id).toEqual(1); expect(post.posterId).toEqual(testUserId); }); });
Benefits of Mocking
Mocking here simply means temporarily replacing functions or class methods with predictable stand-ins (with no external dependencies), inside which we can:
- Test the data that higher layers pass in.
- Fully control the behavior of layers of code lower than the layer we are currently testing.
That last part is incredibly powerful. It allows us to do things like test whether specific types of errors return accurate HTTP status codes, without actually having to break things to create those errors.
We don't need to disconnect from the test database to test whether a connection refused error from the database results in a 500 Internal Server Error in the HTTP response. We can simply mock the persistence code that calls the database and throw the same exception we would see in that scenario. Isolating our tests and testing small units lets us test much more thoroughly, so we can be sure that the behavior higher layers depend on is correctly specified.
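As a rough sketch, assuming the handler falls back to a 500 for unexpected errors (as in the factory above) and using the supertest package to drive the request, such a test might look like this:

// simulate a database failure without touching the database
it("should respond with a 500 when the database is unreachable", async () => {
  jest
    .spyOn(userStore, "findById")
    .mockImplementationOnce(async (id: number) => ({
      id,
      username: "test-user",
      approved: true,
    }));
  jest.spyOn(postStore, "store").mockImplementationOnce(async () => {
    // the same kind of error the driver would throw on a refused connection
    throw new Error("connect ECONNREFUSED");
  });

  const app = express();
  app.use(express.json());
  app.post("/", StorePostHandlerFactory(PostService(userStore, postStore)));

  await request(app) // request() comes from supertest (an assumption of this sketch)
    .post("/")
    .set("user-id", "1")
    .send({
      input: {
        jobTitle: "Senior Node.js Engineer",
        description: "Lorem ipsum dolet...",
        salary: 78500,
        workType: "remote",
      },
    })
    .expect(500);
});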
In well-isolated unit tests, we can mock any dependency. We can replace third-party web APIs with mock HTTP clients that are faster, cheaper, and safer than the real thing. If you want to ensure your application behaves correctly when an external API has an outage, you can replace it with a dependency that always returns a 503 for a subset of tests.
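For instance, if our service depended on a (hypothetical) external job-aggregator API, the same pattern applies; the interface and class names below are illustrative:

// a hypothetical abstraction over a third-party job-aggregator API
interface JobBoardApiClient {
  publish(post: Post): Promise<{ status: number }>;
}

// a test double that simulates an outage: every call "responds" with a 503
class OutageJobBoardClient implements JobBoardApiClient {
  async publish(_post: Post): Promise<{ status: number }> {
    return { status: 503 };
  }
}

// inject OutageJobBoardClient wherever the abstraction is expected, then assert
// that our code degrades gracefully instead of crashing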
I know I'm really selling mocking here, but understanding the power of mock dependencies in small, focused unit tests was a kind of revelation for me. I'd heard the expression "don't test the framework" dozens of times, but it was only once I started mocking dependencies that I finally understood how it was possible to test only the behavior you're responsible for as a developer. It made my life much easier, and I hope this information can make yours easier, too.
A Note on Test Frameworks When Mocking Dependencies
I used Jest in the above example. However, a more universal (and in some ways superior) way of mocking dependencies in object-oriented code is through polymorphism and inheritance.
You can extend dependency classes with mock method implementations or define your dependencies as interfaces and write entirely isolated classes that fulfill those interfaces for testing purposes. Jest is just more convenient because it lets you easily mock a method once without defining new types.
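As a sketch of that interface-based approach (the IUserStore name and the in-memory fake are mine, not part of the example code):

interface IUserStore {
  findById(id: number): Promise<User>;
}

// an in-memory fake that fulfills the same contract, with no database at all
class InMemoryUserStore implements IUserStore {
  constructor(private users: Map<number, User> = new Map()) {}

  async findById(id: number): Promise<User> {
    const user = this.users.get(id);
    if (!user) throw new Error(`user ${id} not found`);
    return user;
  }
}

// PostService would then accept any IUserStore, real or fake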
Dependency Injection Libraries for TypeScript and JavaScript
Now that we're starting to think about dependencies as a sort of directed graph, you might notice how quickly the process of instantiating and injecting dependencies becomes tiresome.
Several libraries are available for TypeScript and JavaScript to automatically resolve your dependency graph. These require you to manually list the dependencies of a class or use a combination of runtime reflection and decorators to infer the shape of your graph.
Nest.js is a notable framework that uses dependency injection, with a combination of decorators and explicit dependency declaration.
For existing projects, or if you don't want the weight of an opinionated framework like Nest, libraries like TypeDI and TSyringe can help.
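As a rough sketch of what the decorator style looks like with TSyringe (the class name is mine, and UserStore and PostStore would need their own @injectable() decorators plus a registered token for the database connection; see the library's documentation for the details):

import "reflect-metadata";
import { container, injectable } from "tsyringe";

@injectable()
class PostServiceClass {
  constructor(
    private userStore: UserStore,
    private postStore: PostStore
  ) {}

  // ...the same business methods as before
}

// the container inspects constructor parameter types via reflection
// and builds the whole graph for us
const postService = container.resolve(PostServiceClass);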
Summing Up
In this post, we've taken a concrete example of an overwhelmed function and replaced it with a composition of smaller, testable units of code. Even if we achieve identical line-of-code test coverage for both versions, when tests fail in the new version, we know exactly what broke and why.
Before, we only generally knew that something broke, and we'd likely find ourselves digging through error messages and stack traces to figure out what input led to an exception, what the breaking change was, etc.
I hope this concrete example has helped to explain the two critical SOLID principles of single responsibility and dependency inversion.
It's worth noting that this is not the hammer for every nail. Our end goals are maintainability and reliability, and simple code is easier to maintain. Inversion of control is a great tool for managing complexity, but it is not a reason to introduce undue complexity to a simple program.
Until next time, happy coding!
P.S. If you liked this post, subscribe to our JavaScript Sorcery list for a monthly deep dive into more magical JavaScript tips and tricks.
P.P.S. If you need an APM for your Node.js app, go and check out the AppSignal APM for Node.js.