Jest, Typescript, and unit testing

Simon Lutterbie
Feb 5, 2021


This article chronicles a small part of my journey towards creating a full website that can be rendered in three parallel frameworks and three parallel design systems (React + Material UI; Angular + Clarity; “Vanilla” JS + Carbon).

← Previous article: Code quality: Prettier and ESLint

→ Next article: HTML templates in webpack

I am a strong advocate not only for writing quality code, but also for ensuring its correct behavior through effective testing. When I started this project, one of my first tasks was to switch from Javascript to Typescript. While Typescript is not commonly thought of as a testing framework, it effectively provides static testing: it ensures that variable types are consistent, and that functions have clear contracts regarding the data types they accept as parameters and provide via return values. Additionally, Typescript ensures that a function’s “type contract” is respected whenever it is called: no more writing a function that expects a number parameter and trying to pass it a string!

But static testing only goes so far. To ensure a given function does what you expect it to, you need unit testing. A good unit test should:

  • Test the inputs & outputs of the function, not the implementation. The ends justify the means. And you don’t want an internal refactor to break your tests. If the function is still functional, your tests should still pass.
  • Test potential variations in the parameters of a function. Have an optional parameter with a default value? Test both the default and provided-value cases. Have a parameter that accepts multiple types? Test each type.
  • Test reasonable edge cases. Static testing goes a long way to reducing edge cases, but they still exist. A common edge case would be expecting a string, and receiving an empty string.
  • Where possible, test the generic case rather than a specific case. I’ll discuss this further below, with the implementation of faker.

As much of a stickler as I am for test coverage, I opted not to write tests for all my functions at this point. This is primarily because I plan on significantly refactoring, or even removing, many of them in the very near future. But I did want to ensure testing was up and running correctly in my project, so I focused on writing tests for my view utility functions, which are the building blocks of my primary makeView functions.

First, I installed the packages necessary for jest, and for integrating babel in order to handle my Typescript codebase.

npm i -D jest babel-jest @types/jest
# Already installed:
# @babel/core @babel/preset-env @babel/preset-typescript
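For reference, a babel.config.js wiring those presets together looks roughly like this; it’s a sketch of the standard jest-via-babel setup, not necessarily my exact file:

// babel.config.js: compile for the current Node version and strip Typescript types
module.exports = {
  presets: [
    ['@babel/preset-env', { targets: { node: 'current' } }],
    '@babel/preset-typescript',
  ],
};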

I then created my jest.config.js file:

A short jest.config.js file, pointing to my source code and prioritizing Typescript files

clearMocks: true and verbose: true are instances in which I chose to override the default settings. moduleFileExtensions is an ordered list that jest uses to find imports. By putting Typescript files first in the list, I (slightly) improve test speed. And rootDir points to my source code, so jest knows it needn’t spend time looking for tests elsewhere.
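Put together, the file looks roughly like this; the exact extension order and rootDir value here are approximations rather than my config verbatim:

// jest.config.js
module.exports = {
  clearMocks: true, // reset mock state between every test
  verbose: true, // report each individual test in the output
  // Typescript files first, so jest resolves them ahead of any JS counterparts
  moduleFileExtensions: ['ts', 'js', 'json'],
  // Only look for tests inside the source directory
  rootDir: './src',
};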

Because my project is in Typescript and I plan to write my tests in Typescript as well, I needed to make sure jest type definitions would be accepted. This was achieved by adding jest to my tsconfig.json types array, which now includes ['node', 'jest'].
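In tsconfig.json, that change is a single line (other compiler options omitted):

{
  "compilerOptions": {
    "types": ["node", "jest"]
  }
}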

I then updated the scripts in my package.json to support testing. I’ve highlighted the new or updated scripts, below:

package.json scripts for running tests in development and production

For development, I have jests: jest --watch, which maintains an open session of jest and re-runs relevant tests when it detects a change in my source or test files. Very useful for rapid feedback, and for Test-Driven Development (TDD), my preferred approach to programming.

I then created the test:unit: jest command, and test: run-s test:unit. This may seem redundant at the moment, but I eventually plan to implement Cypress for integration tests, so I’m setting the stage for that now. As with functions, I find npm scripts are easiest to understand when broken into individual steps and then logically composed. It also aids in debugging, should a mysterious fault arise in the build process.

Finally, I added testing to my overall build process: build: run-s lint type test build:clean webpack:

  1. Catch syntax and code formatting errors
  2. Catch Type errors (static testing)
  3. Run unit tests
  4. Prepare to build
  5. Build
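Pulling those scripts together, the relevant slice of package.json looks something like this (run-s comes from the npm-run-all package; the lint, type, build:clean, and webpack scripts are defined elsewhere in the file):

{
  "scripts": {
    "jests": "jest --watch",
    "test:unit": "jest",
    "test": "run-s test:unit",
    "build": "run-s lint type test build:clean webpack"
  }
}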

With setup complete, it was time to write some tests! The full set of tests I wrote is available in ./src/views/views.utils.test.ts on GitHub; for now, I’ll review just one of the more interesting test suites, which covers makeList(items: string[], ordered = false): HTMLOListElement | HTMLUListElement.

Here’s the original function:

My makeList() function. The code picture can be found <<here, in my GitHub repo>>.
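The shape of the function is roughly the following; this is an approximation, and the real code lives in the repo linked above:

function makeList(
  items: string[],
  ordered = false
): HTMLOListElement | HTMLUListElement {
  // Choose the list element type based on the ordered flag
  const list = ordered
    ? document.createElement('ol')
    : document.createElement('ul');

  // One <li> per item, in the order provided
  items.forEach((item) => {
    const listItem = document.createElement('li');
    listItem.textContent = item;
    list.appendChild(listItem);
  });

  return list;
}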

And its associated unit tests:

Unit tests for makeList(). The code picture can be found <<here, in my GitHub repo>>.

makeList takes two parameters:

  1. items: string[]. I know, not very flexible, as list items can contain almost any content type. But it was a quick implementation… and easier to test!
  2. ordered: boolean = false. An optional parameter which defaults the function to returning an unordered list, but will return an ordered list upon request.

What are my expectations regarding the inputs and outputs of this function? These are the things I want to test:

  • It should return an <ul> or <ol> list element, depending on the value provided to the ordered parameter.
  • The list element should include the same number of child elements as there are array items in the items parameter.
  • Each string in the items array should be found in a <li> within the returned element, in the same order.

For the purposes of testing, how the list element is generated is largely irrelevant. As long as the function adheres to the above contract, it should continue to work within my larger application, and that’s the purpose of the test — if you change the function but don’t break the tests, your changes should therefore not break the larger application. Of course this assumes your tests are comprehensive… but even 80% confidence is a LOT better than refactoring blind.

My first unit test confirms the function returns the correct type of list. items is held constant, as it doesn’t have a bearing on the outcome. I create two test cases, one for ordered = false and one for ordered = true. For each test case, I specify the expected element type. I then iterate through the test cases, confirming that the overall element type is as expected. Test complete.

The next test evaluates the contents of the list. This test is a bit more involved and, somewhat sneakily, tests both the number and content of the list items. Ideally, each test should contain a single assertion; however, seeing as the two assertions (the number and content of a set of list items) are closely intertwined, I took the shortcut of examining both within the same test.
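In rough form, the two tests look like this; it’s an approximation of the real suite, the import path is assumed, and the second test generates its input with faker, which I discuss next:

import faker from 'faker';
import { makeList } from './views.utils'; // assumed path

describe('makeList', () => {
  // Held constant: the items have no bearing on which element type is returned
  const items = ['one', 'two', 'three'];

  const testCases = [
    { ordered: false, expectedTag: 'UL' },
    { ordered: true, expectedTag: 'OL' },
  ];

  testCases.forEach(({ ordered, expectedTag }) => {
    it(`Should return a <${expectedTag.toLowerCase()}> element when ordered is ${ordered}`, () => {
      expect(makeList(items, ordered).tagName).toEqual(expectedTag);
    });
  });

  it('Should render one <li> per item, with matching content, in order', () => {
    const randomItems = Array.from({ length: 5 }, () => faker.lorem.word());
    const list = makeList(randomItems);

    expect(list.childNodes.length).toEqual(randomItems.length);
    randomItems.forEach((item, index) => {
      expect(list.children[index].textContent).toEqual(item);
    });
  });
});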

To test the contents of the list, I used a package called faker, which generates a wide range of “fake” random data for testing and development purposes. Using faker is a personal choice, and I doubt every test-writer will agree.

On one hand, a good test should always pass, or always fail. “Occasionally failing” is called “test flake”, and indicates there’s something wrong with the test. By this logic, using different data each time creates an opportunity for flake, and thus one should use consistent data.

On the other hand (the hand I prefer), as long as the fake data matches the type of input the function expects, then it should be tested on arbitrary data of that type, not just a static value. From this perspective, any test flake would likely indicate a failure in the function — the very thing testing is designed to detect! For example, consider the following function, and its associated test:

function capitalizeWord(word: string): string {
  return word.slice(0, 5).toUpperCase();
}

describe('capitalizeWord', () => {
  it('Should capitalize the provided word', () => {
    expect(capitalizeWord('Hello')).toEqual('HELLO');
  });
});

The above test will pass. But the function would convert “goodbye” into “GOODB” — probably not the intended output of capitalizeWord.

To make the point even more clearly (and with a more contrived example!), I’m going to refactor capitalizeWord in a way that doesn’t break its test:

function capitalizeWord(word: string): string {
  return 'HELLO';
}

describe('capitalizeWord', () => {
  it('Should capitalize the provided word', () => {
    expect(capitalizeWord('Hello')).toEqual('HELLO');
  });
});

You could deliver the user a screen full of “HELLO”, and your tests would never know the difference… that’s why I support using arbitrary (typed) data in tests.
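For example, a version of the capitalizeWord test built on arbitrary data would catch both broken implementations above; a sketch, assuming faker is available:

import faker from 'faker';

it('Should capitalize an arbitrary word', () => {
  const word = faker.lorem.word(); // different input on every run
  expect(capitalizeWord(word)).toEqual(word.toUpperCase());
});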

This particular round of testing led to one final, and unexpected, adventure.

My initial site implementation, the one I set out to test, relied heavily on the following pair of functions: renderView(view: ViewFn) and clearMainWrapper().

(View the original functions on GitHub)

clearMainWrapper is a helper function that removes all content from the <div id="section-content"> element from index.html, and is called within renderView, which then inserts the new view into the same div.
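My best reconstruction of that original pair looks something like the following; the exact ViewFn type and DOM calls are assumptions, but the hard-coded id and the child-by-child removal loop are the important parts:

type ViewFn = () => HTMLElement; // assumed shape

function clearMainWrapper(): void {
  const wrapper = document.getElementById('section-content'); // hard-coded id
  // Remove each child individually; awkward to test, as noted below
  wrapper?.childNodes.forEach((childNode) => {
    wrapper.removeChild(childNode);
  });
}

function renderView(view: ViewFn): void {
  clearMainWrapper();
  // Implicitly relies on the same hard-coded id
  document.getElementById('section-content')?.appendChild(view());
}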

Writing tests for clearMainWrapper proved difficult and annoying, for (at least) three reasons:

  1. The wrapper?.childNodes.forEach((childNode) => {...}) loop proved difficult to test successfully, suggesting it is a rickety implementation.
  2. It relies on the existence of <div id="section-content">, or does nothing. Such hard-coding is an example of “code smell”, or bad practice.
  3. The reliance on <div id="section-content"> means that renderView was also implicitly reliant on the same element, which is even worse because it appears nowhere in the function!

After more wrestling with tests than I care to admit, I realized it would be much easier to simply clear all content from <div id="section-content"> than attempt to remove each child individually… and that I could pass in an arbitrary id, rather than hard-code it. But that turned clearMainWrapper() into the following function:

function clearMainWrapper(id: string): HTMLElement {
  const wrapper = document.getElementById(id);
  wrapper.innerHTML = '';
  return wrapper;
}

Better… but why is clearMainWrapper returning wrapper at all? That behavior isn’t implied by the function’s name (a seemingly strange but very useful guide to follow), and clearMainWrapperAndReturn would be a cumbersome name.

This led me to realize it would actually be cleaner to handle what clearMainWrapper did directly inside renderView. The refactored function is as follows:

(View the refactored function on GitHub)

This makes sense! You give renderView a view to render, and a place to render it. renderView clears away whatever used to be there, and renders the new view in its place. Exactly what you’d expect, and much easier to test. I could even take this a step further and make the “clean first” step optional, but that’s a feature I’m unlikely to require so, until I do, it’s just a complication.
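As a sketch, the refactored function is along these lines; the parameter names and ViewFn type are my assumptions rather than the exact code from the repo:

type ViewFn = () => HTMLElement; // assumed shape

function renderView(view: ViewFn, wrapperId: string): void {
  const wrapper = document.getElementById(wrapperId);
  if (!wrapper) return;

  wrapper.innerHTML = ''; // clear away whatever used to be there
  wrapper.appendChild(view()); // render the new view in its place
}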

In the end, implementing unit tests for my project not only increased my confidence in my code as expected, it actually led to a significant and positive refactor!

Written by Simon Lutterbie

Senior Frontend Engineer committed to creating value and being a force-multiplier. Typescript, React, GraphQL, Cypress, and more. Also: PhD in Social Psychology
