Hiding fully-covered files from Jest coverage report

While working on a large JavaScript codebase, one thing that bothered me was the coverage report output to the console: as most files had 100% coverage, it was difficult to spot the few exceptions in the table.

Luckily, additional options can be passed to Istanbul reporters. I couldn’t find documentation for the text reporter, so I dug into its code and found the skipFull option:

{
   "coverageReporters": [
-    "text"
+    ["text", { "skipFull": true }]
   ]
 }

This hides all rows with full coverage, letting you focus on what matters most: partially covered or entirely uncovered files.

I also recommend jest-silent-reporter for an even quieter output (especially in CI builds) and jest-it-up to automatically bump up global Jest thresholds.
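To give an idea of the setup, enabling jest-silent-reporter is a one-line change to your Jest configuration (a sketch based on the package’s README—check it for the current options):

```json
{
  "reporters": ["jest-silent-reporter"]
}
```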

Making optional properties nullable in TypeScript

Let’s say you follow the TypeScript project coding guidelines and only use undefined. Your types are defined with non-nullable optional properties (e.g., x?: number), but the data coming back from the API uses null instead.

You decide to write a function to strip all these null values from the response, so that they conform to your types:

function stripNullableProperties(obj) {
  // Return a new object without the null-valued properties
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== null)
  )
}

How can you strongly type such a helper without duplicating your input and output types? You could try:

function stripNullableProperties<T extends {}>(obj: T): T;

But it won’t work in strict null checking mode, since obj might have null values that are not assignable to the non-nullable optional properties in T:

type A = {
  x: number;
  y?: number;
};

stripNullableProperties<A>({
  x: 1,
  y: null // Error: Type 'null' is not assignable to type 'number | undefined'.
});

What you really need is something like:

function stripNullableProperties<T extends {}>(obj: NullableOptional<T>): T;

The NullableOptional<T> type

The NullableOptional<T> type constructs a type with all optional properties of T set to nullable:

type A = {
  x: number;
  y?: number;
};

type B = NullableOptional<A>;
// {
//   x: number;
//   y?: number | null;
// }

You won’t find NullableOptional in the TypeScript documentation, and that’s because it’s a custom type. It actually looks like this:

type RequiredKeys<T> = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K }[keyof T];

type OptionalKeys<T> = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? K : never }[keyof T];

type PickRequired<T> = Pick<T, RequiredKeys<T>>;

type PickOptional<T> = Pick<T, OptionalKeys<T>>;

type Nullable<T> = { [P in keyof T]: T[P] | null };

type NullableOptional<T> = PickRequired<T> & Nullable<PickOptional<T>>;

In short:

  1. pick the required properties from T;
  2. pick the optional properties from T and make them nullable;
  3. intersect (1) with (2).

The trick in RequiredKeys and OptionalKeys is the {} extends { [P in K]: T[K] } check: an empty object is assignable to a type with a single optional property, but not to one with a single required property, so the conditional type sorts each key into the right bucket.
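Putting the pieces together, a minimal implementation might look like the sketch below (the helper types are repeated for completeness; the as T assertion is needed because Object.fromEntries cannot track the precise result type):

```typescript
type RequiredKeys<T> = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K }[keyof T];
type OptionalKeys<T> = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? K : never }[keyof T];
type Nullable<T> = { [P in keyof T]: T[P] | null };
type NullableOptional<T> = Pick<T, RequiredKeys<T>> & Nullable<Pick<T, OptionalKeys<T>>>;

// Copy every property whose value is not null into a new object.
function stripNullableProperties<T extends {}>(obj: NullableOptional<T>): T {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== null)
  ) as T;
}

type A = {
  x: number;
  y?: number;
};

const a = stripNullableProperties<A>({ x: 1, y: null });
// a is typed as A and holds { x: 1 }
```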

With this, you can strip all null-valued properties from an object whose interface should only have non-nullable optional properties, while still ensuring type safety:

type A = {
  x: number;
  y?: number;
};

stripNullableProperties<A>({
  x: 1,
  y: null
});
// {
//   x: 1
// }: A

You could always do a type assertion and avoid all this trouble, but ensuring type safety—even in seemingly harmless cases like this one—pays off in the long run.

Automating tests for the GTM data layer

At Travix we are constantly analyzing application and user behavior on our websites in order to offer the best experience to our customers. One of the tools employed for this purpose is Google Tag Manager (also called GTM), alongside a data layer. A data layer is a JavaScript object that is used to pass information from the website to the Tag Manager container[1], like product views and purchases.

While implementing new features and refactoring parts of the frontend application, we frequently faced an issue: some GTM events would go missing, be duplicated, dispatch at the wrong time, or lack important dimensions. This heavily impacted our ability to analyze the data, and thus demanded a lot of manual testing to ensure that everything was working as intended.

We already have end-to-end tests in place that perform many user interactions throughout the website, which in turn push events to the data layer. What if we could extend these tests to also check whether the information in the data layer is consistent? Since the data layer is basically an array of JavaScript objects, we can do a sort of snapshot testing, comparing the current values with what we expect them to be.

In this article, I will cover how we automated GTM data layer testing using our end-to-end test framework of choice, TestCafe. The same principles can be easily applied to other test frameworks though.

Retrieving the data layer

The data layer is assigned to a global dataLayer variable. To retrieve the data layer items in end-to-end tests, we must execute code in the browser’s context. In TestCafe, this can be done with a ClientFunction:

import { ClientFunction } from 'testcafe'

const getDataLayer = ClientFunction(() => window.dataLayer)

However, if you try to run this function in your tests, you may face the following error:

ClientFunction cannot return DOM elements. Use Selector functions for this purpose.

This happens because some events, like gtm.click, contain references to DOM nodes, which cannot be serialized. One way to fix this is to traverse all items and remove any such references before returning the data. I will leave this as an exercise, mainly because the problem went away once we started filtering out default GTM events, as I will explain in the next section.
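If you do need those events, one possible starting point for that exercise is a helper that drops anything resembling a DOM node before the data crosses the serialization boundary. This is only a sketch, under the assumption that DOM references appear as direct property values exposing a numeric nodeType; nested references would need a recursive version:

```typescript
// Sketch: remove properties whose values look like DOM nodes (objects with a
// numeric nodeType), so the remaining items can be serialized and returned
// from the browser context.
function stripDomReferences(items: Record<string, unknown>[]) {
  return items.map(item =>
    Object.fromEntries(
      Object.entries(item).filter(
        ([, value]) =>
          !(
            typeof value === 'object' &&
            value !== null &&
            typeof (value as { nodeType?: unknown }).nodeType === 'number'
          )
      )
    )
  );
}
```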

Filtering out default GTM events

One thing I noticed after printing the data layer a couple of times was that default GTM events (e.g., gtm.load, gtm.click) would fire at different points in time, so their order was not deterministic and our tests would often fail. To avoid this, I decided to simply filter out these default events, since they are not very relevant to us—we care more about the custom events we fire ourselves.

All default GTM events start with gtm, so we can just ignore them with a filter on the event name:

const getDataLayer = ClientFunction(() => window.dataLayer
  .filter(({ event }) => !event.startsWith('gtm'))
)

Comparing the data layer

Now that we have the data layer in hand, we can make assertions on it. Write your reference data layer snapshot and do a deep equality check against it[2]:

import { t } from 'testcafe'

const dataLayerSnapshot = [
  { event: "productClick" },
  { event: "addToCart" },
  { event: "removeFromCart" },
  { event: "promotionClick" },
  { event: "checkout" },
  { event: "checkoutOption" }
]

await t
  .expect(getDataLayer()).eql(dataLayerSnapshot)

If the data layer does not match the snapshot, the test will fail:

AssertionError: expected [ Array(5) ] to deeply equal [ Array(6) ]

Then it is a matter of fixing the code if it is a regression issue, or (manually) updating the snapshot.

Bonus: improving test failure output

You probably noticed that the error message is not very helpful—it does not tell you exactly what the difference is between the expected and the received values.

We can work around this by doing a string comparison instead, stringifying both the data layer and the snapshot before the assertion:

const getDataLayer = ClientFunction(() => JSON.stringify(
  window.dataLayer
    .filter(({ event }) => !event.startsWith('gtm'))
))
const dataLayerSnapshot = JSON.stringify([
  { event: "productClick" },
  { event: "addToCart" },
  { event: "removeFromCart" },
  { event: "promotionClick" },
  { event: "checkout" },
  { event: "checkoutOption" }
])

Not pretty, but it does the job—although it is still a bit difficult to spot what the actual problem is:

AssertionError: expected

'[{"event":"productClick"},{"event":"addToCart"},{"event":"promotionClick"},{"event":"checkout"},{"event":"checkoutOption"}]'
   to deeply equal

'[{"event":"productClick"},{"event":"addToCart"},{"event":"removeFromCart"},{"event":"promotionClick"},{"event":"checkout"},{"event":"checkoutOption"}]'

In a follow-up post I will explain how we managed to improve this even further by using the expect module inside TestCafe for a more Jest-like assertion output.

Conclusion

Manually testing the data layer after each frontend change is a very time-consuming process. Inspection tools like dataslayer can help, but they are no match for proper automation. By leveraging end-to-end tests, we can save developers and data analysts valuable time, while being more confident that changes to the codebase will not negatively impact sales and performance tracking.


  1. See this GTM help center article for more information. ↩︎

  2. For a single-page application (SPA), this could be the very last step of the test. ↩︎

How to add a Netlify deploy status badge to your project

Ever since I moved this blog to Netlify I wanted to add a badge to the repository’s README displaying the deploy status. The Shields.io service doesn’t support Netlify badges yet, but luckily I found out that you can build dynamic badges by querying structured data from any public URL.

After digging into the Netlify REST API, I managed to make a badge that fetches all deploys for my site and extracts the status of the last deploy:

[![Deploy status](https://img.shields.io/badge/dynamic/json.svg?url=https://api.netlify.com/api/v1/sites/rbardini.com/deploys&label=deploy&query=$[0].state&colorB=blue)](https://app.netlify.com/sites/rbardini/deploys)

Which looks like this:

Deploy status

One shortcoming is that you cannot set a different color depending on the status, which is why I’m using a “neutral” blue background here. Also, I assume deploy logs must be public for the link (and possibly the badge itself) to work.
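For reference, query=$[0].state is a JSONPath expression picking the state field of the first (most recent) deploy from the array the endpoint returns, which is shaped roughly like this (an illustrative excerpt, not the full payload):

```json
[
  { "state": "ready" },
  { "state": "ready" }
]
```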