Fine-tuning your eslint configuration

Last week, we introduced eslint and how it can help improve our code by identifying places where we have dead code or don’t follow best practices. Sometimes, we “break” some of these rules on purpose or decide to adopt a different convention, which is perfectly fine.

In that case, instead of giving up on eslint entirely, a better idea is to change its configuration to tweak the severity of a rule or even disable it. An eslint rule has three different severity settings:

  • “off” or 0 – turns the rule off
  • “warn” or 1 – turns the rule on as a warning (doesn’t make the lint command fail)
  • “error” or 2 – turns the rule into an error (makes the lint command fail with exit code 1 – a good option to fail a continuous integration build)

Such severity tweaks can be made in the .eslintrc.json file created in your project by the Angular schematics:
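For instance, a simplified override along these lines (the exact rule names here are just illustrative) goes in the overrides entry that targets *.ts files:

{
  "overrides": [
    {
      "files": ["*.ts"],
      "rules": {
        "@typescript-eslint/no-explicit-any": "error",
        "@typescript-eslint/ban-ts-comment": "error",
        "no-var": "off"
      }
    }
  ]
}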

In the above example, I made the first two rules throw an error instead of a warning (I’m very much against disabling type-checking in TypeScript), but I’m OK with seeing some var keywords instead of let, so I turned off that third rule.

Getting the rule’s name is easy: when the linter fails, that name is displayed in the console. Here, it is @typescript-eslint/no-empty-function:

Some rules accept more configuration options to create an allowlist of accepted values. For instance, @angular-eslint/no-input-rename prevents you from renaming @Input values, but you can specify a config option that allows a few input names:
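As an illustration, here is a hypothetical component with a renamed input that the rule would flag by default:

import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-box',
  template: '<p>Box works!</p>'
})
export class BoxComponent {
  // Renaming the "checked" input to "check" is flagged by @angular-eslint/no-input-rename
  @Input('check') checked = false;
}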

The config for that rule becomes an object that looks like this:
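Assuming the allowedNames option from the rule’s documentation:

"rules": {
  "@angular-eslint/no-input-rename": [
    "error",
    { "allowedNames": ["check", "test"] }
  ]
}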

The above config allows renaming inputs only if the new name is check or test. This gives you more flexibility than turning off a rule entirely if it’s too restrictive for you.

Improve your code with eslint

eslint is a popular linter that parses your code and outputs a list of warnings and errors to help you improve it. The library is designed to lint JavaScript code, and there are extra plugins for TypeScript and Angular, so we can get even more specific feedback for our components and services. Here is an example of linting output:

A linter is a perfect complement to a compiler. For instance, angular-eslint, the eslint plugin for Angular, will also look at your HTML templates and flag code that doesn’t follow the Angular style guide. It also looks for possible mistakes, such as getting the ngModel 2-way data-binding syntax wrong:
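For instance, a hypothetical template where the “banana in a box” syntax is written backwards gets flagged:

<!-- Flagged: the parentheses and brackets are inverted -->
<input ([ngModel])="userName" />

<!-- Correct 2-way data-binding -->
<input [(ngModel)]="userName" />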

To give you a better idea, here is the list of all the template rules and all the Angular TypeScript rules. If you want to give eslint a try, the first step is to install it with the help of some schematics:
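ng add @angular-eslint/schematics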

This will download the proper dependencies and plugins and set up everything necessary to lint your code. If you’re using an older version of Angular or building a library instead of an app, there are step-by-step instructions to follow here. Once the set-up is done, all you have to do is run:
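ng lint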

This command will parse all your files and output feedback in the console. Note that several IDEs can detect your eslint config and suggest automatic fixes to linting errors, which is even better!

Use Lighthouse to improve your Angular applications

As Angular developers, we tend to focus on component architecture, modules, TypeScript, and making the best use of the framework. Most of the time, those things differ from what matters to end users.

End users usually want:

  • Performance – 60% of the web’s traffic happens on smartphones that don’t always have fast internet connections.
  • Accessibility – Is your website accessible to everyone? Did you check whether color-blind people can see it correctly? Do you provide alternative text for images and labels for buttons so that screen readers can read the content to blind users?

And, of course, if your website is supposed to be discoverable on the web, there’s search engine optimization (SEO).

The best way to know how you’re doing in all these categories (and more) is to use Lighthouse, a feature built into the Google Chrome browser. It’s available as a tab in the dev tools:

Navigate to your web app (a public URL is needed), open Lighthouse in the dev tools, and click the “Analyze page load” button. Note that you can also simulate a mobile device to get a different report. You’ll get a report with scores in all these categories:

Clicking on any of the scores gives you a TODO list of possible improvements. You can expand every item to get more information about what to fix, how to do it, and why it’s important:

The nice thing about Lighthouse is that once you have improved your app, it takes just a few seconds to test your website again and see your scores increase.

Analyzing your bundle size

We covered build budgets and how they can help keep your application performant by detecting when a new dependency dramatically increases the size of your build output.

If you want to look at your build bundle and determine which dependency/module is the biggest, there’s another tool at your disposal: The Webpack Bundle Analyzer.

To install it, use the following npm command:

npm install --save-dev webpack-bundle-analyzer

Then, run your Angular build with the option that generates build statistics:

ng build --stats-json

Finally, run the Webpack Bundle Analyzer to read those build stats and give you a visual output of that bundle:

npx webpack-bundle-analyzer dist/stats.json

This command opens a new tab in your browser with an interactive treemap of your bundle. Each rectangle represents a Javascript module. The bigger the rectangle, the bigger the module.

In the above example, we can see that the application code (main.js – in green) is a lot smaller than the application dependencies.

For instance, we could improve this project by removing polyfills.js, as these polyfills were included to maintain compatibility with the now-retired Internet Explorer, and they take more space than our application code!

Browser dev tools for LocalStorage

Last week, I introduced localStorage and sessionStorage. I also suggested a few options to get notifications when such storage gets updated.

Today, I will cover how we can visualize and debug storage data. In Google Chrome, open the dev tools (right-click on the webpage, then “inspect” or press Ctrl + Shift + I).

Once the dev tools panel shows up, click on the “Application” tab:

You can select Local Storage or Session Storage on the left-hand side. This shows the contents of that storage as key-value pairs on the right-hand side. Storage is scoped per domain, so using these tools on a different website would show different data.

If an object is stored as a JSON string, you can click on the key-value row, and Chrome shows a collapsible version of that Javascript object at the bottom of the screen, making it easy to explore complex objects:
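For instance, storing a hypothetical user object turns it into a JSON string that Chrome can then pretty-print:

localStorage.setItem('user', JSON.stringify({ name: 'Jane', admin: false }));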

There are two buttons in the top right corner of that same panel. One allows you to clear the entire storage for the current domain, while the second one (the X) clears the currently selected key-value pair:

You can edit a key or value by double-clicking on the corresponding cell in the right panel’s table. This makes testing different values, resetting a cache, or debugging specific scenarios easy.

Notifications from LocalStorage with Signals

Yesterday, we saw that LocalStorage can be used as a persistent cache to store our data in the browser. Earlier, we covered that services are a cache of their own but have one instance per app/browser tab, which means that applications opened in multiple tabs can have an inconsistent state since they each have their own “singleton” services.

LocalStorage can be used to share data between multiple tabs that render the same application. Even better, there is a storage event we can listen to in order to know when another tab has updated LocalStorage:
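Here is a minimal sketch of such a listener (note that the storage event only fires in the other tabs, not in the tab that made the change):

window.addEventListener('storage', (event: StorageEvent) => {
  // key, oldValue and newValue describe what changed in the other tab
  console.log(`"${event.key}" changed from ${event.oldValue} to ${event.newValue}`);
});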

Using the RxJS fromEvent function, we can turn the above event listener into an Observable:
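Something along these lines (the storageChanges$ name is mine, for illustration):

import { fromEvent } from 'rxjs';

// Emits a StorageEvent whenever another tab updates LocalStorage
const storageChanges$ = fromEvent<StorageEvent>(window, 'storage');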

And if we’re using Angular 16+, we can turn the above Observable into a Signal with one more function:
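That function is toSignal from @angular/core/rxjs-interop. It has to be called in an injection context (a constructor or a class field initializer), and the resulting Signal stays undefined until the first event arrives:

import { toSignal } from '@angular/core/rxjs-interop';

// In a component or service class:
storageChange = toSignal(storageChanges$); // Signal<StorageEvent | undefined>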

The above Signal could be used in a service to synchronize data between tabs. Spying on that Signal to see what’s going on in it is as easy as registering a side-effect on it using the effect function:
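Putting it all together in a hypothetical StorageSyncService (the names are mine, not from the Stackblitz example):

import { Injectable, effect } from '@angular/core';
import { toSignal } from '@angular/core/rxjs-interop';
import { fromEvent } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class StorageSyncService {
  // Emits whenever another tab updates LocalStorage
  private storageChange = toSignal(fromEvent<StorageEvent>(window, 'storage'));

  constructor() {
    // Side-effect that runs every time the Signal receives a new value
    effect(() => console.log('Storage changed in another tab:', this.storageChange()));
  }
}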

You can see that code example on Stackblitz.

Lifecycle of Angular applications

The lifecycle of an Angular application is something that many aspiring Angular developers struggle with. People often ask me questions such as:

  • How long does this component/service/directive stay in memory?
  • How do I save the data before I navigate to the next page/view/component?
  • What happens if I open that same app in another browser tab?

Here is how to think about it:

  • When we open an app in a browser tab, we’re booting an Angular application in a self-contained memory space, similar to a virtual machine.
  • Closing that tab is equivalent to killing the application, freeing any memory associated with it, just like when you close a desktop app in your machine’s operating system.

In Google Chrome, there’s even a task manager where you can see the memory footprint and CPU usage of your tabs and browser extensions – they’re just like independent desktop apps:

From an Angular perspective, a component gets loaded in memory whenever it is displayed on the screen. That’s when its class is instantiated.

Suppose the component gets removed from the screen (by navigating to a different component or removing it with an ngIf directive, for instance). In that case, the component is destroyed, and all of its memory state is gone. The same goes for directives and pipes: They get created when used by a component template and destroyed when that component gets destroyed.

Services are different, though. A service is created by Angular when a component needs it for the first time. Then, that instance remains unique and shared between all components that inject such service. A service doesn’t get destroyed: It remains in memory as long as your app is open and you don’t close your browser or tab.
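As a hypothetical example, a root-provided service keeps its data for as long as the tab stays open:

import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' }) // one shared instance for the whole application
export class CartService {
  // Created the first time a component injects CartService;
  // survives navigation and is only gone when the tab is closed.
  items: string[] = [];
}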

This answers our three initial questions:

  • How long does this component/service/directive stay in memory?
    Components stay as long as they’re in the DOM. Services stay as long as the app is open.
  • How do I save the data before I navigate to the next page/view/component?
    You don’t have to do anything special if that data lives in a service: the service instance (and its data) stays in memory while you navigate.
  • What happens if I open that same app in another browser tab?
    You create another separate instance of everything: Components, services, etc. Both instances are independent and do not share any data or memory space.

Build size budgets

As mentioned earlier in this newsletter, the size of your Javascript matters a lot: our code has to be downloaded first, then parsed and interpreted by the browser, a process that gets slower and slower as the size of your app increases. This is why we want optimized production builds. And this is also why it’s always a good idea to keep track of the size of your production code after each build.

Fortunately, the Angular team has our back and integrated build budgets into the Angular CLI. You can use those budget settings to decide when to get a warning or even fail a build if your code becomes too big. This configuration happens in angular.json:
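Here is a sketch of the default initial budget (it lives under the production configuration of the build target; an anyComponentStyle budget is usually defined there as well):

"budgets": [
  {
    "type": "initial",
    "maximumWarning": "500kb",
    "maximumError": "1mb"
  }
]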

The above (default) budgets would warn you if your Javascript exceeds 500Kb in size and fail once the build exceeds 1Mb. Those are already part of your projects, so you don’t have to do anything to use them.

If you do continuous integration, your build will fail right after the commit that degraded your bundle size, making it easy to troubleshoot and fix the issue.

Most of the time, dependencies are the culprits. I remember coaching a client who needed some Excel-like features in the browser, and their build exploded to over 25MB because of a massive monolithic Javascript Excel library. Thanks to the build error, we knew that this library wouldn’t work, so we chose a lighter one instead.

In the past, I’ve also inherited projects where I would track our build size version after version. My client was amazed to see that despite adding features, the build was getting smaller and smaller after each release, thanks to Angular always being better at tree-shaking and having the incentive to clean up old code and make it smaller. When you start tracking a metric, you want to improve it!

Running unit tests on continuous integration servers

We talked about unit testing a couple of times last week. One of the main benefits of unit tests is that they run quickly and provide immediate feedback. As a result, we get the best return on investment when running those unit tests on a continuous integration (CI) server after each commit.

Two challenges come with running unit tests on a CI server:

  1. The default test runner for Angular projects (Karma) opens a Chrome browser to run the tests, which is challenging on most CI servers (typically Unix-based with no UI support).
  2. The ng test command doesn’t stop on its own. It keeps watching the source code for updates and then reruns the tests.

Fortunately for us, there is a single command that addresses both problems:
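ng test --watch=false --browsers=ChromeHeadless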

Running the command will result in a single test run (thanks to --watch=false) and will run in a UI-less way (thanks to --browsers=ChromeHeadless). That’s all your CI server needs to do.

If you’re using an older version of Angular, this old tutorial of mine might help you configure Karma accordingly.

Rollbar: Error tracking and reporting

Whenever we deploy a web application to production, one of the challenges is that our code will run on several different machines in different locations. When a user reports an error, we cannot access their browser console or stack trace unless the user is techy enough to share that information with us.

Of course, we can create a generic error handler and try to reproduce an issue in our environment, but this can be challenging, as neither of these options gives us eyes on what is going on in that user’s specific browser.

This is where Rollbar shines. Rollbar is a library that can report Javascript errors to a server and create alerts/statistics/tracking of those errors over time:

In the above screenshot alone, there is a wealth of information that we would never get with console.log. For instance, the first error has happened 412 times on 72 different machines (IPs), and the last occurrence was three days ago.

Such a screen can help you identify if a new release solved an issue for all users or created a new one. Even better, Rollbar supports different environments, so you can check production, pre-prod, or staging environments for those errors, as well as filter by level, activity, etc.:

Each error contains the browser info, locale, screen resolution, stack trace, and more. Errors can also be assigned to developers for further investigation and marked as fixed or muted if the error is irrelevant.

All in all, once people start using Rollbar, it’s almost impossible to stop using it. You can see an Angular demo on Stackblitz here and read this quick set-up tutorial from the Rollbar documentation.
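To give a rough idea of that setup, it boils down to registering a custom Angular ErrorHandler that forwards errors to Rollbar. Here is a minimal sketch based on that pattern (the access token is a placeholder for the one from your Rollbar project):

import { ErrorHandler, Injectable } from '@angular/core';
import * as Rollbar from 'rollbar';

const rollbarConfig = {
  accessToken: 'POST_CLIENT_ITEM_ACCESS_TOKEN', // placeholder
  captureUncaught: true,
  captureUnhandledRejections: true,
};

@Injectable()
export class RollbarErrorHandler implements ErrorHandler {
  private rollbar = new Rollbar(rollbarConfig);

  handleError(error: any): void {
    // Report the error to Rollbar, then keep the usual console output
    this.rollbar.error(error.originalError || error);
    console.error(error);
  }
}

// Registered in your application providers:
// { provide: ErrorHandler, useClass: RollbarErrorHandler }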