BY CHRIS NEALE
One of the most important metrics in any web application is the “time to interactive” (TTI): the number of milliseconds from the user clicking a link to the application being ready to actually use. The lower the TTI score, the better. A low TTI means the app loads and starts quickly, it can improve the app’s SEO ranking in Google if it’s publicly available, and it has a huge impact on the user’s perception of quality.
The TTI score for a web application is affected by many different factors, from the speed of the user’s internet connection to the number and size of the files the application has to download. A slow application might take 20,000 milliseconds to start, a full 20 seconds the user spends waiting before they can actually use the application. One that starts in a couple of hundred milliseconds feels incredibly snappy and quick, and will lead to much happier customers.
Some of the factors that affect an application’s TTI score are outside of the control of a web development team. We can’t upgrade users’ computers or buy them faster broadband, and in some instances we have no choice about how, or where, an application is hosted. These factors limit what we as developers can do to make sure the web apps we build are fast.
Within the scope of what we do control, we can and should do everything possible to reduce the TTI and provide a better experience for our users.
One of the things that has a huge impact on the TTI score is simply the size of the initial download that the user needs to get a working application. This value, which is usually measured in kilobytes, is known as the “page weight”.
Every time a new piece of code is introduced into the application, whether it’s code a developer has written themselves or a package that’s been installed, the page weight increases and download times get longer for the user. Adding unnecessary code to a page is bad for the TTI metric.
All is not lost though if you have a large and complicated application. There is a way to add more code without increasing the TTI.
Today’s complex web applications are made up of dozens, even hundreds, of components that are used to compose the user interface. Some of these components are essential to the app and need to be included on the very first visit to the page. However, some components aren’t used until the user interacts with the application in a specific way.
For example, the user might not be able to click an export button and see the export dialogue until they have already created something to export. In cases like this we can reduce the time the browser takes to reach an interactive state by removing the code for the export dialogue from the initial download and telling the user’s browser to download it only when it’s needed. For features the user clicks only occasionally, they might go through an entire session without downloading that code at all. That’s a win for them, because they’ve not wasted time waiting, and a win for the application server, because it hasn’t done unnecessary work or used bandwidth on a feature the user didn’t need.
In an application that uses the React library for component composition, lazy loading is very straightforward. Since React 16.6 the library has included two tools that are needed – the React.lazy() wrapper that loads a component from the server when it’s needed, and the <Suspense> component that displays something while the user’s browser is waiting for the component code to download.
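As a sketch of how the two fit together, suppose the export dialogue from the earlier example lives in its own module (the ExportDialog component and its file path here are hypothetical):

```javascript
import React, { Suspense, useState } from "react";

// React.lazy() takes a function that performs the dynamic import().
// The download doesn't start until the component is first rendered.
const ExportDialog = React.lazy(() => import("./ExportDialog"));

function App() {
  const [showExport, setShowExport] = useState(false);
  return (
    <div>
      <button onClick={() => setShowExport(true)}>Export</button>
      {showExport && (
        // While the ExportDialog code is downloading, the fallback is shown.
        <Suspense fallback={<p>Loading…</p>}>
          <ExportDialog />
        </Suspense>
      )}
    </div>
  );
}
```

Until the user clicks “Export”, the code for the dialogue is never requested from the server.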
The <Suspense> component takes a single prop – ‘fallback’. This needs to contain the component that is shown to the user while the lazy loading is happening in the background. It could be a simple text node, a loading spinner, or a skeleton of the component that’s going to replace it.
Another tremendous benefit of using React.lazy() is that Webpack, a commonly used bundler that builds the JS files that drive an application, recognises the dynamic import() call that React.lazy() wraps. This means no configuration is needed to split the lazily loaded components into their own individual bundles, separate from the main bundle.
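By default Webpack gives these split bundles generated names. If you’d like readable file names in the browser’s network tab, Webpack’s “magic comment” syntax lets you name the chunk yourself (the component and chunk name below are illustrative):

```javascript
// The webpackChunkName comment tells Webpack what to call the split bundle,
// e.g. export-dialog.js instead of a numeric id.
const ExportDialog = React.lazy(() =>
  import(/* webpackChunkName: "export-dialog" */ "./ExportDialog")
);
```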
Using lazy loading on a moderately large application can reduce the amount of code needed for the initial page load by as much as 80%, and take the Time To Interactive score down by hundreds of milliseconds. That benefits the user considerably.
Caveats for Lazy Loading
Lazy loading is not without its downsides though. There are a few things that need to be considered when you’re building an application that doesn’t download all of the code that might be needed right at the start.
By using lazy loading an application is prioritising the initial load at the cost of short delays later when components are needed. For some applications where the user needs to have access to things quickly, e.g. a medical diagnosis application, that could present a critical problem. Careful thought is needed about which specific components can safely be loaded later.
The second problem with lazy loading components in a web application is the assumption that the user is on a robust, always-available internet connection, which isn’t necessarily the case. A mobile user might start the application while connected to wifi, then try to use a lazily loaded component later, when they’re in an area with no connection. That component won’t be available until they’re online again. Not only does this need to be accounted for in the user interface design for good user experience, there also needs to be a mechanism in the code to cope with a component failing to download. Leaving the user staring at a skeleton component or a loading spinner is not a good experience.
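One common mitigation is to retry the dynamic import a few times before surfacing the failure to the user. A minimal, framework-agnostic sketch (the retryImport helper is our own invention, not part of React or Webpack):

```javascript
// Retry a dynamic import (or any promise-returning loader) a few times,
// waiting between attempts, before surfacing the failure to the UI.
function retryImport(loader, retries = 3, delayMs = 500) {
  return loader().catch((err) => {
    // Out of attempts: rethrow so an error boundary can show a useful message.
    if (retries <= 0) throw err;
    return new Promise((resolve) => setTimeout(resolve, delayMs)).then(() =>
      retryImport(loader, retries - 1, delayMs)
    );
  });
}

// Usage with React.lazy – the wrapper is transparent to React:
// const ExportDialog = React.lazy(() => retryImport(() => import("./ExportDialog")));
```

If every retry fails, the error still propagates, so the application needs an error boundary (or equivalent) around the Suspense wrapper to tell the user what went wrong.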
The final caveat is that writing a lazily loaded component is slightly more complex than writing a simple component loaded as part of the main app bundle, which increases the time it takes to develop the code. Whether that additional time is worthwhile depends on the application, but it often is. Users’ expectations of how quickly something should load are demanding, and lazy loading is a good route to meeting them.
We help teams across the UK deliver successful digital projects. If you’d like to have a conversation with us, we’ll put the kettle on.