How to improve Google PageSpeed Insights score for a Next.js project

Oct 21, 2022 · Last updated on Oct 24, 2022
According to Google's statistics, 53% of mobile users abandon websites that take over 3 seconds to load. In this article, we will share our findings on what helped us improve our PageSpeed Insights (PSI) score. At Unstoppable Domains we use Next.js, but the following findings are also applicable to other frameworks.
General recommendations on what to watch at all times:
- Keep JS bundles small
- Keep a good balance between the number of bundles and their weight
- Analyze user navigation behavior and avoid loading code that isn't necessary on the initial page load
- Postpone loading Google Tag Manager (GTM) and other 3rd-party scripts until after the initial page load
- Apply Progressive Hydration to avoid unnecessary computations on page load
- A Content Security Policy can block resources, which in turn slows down the site
Keep JS bundles small
First of all, we recommend setting up webpack-bundle-analyzer and checking what ends up in the shared JS bundle (aka the common JS file). In our experience, there is always something to extract to make it lighter, and that improves every page on the site.
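For a Next.js project, a minimal setup might look like the following, assuming the @next/bundle-analyzer package is installed as a dev dependency:

```javascript
// next.config.js — sketch of a @next/bundle-analyzer setup
const withBundleAnalyzer = require('@next/bundle-analyzer')({
  // Only run the analyzer when explicitly requested
  enabled: process.env.ANALYZE === 'true',
});

module.exports = withBundleAnalyzer({
  // ...your existing Next.js config
});
```

Then run `ANALYZE=true next build` to open the interactive treemap of your bundles.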
Lodash tree-shaking is tricky, and some other libraries can be too: named imports do not always pull in only what's relevant. In Lodash's case, we recommend default imports of the individual function modules you actually use, and avoiding full-package imports. For VS Code there is also a helpful extension called "Import Cost" which can help detect unnecessarily large imports.
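For example, these three import styles can differ significantly in how much code ends up in the bundle:

```javascript
// Pulls in the entire lodash package — avoid this:
import _ from 'lodash';

// Named import — may still bundle more than expected,
// depending on how well your bundler tree-shakes lodash:
import { get } from 'lodash';

// Default import of a single function module — only bundles `get`:
import get from 'lodash/get';
```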
Optimize your code for Webpack's tree-shaking mechanism.
If your project uses TypeScript, we recommend extracting interfaces/types to external files, separate from the implementation (e.g. types.ts). Otherwise, when you later import these types/interfaces into another file, webpack brings in all the JS that lives alongside them, not only the types, which increases the percentage of unused code on pages that don't need it.
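A hypothetical example: types kept in their own module carry no runtime code, especially when combined with type-only imports, which TypeScript erases at compile time:

```typescript
// types.ts — types only, nothing ends up in the JS output
export interface User {
  id: string;
  email: string;
}

// profile.ts — `import type` is erased during compilation,
// so webpack bundles nothing from types.ts
import type { User } from './types';

export function formatUser(user: User): string {
  return `${user.email} (${user.id})`;
}
```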
Also, consider loading JS packages on demand only when they are actually needed:
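A sketch of the idea (the real implementation isn't shown here; this example assumes the Firebase v9 modular SDK and a hypothetical firebaseConfig object):

```typescript
// Hypothetical on-demand loader: webpack splits the dynamic imports into
// their own chunk, which is only fetched when getFirebase() is first called.
let firebasePromise: Promise<unknown> | null = null;

export function getFirebase() {
  if (!firebasePromise) {
    firebasePromise = Promise.all([
      import('firebase/app'),
      import('firebase/auth'),
    ]).then(([{ initializeApp }, auth]) => {
      const app = initializeApp(firebaseConfig); // firebaseConfig is assumed to exist
      return { app, auth };
    });
  }
  return firebasePromise;
}
```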
Our getFirebase function dynamically loads all necessary Firebase packages. Prior to this implementation, Firebase packages were included in the main chunk even though they were only used on the auth page. We were able to decrease our common JS bundle by 8.78% with this Firebase refactoring alone!
Keep a healthy balance between the chunks' overall weight and their count.
If you have 10 chunks of 1-5KB each, it can take longer for a browser to download all of them than a single 10-15KB chunk. The connection time for each request negatively affects the PSI score, as sometimes the connection time is longer than the actual file download time.
You can check it out in “Developer tools” -> Network tab in Chrome/Brave:
In this case, we are waiting for the server response longer than actually downloading the asset.
Minimize render-blocking scripts
Without adding extra attributes, JS files are loaded synchronously by a browser, which affects page rendering. Our goal is to minimize render-blocking resources on the initial page load, to keep high FCP and LCP scores.
Isolate critical JS into its own chunk so it is cached and loaded synchronously, since other scripts depend on it. Framework dependencies such as React can also be included in this file. For all other JS chunks, consider adding the asynchronous loading attributes on the HTML script tag: async and defer.
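For example (the chunk names here are placeholders):

```html
<!-- Critical chunk: loaded synchronously, blocks parsing -->
<script src="/critical.js"></script>

<!-- async: downloaded in parallel, executed as soon as it arrives -->
<script async src="/widget.js"></script>

<!-- defer: downloaded in parallel, executed only after HTML parsing finishes -->
<script defer src="/non-critical.js"></script>
```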
Preload relevant assets in the background
Aside from JS, there are also images, custom fonts, and CSS resources that we usually deal with.
For images, consider using highly optimized formats like .webp or .avif. This will help you serve less content over the network, which in turn speeds-up overall page load time.
As with JS, images should be loaded on demand, whenever the user scrolls to the content that brings them into the viewport. For this, the loading="lazy" attribute can be used.
For mission-critical images that are visible right away, consider using a link HTML tag with the rel="preload" attribute. Note: such tags must be added into the HEAD.
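For example (the file names are placeholders):

```html
<head>
  <!-- Fetch the above-the-fold hero image early,
       before the parser even reaches the <img> tag -->
  <link rel="preload" as="image" href="/hero.avif" />
</head>
<body>
  <!-- Below-the-fold images are only fetched as they approach the viewport -->
  <img src="/gallery-item.webp" loading="lazy" alt="Gallery item" />
</body>
```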
The same technique works for fonts:
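A preload for a font file looks like this (the path is a placeholder; note that font preloads require the crossorigin attribute even for same-origin fonts):

```html
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/my-font.woff2" crossorigin="anonymous" />
```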
Let’s talk about CSS. For best performance, consider using native CSS, or CSS-in-JS solutions that generate style tags on the fly.
If your site is rendered on the client side and uses CSS-in-JS, it is worth considering a move to server-side rendering (SSR). It gives you a better PSI score.
Also, don't forget to share the style cache correctly between the server and the client. Otherwise, the browser will have to run a long task to recalculate all the CSS, which blocks the main thread.
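As one illustration: if you use styled-components with Next.js's pages router, the usual pattern is to collect the server-rendered styles in a custom _document so the client doesn't regenerate them on load. This is a sketch adapted from the standard styled-components SSR setup, not our exact code:

```tsx
// pages/_document.tsx — collect styled-components styles during SSR
import Document, { DocumentContext } from 'next/document';
import { ServerStyleSheet } from 'styled-components';

export default class MyDocument extends Document {
  static async getInitialProps(ctx: DocumentContext) {
    const sheet = new ServerStyleSheet();
    const originalRenderPage = ctx.renderPage;
    try {
      ctx.renderPage = () =>
        originalRenderPage({
          // Wrap the app so the sheet records every rendered style
          enhanceApp: (App) => (props) => sheet.collectStyles(<App {...props} />),
        });
      const initialProps = await Document.getInitialProps(ctx);
      return {
        ...initialProps,
        // Ship the collected <style> tags with the server response
        styles: (
          <>
            {initialProps.styles}
            {sheet.getStyleElement()}
          </>
        ),
      };
    } finally {
      sheet.seal();
    }
  }
}
```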
Analyze user behavior and optimize the first seconds of the experience
Analyze what your users do in their first few seconds on the website, and delay or skip loading any assets/data that aren't needed right away. This gives your users a better experience, and your site better performance and a better PSI score :)
- Largest Contentful Paint (LCP) measures the biggest block on the screen; if that block loads later than everything else, it decreases your PSI score. Typical bad examples are big images that load after the main content, custom fonts, or popups that appear after page load. The content on the screen shifts around and the user is baffled. Please be friendly to your client.
- If your website uses external fonts, do not forget to load them with rel="preload". Also, be sure that you are not loading unnecessary fonts on pages where they will not be used.
- Image loading can block the main thread, so be sure you have added the loading="lazy" attribute. Also, you can save your images in .webp/.avif format for better compression and optimized loading time. If you have a big list of small images that takes a long time to load, try creating an icon sprite: it gives you less connection time and better performance.
Google Tag Manager (GTM) and other analytics
GTM is the main cause of a bad PSI score (if you have done all the steps above). That's because you cannot control what this script loads and when. It blocks the main thread, and on mobile devices it can be a bottleneck. As a solution, you can use server-side tracking or create a delay function to load the GTM script on the client side.
By default, Google gives you a snippet that loads the script asynchronously:
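The standard snippet looks like this (GTM-XXXX is a placeholder container ID):

```html
<script>
  (function (w, d, s, l, i) {
    w[l] = w[l] || [];
    w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
    var f = d.getElementsByTagName(s)[0],
      j = d.createElement(s),
      dl = l != 'dataLayer' ? '&l=' + l : '';
    j.async = true;
    j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i + dl;
    f.parentNode.insertBefore(j, f);
  })(window, document, 'script', 'dataLayer', 'GTM-XXXX');
</script>
```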
We could change the async attribute to defer, so the script downloads while the HTML is parsing and only executes once parsing has finished.
This is an improvement! For your users, there will be no additional delay to the page load. Unfortunately, the PSI score will still suffer! This is because the measurement waits until the document is loaded, and then continues to wait a little longer until any additional scripts have finished.
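The next step is to delay GTM entirely and inject it a few seconds after the window load event. A sketch of the idea (the function name and the 4-second delay are our choices; tune them for your site):

```javascript
// Hypothetical delayed loader: inject the GTM script only a few seconds
// after the load event, so it never competes with the initial render.
function loadGtmDelayed(containerId, delayMs = 4000) {
  window.addEventListener('load', function () {
    setTimeout(function () {
      // Recreate what the standard snippet does
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
      var script = document.createElement('script');
      script.async = true;
      script.src = 'https://www.googletagmanager.com/gtm.js?id=' + containerId;
      document.head.appendChild(script);
    }, delayMs);
  });
}

loadGtmDelayed('GTM-XXXX'); // placeholder container ID
```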
After this delayed-load update, your GTM script loads after 4 seconds and the PSI score is better.
You can use the same technique for similar analytics scripts. It does not work for services that must be loaded before the content.
Apply Progressive Hydration
Hydration is the process in which the browser attaches your JS handlers to HTML elements. It is the main CPU blocker after page load. By default, React hydrates all elements on the page. In the latest version (v18) this process has been optimized and works better, but it does not work with SSR.
How can we optimize our page for the CPU? We can render only the visible parts of the page. For this, we can use the browser's IntersectionObserver (IO) API and write logic to render our components, or use third-party libraries such as react-render-if-visible.
We can also combine lazy loading with IntersectionObserver, so that only the JS for the visible part of the page is loaded, which also helps the first page load.
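A minimal sketch of such a wrapper (the component name is hypothetical; libraries like react-render-if-visible provide more complete versions with placeholder sizing):

```tsx
// Hypothetical RenderIfVisible wrapper: mounts its children only once the
// placeholder div scrolls into the viewport, using IntersectionObserver.
import { useEffect, useRef, useState, ReactNode } from 'react';

export function RenderIfVisible({ children }: { children: ReactNode }) {
  const ref = useRef<HTMLDivElement>(null);
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const el = ref.current;
    if (!el) return;
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) {
        setVisible(true);
        observer.disconnect(); // render once, then stop observing
      }
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, []);

  return <div ref={ref}>{visible ? children : null}</div>;
}
```

Combined with a React.lazy child, the child's chunk is only downloaded once the wrapper becomes visible.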
But you cannot wrap every component in IO, because the extra event listeners also load the CPU. As elsewhere, you must keep a balance.
Content security policy (CSP)
This is not an obvious point; it comes from our own experience.
We added a new font to the page but forgot to add its CDN to our CSP policy, and the PSI score dropped. After adding the CDN to the allowlist, the PSI score returned to normal.
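For example, if a font is served from Google Fonts' CDN, the CSP header needs to allow it explicitly (the domains here are illustrative, not our actual policy):

```
Content-Security-Policy: font-src 'self' https://fonts.gstatic.com; style-src 'self' https://fonts.googleapis.com
```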
The explanation is simple: when a browser tries to load something on a page, it spends CPU on the task, and if the request fails the browser retries a few more times. Each retry takes CPU time and blocks the main thread as well. As a result, the Total Blocking Time (TBT) metric gets worse, and it is important for the PSI score calculation. So check your console from time to time and make sure everything loads as expected :)