+ Core Web Vitals are three performance metrics that measure how + quickly a site loads. Google uses these metrics to show faster + sites higher in search results. +
++ The metrics are called Largest Contentful Paint (LCP), First + Input Delay (FID), and Cumulative Layout Shift (CLS). +
++ In this talk, let’s look at what Largest Contentful Paint, First + Input Delay, and Cumulative Layout Shift are, how to profile + them with WebPageTest and Chrome DevTools, and how to improve + them if the site is slow. +
+ > + } + /> ++ This talk was brought to you by PerfPerfPerf. We + help companies to earn more by making web apps faster. +
++ Have a web performance issue or just want to learn what to improve?{' '} + We’d be glad to help +
++ To learn about Core Web Vitals, let’s take a look at the CNN site! +
++ CNN is a large American media company. For media, performance matters + because the faster the site is, the better the engagement and the + search traffic typically are. +
++ Like, for example, Financial Times redesigned their website in 2017 – + and tests showed that{' '} + + the new site gets up to 30% more engagement + + . Or GQ, a fashion magazine, in 2015, made their site 5 times faster –{' '} + + and their traffic increased by 80% + + .{/* TODO: link to 3perf client as well */} +
+Anyway. Numbers, numbers, numbers, numbers. Back to CNN.
+ ++ The CNN website is, surprisingly, pretty slow. The Lighthouse score of + a single news page is 6. +
++ And Core Web Vitals aren’t good either. Largest Contentful Paint is + 3.7 seconds (yellow), Cumulative Layout Shift is 0.18 (yellow), and + only First Input Delay is green (45 ms). +
++ These measurements were done in November 2020. +
++ + In this talk, you’ll see “PageSpeed Insights” and “Lighthouse” used + interchangeably. That’s because PageSpeed Insights uses{' '} + Lighthouse{' '} + under the hood, so both tools calculate the performance score{' '} + + in the same way + + . + +
+ +Let’s start with Largest Contentful Paint (LCP).
+ ++ + Largest Contentful Paint + {' '} + is how much time it takes for the page to render the largest page + element (hero image, or the largest text paragraph, or something + else). +
++ “LCP is 3 seconds” means that, in that specific test, the time between + the moment the page started loading and the moment the largest page + element rendered was 3 seconds. +
++ To analyze LCP, I am going to use two tools. The first one is Chrome + DevTools, which you must’ve used already. And the second one is + WebPageTest. +
+ ++ What is WebPageTest? It’s an advanced performance analysis tool. It’s + available at WebPageTest.org; it traces how the page loads and gives + back lots of information in an easy-to-consume format. Yes, it looks a + bit dated, and it could be a bit too advanced at times… But that’s + probably its only drawback. Otherwise, it’s great. +
+ ++ In this case, I am going to use WebPageTest – and I am going to show + you how to use WPT – to look at the loading waterfall – and figure out + why CNN’s largest contentful paint is so, well, large. +
+ +So, in order to do that,
+* I am going to go into WebPageTest;
+* I’m going to paste the URL into the address bar;
++ * I am going to change the browser setting to Moto G4 – Chrome (one + thing I love about WebPageTest is that it allows you to test on real + mobile devices. Not just emulated mobile, like in Chrome DevTools – + but actual physical mobile phones) +
++ * I am also going to change the connection to 4G to make it closer to + what PageSpeed Insights runs with +
+* and I am going to start the test
+ ++ Awesome. So, to save time I'm not going to wait until the test + finishes, because I already have a completed test with the same + settings. So I'm going to just switch to that test and proceed + forward. Here’s the test. +
+ ++ So, the test page has lots of info, and it could be overwhelming at + times. But, right now, we only care about two pieces.{' '} +
++ * **Core Web Vitals.** First, it’s Core Web Vitals. Here, CWV are + measured on this specific device. And yeah, they don’t look so good. +
++ * **Waterfall.** Second, it’s the network waterfall. It’s a waterfall + of all resources that are loaded over network, and it’s located right + here: …. CNN downloads a lot of stuff, and I’m pretty sure most of + that is third parties – and that’s the reason why the TBT is high – + but for now we’re talking about the LCP, so let’s focus on that. +
+ + ++ So, to debug LCP, we’re going to use Chrome DevTools and WebPageTest. + Now, how do we actually debug LCP? There are three things I typically + focus on first when auditing LCP: +
++ These issues, in my experience, tend to affect LCP most frequently. +
+ ++ The first issue that frequently worsens LCP is server response time. +
++ If a server is slow at serving HTML, it’s going to take longer for a + browser to load the HTML and render the page. The same holds true for + CSS and{' '} + + render-blocking JavaScript + + . +
+ ++ So how do you check if this has an effect, how do you check if your + server is actually slow? This is easy to do both in WPT and in Chrome + DevTools. +
++ * In WebPageTest, you have this huge waterfall. So what you do is you + click that waterfall, you scroll down, and you click on some responses + – say, on the HTML response. This opens a dialog with detailed info + about that request – a lot of which is not available in Chrome + DevTools. So, there’s a lot of stuff here. But what we actually care + about is the “Time to First Byte” value. +
++ * [If you’re using Chrome DevTools, you can also see this in Chrome + DevTools. If you open the page, open Chrome DevTools, click reload, + click the network entry, and switch to “Timing”, you’ll see time to + first byte there as well. And here, TTFB is 40 ms.] +
++ So, in this case, time to first byte is actually great. It’s 190 ms + in the WebPageTest setup and X ms from my local computer – and that’s + a great result. I’d say any value below 300 ms is pretty good. So, in + CNN’s case, this doesn’t seem to be the issue. +
++ As we saw above, the server response time isn’t an issue for CNN. But + if your server response time turns out to be bad, what can{' '} + you do? +
++ My go-to recommendation is to employ a CDN – like{' '} + Cloudflare or{' '} + Fastly. A CDN{' '} + + will host your resources close to the user + {' '} + – and will greatly reduce your server response time. +
+ {/* TODO: remove bitly/holy-perf-links */} + ++ A CDN works great for static resources – like styles, scripts, or + images, – but it doesn’t help much with dynamic ones. If you’re + dynamically generating your HTML pages on the server, you’re a bit out + of luck – those pages would still have a high server response time. +
++ But if you need just a tiny bit of interactivity – eg the only dynamic + part of the page is a footer where you’re showing a different phone + number based on the country – you can move that server logic to the + CDN level as well. And that can be done with edge functions. +
++ With an edge function, you put your logic into a small function and + upload it to the CDN. Then, the function runs on every request on the + same servers that serve your cached files. This means the logic runs + close to the user, and the server response time stays small. +
++ Almost every CDN has its version of edge functions. Cloudflare has{' '} + Workers, AWS CloudFront + has Lambda@Edge, + Netlify has{' '} + Netlify Functions, and + Fastly has{' '} + + Compute@Edge + + . +
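As a sketch, here’s what the “different phone number per country” footer from above could look like as an edge function. The phone numbers and the `PHONE_NUMBERS` map are made up for illustration; on Cloudflare Workers, `request.cf.country` carries the visitor’s two-letter country code, and a real Worker would wrap the markup in a `Response` object returned from a `fetch` handler.

```javascript
// Hypothetical per-country phone numbers (made-up values)
const PHONE_NUMBERS = {
  US: '+1 555 0100',
  DE: '+49 30 555 0100',
};

// Pure logic: easy to unit-test outside the worker runtime
function phoneForCountry(country) {
  return PHONE_NUMBERS[country] || PHONE_NUMBERS.US;
}

// Worker-style handler. On Cloudflare, request.cf.country holds the
// visitor's country; this code runs on the same edge servers that
// serve the cached static files, so the response time stays low.
function handleRequest(request) {
  const country = (request.cf && request.cf.country) || 'US';
  return `<footer>Call us: ${phoneForCountry(country)}</footer>`;
}
```

Because the country lookup is a plain function, the dynamic part of the page stays testable and cheap, while everything else remains a cached static file.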
+ +Back to CNN. If it’s not server response time, then what?
++ Another common issue that makes a page render later is render-blocking + resources. +
+ +
+ Let’s say a page’s <head> has the following code:
+
+
+ <head>
+
+ <link rel="stylesheet" href="style.css">
+
+ <script src="script.js">
+
+ </head>
+
+
+
+
+ With this code, the browser would be forced to keep the page blank
+ until it downloads style.css and downloads & executes{' '}
+ script.js.
+
That happens for several reasons.
+ +
+ <link rel="stylesheet"> tags block
+ rendering because browsers want to avoid showing unstyled content.
+
+ If a browser didn’t wait for the stylesheet to load, the user would + see the page load without styles and then jump into its styled state. + That’s not a great experience! +
+
+
+ Note 1: stylesheets block rendering even if you put
+ them in <body>. However,{' '}
+ <body> stylesheets only block{' '}
+
+ the content that comes after them
+ {' '}
+ – not the full document.
+
+
+ + Note 2: stylesheets block not only rendering, but + also script execution –{' '} + + even if a script is inline + + . This can also make a page slower. + +
+ +
+ <script> tags block rendering because{' '}
+
+ they actually block parsing
+
+ . When a browser loads the HTML page, it starts parsing it tag by tag,
+ top-to-bottom. And whenever it hits a <script> tag,
+ it pauses parsing completely – and waits until the script downloads
+ and executes.
+
+ And because parsing stops, rendering also stops. So when a page has a{' '}
+ <script> tag, nothing after that tag will be parsed
+ and rendered – until the script finishes executing.
+
+ This behavior exists mostly{' '} + + for historical reasons + {' '} + – scripts learned to rely on it, and we can’t fix it now. +
+
+
+ Note 1: This only applies to regular old{' '}
+ <script> tags. Scripts with async,{' '}
+ defer, or type="module"{' '}
+ attributes{' '}
+
+ load differently and don’t block parsing
+
+ .
+
+
+ + Note 2: the description of parsing above is a bit + simplified. Modern browsers have a “preload scanner” that{' '} + + pre-parses the document ahead of the primary parser + + . This helps to load the resources sooner + +
+ +So, what this means is:
+<head> has a stylesheet that takes a while to
+ download, that stylesheet will delay the render;
+ <head> has a script that takes a while to
+ download, that script will delay the render;
+ <head> has an inline script that doesn’t need
+ to be downloaded but takes a while to execute, that script will also
+ delay the render;
+ And that will increase Largest Contentful Paint.
+Now, let’s see if that’s the case with CNN.
+ +
+ ::Live demo:: The simplest way to check for render-blocking resources
+ is to simply * go to the page source, * copy it, * paste it somewhere,
+ * remove the document body, * format it slightly, * and search for
+ these resources. And if you try to search for stylesheets, you’ll find
+ nothing! That could mean one of two things: either there’re no
+ external styles at all, or they’re inserted dynamically, with a
+ script. For CNN, it’s the first case. And if we search for{' '}
+ <style> tags, we’ll see that, indeed, the page
ships with a lot of inline styles. Which is good! Critical CSS is
+ used. There are no render-blocking stylesheets.
+
+ Same thing with scripts. If you search for scripts, you’ll find that
+ there’re 13 scripts. All of them, in this case, are render-blocking. A
+ script with a defer or an async attribute is
not render-blocking, but there’re no scripts with such attributes here.
+ Most of them are inline – remember, inline scripts are still
+ render-blocking. They don’t need to download, but they still need to
+ execute. And 3 of these scripts are external – they need to download{' '}
+ and execute.
+
+ So, we have 13 render-blocking scripts, of which 3 need to download. + Let’s see if they actually affect our rendering time. Like, maybe + they’re fast and inexpensive, and we don’t need to do anything with + them. * So, to see whether downloading may be a bottleneck, I am going + to go to WebPageTest again. And if I open the waterfall, I’ll see all + these external scripts again: [I’ll see 1, 2, 3]. And, right here, I + can see how much time they took to download as well. [So, the first + script takes X to download… The second… The third…] Now, I don’t know + whether that’s a lot – because we’re using a 4G network and a pretty + cheap phone – but that’s one thing to keep in mind. If they were + smaller, if they downloaded faster, LCP would happen sooner. +
+<head>,
+ and the larger the rectangle, the longer the script took to execute.
+ + As we saw, CNN has 13 render-blocking scripts, but most of them are + inline (so we don’t pay network cost) and execute quickly (so we pay + almost no CPU cost). However, there’re three external scripts that + take a while both to download and to execute. +
++ How to optimize these scripts? There are a few optimizations I’d + typically do. +
++ Use code splitting. Code splitting is my favorite + JavaScript optimization. With code splitting, you remove unnecessary + parts of the bundle, and this makes the bundle both smaller{' '} + and faster to execute. +
++ + How to do code splitting:{' '} + Web Perf 101 intro{' '} + ·{' '} + web.dev guide{' '} + ·{' '} + + webpack guide + + . + +
++ + Or migrate to a framework like{' '} + Next.js (for React) + or Nuxt.js (for + Vue.js) that implement code splitting and other performance best + practices automatically. See also:{' '} + Quick apps in 3 steps. + +
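As an illustration, here’s the core code-splitting pattern with dynamic `import()`. In a real app you’d write `import('./comments.js')` and a bundler (webpack, Rollup, Parcel) would emit that module as a separate chunk, downloaded only when this code runs; here the module lives in a `data:` URL so the sketch is self-contained, and the module name is hypothetical.

```javascript
// In a real app this would be './comments.js', split into its own
// chunk by the bundler. The data: URL makes the sketch standalone.
const commentsCode = 'export function renderComments() { return "rendered"; }';
const COMMENTS_MODULE = 'data:text/javascript,' + encodeURIComponent(commentsCode);

let commentsPromise = null;

function loadComments() {
  // Cache the promise so the chunk is fetched only once
  if (!commentsPromise) commentsPromise = import(COMMENTS_MODULE);
  return commentsPromise;
}

// Usage: load the heavy module only when the user actually needs it,
// e.g. inside a click handler
loadComments().then(({ renderComments }) => {
  console.log(renderComments());
});
```

The point is that the code for rarely used features never enters the initial bundle, so the render-blocking scripts stay smaller and execute faster.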
+ +
+ Do a bundle audit to remove what’s not needed. Run{' '}
+
+ webpack-bundle-analyzer
+
+ , go over the report, and look for everything you don’t recognize.
+ Check if you have:
+
url-loader
+
+ )
+
+
+ If you don’t use webpack, here are your alternatives: Rollup has{' '}
+
+ rollup-plugin-visualizer
+
+ ; Parcel v2 has{' '}
+
+ bundle analysis built-in
+
+ ; esbuild bundles are supported{' '}
+ in Bundle Buddy. If
+ neither of these solutions works for you, try building a report
+ based on source maps with{' '}
+
+ bundle-wizard
+ {' '}
+ or{' '}
+
+ source-map-explorer
+
+
+
+
+ Delay scripts by adding defer, async, or{' '}
+ type="module" attributes.
+ {' '}
+ External scripts with{' '}
+
+ a defer attribute
+ {' '}
+ load in the background and{' '}
+
+ execute only when the browser finishes parsing the document.
+ {' '}
+ Even if such script lives in <head>, it won’t
+ increase LCP.
+
+ External scripts with{' '}
+
+ an async attribute
+ {' '}
+ also load in the background. However, they execute as soon as they
+ load – so if they load before the document is fully parsed,{' '}
+
+ they’ll block parsing
+
+ . You probably want to use defer instead unless you have
+ a good reason not to.
+
+ Scripts with a type="module" attribute{' '}
+
+ work like defer ones
+
+ .
+
+
+ Note: defer and async attributes work only
+ with external scripts (as in,{' '}
+ <script src="..." defer>).
+ Unfortunately, if you add them to an inline script (
+ <script defer>...</script>), they’ll do
+ nothing.
+
+
+
+ But this doesn’t apply to type="module"{' '}
+ scripts!{' '}
+
+ <script type="module">...</script>
+ {' '}
+ is still deferred. Yeah, this is confusing!
+
+
+
+ See also{' '}
+
+ a great gist by Jakub Gieryluk
+ {' '}
+ with more details on defer, async, and{' '}
+ type="module"
+
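For illustration, here’s how the render-blocking `<head>` from earlier could look with these attributes (file names are placeholders):

```html
<head>
  <link rel="stylesheet" href="style.css">
  <!-- Loads in the background; runs after the document is parsed -->
  <script src="script.js" defer></script>
  <!-- Loads in the background; runs as soon as it arrives -->
  <script src="analytics.js" async></script>
  <!-- Behaves like defer -->
  <script type="module" src="app.js"></script>
</head>
```

With `defer` on the scripts, the browser can keep parsing and rendering the page while they download.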
+
+ Use Critical CSS.{' '}
+
+ “Critical CSS” is an approach
+ {' '}
+ when each page loads only the styles it needs for the first render –
+ and nothing extra. These styles are typically loaded in an inline{' '}
+ <style> tag, to save time on an extra roundtrip to
+ the server. Styles for JS functionality like popups and for other
+ pages are loaded in background.
+
+ This makes render-blocking styles much smaller. Thanks to this, LCP + also happens sooner. +
++ Today, Critical CSS is typically done{' '} + + with CSS-in-JS libraries like styled-components or tools like + Critical + + . +
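As an illustration, the Critical CSS pattern usually looks something like this (file names are placeholders; the preload-then-apply trick is the widely used loadCSS approach):

```html
<head>
  <!-- Critical, above-the-fold styles: inlined, so no extra roundtrip -->
  <style>
    .header { /* … */ }
    .hero { /* … */ }
  </style>

  <!-- The rest of the styles: loaded in the background -->
  <link rel="preload" href="/full.css" as="style"
        onload="this.onload = null; this.rel = 'stylesheet'">
  <noscript><link rel="stylesheet" href="/full.css"></noscript>
</head>
```

Only the tiny inline `<style>` blocks rendering; the full stylesheet arrives without delaying the first paint.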
+ ++ The third common Largest Contentful Paint offender is a late hero + image. +
++ A hero image is a large image that appears at the top of + the page. Here’s an example ← (or ↑, if you’re reading this on mobile) +
++ A hero image is typically the largest – visually the largest – page + element. If a hero image is late – as in, it loads slowly – then + Largest Contentful Paint is going to be delayed. That’s because + Largest Contentful Paint, by definition, is the moment the largest + page element gets rendered.{' '} + {/* TODO: make the image nicer (re-draw the painting on top) */} +
+Here’s how to figure out whether you have this issue:
+ ++ Now, how could I learn if CNN experiences this issue? I can do this in + WebPageTest as well! ::Live demo:: +
+This delays the LCP.
++ There are a few ways to make a hero image load faster if it appears + late. +
++ Compress, resize, serve WebP/AVIF. Make sure you do{' '} + all the standard stuff + : +
+<img srcset> and <picture>{' '}
+ tags
+ {' '}
+ to serve smaller images to smaller devices.
+ In the past, compressing, converting, and resizing images was a + laborious manual process. Today, however, a lot of tools will do it + for you automatically: there are image CDNs ( + Cloudinary,{' '} + Imgix,{' '} + Uploadcare), open-source proxies + (imgproxy), and framework + components ( + + next/image + + ,{' '} + + gatsby-plugin-image + + , nuxt-img + ). Pick the one that’s easiest to integrate for you – and have all your + images optimized automatically! +
++ + Further reading/watching:{' '} + + Google’s images compression deep dive + {' '} + – my favorite 30-minute intro into image compression;{' '} + + AVIF drawbacks + {' '} + – specifically, AVIF images take longer to encode than JPEG or WebP. + +
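Put together, a hero image that does all the standard stuff might look like this (file names and sizes are placeholders):

```html
<picture>
  <!-- Modern formats first; the browser picks the first one it supports -->
  <source type="image/avif" srcset="hero.avif">
  <source type="image/webp" srcset="hero.webp">
  <!-- JPEG fallback, with a smaller version for smaller viewports -->
  <img
    src="hero-1600.jpg"
    srcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
    sizes="100vw"
    width="1600"
    height="900"
    alt="…"
  >
</picture>
```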
+ {/* TODO: slide: mention AVIF; gatsby-image → gatsby-plugin-image; mention + nuxt-img */} + ++ Test switching to progressive JPEG. JPEG is old, and + it doesn’t compress very well, but one feature that it has – and WebP + and AVIF don’t – is{' '} + + progressive loading + + . +
++ Non-progressive images load and render top-to-bottom. Progressive + images, however, first load a low-quality version of the image. Then, + as they keep downloading, they gradually enhance to full quality. +
++ + Further reading: a super visual deep dive into{' '} + + how the JPEG format works + {' '} + (the author literally edits bytes and shows how this affects the + image!). + +
+ {/* TODO: make it animated. Or replace with a video from + https://cloudinary.com/blog/progressive_jpegs_and_green_martians? */} + ++ Often, a progressive JPEG version of an image appears sooner than its + WebP or AVIF version. This doesn’t directly help with LCP (LCP happens + when the final image frame renders,{' '} + + although there’s a discussion to change this + + ). However, it does help with user experience! Here’s{' '} + + a great case + {' '} + by Harry Roberts: +
++ Fascinating issue on a current client site where we reduced their + masthead image weight by over 20% by switching it to WebP. It’s now + rendering almost 2× later because WebP doesn’t offer progressive + rendering like the previous JPG did. ++ +
+ Ensure you’re not lazy-loading the hero image. With + lazy loading, images aren’t loaded until they become visible. There is{' '} + + native lazy loading + {' '} + as well as{' '} + + lots of JavaScript-based implementations + + . +
++ Lazy loading is generally a helpful optimization, which is why some + tools do it for every image by default. But it isn’t helpful for hero + images! Even if the image is immediately visible in the viewport, lazy + loading adds a delay. +
+
+
+ How much of a delay?
+
— With JS-based lazy loading, image requests are managed by
+ JavaScript. Because of this, the browser can’t even
+ start
+ {' '}
+ loading the first image until it downloads and executes the
+ JavaScript code responsible for that. Depending on the bundle size
+ and the network speed, that can easily take 2-4 seconds.
+
— And even with native lazy loading, in my tests, Chrome and
+ Firefox seem to delay the lazy-loaded image a bit (perhaps to figure
+ out whether it’s in the viewport).
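To illustrate the difference (file names are placeholders):

```html
<!-- Hero image: never lazy-load it. Eager loading is the default -->
<img src="hero.jpg" width="1600" height="900" alt="…">

<!-- Below-the-fold images: native lazy loading is a good fit -->
<img src="comment-avatar.jpg" loading="lazy" width="48" height="48" alt="…">
```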
+
+
+ One tool that lazy-loads every image by default is{' '}
+
+ next/image
+
+ .
+
+ next/image is a Next.js package that optimizes images.
+
+ When you serve an image using next/image, Next.js, by
+ default, sets the src attribute on that image to an empty
+ GIF. Then, once the bundle loads, Next.js replaces the empty GIF with
+ an actual image URL. This delays image loading until the bundle is
+ available.
+
+ Now, imagine the image is critical for LCP. next/image{' '}
+ will delay LCP until the bundle loads!
+
+ To avoid this behavior with next/image, you have to
+ remember to add the priority={true} attribute
+ on critical images. However, if you forget to do this, your LCP will
+ suffer.
+
CNN also has this issue.
+
+
+ Apart from next/image, other tools with similar
+ defaults are{' '}
+
gatsby-plugin-image
+ {' '}
+ (see{' '}
+
+ loading="lazy"
+
+ ) and WordPress{' '}
+
+ from 5.5 to 5.8
+
+ .
+
+
img, you’ll
 see that img src defaults to an empty placeholder – and the
 actual URL is provided in data attributes.) I’m not sure why. I
 guess they had a good reason for this. But performance-wise, this
 means that the image is discovered and downloaded very, very late.
+ ::Demo with WPT:: If you look at the loading waterfall in
+ WebPageTest, you will find that the image [request 68] only starts
+ downloading as late as 7+ seconds after the page is loaded. And our
+ LCP is actually 7.7s! What a coincidence! (Obviously, not a
+ coincidence.)
To fix this issue, CNN can:
 Disable the JS-based lazy loading for the hero image. This way, the
 src attribute will be set to the actual image
 URL, and the browser will discover & start loading the image as
 soon as it parses the HTML.
+ <link rel="preload"> with the
+ image URL.
+ {' '}
+ If it’s impossible to disable JS-based lazy loading,{' '}
+ <link rel="preload"> will also help the
+ browser to discover the image sooner. However, that’s a hack, so{' '}
+
+ it comes with its own drawbacks
+
+ .
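The preload hack from the last bullet could look like this (URLs are placeholders; `imagesrcset`/`imagesizes` cover responsive images):

```html
<head>
  <!-- Lets the browser discover the hero image before any JS runs -->
  <link rel="preload" as="image" href="hero-1600.jpg"
        imagesrcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
        imagesizes="100vw">
</head>
```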
+ + And that’s it for Largest Contentful Paint! Ensure{' '} + your server response time is low, + eliminate{' '} + render-blocking resources,{' '} + optimize the image appropriately – and, + in most cases, LCP will stay low and good. +
+ ++ Next, let’s talk about the second Core Web Vital: First Input Delay + (FID). +
+ ++ + First Input Delay + {' '} + measures how quickly the page reacts when the user clicks or types + something for the first time. Typically, the more JS the page runs + when it loads, the worse FID is. +
++ FID is challenging to optimize because it’s{' '} + + not available in Lighthouse + + . FID is based on the data Google collects from real visitors to the + site. You can’t measure FID manually, so if your FID is bad, and + you’re trying to improve it, you’ll only see whether your fixes are + helping when you deploy them to production and{' '} + + wait for up to 28 days + + . +
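If you don’t want to wait for Google’s field data, you can collect FID from real visitors yourself – for example, with Google’s web-vitals library. A hedged sketch (the `/analytics` endpoint is made up):

```html
<script type="module">
  // web-vitals v3 API; '/analytics' is a hypothetical endpoint
  import { onFID } from 'https://unpkg.com/web-vitals@3?module';

  onFID(({ value }) => {
    navigator.sendBeacon('/analytics', `FID: ${value}`);
  });
</script>
```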
++ Thankfully, there’s another metric that’s a close match for FID:{' '} + Total Blocking Time (TBT). TBT + measures how much the page hangs while it’s loading. It’s not the same + thing as FID, but optimizing TBT would typically improve FID as well. +
+That’s why, for this talk, I’m going to focus on TBT – not FID.
+ ++ There’s only one gotcha to be aware of. FID is often good even when + Total Blocking Time is bad. (This is the case with CNN as well.) +
++ This means that if you’re optimizing only for search ranking, + forget about TBT. For SEO,{' '} + + only FID matters + + . So if your FID is good (it frequently is), you don’t need to + optimize Total Blocking Time or Time to Interactive. Making them lower + won’t help you rank better. +
++ TBT still matters if you care about user experience, though. The + higher TBT is, the busier the user’s CPU is going to be. You don’t + want your users thinking “woah, my laptop gets hot and noisy every + time I visit this site”. +
+ {/* TODO: create slide: highlight FID vs TBT on the CNN’s screenshot */} + ++ So, let’s take a look at CNN’s Total Blocking Time. WPT measured TBT + as more than 12 seconds, with “more than” here meaning + that it simply gave up measuring. Which is, actually, pretty + explainable. If you scroll all the way down to the end of the + waterfall chart, you’ll see a section called “Browser Main Thread”. + This section shows when the browser is busy with JavaScript (which is + yellow) or layout work (which is violet). In this case, during the + whole trace, the main thread stays super busy with JavaScript. +
++ Okay, so what do we do to improve our Total Blocking Time in a case + like this one? +
+Total Blocking Time consists of two things: the cost of first-party JavaScript and the cost of third-party JavaScript.
++ And I’m going to share an observation that’s pretty uncomfortable for + me, to be honest.{' '} + + From my experience with React-based apps and sites, third-party code + is often responsible for as much as half of the total JS cost. + +
++ This means that we, as engineers, can only get so far by optimizing + the first-party code. If we want to actually reduce the JavaScript + cost and improve TBT, we have to optimize the third-party code as + well. We have to go and talk to marketing. We have to make policy + changes instead of code changes. We have to negotiate. +
++ This is not what we, engineers, are used to doing. But if we want to + make TBT better, that’s something we have to do. +
+ ++ Harry Roberts{' '} + has a great talk about the + way we, as engineers, can talk about third parties with marketing. +
++ The important thing is to avoid blaming anyone. Analytics and ads also + serve an important business role, just like performance does. So + instead of approaching this as “we’re good, you’re guilty”, approach + it as “here’s a challenge our company has – can you help me fix it?” +
++ Harry Roberts’ talk shows a bunch of tricks to measure and discuss the + third-party performance. Here’s one of them, applied to the CNN’s + site. +
+ ++ Okay, so, how do we measure how much JS cost third parties actually + contribute? I have a favorite trick for this. (And BTW I learned this + trick from the same Harry Roberts’ talk.) The trick looks as follows.{' '} +
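In WebPageTest terms, the trick boils down to a short test script: re-run the test with every domain blocked except the site’s own. A sketch (the exact hostnames are assumptions; CNN serves assets from several domains):

```
blockDomainsExcept www.cnn.com cdn.cnn.com
navigate https://edition.cnn.com/
```

The difference between this test’s Total Blocking Time and the original test’s is, roughly, the cost of third parties.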
++ So, what this is going to do is run a performance trace of + the CNN website with all the third parties blocked. And once the test + finishes (which I’m not going to wait for since I already have a + completed test) you can see the Total Blocking Time without third + parties. In this case, the first run didn’t block third parties, for + some reason, but the second and the third one did, so I’m going to + open the second one. And, well, with all third parties blocked, the + total blocking time gets down to 2 seconds. From 12+ seconds to 2 + seconds. This is huge. And this is why the first item you need to + tackle when optimizing your TBT is third parties. +
++ How do you optimize third parties if you realize they are expensive? + Here are a few ways: +
+Now, let’s talk about the first-party code.
+ +A frequent issue I see in React sites is the hydration time.
++ When you have an app or a site that’s written in React and that uses + server-side rendering, once the page gets downloaded, it goes{' '} + + through hydration + + . During hydration, React renders the full app in the browser, goes + through every rendered component, extracts event listeners, and + attaches them to the server-generated markup. +
++ Rendering the full app is expensive, as React apps tend to have a lot + of components on a page. As a result, hydration typically becomes the + most expensive (or one of the most expensive) first-party operations. +
+ ++ Here’s my approach to check if the hydration time is something one + needs to focus on: +
+* Record the page load in the Chrome DevTools Performance pane
+* Find the [letter].hydrate entry in the recording
+* Measure [letter].hydrate’s duration and compare it
+ with other chunks of JavaScript in the recording
+
+ If the [letter].hydrate entry is one of the largest
+ JavaScript entries in the recording, hydration is the bottleneck.
+
There are two ways to optimize the hydration time.
++ Avoid loading React at all. Keep using React on the + server but remove it from the client. If you need to add any + interactivity, do it with inline scripts. +
++ This approach works great for static pages like blog posts or landing + pages. It keeps React’s great development experience without + sacrificing the user’s loading time. +
+
+ If you use Gatsby,{' '}
+
+ gatsby-plugin-no-javascript
+ {' '}
+ (+
+
+ utils
+
+ ) will do this for you. Next.js has an undocumented{' '}
+
+ unstable_runtimeJS
+ {' '}
+ flag that does a similar thing. If you do server-side rendering
+ yourself,{' '}
+
+ ReactDOMServer.renderToStaticMarkup
+ {' '}
+ is the API you need.
+
+ + Skipping React on the client is what{' '} + 3perf.com (the site you’re at) does. + This was the largest optimization that{' '} + + helped it reach 100 on PageSpeed Insights + + . + +
++ Use partial hydration. Hydrate the page as usual but + avoid hydrating components that don’t need any interactivity (such as + article content or footer). +
++ This approach works great for semi-dynamic pages that have a lot of + static content but also a lot of interactivity. Example:{' '} + + The New York Times’ investigation into cellphone tracking + + . +
+ {/* TODO: mention React 18’s Suspense */} ++ To do partial hydration in React, you have to resort to hacks.{' '} + + The Current Official Way + {' '} + to do partial hydration is fairly hacky: on the server, you render the + component, and on the client, you render a +
+
+
+ {``}
+
+
+ + instead. (The Future Official Way may be{' '} + + React Server Components + {' '} + but they’re still a year or a few away.) +
+
+ Thankfully, there are a few libraries that hide that hack under the
+ hood. My favorite one is{' '}
+
+ react-lazy-hydration
+ {' '}
+ which provides an API like this:
+
+
{'<LazyHydrate ssrOnly>{...}</LazyHydrate>\n'}
{'{/* or */}\n'}
{'<LazyHydrate whenIdle>{...}</LazyHydrate>\n'}
{'{/* or */}\n'}
{'<LazyHydrate whenVisible>{...}</LazyHydrate>\n'}
+
+
+ {/* TODO: re-draw the slide */}
+
+
+ Note: partial hydration only helps when you’re
+ optimizing expensive components. For example, avoid a temptation to
+ wrap every link or text paragraph with{' '}
{`<LazyHydrate>`}. Unless these components are
 expensive to render, partial hydration won’t save you much time.
+
+
+ + To find expensive components, open{' '} + + React Profiler + + , click “Reload and start profiling”, and see what’s happening + during the first render. + +
+ ++ To summarize: Total Blocking Time is affected by first-party JS and + third-party JS. +
+The third Core Web Vital is Cumulative Layout Shift (CLS).
+ ++ CNN’s Cumulative Layout Shift is 0.18. This isn’t great! To make CLS + good, we need to push it below 0.1. +
++ However, PageSpeed Insights also reports CNN’s Cumulative Layout + Shift as 0, which is perfectly green. Why are there two different + numbers? +
+ ++ PageSpeed Insights reports two different numbers because it measures + performance in two different ways. +
++ Field Data is the most precise data out there. However, sometimes, it + gets outdated: if you make the page 2× faster, the Field Data will not + show that immediately. It aggregates data over the whole month, + whereas you might have optimized the page just today! +
++ That’s where Lab Data comes in. If you make the page 2× faster, you’ll + see changes in Lab Data immediately. +
+ ++ However, Lab Data is also limited. PageSpeed Insights can’t click the + page or scroll it like a real user does. +
++ And that’s what causes the difference here: if an ad block loads when + the reader is two-thirds into the article and shifts the text, a real + user will notice it, whereas PageSpeed Insights will not. +
+ ++ Let’s look at the CNN audit again. The CNN website has a + higher-than-zero layout shift. To detect what’s causing that layout + shift, we can use WPT again. +
+::Live demo::
++ The most common cause of high CLS I see on news sites is ads and video + blocks. They typically load after the page finishes loading, appear + out of nowhere, and shift the page text down right when you’re looking + at it. +
++ If this happens, talk to your designer and figure out the solution + together! Perhaps you could move the element to a different place + where it wouldn’t shift anything. Or maybe you can reserve a space for + the element ahead of time. +
+ ++ The next most common CLS antipattern, from my experience, is images + without dimensions. +
++ When the page renders for the first time, all images without an + explicit width and height are rendered with zero size. The browser + doesn’t know how much space they’ll take, so it reserves none! +
++ Then, when such images start loading, the browser learns their size + and rerenders the page reserving space for them. This shifts the + content down – and causes a layout shift. +
+ +TODO: a new slide, with a code like ↓
+
+
+ <!-- HTML -->
+
+ <img width="600" height="400">
+
+
+ /* Computed CSS */
+
+ img { aspect-ratio: 600 / 400; }
+
+ */}
+
+ A solution for this issue is to{' '}
+
+ set image width and height attributes
+
+ . Browsers use these attributes to precompute{' '}
+
+ the aspect-ratio CSS property
+
+ .
+
+ The aspect-ratio property tells the browser how much space
+ the image is going to occupy – even before the browser starts loading
+ it and learns the actual image dimensions.
+
+ With this property, browsers reserve space for the image ahead of + time. This prevents the layout shift completely. +
+ +
+ Another thing that frequently increases the layout shift is{' '}
+ font-display: swap.
+
+ font-display: swap is a common way{' '}
+
+ to make text with custom fonts render sooner
+
+ . It tells the browser to render text in a fallback font before the
+ custom one loads.
+
+ A layout shift happens when the fallback font is significantly smaller + or larger than the custom one. When the custom font loads and replaces + the fallback one, the text may become 1-2 lines longer or shorter. + This will cause a lot of page elements to move. +
+There are three ways to solve this issue:
+Use font-display: optional instead of{' '}
+ font-display: swap. This tells the browser to keep the
+ fallback font visible even after the custom one loads. The custom
+ font becomes visible when the user navigates to a new page. This
+ completely avoids the layout shift because the font is never
+ replaced.
+Use the size-adjust CSS property
+
+ . size-adjust is designed specifically to adjust{' '}
+ @font-face sizes and prevent a layout shift when the
+ custom font loads.
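A sketch of both options in CSS (the font names, URL, and the 105% figure are made up; size-adjust has to be tuned for each specific font pair):

```css
/* Option 1: don't swap at all if the custom font isn't cached yet */
@font-face {
  font-family: 'CustomFont';
  src: url('/fonts/custom.woff2') format('woff2');
  font-display: optional;
}

/* Option 2: size-adjust a fallback so the swap doesn't reflow text */
@font-face {
  font-family: 'CustomFont-Fallback';
  src: local('Arial');
  size-adjust: 105%; /* tune until the fallback matches the custom font */
}

body {
  font-family: 'CustomFont', 'CustomFont-Fallback', sans-serif;
}
```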
+ + To summarize. Cumulative Layout Shift’s most common offenders here + are: +
+* images without dimensions (fix: set the width and height attributes)
+* custom fonts shifting the layout (fix: use font-display: optional)
+ And that’s it from me. Thanks!
+ {/* TODO: https://twitter.com/iamakulov/status/1331916406733107201 */} ++ Follow me on Twitter:{' '} + @iamakulov +
+ {/*TODO:+ Thanks to Alex Riaronc, ... +
*/} + + > + ); +}; + +const SlidesContentWithQuery = () => ( +