How to Improve Your Google PageSpeed Score: A Practical Guide for 2026
A straightforward walkthrough of what actually moves the needle on PageSpeed Insights — from image optimization to render-blocking resources — with benchmarks and real examples.
MigrateLab Team
Migration Experts

What PageSpeed Insights Actually Measures
Google PageSpeed Insights runs Lighthouse under the hood and scores your page from 0 to 100. But the score isn't a single measurement — it's a weighted combination of five metrics. Understanding what each one measures is the first step to improving your score, because blindly optimizing the wrong thing wastes time.
The five metrics and their current weights in Lighthouse 12: First Contentful Paint (10%), Largest Contentful Paint (25%), Total Blocking Time (30%), Cumulative Layout Shift (25%), and Speed Index (10%). Notice that Total Blocking Time and LCP together account for 55% of your score. If you only have time to fix two things, those are the two. CLS at 25% is the other major factor — and it's often the easiest to fix.
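The weighting step can be sketched in a few lines. Note this is a simplified model: real Lighthouse first maps each raw measurement (milliseconds, layout-shift units) onto a 0-100 scale using log-normal curves, and that mapping is omitted here — the sketch assumes the per-metric scores are already normalized.

```javascript
// Lighthouse 12 weights for the five performance metrics.
const WEIGHTS = { FCP: 0.10, SI: 0.10, LCP: 0.25, TBT: 0.30, CLS: 0.25 };

// Combine per-metric scores (each already on a 0-100 scale) into
// the final performance score. The log-normal curve mapping that
// real Lighthouse applies to raw values is intentionally omitted.
function lighthouseScore(metricScores) {
  const total = Object.entries(WEIGHTS).reduce(
    (sum, [metric, weight]) => sum + weight * metricScores[metric],
    0
  );
  return Math.round(total);
}
```

Plugging in numbers makes the weighting concrete: a page scoring 90 on FCP, 80 on Speed Index, and 90 on CLS but only 70 on LCP and 60 on TBT lands at 75 overall, because the two heaviest metrics drag everything down.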
Google categorizes scores as: 0-49 (poor), 50-89 (needs improvement), and 90-100 (good). For context, the average mobile PageSpeed score across the web is around 50. Most no-code platform sites score between 35 and 65 on mobile. A well-optimized site built with modern frameworks typically scores between 90 and 100.
Images: The Lowest-Hanging Fruit
Images are responsible for the largest share of page weight on most websites. HTTP Archive data from 2025 shows images account for roughly 42% of total page bytes on the median site. Serving unoptimized PNGs or full-resolution JPEGs is the single most common reason for a low PageSpeed score — and the single easiest thing to fix.
Use Modern Image Formats
WebP and AVIF are the two modern image formats that every site should be using. WebP offers 25-35% smaller file sizes compared to JPEG at equivalent quality. AVIF pushes that to 30-50% smaller. Browser support for WebP is now universal (97%+ globally), and AVIF is supported by Chrome, Firefox, and Safari as of 2024.
For most sites, WebP is the safe default. If you want maximum compression and can accept slightly longer encoding times, use AVIF for hero images and large photos. Tools like Squoosh (Google's free image optimizer), Sharp (Node.js library), or your framework's built-in image component can handle conversion automatically.
Responsive Images with srcset
A single hero image served to all devices is a common performance mistake. If you upload a 1920px-wide hero image, mobile users on 375px screens still download the full file. The srcset attribute lets you provide multiple sizes, and the browser picks the right one. A typical setup serves 640w, 960w, 1280w, and 1920w versions.
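In plain HTML, that setup looks like the sketch below (file names and the sizes breakpoint are illustrative):

```html
<img
  src="/images/hero-960.webp"
  srcset="/images/hero-640.webp 640w,
          /images/hero-960.webp 960w,
          /images/hero-1280.webp 1280w,
          /images/hero-1920.webp 1920w"
  sizes="(max-width: 768px) 100vw, 1200px"
  width="1920" height="1080"
  alt="Product hero">
```

The sizes attribute tells the browser how wide the image will render, so it can pick the smallest candidate from srcset that still covers that width at the device's pixel density.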
In Next.js, the <Image> component generates responsive images automatically — you provide the source and it creates optimized versions at build time or on-demand. Astro's <Image> component does the same thing. If you're on a platform without built-in image optimization, tools like Cloudinary or imgix can serve responsive images via URL parameters.
Lazy Loading and Explicit Dimensions
Below-the-fold images should use loading="lazy" so the browser only loads them when they're about to enter the viewport. This reduces initial page weight significantly on content-heavy pages. Important exception: never lazy-load the LCP image (usually the hero). The LCP image should load eagerly and ideally be preloaded with <link rel="preload" as="image">.
Always set explicit width and height attributes on every image. Without them, the browser doesn't know how much space to reserve, so it renders the page first, then shifts content when the image loads. This causes layout shift (CLS), which accounts for 25% of your Lighthouse score. The aspect-ratio CSS property is an alternative if you need fluid sizing.
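Putting the lazy-loading and dimension rules together, a minimal sketch (paths are illustrative):

```html
<head>
  <!-- Preload the LCP hero image so it starts downloading immediately -->
  <link rel="preload" as="image" href="/images/hero-1280.webp">
</head>
<body>
  <!-- LCP image: loads eagerly, explicit dimensions reserve its space -->
  <img src="/images/hero-1280.webp" width="1280" height="720" alt="Hero">

  <!-- Below-the-fold image: lazy-loaded, dimensions still set to avoid CLS -->
  <img src="/images/team.webp" width="800" height="600"
       loading="lazy" alt="Team photo">
</body>
```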
JavaScript: The Biggest Performance Tax
JavaScript has the highest performance cost per byte of any resource type. Unlike an image, which just needs to be decoded, JavaScript must be downloaded, parsed, compiled, and executed — and it blocks the main thread while doing so. This is why Total Blocking Time is weighted at 30% of the score, the single highest weight.
The median website in 2025 ships around 500KB of JavaScript according to the HTTP Archive. Reducing that to under 200KB typically moves your PageSpeed score by 15-25 points. Every 100KB of JavaScript adds approximately 150-250ms of main thread blocking time on a mid-range mobile device.
Audit Your Third-Party Scripts
Third-party scripts are usually the biggest source of JavaScript bloat. A single chat widget can add 200-400KB. Analytics scripts (especially with multiple tracking pixels) can add 100-300KB. Social media embeds, A/B testing tools, heatmap recorders, and CRM integrations each add their own weight.
The audit process: open Chrome DevTools, go to the Network tab, filter by JS, and sort by size. Then check the Coverage tab (in the drawer) to see how much of each script is actually used during page load. It's common to find that 60-80% of third-party JavaScript is never executed on any given page.
Practical fixes: defer non-critical scripts with async or defer attributes. Load chat widgets only when the user interacts (click-to-load). Use Google Tag Manager to control when analytics fire. Replace heavy social embeds with static previews that load the embed on click. Remove any scripts that aren't actively providing value.
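The click-to-load pattern for a chat widget can be sketched like this — the widget URL is a placeholder for whatever embed script your provider gives you:

```html
<button id="open-chat">Chat with us</button>
<script>
  // Inject the provider's script only when the user actually wants
  // to chat, instead of paying its cost on every page load.
  document.getElementById("open-chat").addEventListener("click", () => {
    const s = document.createElement("script");
    s.src = "https://example-chat-provider.com/widget.js"; // placeholder URL
    s.async = true;
    document.body.appendChild(s);
  }, { once: true }); // only inject the script the first time
</script>
```

Until the button is clicked, the page carries none of the widget's JavaScript, so it contributes nothing to Total Blocking Time.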
Code Splitting and Tree Shaking
If you're using a framework like Next.js or Astro, code splitting is often automatic — each page only loads the JavaScript it needs. But if you're on a platform that bundles everything into one file, every page loads code for every other page. This is one of the fundamental reasons single-page application frameworks can be slow on initial load.
Tree shaking removes unused exports from your JavaScript bundles. If you import one function from a library but the library exports 200 functions, tree shaking strips the other 199. Modern bundlers (webpack, Vite, esbuild) do this automatically — but only if the library uses ES modules. Libraries that use CommonJS (module.exports) can't be tree-shaken and always include all their code.
Server Response Time (TTFB)
Time to First Byte measures how long the browser waits before it starts receiving data from your server. Google considers anything under 800ms acceptable, but under 200ms is the target for a strong score. Your TTFB is the foundation — no metric can be faster than TTFB, because the browser can't start rendering until it receives the first byte of the HTML response.
TTFB is affected by three things: server processing time (how long your application takes to generate the response), network latency (the physical distance between server and user), and DNS/TLS handshake time (establishing the connection). For most sites, server processing time is the bottleneck.
Static Generation vs Server Rendering
The fastest possible TTFB comes from serving pre-built HTML files from a CDN — this is what static site generation (SSG) does. The CDN serves the file from the nearest edge location in 20-50ms. No database queries, no server processing, no waiting. Frameworks like Astro default to this approach, and Next.js supports it for any page that doesn't need real-time data.
For pages that need real-time data (authenticated content, live pricing, personalization), server-side rendering (SSR) is the alternative. SSR generates HTML on each request, so TTFB includes the time for database queries and page generation. Optimizing SSR means caching aggressively, optimizing database queries, and using edge rendering (Vercel Edge Functions, Cloudflare Workers) to run your server close to the user.
CDN Configuration
If you're not using a CDN, your TTFB for users on other continents can be 500-2000ms just from network latency. A CDN caches your content in data centers around the world, so every user gets served from a nearby location. Cloudflare (free tier available), Vercel Edge Network, and AWS CloudFront are the most common options.
For static assets (images, CSS, JavaScript), set long cache headers: Cache-Control: public, max-age=31536000, immutable for content-addressed files (files with hashes in their names). For HTML, use shorter cache times or stale-while-revalidate to balance freshness with speed.
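In a Next.js project, those cache rules can be expressed as custom header entries — the array below is a sketch of what you would return from the headers() function in next.config.js, with illustrative path patterns:

```javascript
// Sketch of cache rules for a next.config.js headers() function.
// Path patterns are illustrative; adjust to your asset layout.
const cacheHeaders = [
  {
    // Content-addressed assets (hash in the filename) never change:
    // cache for a year and mark immutable.
    source: "/assets/:path*",
    headers: [
      { key: "Cache-Control", value: "public, max-age=31536000, immutable" },
    ],
  },
  {
    // HTML: serve a cached copy briefly, refresh it in the background.
    source: "/:path*",
    headers: [
      { key: "Cache-Control", value: "public, s-maxage=60, stale-while-revalidate=86400" },
    ],
  },
];
```

The stale-while-revalidate directive is what balances freshness with speed: the CDN answers instantly from cache, then fetches a fresh copy behind the scenes for the next visitor.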
CSS: Render-Blocking and Unused Styles
CSS blocks rendering — the browser won't paint anything until it has downloaded and parsed all CSS files in the <head>. Large CSS files delay First Contentful Paint directly. The impact is proportional to file size: a 200KB CSS file on a 4G connection takes roughly 400-500ms to download, and that's 400-500ms before the user sees anything at all.
The No-Code CSS Problem
No-code platforms are particularly bad here. Webflow sites typically export 12,000-18,000 lines of CSS including styles for every component variant, even ones not on the current page. Wix generates component-scoped CSS but loads the framework's base styles regardless. WordPress page builders (Elementor, Divi) often generate 300-500KB of CSS across multiple files.
In contrast, a Tailwind CSS site with proper purging produces 10-30KB of CSS. The same design, the same visual result — but 10-20x less CSS. This is a fundamental architectural difference that can't be solved by optimization plugins or caching; it requires a different approach to how the CSS is generated.
Critical CSS and Deferred Loading
Critical CSS is the technique of inlining the styles needed for above-the-fold content directly in the HTML <head>, then loading the rest of the CSS asynchronously. This lets the browser render the visible content immediately without waiting for the full CSS file. Tools like Critters (used by Next.js) can extract critical CSS automatically during the build process.
To defer non-critical CSS, use the preload pattern: <link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">. This downloads the CSS without blocking rendering, then applies it once loaded. The transition is usually invisible because above-the-fold styles are already inlined.
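The full head section, including a fallback for users with JavaScript disabled, might look like this:

```html
<!-- Critical above-the-fold styles, inlined at build time -->
<style>/* ... extracted critical CSS ... */</style>

<!-- Full stylesheet: downloads without blocking render,
     applies itself once loaded -->
<link rel="preload" href="/styles.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">

<!-- Fallback: load the stylesheet normally if JS is disabled -->
<noscript><link rel="stylesheet" href="/styles.css"></noscript>
```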
Fonts: A Surprising Performance Bottleneck
Custom web fonts can cause both render delays and layout shift. Each font file is typically 20-50KB, and if you load multiple weights (regular, medium, bold) and styles (normal, italic), it adds up quickly. Loading 4-6 font files is common and can add 200-300KB to your page weight.
Self-Hosting vs Google Fonts
Loading fonts from Google Fonts requires a DNS lookup to fonts.googleapis.com, a connection to that server, then a redirect to fonts.gstatic.com for the actual font files — adding 100-300ms of latency before the font even starts downloading. Self-hosting eliminates this entirely because the fonts load from your own domain (or CDN), which the browser is already connected to.
In Next.js, the next/font module handles this automatically — it downloads the font at build time, self-hosts it, and generates the optimal @font-face declarations. For other frameworks, download fonts from Google Fonts (or services like Fontsource), subset to only the characters you need, and serve them from your own origin.
Font Display Strategies
The font-display CSS property controls what happens while fonts load. swap shows the fallback font immediately and swaps when the custom font loads — fast FCP but can cause CLS. optional only uses the custom font if it loads very quickly, otherwise sticks with the fallback — zero CLS but might not show your custom font on slow connections.
For most sites, font-display: swap combined with a well-matched fallback font is the right choice. The key to avoiding CLS with swap is choosing a fallback font with similar metrics (x-height, character width). Tools like fontaine or next/font can generate fallback font overrides that match your custom font's dimensions precisely.
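A hedged sketch of that setup in CSS — the Inter family and the override percentages are illustrative; a tool like fontaine generates the real values from your font's metrics:

```css
/* Self-hosted, subsetted font with swap so text renders immediately */
@font-face {
  font-family: "Inter";
  src: url("/fonts/inter-latin.woff2") format("woff2");
  font-weight: 400;
  font-display: swap;
}

/* Metric-matched fallback to minimize layout shift when the swap
   happens (override values below are placeholders) */
@font-face {
  font-family: "Inter Fallback";
  src: local("Arial");
  size-adjust: 107%;
  ascent-override: 90%;
}

body {
  font-family: "Inter", "Inter Fallback", sans-serif;
}
```

Because the fallback occupies almost exactly the same space as the real font, the swap is visually quiet and contributes little or nothing to CLS.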
Third-Party Script Management
Beyond the JavaScript size issue, third-party scripts cause specific performance problems. They often load synchronously (blocking the parser), establish connections to new domains (adding DNS/TLS overhead), and execute long tasks that block the main thread. A typical business site has 5-15 third-party scripts: analytics, ad trackers, CRM, chat, consent management, social widgets, and more.
The Pareto approach: identify the 2-3 scripts that are essential and defer everything else. Google Analytics (or a lightweight alternative like Plausible/Fathom) should load first. Everything else gets either deferred, lazy-loaded on interaction, or removed.
Concrete strategies: load chat widgets on click rather than page load. Use loading="lazy" attributes on social embeds. Replace heavy consent management platforms with lightweight alternatives. Use rel="preconnect" hints for domains you know you'll need: <link rel="preconnect" href="https://www.google-analytics.com">. This establishes the connection early so the actual script loads faster.
Mobile vs Desktop: Why Your Scores Differ
PageSpeed Insights shows separate scores for mobile and desktop. The mobile score is almost always lower because Lighthouse simulates a mid-tier mobile device with CPU throttling (4x slowdown) and a throttled 4G connection. This means JavaScript that runs in 50ms on your laptop takes 200ms in the simulation, and network requests take 3-4x longer.
The mobile score is the one that matters for SEO — Google uses mobile-first indexing. If your desktop score is 95 but your mobile score is 55, the 55 is what affects rankings. The most impactful things for mobile specifically: reduce JavaScript (it hurts more with CPU throttling), optimize images for mobile viewports (don't serve desktop-sized images), and ensure fonts load quickly (mobile connections have higher latency).
Measuring and Monitoring
PageSpeed scores fluctuate between runs. Your score can vary by 5-10 points depending on server load, network conditions, and Lighthouse's simulated throttling. Don't chase a perfect 100 if you're consistently at 92. Instead, track trends over time and focus on real user data.
The best monitoring approach: use Google Search Console for Core Web Vitals field data (what Google actually uses for rankings), PageSpeed Insights or Lighthouse for lab testing and debugging, and web-vitals JavaScript library (by Google) for custom real-user monitoring. Run Lighthouse in CI to catch regressions before deployment — tools like Lighthouse CI or Unlighthouse can do this automatically.
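A minimal real-user monitoring setup with the web-vitals library might look like the sketch below — the /analytics endpoint is a placeholder for wherever you collect metrics:

```html
<script type="module">
  // Report Core Web Vitals from real users with Google's
  // web-vitals library (loaded here from a CDN for brevity).
  import { onLCP, onCLS, onINP } from "https://unpkg.com/web-vitals@4?module";

  function report(metric) {
    // sendBeacon survives page unload, so late metrics still arrive.
    navigator.sendBeacon("/analytics", JSON.stringify({
      name: metric.name,   // "LCP", "CLS", or "INP"
      value: metric.value,
      id: metric.id,
    }));
  }

  onLCP(report);
  onCLS(report);
  onINP(report);
</script>
```

Field data collected this way reflects real devices and networks, which is what Google's ranking signal is based on, rather than Lighthouse's simulated conditions.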
When Optimization Isn't Enough
There's a point where you've optimized everything you can control and the score still isn't good. This usually means the bottleneck is the platform itself — its JavaScript framework, its CSS generation, its server infrastructure. No-code platforms ship platform code on every page that you can't remove, strip, or defer because it's what makes the platform work.
If you're consistently seeing scores below 60 despite optimizing images, deferring scripts, and minimizing third-party code, the platform is likely the ceiling. Migration to a modern framework like Next.js or Astro removes the platform overhead entirely. The same design that scores 45 on a no-code platform will typically score 90+ when rebuilt as clean, optimized code — not because of optimization tricks, but because there's simply less unnecessary code between your content and the user.
PageSpeed Quick Wins Checklist
Convert images to WebP/AVIF
Use tools like Squoosh or your framework's built-in image optimization. Target 80% quality for WebP — visually identical, 30-50% smaller. For hero images, consider AVIF for additional savings.
Tip: Next.js and Astro handle this automatically with their Image components.
Audit and remove unused JavaScript
Use Chrome DevTools Coverage tab to identify unused JS. Third-party scripts (analytics, chat widgets, tracking pixels) are usually the biggest culprits. Defer or lazy-load non-essential scripts.
Tip: Every 100KB of JS removed typically improves TBT by 150-250ms on mobile.
Inline critical CSS and defer the rest
Extract the CSS needed for above-the-fold content and inline it in the <head>. Load remaining styles asynchronously. Use PurgeCSS or Tailwind's built-in purging to remove unused styles.
Tip: Tools like Critters (used by Next.js) can automate critical CSS extraction.
Self-host and subset your fonts
Download fonts from Google Fonts, subset to Latin characters, and serve from your own domain. Add font-display: swap and preload the primary font. Match fallback font metrics to prevent CLS.
Tip: next/font handles self-hosting, subsetting, and fallback matching automatically.
Enable caching and use a CDN
Set proper Cache-Control headers for static assets. Use a CDN like Cloudflare (free tier) or Vercel Edge Network. For static sites, serve pre-built HTML for sub-50ms TTFB worldwide.
Preload your LCP element
Identify what your LCP element is (usually hero image or large heading). Add a <link rel="preload"> for it in the <head>. Never lazy-load the LCP image — it should be the first image to load.