Run a real page-level Lighthouse scan, keep every completed run, and turn the raw audit into plain-English priorities your team can actually work from.
What it means
Lighthouse is the audit engine behind Chrome DevTools and PageSpeed Insights, the same scoring Google reports for any URL you test through those tools.
When you run a page through Lighthouse, you're running the same audit set Google uses to decide whether your page meets the performance thresholds it weighs into search ranking. The categories — Performance, Accessibility, Best Practices, SEO — each combine dozens of individual audits into a single 0–100 score.
The Performance score is the most consequential. It's a weighted blend of Largest Contentful Paint (when the biggest visible element finishes painting), Cumulative Layout Shift (how much content jumps as the page loads), Total Blocking Time (a lab stand-in for how responsive the page is when a user clicks or types), First Contentful Paint, and Speed Index. Interaction to Next Paint can't be measured in a lab run because it needs a real user interaction, which is why TBT proxies for it. LCP, CLS, and INP are the official Core Web Vitals that ride along with every URL in Google's index.
We run the audit in lab mode on a simulated mobile device with throttled CPU and network. That matches the conditions Google's PageSpeed Insights uses for its lab data and gives you a reproducible baseline. The numbers won't exactly match the field-style metrics from real users (Chrome's CrUX dataset), but they tell you whether the page as built is fundamentally fast — which is the part you can actually fix.
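For readers who want to reproduce the setup, here is a minimal sketch of a comparable lab run using the open-source lighthouse and chrome-launcher npm packages. The option names are Lighthouse's real settings; the harness itself is illustrative, not our production pipeline.

```ts
// Minimal sketch of a lab-mode run: headless Chromium, mobile emulation,
// simulated slow-4G throttling. Illustrative, not our production harness.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function labScan(url: string) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless=new'] });
  try {
    const result = await lighthouse(
      url,
      { port: chrome.port, output: 'json' },
      {
        extends: 'lighthouse:default',
        settings: {
          formFactor: 'mobile',
          throttling: {
            rttMs: 150,                 // 150 ms round trip
            throughputKbps: 1638.4,     // ~1.6 Mbps, the slow-4G profile
            cpuSlowdownMultiplier: 4,   // simulated mid-tier device CPU
          },
        },
      },
    );
    return result?.lhr;                 // full Lighthouse result object
  } finally {
    await chrome.kill();                // fresh browser per run
  }
}
```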
How it works
Four category scores, each 0–100, each combining dozens of individual audits; a sketch of reading them from the result JSON follows this list.
Performance: Core Web Vitals and supporting lab metrics (LCP, CLS, FCP, Speed Index, Total Blocking Time; TBT stands in for INP, which needs real user input).
Accessibility: ARIA labels, color contrast, keyboard navigation, semantic HTML.
Best Practices: HTTPS, console errors, deprecated APIs, image aspect ratios.
SEO: meta tags, mobile viewport, robots.txt, crawlable links.
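For orientation, here is a small sketch of pulling those four scores out of the Lighthouse result object; the category IDs and the 0–1 internal scale are Lighthouse's standard JSON shape.

```ts
// Sketch: pull the four 0–100 category scores out of a Lighthouse result.
// Lighthouse stores each category score as 0–1 internally; null = not scored.
type Lhr = { categories: Record<string, { score: number | null }> };

function categoryScores(lhr: Lhr): Record<string, number | null> {
  const out: Record<string, number | null> = {};
  for (const id of ['performance', 'accessibility', 'best-practices', 'seo']) {
    const score = lhr.categories[id]?.score;
    out[id] = score == null ? null : Math.round(score * 100);
  }
  return out;
}
// e.g. { performance: 87, accessibility: 96, 'best-practices': 100, seo: 92 }
```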
Who uses this
Confirm a perf fix actually moved the needle before merging. Drop the share URL into the PR description so reviewers see the same numbers.
Run a baseline scan on a client's top landing pages, then quantify how much of the recommended-fix list actually got shipped.
Sanity-check the marketing site or the most-trafficked product page after a redesign — without spinning up the full Lighthouse CLI locally.
Reading your report
Performance score
The headline 0–100 number. 90+ is good, 50–89 needs work, under 50 is poor.
Core Web Vitals
LCP, CLS, INP, FCP, TTFB, TBT with field-style threshold colors.
Top recommended fixes
Click-to-expand, ranked by leverage. Each shows what's broken, why, and the recommended action.
Full audit evidence
Every Lighthouse audit we ran. Open it when you want to see why a metric scored what it scored.
Performance delta
From the second scan onward, the difference vs the previous run is shown next to the score.
Benchmark
Where this URL sits relative to other completed scans of the same site type.
What good looks like
Headline performance score thresholds, straight from Google's published guidance.
Under 50: Page is fundamentally slow
Heavy JS, unoptimized images, render-blocking CSS, slow server response. LCP usually well over 4 seconds. Real users on average mobile networks are bouncing.
50–89: Workable, with known wins available
Some Core Web Vitals fail their thresholds. Common causes: unused CSS or JavaScript, oversized images, third-party scripts on the critical path. Most fixes are mechanical.
90+: Fast across all Core Web Vitals
LCP under 2.5s, CLS under 0.1, INP under 200ms. The page itself is not what's holding back rankings. Holding 90+ requires perf budget enforcement in CI.
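Those pass/fail buckets follow Google's published Core Web Vitals thresholds; as a concrete reference, a small sketch of the classification:

```ts
// Sketch: bucket Core Web Vitals against Google's published thresholds.
// Units: LCP and INP in milliseconds, CLS unitless.
type Rating = 'good' | 'needs-improvement' | 'poor';

const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },
  inp: { good: 200, poor: 500 },
  cls: { good: 0.1, poor: 0.25 },
} as const;

function rate(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}
// rate('lcp', 2300) === 'good'; rate('cls', 0.18) === 'needs-improvement'
```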
We run the full Lighthouse audit suite on a headless Chromium instance with mobile emulation, simulated mid-tier device CPU throttling, and a 1.6 Mbps / 150 ms RTT slow 4G network — the canonical PageSpeed Insights "lab" defaults. We do not modify the scoring weights Google ships with Lighthouse.
Performance score weighting (current Lighthouse 11+): FCP 10%, Speed Index 10%, LCP 25%, TBT 30%, CLS 25%. Each raw metric value is first mapped onto a 0–100 scoring curve calibrated against real-site data, and the curve scores are blended with a weighted average. Because those curves are steep, a single very-bad metric drags the score down harder than its weight alone would suggest.
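To make the blend concrete, here is a sketch of the arithmetic, assuming the five metric scores have already been curve-mapped to 0–1 by Lighthouse:

```ts
// Sketch: the headline Performance number as a weighted average of the
// five metric scores, each already curve-mapped to 0–1 by Lighthouse.
const WEIGHTS = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

function performanceScore(scores: Record<keyof typeof WEIGHTS, number>): number {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * scores[metric as keyof typeof WEIGHTS];
  }
  return Math.round(total * 100);
}
// performanceScore({ fcp: 0.9, si: 0.9, lcp: 0.8, tbt: 0.3, cls: 1.0 }) → 72:
// a bad TBT (0.3) at 30% weight pulls the blend down hard.
```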
Lab metrics differ from field metrics. Field data (CrUX, real-user monitoring) measures what your real users see; lab data measures what a controlled simulated device sees. Use lab data to answer "is the page itself slow?" and field data to answer "are real users experiencing slowness?"
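If you want the field side programmatically, the CrUX API exposes the same real-user dataset. A hedged sketch: the endpoint and the PHONE form factor are the real API, while the CRUX_API_KEY variable name is our assumption.

```ts
// Sketch: query the field side (real-user CrUX data) for the same URL.
// Real endpoint and formFactor enum; CRUX_API_KEY is an assumed env var.
async function fieldP75(url: string): Promise<Record<string, number>> {
  const res = await fetch(
    'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=' +
      process.env.CRUX_API_KEY,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor: 'PHONE' }),
    },
  );
  if (!res.ok) throw new Error(`CrUX query failed: ${res.status}`);
  const { record } = await res.json();
  const p75: Record<string, number> = {};
  for (const [metric, data] of Object.entries<any>(record.metrics)) {
    p75[metric] = Number(data.percentiles.p75); // CLS arrives as a string
  }
  return p75;
}
```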
One audit per URL per minute (rate-limited). The browser instance is destroyed after each run so there's no contamination between scans.
FAQ
What exactly does one scan cover?
A single page. Lighthouse is page-based, so the public report is tied to the exact normalized URL you submit.
Can I scan the same URL again on the same day?
Yes. Same-day rescans are saved as separate runs so you can compare releases, fixes, and regressions.
What does the public report show?
The latest Lighthouse category scores, Core Web Vitals snapshot, resource sizes, benchmark comparison, and readable action items.
Why isn't there one combined score across categories?
Lighthouse does not publish one authoritative cross-category composite, so the report keeps Performance as the lead score and shows the other three categories separately.
How is this different from PageSpeed Insights?
PageSpeed Insights runs the same Lighthouse engine and shows both lab data (from a single run) and field data (from real-user CrUX measurements). This tool runs the lab side and persists the result at a public URL with shareable history. For field data, use PSI directly — we'd rather not duplicate Google's strongest signal.
Why did my score change between identical runs?
Lighthouse scores have inherent variance — typically ±5 points between consecutive runs on the same URL. Causes include CPU contention on the host, network jitter, third-party script timing, and run-to-run differences in browser cache. Don't react to a single 5-point swing; the trend over 3+ runs is what's meaningful.
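A simple way to act on that advice is to trend the median of the last few runs instead of any single score; a minimal sketch:

```ts
// Sketch: smooth run-to-run Lighthouse variance by trending the median
// of the last few scores instead of reacting to any single run.
function medianOfRecent(scores: number[], window = 3): number | null {
  if (scores.length < window) return null;  // not enough runs to trend yet
  const recent = [...scores.slice(-window)].sort((a, b) => a - b);
  const mid = Math.floor(recent.length / 2);
  return recent.length % 2
    ? recent[mid]
    : (recent[mid - 1] + recent[mid]) / 2;
}
// medianOfRecent([68, 74, 71, 72, 90]) → 72, damping the 90-point outlier.
```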
Do you test mobile or desktop?
Mobile, with the canonical Lighthouse mobile emulation profile (Moto G4-class device, slow-4G network). Mobile is the right default — Google switched to mobile-first indexing in 2019 and the bulk of search traffic is mobile.
Does the Lighthouse score itself affect Google rankings?
Not the score itself. Google uses Core Web Vitals (LCP, CLS, INP) measured from real users (CrUX field data) as a ranking input. The lab-style Lighthouse score is a strong correlate but isn't directly fed into ranking — it's a fast feedback loop, not the ranking signal.
Can I re-run a scan from the report page?
Yes — there's a Re-run scan button on the report page. Each run is appended to the history. The rate limit is 5 reruns per IP per day on free; signed-in accounts get 50.
What if third-party scripts are the problem?
The report's Top fixes accordion will name the offenders. The standard playbook: defer everything that doesn't need to run before LCP; load analytics with async; route ad and marketing pixels through a single tag manager; lazy-mount any chat or consent widgets after first interaction.
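Here is a sketch of that lazy-mount pattern; the function name and embed URL are illustrative, not a vendor API.

```ts
// Sketch of the lazy-mount pattern: inject a third-party widget script
// only after the first user interaction, keeping it off the critical path.
// `src` is whatever embed URL the widget vendor gives you.
function mountAfterFirstInteraction(src: string): void {
  let mounted = false;
  const events = ['pointerdown', 'keydown', 'scroll'] as const;
  const mount = () => {
    if (mounted) return;                 // guard against multiple triggers
    mounted = true;
    const script = document.createElement('script');
    script.src = src;
    script.async = true;
    document.head.appendChild(script);
    events.forEach((e) => removeEventListener(e, mount));
  };
  events.forEach((e) => addEventListener(e, mount, { passive: true }));
}
// mountAfterFirstInteraction('https://widget.example.com/embed.js');
```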
Can I have a public report deleted?
Yes. Email vadim@seojuice.io with the report URL and we'll delete the public report within one business day.