Page-based public reports

Lighthouse Score Checker

Run a real page-level Lighthouse scan, keep every completed run, and turn the raw audit into plain-English priorities your team can actually work from.

Real Google Lighthouse audit · No signup required · Public share URL · Score history per URL

What it means

What this Lighthouse score actually measures

Lighthouse is the audit engine behind Chrome DevTools and PageSpeed Insights: the same scoring Google surfaces whenever you test a URL in either tool.

When you run a page through Lighthouse, you're running the same audit set PageSpeed Insights runs, and its metrics map onto the performance thresholds Google weighs into search ranking. The categories (Performance, Accessibility, Best Practices, SEO) each combine dozens of individual audits into a single 0–100 score.

The Performance score is the most consequential. It's a weighted blend of Largest Contentful Paint (when the biggest visible element finishes painting), Cumulative Layout Shift (how much content jumps as the page loads), Total Blocking Time (how long the main thread is tied up and unable to respond to input), First Contentful Paint, and Speed Index. LCP and CLS are two of the official Core Web Vitals that ride along with every URL in Google's index; the third, Interaction to Next Paint (how responsive the page is when a user clicks or types), can only be measured from real users, so the lab score uses Total Blocking Time as its closest proxy.

We run the audit in lab mode on a simulated mobile device with throttled CPU and network. That matches the conditions Google's PageSpeed Insights uses for its lab data and gives you a reproducible baseline. The numbers won't exactly match the field-style metrics from real users (Chrome's CrUX dataset), but they tell you whether the page as built is fundamentally fast — which is the part you can actually fix.

How it works

The four categories Lighthouse scores

Four 0–100 scores per run, one per category; Performance leads the report.

  1. Performance
     LCP, CLS, TBT, FCP, and Speed Index: the lab metrics behind the page's Core Web Vitals.

  2. Accessibility
     ARIA labels, color contrast, keyboard navigation, semantic HTML.

  3. Best Practices
     HTTPS, console errors, deprecated APIs, image aspect ratios.

  4. SEO
     Meta tags, mobile viewport, robots.txt, crawlable links.

Who uses this

Three patterns where the lab-style Lighthouse run earns its keep

Frontend engineers

Confirm a perf fix actually moved the needle before merging. Drop the share URL into the PR description so reviewers see the same numbers.

SEO consultants

Run a baseline scan on a client's top landing pages, then quantify how much of the recommended-fix list actually got shipped.

Founders & PMs

Sanity-check the marketing site or the most-trafficked product page after a redesign — without spinning up the full Lighthouse CLI locally.

Reading your report

Six panels per scan: here's what each one is for

1. Performance score
   The headline 0–100 number. 90+ is good, 50–89 needs work, under 50 is poor.

2. Core Web Vitals
   LCP, CLS, INP, FCP, TTFB, TBT with field-style threshold colors.

3. Top recommended fixes
   Click-to-expand, ranked by leverage. Each shows what's broken, why, and the recommended action.

4. Full audit evidence
   Every Lighthouse audit we ran. Open it when you want to see why a metric scored what it scored.

5. Performance delta
   From the second scan onward, the difference vs the previous run is shown next to the score.

6. Benchmark
   Where this URL sits relative to other completed scans of the same site type.

What good looks like

The three Lighthouse-defined performance bands

Headline performance score thresholds, straight from Google's published guidance.

0–49 · Poor

Page is fundamentally slow

Heavy JS, unoptimized images, render-blocking CSS, slow server response. LCP usually well over 4 seconds. Real users on average mobile networks are bouncing.

50–89 · Needs work

Workable, with known wins available

Some Core Web Vitals fail their thresholds. Common causes: unused CSS or JavaScript, oversized images, third-party scripts on the critical path. Most fixes are mechanical.

90–100 · Good

Fast across all Core Web Vitals

LCP under 2.5s, CLS under 0.1, INP under 200ms. The page itself is not what's holding back rankings. Holding 90+ over time usually means enforcing a performance budget in CI, as sketched below.
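
One common way to hold that line is Lighthouse CI's assertion config. A minimal sketch of the config shape, written as a TS/JS module export; the URL, run count, and thresholds are illustrative, not values we ship:

```ts
// Minimal Lighthouse CI config: fail the build when the performance score
// drops below 0.9, and warn when CLS creeps past the 0.1 "good" threshold.
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com/'], // illustrative URL, not a real default
      numberOfRuns: 3,               // median of 3 runs smooths run-to-run variance
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'cumulative-layout-shift': ['warn', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```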

Methodology & scoring details

We run the full Lighthouse audit suite on a headless Chromium instance with mobile emulation, simulated mid-tier device CPU throttling, and a 1.6 Mbps / 150 ms RTT slow 4G network — the canonical PageSpeed Insights "lab" defaults. We do not modify the scoring weights Google ships with Lighthouse.
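
If you want to reproduce a comparable run locally, the open-source Lighthouse Node API exposes the same knobs. A minimal sketch using lighthouse and chrome-launcher; the URL and throttling numbers are illustrative, and this is not our internal runner:

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

// Lighthouse drives a real browser, so launch a headless Chromium first.
const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless=new'] });

// Mobile form factor with simulated slow-4G throttling, roughly matching the
// lab defaults described above (the exact numbers here are illustrative).
const result = await lighthouse('https://example.com/', {
  port: chrome.port,
  formFactor: 'mobile',
  throttlingMethod: 'simulate',
  throttling: { rttMs: 150, throughputKbps: 1638.4, cpuSlowdownMultiplier: 4 },
  onlyCategories: ['performance', 'accessibility', 'best-practices', 'seo'],
});

// Category scores come back as 0–1 fractions; multiply by 100 for the familiar number.
console.log('Performance:', Math.round((result?.lhr.categories.performance.score ?? 0) * 100));

await chrome.kill();
```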

Performance score weighting (current Lighthouse 11+): FCP 10%, Speed Index 10%, LCP 25%, TBT 30%, CLS 25%. Each raw metric is first mapped onto a 0–100 curve derived from real-site distributions, and the single-number score is the weighted average of those curve scores. Because that curve falls off steeply, one very bad metric drags the overall score down harder than a simple average of raw values would suggest.
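
To make the weighting concrete, here's a toy calculation. The per-metric 0–100 scores below are made-up inputs (in a real run they come from Lighthouse's curves, not from us); the weights are the ones listed above:

```ts
// Lighthouse 10/11/12 performance weights, as listed above.
const weights = { fcp: 0.10, si: 0.10, lcp: 0.25, tbt: 0.30, cls: 0.25 };

// Hypothetical curve-mapped metric scores (0–100) for a single run.
const metricScores = { fcp: 95, si: 90, lcp: 80, tbt: 35, cls: 100 };

// The category score is the weighted average of the curve-mapped metric scores.
const performance = Object.entries(weights).reduce(
  (sum, [metric, weight]) => sum + weight * metricScores[metric as keyof typeof metricScores],
  0,
);

console.log(Math.round(performance)); // 74: the weak TBT score alone pulls the page into the "needs work" band
```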

Lab metrics differ from field metrics. Field data (CrUX, real-user monitoring) measures what your real users see; lab data measures what a controlled, simulated device sees. Use lab data to answer "is the page itself slow?" and field data to answer "are real users experiencing slowness?"

One audit per URL per minute (rate-limited). The browser instance is destroyed after each run so there's no contamination between scans.

FAQ

Frequently asked questions

Does this check a full domain or a single page?

A single page. Lighthouse is page-based, so the public report is tied to the exact normalized URL you submit.

Can I rerun the same page more than once a day?

Yes. Same-day rescans are saved as separate runs so you can compare releases, fixes, and regressions.

What gets saved in the public report?

The latest Lighthouse category scores, Core Web Vitals snapshot, resource sizes, benchmark comparison, and readable action items.

Why is the main headline score Performance instead of a composite?

Lighthouse does not publish one authoritative cross-category composite, so the report keeps Performance as the lead score and shows the other three categories separately.

How is this different from PageSpeed Insights?

PageSpeed Insights runs the same Lighthouse engine and shows both lab data (from a single run) and field data (from real-user CrUX measurements). This tool runs the lab side and persists the result at a public URL with shareable history. For field data, use PSI directly — we'd rather not duplicate Google's strongest signal.

Why does my score change between runs?

Lighthouse scores have inherent variance — typically ±5 points between consecutive runs on the same URL. Causes include CPU contention on the host, network jitter, third-party script timing, and run-to-run differences in browser cache. Don't react to a single 5-point swing; trend over 3+ runs is what's meaningful.
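
If you pull the run history out of a report and compare it programmatically, judging the median of the last few runs is one simple way to see the trend instead of the noise. A minimal sketch, assuming the history is just an array of 0–100 scores:

```ts
// Median of the most recent performance scores (0–100).
// A single outlier run barely moves this, while a real regression shifts it.
function medianOfRecent(scores: number[], window = 5): number {
  const recent = [...scores].slice(-window).sort((a, b) => a - b);
  const mid = Math.floor(recent.length / 2);
  return recent.length % 2 ? recent[mid] : (recent[mid - 1] + recent[mid]) / 2;
}

console.log(medianOfRecent([78, 83, 79, 84, 62, 81])); // 81: the single 62 outlier doesn't drag the median
```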

Is this mobile or desktop?

Mobile, using Lighthouse's standard mobile emulation profile: a mid-range Android device with simulated slow-4G throttling. Mobile is the right default; Google has indexed mobile-first by default since 2019, and the bulk of search traffic is mobile.

Does Google use the Lighthouse score for ranking?

Not the score itself. Google uses Core Web Vitals (LCP, CLS, INP) measured from real users (CrUX field data) as a ranking input. The lab-style Lighthouse score is a strong correlate but isn't directly fed into ranking — it's a fast feedback loop, not the ranking signal.

Can I rerun a scan on the same URL?

Yes — there's a Re-run scan button on the report page. Each run is appended to the history. Rate limit is 5 reruns per IP per day on free; signed-in accounts get 50.

My third-party scripts are tanking my score. What now?

The report's Top fixes accordion will name the offenders. The standard playbook: defer everything that doesn't need to run before LCP; load analytics with async; route ad/marketing pixels through a single tag manager; lazy-mount any chat / consent widgets after first interaction.
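
As an illustration of that last point, here's a minimal sketch of lazy-mounting a widget after the first interaction (the widget URL is made up; swap in your vendor's loader):

```ts
// Load a third-party chat widget only after the user first interacts with the page,
// keeping its script off the critical path and out of the initial LCP/TBT window.
function mountChatWidgetOnce(): void {
  const load = () => {
    const script = document.createElement('script');
    script.src = 'https://widgets.example.com/chat.js'; // hypothetical third-party loader
    script.async = true;
    document.head.appendChild(script);
    // Only ever load once, no matter which event fires first.
    ['pointerdown', 'keydown', 'scroll'].forEach((type) =>
      window.removeEventListener(type, load),
    );
  };
  ['pointerdown', 'keydown', 'scroll'].forEach((type) =>
    window.addEventListener(type, load, { once: true, passive: true }),
  );
}

mountChatWidgetOnce();
```

Analytics tags follow the same principle: async or defer on the script element, and a single tag-manager loader rather than a pile of synchronous snippets.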

Can I get my report removed?

Yes. Email vadim@seojuice.io with the report URL and we'll delete the public report within one business day.

Glossary
LCP — Largest Contentful Paint
Time when the largest visible element finishes painting. Good ≤ 2.5s.
CLS — Cumulative Layout Shift
Sum of unexpected layout shifts from page load through user interaction. Good ≤ 0.1.
INP — Interaction to Next Paint
Latency of the slowest (or near-slowest) interaction on a page, assessed at the 75th percentile of real-user visits. Good ≤ 200ms. Replaced FID in 2024.
FCP — First Contentful Paint
Time when the browser renders the first DOM content. Good ≤ 1.8s.
TBT — Total Blocking Time
Total time the main thread was blocked between FCP and Time-to-Interactive. Good ≤ 200ms.
TTFB — Time to First Byte
Time from request start to the first byte from your server / CDN. Good ≤ 800ms.
Speed Index
Average time at which visible parts of the page are displayed. Good ≤ 3.4s.
Lab vs Field
Lab data is measured in a controlled simulated environment; field data is measured from real users (CrUX, RUM).