
SPA SEO Is a Delivery Problem, Not a Rendering Problem

Vadim Kravcenko
Oct 25, 2024 · 13 min read

TL;DR: SPA SEO is not a rendering problem anymore. It is a delivery problem: the URLs, status codes, metadata, and primary content must exist before JavaScript gets a vote, because Google may render late and AI crawlers often do not render at all.

SPA SEO is not about whether Google “supports JavaScript” anymore

Most SPA SEO advice still starts with the wrong question: can Google render JavaScript?

Yes. Google can render JavaScript. Martin Splitt has been saying this for years, and people still debug SPAs by staring at view-source: like it is the page Google indexed.

“A lot of people are still looking at view source. That is not what we use for indexing. We use the rendered HTML.”

Martin Splitt, Developer Advocate at Google

That quote matters because it kills one bad habit. If you only inspect the initial source of a React, Vue, Angular, SvelteKit, Nuxt, Remix, or Next.js app, you may miss what Google eventually sees. Rendered DOM matters.

But that does not mean client-side rendering is safe for every public route. Rendering costs time. Rendering can fail. Rendering may happen later than crawling. Other bots may never render at all. The real question for SPA SEO is whether the crawler receives a meaningful document soon enough.

View source is the wrong debugging habit

View source shows the first HTML response. For a classic CSR app, that response may be an empty shell: one root node, one script bundle, and a prayer. Google might render the page later, execute the route, call the APIs, and discover the actual content. Might.

That “might” is where rankings get weird. The page can look perfect in your browser and still be fragile for search — the browser is patient, but crawlers are systems with queues, budgets, timeouts, and failure modes.

Rendered DOM matters, but first-byte HTML still wins

seojuice.com is split on purpose — public pages ship static-first HTML, the logged-in dashboard behaves like an app. Two rendering strategies under one domain, because those routes have different jobs.

The blog, tools, and landing pages need to be found, crawled, understood, shared, and cited. The dashboard does not need to rank for “page health scoring UI” and never will. JavaScript can improve the public page after load, but the first response should already look like a page.

AI crawlers changed the risk profile

For years, I treated this as a Google-only problem. I was wrong. Then AI crawlers made the old shortcut worse.

“The results consistently show that none of the major AI crawlers currently render JavaScript.”

Vercel Engineering

If GPTBot, ClaudeBot, PerplexityBot, or another crawler sees only an app shell, your content may as well be missing for that surface. Google rendering support helps Google (and only Google) — it does not save every crawler, preview bot, monitoring tool, social parser, or AI ingestion system.

[Figure: Timeline comparing how Googlebot, non-JavaScript crawlers, and AI crawlers see SPA content. Source: SEOJuice SPA-SEO reference, based on Google rendering documentation and Vercel’s 2024 measurement of AI-crawler JavaScript execution.]

Decide which SPA routes are pages and which routes are app states

This is the step most teams skip. They ask whether the whole SPA needs SSR. That framing wastes engineering time.

A real SPA almost always mixes two things: crawlable pages and private app states. Pricing pages, blog posts, documentation, templates, integrations, comparison pages, and product landing pages are pages. Dashboards, onboarding steps, modals, filters, account screens, and saved reports are app states.

The fix starts with classification. Do not ask engineers to “fix SEO for the app.” Ask them to mark which routes deserve search traffic, then choose rendering, indexing, canonicals, and status codes per route.

Route type             Should rank?   Rendering choice      Indexing rule
Blog post              Yes            SSG or SSR            Index, canonical self
Product landing page   Yes            SSG or SSR            Index, canonical self
Search results page    Usually no     CSR or SSR            Often noindex
Dashboard route        No             CSR is fine           Block behind auth or noindex
Faceted filter URL     Sometimes      SSR only if curated   Canonical or noindex
[Figure: Decision tree for choosing SEO treatment and rendering model for SPA routes. Source: SEOJuice SPA-SEO route-classification framework.]

This is how I would explain it inside a technical SEO audit. A SPA with 40 public URLs and 4,000 dashboard states does not have 4,040 SEO pages. It has 40 pages and a product interface.

That distinction changes the roadmap. Public routes need stable URLs, self-canonicals, server-delivered metadata, crawlable links, useful first-byte HTML, and correct status codes. Dashboard routes need fast interaction, auth, state management, and privacy. Forcing both groups into one rendering model usually makes both worse.
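
One way to keep that roadmap honest is to write the classification down where the build can see it. Here is a minimal TypeScript sketch; the route patterns and field names are illustrative, not a real SEOJuice config:

```ts
// Hypothetical per-route SEO manifest. The point is that rendering and
// indexing get decided per route, not once for the whole app.
type Rendering = "ssg" | "ssr" | "isr" | "csr";

interface RoutePolicy {
  shouldRank: boolean;
  rendering: Rendering;
  index: boolean;         // false => emit <meta name="robots" content="noindex">
  canonicalSelf: boolean; // true  => emit a self-referencing canonical tag
}

const routePolicies: Record<string, RoutePolicy> = {
  "/blog/:slug":       { shouldRank: true,  rendering: "ssg", index: true,  canonicalSelf: true },
  "/pricing":          { shouldRank: true,  rendering: "ssg", index: true,  canonicalSelf: true },
  "/search":           { shouldRank: false, rendering: "csr", index: false, canonicalSelf: false },
  "/dashboard/:rest*": { shouldRank: false, rendering: "csr", index: false, canonicalSelf: false },
};
```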

At mindnow, this was the most common SPA mistake I saw with client projects. The team wanted one elegant frontend architecture. Search wanted boring documents. The compromise was not to abandon the app; it was to stop pretending every route had the same purpose.

Choose the right rendering model before you write SEO tickets

Rendering strategy is an architecture choice, not an SEO plugin setting — if you pick the wrong model, every later ticket becomes a workaround.

[Figure: Comparison chart of SPA rendering models for SEO. Source: SEOJuice rendering-strategy reference, drawing on Google’s rendering documentation and Vercel’s Next.js rendering guide.]

CSR: fine for dashboards, risky for landing pages

Client-side rendering can be perfectly fine for authenticated software screens. If a user must log in, crawlers should not index the route anyway. CSR becomes risky when the same app shell serves pricing pages, docs, articles, and product pages where the content appears only after JavaScript runs and APIs respond.

SSG: boring, fast, and usually the right answer

Static site generation means pages are built into HTML ahead of time (usually at deploy or build). For blogs, docs, changelogs, glossary pages, templates, and most marketing content, SSG is hard to beat. It is fast, cacheable, cheap, and crawler-friendly.
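
The same idea exists in Nuxt, SvelteKit, and Astro; in Next.js App Router terms, a minimal sketch looks like this (getAllPosts and getPost are hypothetical data helpers, and params is assumed to be the pre-Next-15 plain object):

```tsx
// app/blog/[slug]/page.tsx: every post is built into HTML ahead of time.
import { getAllPosts, getPost } from "@/lib/posts"; // hypothetical helpers

// Enumerate the slugs to pre-render at build time.
export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map((post) => ({ slug: post.slug }));
}

export default async function BlogPost({ params }: { params: { slug: string } }) {
  const post = await getPost(params.slug);
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
    </article>
  );
}
```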

SSR: useful when public content changes often

Server-side rendering is a better fit when public content varies by request, geography, inventory, permissions, or freshness requirements. Lee Robinson described the basic Next.js model plainly:

“Next.js pre-renders the page into HTML on the server on every request.”

Lee Robinson, VP of Developer Experience at Vercel

SSR gives crawlers HTML without waiting for the client bundle, while still letting the page reflect fresh data.
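
A minimal Pages Router sketch of that model, with fetchProduct standing in as a hypothetical data helper:

```tsx
// pages/products/[id].tsx: HTML is rendered on the server per request.
import type { GetServerSideProps } from "next";
import { fetchProduct, type Product } from "@/lib/products"; // hypothetical

export const getServerSideProps: GetServerSideProps = async (ctx) => {
  const product = await fetchProduct(String(ctx.params?.id));
  // notFound: true makes Next.js return a real 404 instead of a soft one.
  if (!product) return { notFound: true };
  return { props: { product } };
};

export default function ProductPage({ product }: { product: Product }) {
  // Crawlers get this markup in the first response; hydration only
  // adds interactivity afterwards.
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.price.toFixed(2)} USD</p>
    </main>
  );
}
```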

ISR: the practical middle for large sites

Incremental static regeneration, or ISR (static pages regenerated after deployment, without a full rebuild), is often the best middle ground for large content libraries. You get static HTML for most requests, then regeneration when content changes. For programmatic SEO, docs, and large template libraries, ISR can prevent rebuild pain without falling back to full CSR.
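
In the Next.js App Router, the whole strategy can be one exported constant. A minimal sketch against a hypothetical content API:

```tsx
// app/templates/[slug]/page.tsx: served statically, regenerated in the
// background at most once an hour after a request comes in.
export const revalidate = 3600; // seconds

export default async function TemplatePage({ params }: { params: { slug: string } }) {
  // Hypothetical content API; the response is cached between regenerations.
  const res = await fetch(`https://api.example.com/templates/${params.slug}`);
  const template = await res.json();
  return (
    <main>
      <h1>{template.title}</h1>
      <p>{template.summary}</p>
    </main>
  );
}
```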

Dynamic rendering: the workaround that should expire

Dynamic rendering serves one version to crawlers and another to users. It can rescue a legacy SPA when a migration is not ready — but I would not design a new search strategy around it.

“So I would see this as something kind of as a temporary workaround – where temporary might mean a couple of years – but it's more of a time-limited workaround.”

John Mueller, Search Advocate at Google

That is the right mental model. Use dynamic rendering when you need a bridge, then replace the bridge with server-rendered or static-first public routes.

Fix the SPA crawl traps that still break indexing

The hard SPA SEO failures are usually boring. They are not mysterious ranking penalties. They are delivery bugs.

The first trap is the universal shell. Every URL returns the same 200 response, the same empty root node, and the same bundle. The router decides later whether /pricing, /docs/api, or /totally-fake-url exists. That makes crawlers work too hard, and it creates the second trap: soft 404s.

“Instead of responding with 404, it just responds with 200 … always showing a page based on the JavaScript execution.”

Martin Splitt, Developer Advocate at Google

Invalid routes should return real 404 or 410 status codes. A cute client-side “page not found” component served with 200 is still a bad signal (this is the “soft 404” trap that wrecks indexability budgets).

[Figure: Diagram showing the difference between SPA soft 404 responses and real 404 status codes. Source: SEOJuice SPA-SEO crawl-trap reference, based on Google’s soft-404 documentation and Martin Splitt’s public guidance.]
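
In the Next.js App Router, the fix is one guard clause. A minimal sketch; getDoc is a hypothetical data helper:

```tsx
// app/docs/[slug]/page.tsx: unknown slugs end in a real 404.
import { notFound } from "next/navigation";
import { getDoc } from "@/lib/docs"; // hypothetical helper

export default async function DocPage({ params }: { params: { slug: string } }) {
  const doc = await getDoc(params.slug);
  // notFound() renders the not-found boundary AND sets an actual 404
  // status code, instead of a 200 wrapped around a "not found" component.
  if (!doc) notFound();
  return (
    <article>
      <h1>{doc.title}</h1>
    </article>
  );
}
```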

The third trap is navigation that crawlers cannot follow. Buttons, click handlers, custom components, and router events are fine for interaction, but internal discovery still needs crawlable anchors with real href values. If your most important pages are reachable only after a user clicks a JavaScript handler, your crawlability is weaker than it looks.
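
The difference is easy to see in component code. A minimal React sketch (in the Next.js App Router, the click-handler version would also need a "use client" directive):

```tsx
// next/link renders a real <a href> in the server HTML; a click handler
// renders nothing a crawler can follow.
import Link from "next/link";

// Crawlable: appears as <a href="/pricing"> in the first response.
export const CrawlableNav = () => <Link href="/pricing">Pricing</Link>;

// Not crawlable: no href, the navigation exists only inside JavaScript.
export const ClickOnlyNav = () => (
  <button onClick={() => window.location.assign("/pricing")}>Pricing</button>
);
```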

Metadata is another common failure. Many SPAs update titles, descriptions, canonicals, robots tags, Open Graph tags, and schema after route changes. That may work visually in a browser tab. It can still fail for crawlers, social parsers, and AI bots. Route-specific metadata should be present in the returned HTML for any indexable URL.

Canonicals deserve their own warning. I have seen hydrated apps overwrite a correct canonical with a staging domain, a root URL, or the previous route. That kind of bug is quiet. Nobody notices until duplicate URLs cluster badly or the wrong page starts ranking.
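
Emitting metadata from the server closes off that class of bug for every crawler that never runs the client bundle. A minimal Next.js generateMetadata sketch; getPost and the domain are illustrative:

```tsx
// app/blog/[slug]/page.tsx: route-specific metadata in the returned HTML.
import type { Metadata } from "next";
import { getPost } from "@/lib/posts"; // hypothetical helper

export async function generateMetadata(
  { params }: { params: { slug: string } }
): Promise<Metadata> {
  const post = await getPost(params.slug);
  return {
    title: post.title,
    description: post.summary,
    // Self-referencing canonical shipped server-side, visible to crawlers
    // and social parsers without any JavaScript execution.
    alternates: { canonical: `https://example.com/blog/${params.slug}` },
    openGraph: { title: post.title, description: post.summary },
  };
}
```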

Infinite scroll is another trap when it hides content behind client state. If page two, page three, and older items have no crawlable URLs, search engines may never discover them. Use paginated fallback URLs for important archives and category pages.
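
A minimal sketch of that fallback in Next.js; getPostsPage is a hypothetical helper returning one slice of the archive:

```tsx
// app/blog/page/[page]/page.tsx: every archive page is a real URL.
import Link from "next/link";
import { getPostsPage } from "@/lib/posts"; // hypothetical helper

export default async function ArchivePage({ params }: { params: { page: string } }) {
  const pageNum = Number(params.page);
  const { posts, hasNext } = await getPostsPage(pageNum);
  return (
    <main>
      <ul>
        {posts.map((p) => (
          <li key={p.slug}>
            <Link href={`/blog/${p.slug}`}>{p.title}</Link>
          </li>
        ))}
      </ul>
      {/* Plain anchors let crawlers walk the archive without scroll events. */}
      {pageNum > 1 && <Link href={`/blog/page/${pageNum - 1}`}>Newer</Link>}
      {hasNext && <Link href={`/blog/page/${pageNum + 1}`}>Older</Link>}
    </main>
  );
}
```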

API-loaded main content is fragile too. If the H1, body copy, product details, reviews, or internal links require two API calls after hydration, you have more failure points. Bot traffic may hit rate limits. APIs may block unfamiliar user agents. Timeouts may leave the rendered DOM thin.

Hash routing should stay out of indexable public pages. A URL like /docs#pricing can work for fragments, but hash-based app routing for real pages makes clean discovery, canonicalization, and analytics harder than they need to be.

Finally, watch auth and bundle weight together. Public content accidentally wrapped behind login checks can disappear from crawlers. Heavy bundles can delay rendering and waste crawl budget. Both problems look like “JavaScript SEO,” but the practical fix is cleaner route boundaries and less client work for public pages.

Build every indexable route like a document first and an app second

The best SPA SEO rule I know is simple: if the route deserves search traffic, the first response should look like a page.

That means each public URL should return useful HTML with the core signals already in place (a full sketch follows the list):

  • Correct <title>.
  • Meta description.
  • Self-referencing canonical.
  • One clear H1.
  • Main content.
  • Crawlable internal links.
  • Structured data where relevant.
  • Correct status code.
  • Open Graph and Twitter tags if sharing matters.

[Figure: HTML-first SPA page structure with JavaScript hydration added after core SEO elements. Source: SEOJuice SPA-SEO architecture playbook for HTML-first public routes.]
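
Put together, a document-first route can look like the sketch below: the server component ships every signal from the list above, and the only client-side piece is an enhancement. PricingCalculator is a hypothetical "use client" component, and the schema values are placeholders:

```tsx
// app/pricing/page.tsx: the document is complete before hydration.
import type { Metadata } from "next";
import PricingCalculator from "./PricingCalculator"; // hypothetical client component

export const metadata: Metadata = {
  title: "Pricing",
  description: "Plans and pricing.",
  alternates: { canonical: "https://example.com/pricing" },
};

export default function PricingPage() {
  return (
    <main>
      <h1>Pricing</h1>
      <p>Plans start at $29/month and include every public tool.</p>
      <a href="/docs">Read the docs</a>
      {/* Structured data delivered in the first response. */}
      <script
        type="application/ld+json"
        dangerouslySetInnerHTML={{
          __html: JSON.stringify({
            "@context": "https://schema.org",
            "@type": "Product",
            name: "Example Plan",
            offers: { "@type": "Offer", price: "29", priceCurrency: "USD" },
          }),
        }}
      />
      {/* The interactive part hydrates later; crawlers never need it. */}
      <PricingCalculator />
    </main>
  );
}
```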

Then JavaScript can hydrate components, personalize elements, load calculators, track events, filter tables, and enrich the experience. It should not be required for the crawler to understand what the page is about.

This is also where site architecture and SPA SEO meet. A public route with no crawlable links pointing to it is still weak, even if it is server-rendered. A beautifully rendered document buried five clicks deep behind client-only navigation will not perform like a page that lives in a clear internal linking system.

The document-first rule keeps teams honest. Pricing is a document. A blog post is a document. A docs page is a document. A saved dashboard filter, open modal, or onboarding step is app state. Treating app state like a search page creates index bloat. Treating public pages like app state creates invisibility.

At seojuice.com, this split is intentional. Public routes need to be boring enough for crawlers. The product can still be interactive after login. Those two ideas can live together.

Test SPA SEO with rendered HTML, not hope

If you only test the browser experience, you are testing the happiest path. SPA SEO needs uglier tests.

  1. Fetch the URL with JavaScript disabled and check whether the content still makes sense.
  2. Inspect the URL in Google Search Console and review the rendered HTML.
  3. Compare initial HTML against the rendered DOM in Chrome DevTools.
  4. Test status codes directly with curl -I https://example.com/missing-route.
  5. Crawl the site with one JS-capable crawler and one non-JS crawler.
  6. Confirm titles, canonicals, robots tags, schema, and internal links exist before hydration (see the sketch after this list).
  7. Check server logs for bot hits, blocked APIs, timeouts, and unexpected redirects.
  8. Validate structured data with Google’s Rich Results Test after rendering.
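
Steps 1, 4, and 6 are easy to script. A minimal Node sketch, assuming Node 18+ (built-in fetch) run as an ES module; the URL and checks are illustrative:

```ts
// check-route.ts: fetch raw HTML the way a non-rendering crawler would
// (no JavaScript execution) and report the signals that must already exist.
const url = process.argv[2] ?? "https://example.com/pricing";

const res = await fetch(url, { redirect: "manual" });
console.log("status:", res.status); // a fake URL here should print 404, not 200

const html = await res.text();
console.table({
  title: /<title>[^<]+<\/title>/i.test(html),
  canonical: /<link[^>]+rel="canonical"/i.test(html),
  h1: /<h1[\s>]/i.test(html),
  noindex: /<meta[^>]+noindex/i.test(html), // should stay false on indexable routes
});
```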

The uncomfortable test is the H1 test. If Googlebot needs five steps and two API calls to find the H1, the page is fragile even if it eventually gets indexed.

Screaming Frog, Sitebulb, Google Search Console, Chrome DevTools, Rich Results Test, and server logs all help. The specific tool matters less than the comparison. You want to know what exists at first response, what appears after rendering, and what Google actually indexed.

This is also where many JavaScript SEO audits stop too early. They prove Google can render one page. Good. Now test invalid routes, paginated routes, canonical changes, metadata changes, API failures, slow responses, and non-Google crawlers.

SPA SEO best practices checklist

Use this checklist at route level. A sitewide “pass” hides too many SPA failures.

  • Rendering: Public pages use SSG, SSR, or ISR. Private app screens can use CSR.
  • Routing: Every indexable URL has a unique route, unique content, and a self-canonical.
  • Status codes: Missing pages return 404 or 410, not 200.
  • Links: Internal navigation uses crawlable anchors with real href attributes.
  • Metadata: Titles, descriptions, canonicals, robots tags, Open Graph tags, and schema are route-specific.
  • Content: Main copy, H1s, product information, and key links exist without waiting on client-only data.
  • Performance: Bundle size, hydration cost, third-party scripts, and route-level code splitting are controlled.
  • Index control: Dashboards, private routes, low-value filters, and thin search pages are blocked or noindexed.
  • Testing: Initial HTML, rendered DOM, and indexed content are compared on important templates.
  • AI visibility: Key content appears in HTML because many AI crawlers do not render JavaScript.

“If it is not crawled, it can't be surfaced in search. No matter the surface.”

Jamie Indigo, Technical SEO Consultant at Not a Robot

That sentence is the whole checklist compressed into one line. Search, AI answers, link previews, and discovery systems all depend on access first. Rankings come later.

The simplest SPA SEO architecture I would ship today

If I were starting a modern SPA with search traffic in mind, I would not make the entire product server-rendered. I would split it.

Site area                 Recommended approach
Marketing site            Static generation
Blog and docs             Static generation or ISR
Product pages             SSR or ISR
Programmatic SEO pages    Static generation with strong pruning
Dashboard                 CSR behind auth
Search and filter pages   Noindex unless manually curated
Invalid routes            Real 404 or 410
Shared layout             Server-rendered metadata and navigation

This is how I would split it on seojuice.com. Marketing pages and articles should be HTML-first. Product surfaces that need freshness can use SSR or ISR. The dashboard can stay app-like because ranking it would be pointless.

Programmatic SEO pages need extra restraint. Static generation makes it easy to create thousands of pages, including thousands nobody should index. Generate only pages with real search demand, useful content, and internal links. Prune the rest before Google has to make the decision for you.

The winning SPA is not the one that proves crawlers can run JavaScript. The winning SPA is the one that does not make crawlers do unnecessary work.

FAQ

Can a single-page application rank on Google?

Yes. A SPA can rank if indexable routes return crawlable content, correct metadata, internal links, and valid status codes. Google can render JavaScript, but relying on rendering for everything makes the site more fragile.

Is server-side rendering required for SPA SEO?

No, not for every route. SSR is useful for public pages with changing content. SSG or ISR is often better for stable content. CSR is fine for private dashboards, account screens, and app states that should not be indexed.

Are hash routes bad for SEO?

Hash routes are a poor choice for indexable pages. They can work for on-page fragments, but public content should have clean URLs, route-specific metadata, and server-level status codes.

Should SPA search results pages be indexed?

Usually no. Internal search pages and faceted filters often create thin or duplicate URLs. Curated filter pages can be indexed when they have unique demand, stable content, and a clear canonical strategy.

How do I know if my SPA has a soft 404 problem?

Request a fake URL and check the status code. If /this-page-should-not-exist returns 200 with a client-side not-found message, you have a soft 404 risk.

Need help turning your SPA into crawlable pages?

SEOJuice helps teams strengthen crawlable internal links across the public pages that actually deserve search traffic. If your SPA has orphaned routes, buried templates, or pages Google never seems to reach, internal linking automation can make the document layer easier for crawlers to follow.

Discussion (3 comments)

Business Builder

7 months, 1 week

Love this deep dive on SPAs and client-side rendering — the React/Vue/Angular SEO pitfalls were explained really well! 🙌 I migrated a React SPA to hybrid SSR + prerendering and saw a crawl/index uptick in under a week. Please do a tutorial on hydration and sitemap strategies next 🙏

KeywordMaster

7 months, 1 week

Nice — glad it helped and awesome you saw a quick uptick! I did the same move from CRA SPA → hybrid SSR + prerendering last year and saw similar gains, fwiw.

A few practical tips from that migration that might help for the tutorial you asked for:
- Hydration pitfalls: mismatches usually come from non-deterministic things in render (Date.now(), Math.random(), generated IDs, or useEffect producing visible DOM changes). Fix by moving client-only stuff into useEffect or guarding it (if (typeof window === 'undefined') ...), or use deterministic id libs.
- Streaming/partial hydration: if you’re using React 18/Next, streaming SSR + selective client hydration (islands-ish or client boundary components) reduces TTI without sacrificing SEO — imo worth covering.
- Debugging: curl or fetch the page server-side and compare to what Chrome renders after hydration; React devtools console will show hydration mismatch warnings. Also check Search Console’s “Inspect URL” to see what Googlebot sees.
- Sitemap strategy: generate sitemaps at build for static routes, dynamically for API-driven content (rebuild or incremental), split into sitemap index if >50k URLs, include lastmod, and reference it from robots.txt. For multi-lingual sites include hreflang entries or separate sitemaps per locale.
- Tools I used: Next.js (SSR + static props), next-sitemap for generation, prerender.io for tricky bots, and Search Console + server logs to confirm indexing.

If you want, I can write that hydration + sitemap tutorial — what would you prefer: code-heavy step-by-step for Next.js, or framework-agnostic notes + examples? Any stack specifics (Next/Remix/Vite/Netlify) you’re on?

GrowthHacker23

7 months

ngl SPAs can rank.