
SEO Agency Selection Checklist: Filter Out the Content Factories

Vadim Kravcenko
Oct 27, 2024 · 12 min read

TL;DR: Most SEO agency selection checklists help weak agencies pass. Before you listen to the pitch, inspect the work product, reporting model, contract, ownership terms, and the first 90 days, because a “rankings are up” agency can still waste your budget and train Google to distrust your site.

The wrong question is “Which SEO agency is best?” The useful question is “Which agency can prove it will not turn my domain into its content quota?”

I have seen clients at mindnow buy SEO because the deck looked senior and the delivery came from a junior content queue. On vadimkravcenko.com, I have also paid for tools and services that made the dashboard look more serious than the work. With seojuice.io, the bar is simpler: show me what changes, who owns it, how it gets measured, and what happens when we stop working together.

SEO agency selection is procurement for risk — not a polite vibe check with references at the end. A good agency survives uncomfortable due diligence. A bad one survives only if you keep the questions soft.

What the current advice gets right, and where it fails

The top search results on choosing an SEO agency usually land in three places. Boulder SEO Marketing shows what a strategic consultation can look like. Reddit captures the buyer’s fear that everyone sounds legit. SEO Sherpa covers the classic checklist: experience, case studies, bad promises, and common pitfalls.

Those are useful. They still miss the buyer-side system.

Result type | What it gets right | What it misses
Agency consultation guide | Shows proof, method, AI SEO competence, and realistic expectations. | The buyer still lacks a procurement system for comparing three vendors and checking contract risk.
Reddit thread | Captures real suspicion: referrals feel safer, and everyone wants proof. | Anecdotes rarely verify process, ownership, or delivery quality.
Classic agency checklist | Covers experience, case studies, questions, and warning signs. | It mostly solves the 2022 problem. AI Overview click loss, scaled AI content, buyer-impact reporting, and exit rights need a 2026 lens.

Start before you book agency calls

Most companies start with discovery calls. The agency then shapes the problem for them. That sounds harmless until three agencies describe three different problems, each one matching the service package they sell best.

Write a one-page internal brief first (a decision memo, not an RFP). It should force your team to state the business outcome, target markets, current site risk, internal owner, and non-negotiables before anyone sees an agency deck.

Define the business outcome first

“More organic traffic” is too weak to guide SEO agency selection. A SaaS company may need demo requests from comparison pages. A marketplace may need more indexed inventory pages. A consulting firm may need trust, named expertise, and branded demand. Same channel. Different job.

When I work on seojuice.io, I do not start with keyword volume. I start with the pages that can change revenue or trust. Same at mindnow. If the page cannot be tied to a buyer action, the keyword is probably a distraction.

Your brief should name the outcome in plain language:

  • Increase demo requests from high-intent non-branded pages.
  • Improve indexation and crawl paths for revenue-generating inventory.
  • Build trust around founder, product, pricing, and comparison pages.
  • Reduce dependency on paid search for a small set of expensive terms.

Write down what the agency is not allowed to break

This is where buyers get too optimistic. SEO work can damage a site. A migration can drop indexed pages. A content program can flood the domain with weak articles. A link campaign can create cleanup work you never asked for.

List the protected assets before outreach: top revenue pages, branded queries, product templates, international folders, tracking setup, and any pages already used by sales. If an agency suggests changing one, they need a reason and a rollback plan.

Decide who owns SEO internally

Every agency needs a client-side owner. Without one, SEO becomes a monthly call where everyone agrees the CMS is slow, the developer is busy, and the same recommendations will be revisited next month.

Name the owner. Give that person access to product, engineering, sales, and analytics. SEO crosses departments, so a powerless owner creates fake accountability. The agency gets blamed for inaction. The client gets billed for reminders.

Use the first 15 minutes to filter out content factories

[Figure: decision tree for filtering SEO agencies during the first discovery call.]
How an agency opens the first call usually predicts the rest of the engagement.

The first call should test how the agency thinks. Listen less to what it says it can produce and more to what it asks before proposing output.

The stakes changed after Google’s March 2024 core update. Google said the rollout led to 45% less low-quality, unoriginal content in search results. That matters because many agencies still sell scaled rewrites, thin location pages, generic AI articles, and keyword-first blog calendars. In other words, they sell the work Google has been pruning.

If the first deliverable is “30 posts per month,” you may be interviewing a publishing quota — not a strategy partner. A strong agency asks about sales calls, objections, close rates, positioning, and the pages that already influence revenue.

“It’s very possible to hit your KPIs in search and not win with the people that did the search.” — Wil Reynolds, Seer Interactive

That quote is the cleanest description of KPI theater I have seen. A content factory can lift impressions and still fail the buyer. A strategic partner cares whether the searcher believed you, remembered you, and took the next step.

Ask who will actually do the work

Do not accept “our team” as an answer. Ask for names or roles: strategist, technical SEO, writer, editor, developer, analyst. Ask who joins calls and who touches the site. Many agency decks are sold by senior people and delivered by a queue.

That is not automatically fraud, but you should know the model. Junior delivery can work when senior review is real. It fails when “strategy” means a template brief passed down to a low-cost writer.

Ask to see one anonymized audit, not a case study

Case studies are polished. Audits show how an agency thinks when the work is messy. Ask for one anonymized audit or strategy memo with client data removed. You are looking for specificity: URLs, tradeoffs, sequence, evidence, and risk notes.

If the audit could apply to any website, it tells you almost nothing. If it names the actual failure points and explains why they matter, keep talking.

Ask whether the strategist would put their name on the content

This question sounds dramatic. Use it anyway. If the agency is proposing AI-assisted content, scaled editorial production, or outsourced writing, ask whether the strategist would publish that page under their own name. Watch the pause.

Agency answer | What it usually means
"We publish X articles per month." | They sell output before diagnosis.
"We need to audit the site first." | They may be thinking from the domain outward.
"We guarantee top rankings." | End the call.
"We need sales or CRM context." | They understand SEO is tied to buyer behavior.

Demand proof that maps to buyers, not dashboards

[Figure: SEO agency reporting flow from tasks and rankings to buyer actions.]
Trace each agency activity to a metric movement and then to a real buyer action.

The old proof stack was rankings, traffic, and keyword screenshots. Those still matter. They just no longer carry the whole argument.

Ahrefs found that top-ranking organic results can lose up to 34.5% of clicks when an AI Overview appears on the SERP (for queries where that feature is shown). Treat that as a precision warning, not a universal law. The implication is still blunt: an agency selling ranking position as the main prize is selling a weaker metric than it used to.

Rankings are evidence, not the finish line

Rankings show whether Google associates a page with a query. They do not prove the page changed buyer belief, trust, or action. A page can rank and still answer the wrong question. A page can attract traffic from people who will never buy.

Ask the agency to separate inputs from outcomes. Inputs include technical fixes, content updates, links, schema, internal linking, and crawl improvements. Outcomes include qualified traffic, demo requests, assisted pipeline, branded search lift, sales enablement, and better performance on pages that matter.

Good reporting shows what changed and why

A useful SEO report should answer five questions:

  • Which pages changed?
  • Which queries changed?
  • Which buyer actions changed?
  • What did we stop doing because the data said it was weak?
  • Which recommendation was rejected, and why?

If the report only shows a green traffic chart, ask for the page-level view. If it only shows rankings, ask for assisted conversions or CRM context. If it cannot connect SEO work to any buyer action, you are funding motion.

The agency should separate leading and lagging indicators

Leading indicators move first: crawl health, indexation, content quality, internal links, publishing quality, and technical errors. Lagging indicators come later: pipeline, revenue, qualified leads, brand demand, and reduced paid-search dependency.

Good agencies report both. Weak agencies hide behind whichever number looks best that month. For a practical reporting model, build against an SEO reporting dashboard that shows actions, page groups, and outcomes together.

Read the proposal like a technical spec

[Figure: comparison of generic and strategic SEO agency proposals.]
A strategic proposal names site-specific failures; a generic one resells the package.

Most buyers read proposals like sales documents. Read them like implementation plans. A strong SEO proposal contains diagnosis, prioritization, sequence, owners, access needs, risk notes, reporting cadence, and the first 90 days of work.

A generic proposal says “technical SEO audit, keyword research, content strategy, link building.” A real proposal says your product comparison pages are cannibalizing category pages, your template is missing crawlable internal links, and your best revenue page lacks proof above the fold.

The first 90 days should not be a content calendar

A 90-day plan can include content. It should not start with a fixed blog calendar before the site has been diagnosed. The first phase should usually inspect technical health, analytics reliability, page priorities, content quality, internal linking, and conversion paths.

I used to overvalue case studies at this stage (I was wrong about that for years). A clean case study tells you what went well after editing. A 90-day plan tells you what the agency will actually do when your CMS, dev queue, and tracking setup get in the way.

Every recommendation needs an owner

“Improve internal linking” is a vague note. Better: “Add crawlable links from the category template to the top 20 revenue pages; client developer owns template change; agency owns anchor map; target sprint two.” That level of ownership exposes whether the plan can ship.

Use a technical SEO audit checklist as a reference, but do not let checklist completion replace judgment. Some fixes can wait. Some need engineering. Some are distractions.

Ask what they would avoid

This is one of the best proposal questions: “What would you avoid doing on our site for the first six months?” Good answers reveal tradeoffs. Bad answers sound like “we can do whatever you need,” which signals lack of judgment, not flexibility.

Proposal area | Pass | Fail
Diagnosis | Names site-specific issues | Uses generic package labels
Prioritization | Explains sequence and tradeoffs | Treats every task as equal
Team | Names senior owner and delivery team | Hides delivery behind "our team"
Measurement | Ties work to buyer action | Reports only rankings and sessions
Risk | Notes migrations, indexation, and CMS limits | Pretends SEO has no downside

Price the agency against the actual work

Ahrefs surveyed 439 SEO service providers and found the average SEO agency retainer was $3,209 per month, 138% higher than the freelancer average of $1,348. The most common monthly retainer band was $501 to $1,000.

Cheap is not automatically bad. Cheap just has physics.

A sub-$1,000 monthly retainer usually cannot fund senior strategy, deep technical work, original research, editorial review, reporting, and account management for a serious B2B site. Something has to disappear. The question is whether the agency will tell you what.

A cheap agency has to remove something

Maybe they remove senior review. Maybe they remove technical depth. Maybe they remove original interviews. Maybe they reduce reporting to screenshots. Any of those can be fine for a small local site with low risk. They are dangerous for a venture-backed SaaS company, marketplace, or enterprise site with revenue pages already ranking.

Ask where the hours go

For a healthy retainer, I want to see the rough allocation: strategy, technical work, content and editorial, reporting, and project management. The exact percentages vary by site (especially after migrations), but the agency should explain where attention goes.

If the agency premium pays for senior judgment, team capacity, and accountability, fine. If those are absent, you are paying freelancer economics with agency overhead. Use SEO retainer pricing benchmarks to sanity-check the number, then inspect the work behind it.

Match budget to risk, not ego

Do not buy the biggest agency because the logo feels safer. Do not buy the cheapest agency because procurement likes the line item. Match spend to the downside of getting SEO wrong: lost rankings, broken templates, weak content, polluted analytics, or months of work your team cannot reuse.

Red flags that should end the call

Buyers often see the red flag and keep going because the agency sounds confident. Confidence is cheap.

“If someone promises overnight results, it’s likely a red flag. In 99% of cases, such claims are bogus.” — Aleyda Solis, Orainti

Apply that to 2026 pitches. Guaranteed top-three rankings, “AI search domination in 90 days,” fixed traffic promises, secret link networks, instant authority, and content volume sold as a moat should all make you slow down or leave.

Guaranteed rankings are still a scam signal

No agency controls Google. The agency controls research, recommendations, implementation quality, measurement, and iteration. Any promise that skips those inputs and guarantees the output is selling certainty it does not own.

“AI SEO” is not a strategy by itself

Some AI-search work is valid. The agency might improve entity coverage, citations, page structure, brand evidence, author credibility, and measurement across classic search and AI search surfaces. That belongs in an AI search optimization plan.

The red flag is the rebadge: standard content marketing renamed "AI search optimization" and sold as a monthly add-on, with no change in research, structure, evidence, or reporting. By 2026, genuine AI-search coverage is table stakes, not an upsell.

No access, no transparency, no deal

  • They promise a fixed ranking timeline.
  • They cannot name who writes or reviews the work.
  • They own the accounts.
  • They refuse to show sample deliverables.
  • They lead with backlinks before diagnosis.
  • They cannot explain what changes for your market.
  • They report wins you cannot connect to revenue, pipeline, or trust.

One red flag may be a misunderstanding. Three is the business model.

The contract is part of the SEO work

[Figure: SEO agency contract ownership map for client accounts and work product.]
Spell out who keeps each account and artifact before the contract is signed.

This is where most selection advice gets too soft. The contract decides whether you own the value after the agency leaves. That includes content, briefs, audits, scripts, dashboards, keyword data, reporting history, and account access.

“Beware of long-term agreements, sticky cancellation clauses and work ownership claims.” — Corey Morris, Search Engine Land

Read that before you sign, not when you want out (see Morris’s full Search Engine Land piece for the broader agency-selection context). I have signed contracts that violated three of these rules and only spotted it later. Pain is an efficient teacher; it is just expensive.

Own the accounts from day one

Google Search Console, Google Analytics, CMS, ad accounts, tag manager, rank tracking, dashboards, and data exports should sit under client control. The agency can have access. It should not be the landlord.

Shared access is normal. Agency-owned infrastructure is a trap. If the relationship ends, you should not need permission to see your own history.

Own the work product after termination

Ask directly:

  • What is the minimum term?
  • What is the cancellation notice?
  • Who owns content, briefs, audits, scripts, dashboards, keyword data, and reporting history?
  • Are Search Console, Analytics, CMS, and ad accounts under our control?
  • What happens to published content and internal links when the contract ends?

Clean exit terms are not hostile. They are a trust test. A confident agency does not need IP ambiguity to keep a client.

Avoid contracts that punish learning

Some SEO engagements need time. Six months may be reasonable for a complex site. Twelve months can be reasonable during a migration. The issue is not duration by itself; the issue is a contract that makes it painful to leave after you learn the delivery model is weak.

Good contracts create room for work. Bad contracts create fear of leaving.

Final SEO agency selection scorecard

[Figure: weighted scorecard for choosing an SEO agency.]
A weighted scorecard kept across all three vendors prevents the best presenter from winning by default.

After the calls, memory will favor the best presenter. Use a scorecard before the internal conversation turns into “I liked them.” These weights are what I would use for a serious B2B site; yours may shift if the risk profile is different.

Category | Weight | What you are scoring
Strategic diagnosis | 25% | Did they identify the real constraints and opportunities?
Proof quality | 20% | Did proof connect to buyers, not only dashboards?
Team seniority and access | 15% | Do you know who owns strategy and delivery?
Reporting model | 15% | Will reporting show changes, reasons, and buyer actions?
Contract and ownership terms | 15% | Do you own the work, data, and accounts after exit?
Price fit | 10% | Does the budget match the work and risk?
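
One way to keep the comparison honest is to compute the weighted totals in a small script instead of in the meeting. A minimal sketch using the weights above; the per-category scores (1 to 5) and the two example agencies are invented for illustration:

```python
# Hypothetical weighted scorecard for comparing SEO agencies.
# Weights mirror the table above; the 1-5 scores are illustrative.

WEIGHTS = {
    "strategic_diagnosis": 0.25,
    "proof_quality": 0.20,
    "team_seniority_access": 0.15,
    "reporting_model": 0.15,
    "contract_ownership": 0.15,
    "price_fit": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Combine per-category scores (1-5) into one weighted total."""
    return round(sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS), 2)

agency_a = {  # the polished presenter
    "strategic_diagnosis": 2, "proof_quality": 3,
    "team_seniority_access": 4, "reporting_model": 3,
    "contract_ownership": 2, "price_fit": 4,
}
agency_b = {  # the quieter, more specific agency
    "strategic_diagnosis": 4, "proof_quality": 4,
    "team_seniority_access": 3, "reporting_model": 4,
    "contract_ownership": 5, "price_fit": 3,
}

print(weighted_score(agency_a))  # 2.85
print(weighted_score(agency_b))  # 3.9
```

Filling the scores in before the internal debrief, one sheet per vendor, is what keeps "I liked them" from deciding the outcome.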

A small worked example: in one mindnow selection process, the best presenter lost. Their deck had cleaner charts, but their audit was generic and their contract left too much of the work-product language vague. The quieter agency won because it named two template issues, admitted one recommendation needed developer time, and offered clean account ownership. The scorecard caught what the call energy hid.

Counter: checklists can still fail you. A charismatic agency can learn the right answers. That is why the request should always move from words to artifacts: show me the audit, show me the owner, show me the contract, show me what happens when we leave.

I would rather hire the agency that gives me three uncomfortable truths about seojuice.io than the one that promises a clean upward chart. Pretty charts are easy. Clean exits, named owners, specific tradeoffs, and work a senior person will defend are harder.

FAQ

How many SEO agencies should I compare?

Three is usually enough. Fewer limits contrast. More creates noise unless you have a formal procurement team. Use the same brief, same questions, and same scorecard for each agency.

Should I ask for case studies?

Yes, but do not stop there. Ask for an anonymized audit, a sample report, and a first-90-days plan. Case studies show the edited story. Work samples show the operating system.

Is a monthly content package always a red flag?

No. Some sites need steady publishing. The red flag appears when content volume comes before diagnosis, buyer research, technical inspection, and quality standards. For editorial evaluation, pair the proposal with content quality scoring.

How long should I give a new SEO agency?

For meaningful sites, expect several months before lagging indicators move. But you should see leading indicators early: completed audits, shipped fixes, page updates, cleaner reporting, and better decisions.

What should I do if the agency wants to own the accounts?

Decline or renegotiate. The client should own analytics, Search Console, CMS, dashboards, and reporting history. Agency access can be added and removed. Ownership should stay with you.

Want a cleaner SEO agency selection process?

If you are comparing agencies, use this checklist before the next call. SEOJuice helps teams inspect SEO work by page, outcome, reporting signal, and quality risk, so the conversation moves from “trust the pitch” to “show me the evidence.”

Discussion (4 comments)

SEO_Wizard_2019

7 months

tbh the checklist's emphasis on finding the "right SEO partner" and transparency hit home — always ask for a sample keyword strategy and two client refs in your niche before signing. ngl I did a 30-day paid trial once and added a contract clause for "no black-hat" + rollback; saved us from a messy penalty later. anyone else require dashboard access or monthly KPI exports?

DigitalStrategy

7 months

100% — demand dashboard access. I always ask for GA4 + Search Console read-only, a Looker Studio dashboard + monthly raw CSVs for rankings, traffic, backlinks and CTR. Add SLA: exports within 5 days, rollback/penalty indemnity — if they refuse, walk. Watch for overnight spikes from spammy links (red flag). #SEO — what KPIs do you lock in?

performance_geek

7 months

Useful checklist but it feels high‑level — calling out “transparency, strategy, expertise and ethics” and warning about agencies that “guarantee instant results or low prices” is necessary but not sufficient. Insist on a baseline audit (crawl + log analysis, Lighthouse CI, Search Console + GA4 exports), a sample playbook with concrete tactics, explicit KPIs/SLA and a contractual rollback/clawback for risky link-building. Also ask how they validate causality — do they run randomized A/B tests or time‑series models, and how do their changes scale across multi‑language, high‑crawl‑budget sites?

SERPSlayer

6 months, 3 weeks

100% — love this level of specificity. A few concrete additions I’d expect to see if I were vetting an agency (tbh most dodge this stuff):

- Baseline audit (concrete): run a full crawl with Screaming Frog/DeepCrawl + server log analysis (BigQuery + logs or Loggly) to map crawl patterns, Lighthouse CI for core web vitals baseline, export Search Console + GA4 to BigQuery for historical signal. Deliverable: CSVs + a prioritized findings spreadsheet (priority, effort, risk, owner, ETA).

- Sample playbook (what it should actually contain): exact tactics per issue type (e.g., fix duplicate titles → rename pattern, update templates, test on N sample pages), expected timeline per tactic (dev → staging → prod → observation window), roll‑out strategy (canary by URL group, language, or subdomain). Ask for a real example playbook used on a past client (redact PII).

- KPIs / SLA (practical examples): tracked weekly — organic clicks, impressions, average position for target keyword set, pages crawled/day, indexation rate, Core Web Vitals percentiles, revenue per organic session. SLA: incident response within 48h for critical drops, mitigation plan within 5 business days, monthly reporting. Tie payment milestones to KPI windows (e.g., baseline → 3 months → 6 months).

- Rollback / clawback: define thresholds (e.g., >10% drop in organic traffic for >14 days across >X pages) that trigger immediate rollback and partial refund proportional to sustained loss. Include who owns dev time/costs to revert.

- Validating causality (real talk): A/B tests are ideal but tough at site-wide SEO scale. Practical options:
- URL split tests: change X% of similar pages and hold the rest as control (we did this on ~300 product pages and waited 6–10 weeks for signal).
- Staggered rollouts / canary releases by region/language and compare with controls.
- Time‑series / causal models: CausalImpact, Prophet, synthetic control — good for attribution if you’ve got a decent pre‑period and control series.
- Beware confounders (seasonality, external campaigns). Always pair statistical tests with qualitative checks (indexing, render logs).

- Scaling across multi‑lang, high crawl‑budget sites: use consistent hreflang + language mapping, avoid duplicate content via correct canonicals, test on a low-risk subdirectory first, prioritize sitemaps + paginated XMLs, use crawl-delay or server-side throttling during heavy tests, and ensure CDN + server config are stable to not skew CWV.
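
To make the rollback trigger above concrete, here's a minimal sketch of the check (the >10%-for->14-days numbers, the flat baseline, and the session counts are illustrative and not tied to any specific analytics API):

```python
# Sketch of a rollback trigger: flag a sustained organic-traffic drop
# of more than drop_pct vs. baseline lasting at least min_days
# consecutive days. All numbers here are illustrative.

def should_rollback(daily_sessions, baseline, drop_pct=0.10, min_days=14):
    """Return True if sessions stay below (1 - drop_pct) * baseline
    for at least min_days consecutive days."""
    threshold = baseline * (1 - drop_pct)
    streak = 0
    for sessions in daily_sessions:
        streak = streak + 1 if sessions < threshold else 0
        if streak >= min_days:
            return True
    return False

baseline = 1000                    # pre-change average daily organic sessions
ok_days = [980, 1010, 940] * 10    # noisy but healthy
bad_days = [850] * 15              # 15% down for 15 straight days

print(should_rollback(ok_days, baseline))   # False
print(should_rollback(bad_days, baseline))  # True
```

Requiring a consecutive streak filters out single-day noise; pair it with the seasonality caveats above before actually reverting anything.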

Personal anecdote: I once insisted on a URL-split playbook with a 10% holdout. Agency pushed generic promises until we forced a staged rollout; the split test gave a clean +12% uplift in 8 weeks. Saved us from a bad full-site push.
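
For the holdout assignment itself, hashing each URL keeps group membership stable across runs, which matters when the test spans weeks. A minimal sketch; the URL pattern and the 10% figure are illustrative:

```python
# Deterministic ~10% URL holdout via hashing. The same URL always lands
# in the same bucket, so control/treatment membership survives re-runs.
import hashlib

def in_holdout(url: str, holdout_pct: int = 10) -> bool:
    """Assign a URL to the control (holdout) group deterministically."""
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < holdout_pct

urls = [f"https://example.com/product/{i}" for i in range(300)]
control = [u for u in urls if in_holdout(u)]
treatment = [u for u in urls if not in_holdout(u)]

# Expect roughly 10% of the 300 URLs in control.
print(len(control), len(treatment))
```

Stratify the split if the pages differ a lot (by template, language, or traffic tier), otherwise the control group may not be a fair baseline.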

If you’re negotiating with an agency, ask them to provide a redacted sample audit + one real playbook and a proposed causal test plan for your site. What kind of site are you thinking about (size, vertical)? That changes the testing strategy.