AI Search Engines vs. Publishers: A Battle for Traffic in the Gemini 3 Era
How Gemini 3-style AI answers change publisher traffic, what data shows so far, and practical tactics to protect clicks without fighting users.

AI answer layers are not “just another SERP feature.” They’re a distribution regime change: publishers increasingly power the answer while search platforms capture the interaction. If Gemini 3 fulfills the “thought partner” promise described in [our comprehensive guide to Gemini 3’s search transformation], the strategic question for publishers becomes blunt: How do you keep earning clicks—and revenue—when the best user experience is often to not click?
This supporting briefing focuses on traffic impact mechanics and the counter-moves that protect business outcomes (not a full Gemini 3 feature tour).
**The visibility–traffic decoupling (what executives should internalize)**
- AI search referrals can collapse vs. classic search: TollBit data shared via Forbes reports AI search engines send 96% less referral traffic to news sites/blogs than traditional Google search. (<a href="https://www.forbes.com/sites/rashishrivastava/2025/03/03/openai-perplexity-ai-search-traffic-report/?utm_source=openai" rel="nofollow noopener" target="_blank">forbes.com</a>)
- Scraping pressure rises even as clicks fall: The same reporting notes scraping “more than doubled,” averaging ~2 million scrapes in Q4 2024 across the analyzed set. (<a href="https://www.forbes.com/sites/rashishrivastava/2025/03/03/openai-perplexity-ai-search-traffic-report/?utm_source=openai" rel="nofollow noopener" target="_blank">forbes.com</a>)
- AI-mediated browsing introduces new trust risks: Security audits cited by Tom’s Hardware allege Comet can be manipulated via prompt injection and phishing/scam flows—making “verify on the source site” more valuable. (<a href="https://www.tomshardware.com/tech-industry/cyber-security/perplexitys-ai-powered-comet-browser-leaves-users-vulnerable-to-phishing-scams-and-malicious-code-injection-brave-and-guardios-security-audits-call-out-paid-ai-browser?utm_source=openai" rel="nofollow noopener" target="_blank">tomshardware.com</a>)
What’s changing: from “10 blue links” to Gemini 3-style answer layers
The new SERP stack: AI overview + citations + follow-ups
In the classic model, Google’s job was routing: rank pages, send clicks, let publishers monetize. In the emerging model, the SERP becomes a three-layer product: an AI-generated overview, a set of citations, and follow-up prompts that keep the session on-platform.
This is the same structural logic pushing beyond Google: Perplexity’s Comet browser embeds an assistant directly into browsing, turning “web navigation” into an AI-mediated experience rather than a link-first one. Wikipedia lists Comet as a Chromium-based browser released for Windows/macOS on July 9, 2025 (and Android on November 20, 2025), explicitly positioning the assistant alongside the page. (en.wikipedia.org)
SERP layout examples (illustrative, not product screenshots):
- Desktop (query: “best noise-cancelling headphones”): AI summary block + 3–6 cited sources + “refine” chips can occupy most above-the-fold real estate, pushing organic listings down.
- Mobile (query: “how to fix iPhone storage full”): AI answer + step list + follow-up chips can require multiple scroll gestures before the first traditional result is fully visible.
- Commercial intent (query: “best CRM for small business”): AI summary competes with shopping modules/ads; citations may be present but are no longer the primary call-to-action.
The economic shift is from destination value (pageviews) to source value (being used to generate answers). Forbes reports a stark data point from a TollBit analysis: AI search engines send 96% less referral traffic to news sites and blogs than traditional Google search. (forbes.com)
That number matters less as a universal truth and more as a directional signal: the platform’s incentives are no longer aligned with your session growth. Citations can become a branding vehicle while sessions erode—a dynamic we’ll call the citation paradox.
Actionable recommendation: Update executive reporting so that “being cited” is no longer treated as a win by default. Require teams to show incremental business impact (leads, subscriptions, affiliate EPC) tied to cited pages, not just visibility.
---
The traffic squeeze: where publishers lose clicks (and where they still win)

Query types most vulnerable to zero-click AI answers
AI answer layers thrive when the intent is compressible. Your most at-risk pages typically map to:
- Definitions / “what is” explainers
- Simple comparisons (“X vs Y”, especially when differences are widely repeated)
- Basic troubleshooting with standard steps
- “Best X” shortlists that don’t require personalization, tools, or live data
The Forbes/TollBit reporting also highlights a second-order risk: even as referrals shrink, scraping intensity rises. TollBit found AI developers’ scraping “more than doubled” in recent months and averaged ~2 million scrapes in Q4 2024 across the analyzed set, with pages scraped multiple times. (forbes.com)
Contrarian lens: Many publishers are still optimizing these compressible queries because they historically scaled. In an AI-first SERP, that strategy can become a margin trap: you pay to produce content that is easy to summarize and cheap to substitute.
Actionable recommendation: Reclassify your content inventory into compressible vs. defensible. If a page can be faithfully summarized in 6–10 bullets, assume it will be.
Query types that still drive clicks: depth, tools, and trust
Clicks remain resilient when the user needs something the SERP can’t fully deliver:
- Complex decisions (multi-criteria, context-specific, “it depends”)
- High-stakes/YMYL-adjacent topics where credibility, sourcing, and accountability matter
- Local nuance (regulations, pricing, availability, regional differences)
- Interactive tools (calculators, configurators, selectors)
- Original reporting and primary-source synthesis
- Deep how-tos requiring visuals, downloads, or step verification
There’s also a growing “trust surface” problem for AI-mediated browsing. Security research reported by Tom’s Hardware describes audits (Brave and Guardio) alleging Comet can be manipulated via prompt injection and is vulnerable to phishing/scam flows—underscoring that AI mediation introduces new failure modes. (tomshardware.com)
Implication: As AI becomes the interface, trust becomes the differentiator—and publishers with strong editorial standards can still win clicks when users want to verify, go deeper, or reduce risk.
Actionable recommendation: Build “trust hooks” into vulnerable content: add verification steps, edge cases, and decision checkpoints that explicitly invite the click (“If X applies, use the full checklist / calculator / template”).
---
A new battleground metric: visibility without the click (and how to measure it)

KPIs to add alongside sessions: citation share, assisted conversions, branded lift
In the Gemini 3 era, session-only dashboards can mislead leadership into cutting the very investments that protect long-term relevance. Add a blended scorecard:
- Citation Share of Voice (CSOV): CSOV = (# of times your domain appears as a cited source) / (total citation opportunities in tracked queries)
- Impressions-to-Click Delta (ICΔ): ICΔ = (impressions change %) – (clicks change %). A widening gap can indicate answer-layer absorption.
- Branded Search Lift %: Lift = (branded query impressions this period – baseline) / baseline
- Assisted conversions (view-through or multi-touch) for pages that are frequently cited but rarely clicked.
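The three formulas above can be sketched in code; every number in the example is hypothetical:

```python
# Illustrative implementations of the three scorecard formulas.

def csov(citation_appearances: int, citation_opportunities: int) -> float:
    """Citation Share of Voice: share of tracked-query citation slots you occupy."""
    return citation_appearances / citation_opportunities

def ic_delta(impressions_change_pct: float, clicks_change_pct: float) -> float:
    """Impressions-to-Click Delta: a widening positive gap suggests answer-layer absorption."""
    return impressions_change_pct - clicks_change_pct

def branded_lift(branded_impressions: int, baseline: int) -> float:
    """Branded Search Lift vs. a baseline period, as a fraction."""
    return (branded_impressions - baseline) / baseline

# Hypothetical period: 120 citations across 400 tracked opportunities,
# impressions up 12% while clicks fell 9%, branded impressions 5,500 vs 5,000 baseline.
print(csov(120, 400))            # 0.3
print(ic_delta(12.0, -9.0))      # 21.0
print(branded_lift(5500, 5000))  # 0.1
```

A 21-point ICΔ on an otherwise stable query set is exactly the pattern the scorecard is designed to surface early.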
Forbes’ “96% less referral traffic” statistic is the warning shot: if you don’t measure non-click value, you’ll underinvest in the assets that keep you in the answer supply chain. (forbes.com)
Actionable recommendation: Put CSOV and ICΔ into your weekly exec SEO report within 30 days—even if the first version is manual sampling.
Instrumentation: how to detect AI-driven exposure in your data
Practical steps (and their limits):
- Annotate SERP shifts (major rollouts, visible layout changes) and compare CTR pre/post.
- Query-level CTR monitoring: flag queries where impressions hold but CTR drops sharply.
- Rank tracking that flags AI modules (where available): segment “AI present” vs “AI absent.”
- Server log monitoring for bot spikes and crawl/scrape behavior (especially if costs rise).
- Brand demand monitoring: direct traffic + branded queries + newsletter signups.
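The query-level CTR monitoring step above can be sketched as follows, assuming two Google Search Console exports (prior vs. current period) keyed by query; the 90%/30% thresholds are illustrative starting points, not benchmarks:

```python
# Flag queries where impressions held but CTR collapsed: a possible sign
# that an AI module is absorbing the click on-SERP.

IMPRESSION_FLOOR = 0.9   # impressions held at >= 90% of the prior period
CTR_DROP = 0.3           # CTR fell by >= 30% relative to the prior period

def flag_absorption(prior: dict, current: dict) -> list:
    """Return queries whose impressions held while CTR dropped sharply."""
    flagged = []
    for query, (imp0, clicks0) in prior.items():
        if query not in current or imp0 == 0 or clicks0 == 0:
            continue
        imp1, clicks1 = current[query]
        if imp1 == 0:
            continue
        ctr0, ctr1 = clicks0 / imp0, clicks1 / imp1
        if imp1 >= IMPRESSION_FLOOR * imp0 and ctr1 <= (1 - CTR_DROP) * ctr0:
            flagged.append(query)
    return flagged

# Hypothetical exports: (impressions, clicks) per query.
prior = {"what is crm": (10_000, 800), "crm pricing 2025": (5_000, 400)}
current = {"what is crm": (10_500, 300), "crm pricing 2025": (5_100, 390)}
print(flag_absorption(prior, current))  # ['what is crm']
```

Flagged queries then go to SERP diagnosis, not straight to a content rewrite.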
Limitations matter: publishers report difficulty distinguishing bot intent and face tradeoffs when blocking crawlers; blocking major search crawlers can put SEO performance at risk. (forbes.com)
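For the server-log side, a rough sketch that tallies hits from a few commonly reported AI-crawler user-agent tokens; the token list and log lines are assumptions, so verify both against vendor documentation and your own logs:

```python
# Count requests per AI-crawler token in access-log lines.
# Token names change over time; treat this list as a starting point.
from collections import Counter

AI_BOT_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"]

def count_ai_bot_hits(log_lines):
    """Tally requests whose user-agent string contains a known AI-bot token."""
    hits = Counter()
    for line in log_lines:
        for token in AI_BOT_TOKENS:
            if token in line:
                hits[token] += 1
    return hits

# Hypothetical combined-format log lines.
sample = [
    '1.2.3.4 - - [01/Dec/2025] "GET /guide HTTP/1.1" 200 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [01/Dec/2025] "GET /guide HTTP/1.1" 200 "-" "PerplexityBot/1.0"',
    '9.9.9.9 - - [01/Dec/2025] "GET /guide HTTP/1.1" 200 "-" "Mozilla/5.0 Chrome/120"',
]
print(count_ai_bot_hits(sample))  # Counter({'GPTBot': 1, 'PerplexityBot': 1})
```

Trending these counts weekly gives you the “scraping rises as referrals fall” curve for your own property, not just the industry aggregate.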
Actionable recommendation: Establish a “SERP change war room” process: when CTR drops, your first step is SERP diagnosis, not content rewrites.
Sample weekly tracking table
| Metric | How to calculate | Why it matters | Owner |
|---|---|---|---|
| CTR by query group (AI present vs absent) | GSC export + tagging | Detect absorption | SEO |
| ICΔ (Impressions vs clicks) | %Δ impressions – %Δ clicks | Early warning | Analytics |
| Branded Search Lift % | Branded impressions vs baseline | Non-click value proxy | Growth |
| CSOV (manual sample) | Citation appearances / opportunities | “Answer presence” | SEO/Content |
| Conversion rate by page type | CVR by template | Monetization resilience | CRO |
Publisher counterplay: tactics that earn clicks even when AI answers exist

Make your content “non-summarizable” (in a good way)
If your page is a clean, generic explanation, AI will happily compress it. To resist:
- Add decision logic: “If A, do X; if B, do Y” with edge cases.
- Publish primary evidence: screenshots, experiments, benchmarks, interviews.
- Include failure modes and troubleshooting branches.
Actionable recommendation: For your top 20 vulnerable pages, add a “Decision Tree” section that cannot be reduced to a generic paragraph without losing usefulness.
Package value: tools, templates, calculators, and unique data
This is where publishers can create click gravity. AI can cite your tool, but it can’t replace the interactive experience without rebuilding it.
A useful cross-signal: Anthropic’s move to open-source “Agent Skills” as an open standard suggests the ecosystem is shifting toward reusable, modular capabilities—which will accelerate agentic experiences that consume more sources per answer. (techradar.com)
If agents are going to read 10–20 links to produce an output (as TollBit’s CEO described), you need to be the source that also offers the next step artifact: template, calculator, dataset. (forbes.com)
Actionable recommendation: Commit to one “toolification” sprint per quarter: convert a high-traffic explainer into a calculator/template + supporting narrative.
Optimize for citation + click: structure, schema, and quotable lines
You want dual optimization: easy to cite, hard to replace.
- Use tight definitions and clear headings so AI can attribute accurately.
- Add “quotable lines” (short, precise claims) that are safe to cite.
- Implement appropriate schema (Article/HowTo/FAQ where it genuinely fits).
- Strengthen internal linking to deeper assets (templates, case studies, tools).
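Where FAQ schema genuinely fits, it can be expressed as JSON-LD using schema.org’s FAQPage type. This fragment is illustrative (it reuses a data point from this article); adapt the questions and answers to your own pages:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do AI search engines reduce publisher traffic?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "TollBit data reported by Forbes indicates AI search engines sent 96% less referral traffic than traditional Google search for the analyzed set."
    }
  }]
}
```

Pair each answer with an internal link to the deeper asset (tool, template, checklist) so the citation-ready block also carries the click invitation.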
Tie this back to the larger strategy in [our comprehensive guide to Gemini 3 as a search thought partner]: the winners will design content for multi-turn exploration, not single-click visits.
Actionable recommendation: Create a “citation-ready block” pattern (definition + constraints + link to tool/template) and deploy it across your top revenue pages.
:::comparison
✓ Do's
- Build defensible assets (tools, templates, datasets, decision trees) so the “next step” requires a visit—not just a summary.
- Add trust hooks (verification steps, edge cases, decision checkpoints) to convert AI-era skepticism into clicks.
- Report CSOV + ICΔ + assisted conversions alongside sessions so leadership doesn’t confuse “visibility” with “growth.”
✕ Don'ts
- Don’t treat citations as a KPI win unless you can connect them to revenue outcomes (leads, subscriptions, affiliate EPC).
- Don’t keep scaling compressible content (definitions, generic “best X,” standard troubleshooting) without adding unique evidence or interactivity.
- Don’t respond to CTR drops with immediate rewrites before diagnosing whether an AI module absorbed the query on-SERP.
:::
The business tension: licensing, attribution, and the next publisher–AI deal terms

Attribution standards: what publishers should ask for
The negotiation frontier is moving from “please link” to commercial terms and controls. Forbes describes the rise of intermediaries like TollBit that track scraping and charge AI companies per scrape, and notes OpenAI has content deals with publishers such as the Associated Press, Axel Springer, and the Financial Times. (forbes.com)
Your ask list should include:
- Prominent citation placement (not hidden behind expanders)
- Stable link formatting (consistent, trackable)
- Snippet/summary length limits
- Clear bot identification and enforceable controls
- Economic participation when content is used at scale
Actionable recommendation: Define an internal “minimum acceptable attribution” policy now, before you negotiate—so product, legal, and revenue teams align.
Licensing and paywalls: when restricting access helps or hurts
A pure “block everything” posture is often self-defeating for top-funnel discovery. But leaving everything open can commoditize premium value. The right move is selective defensibility:
- Keep open: broad explainers that seed brand demand and citations.
- Gate: proprietary research, benchmarks, datasets, and tools that drive subscriptions or leads.
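Selective defensibility can be expressed in robots.txt. The paths below are hypothetical, and crawler token names change over time, so verify each token against the vendor’s current documentation before relying on it:

```
# Illustrative robots.txt: keep general crawling open, but disallow
# AI-training crawlers from gated, proprietary paths.

User-agent: GPTBot
Disallow: /research/
Disallow: /tools/

User-agent: CCBot
Disallow: /research/
Disallow: /tools/

User-agent: Google-Extended
Disallow: /research/

User-agent: *
Allow: /
```

Note that robots.txt is advisory; pair it with log monitoring and, where leverage exists, contractual or per-scrape terms.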
Also watch platform competition: The Economic Times reports Apple is building an AI-powered search tool as part of a Siri overhaul, described internally as an “answer engine,” with a possible launch in spring 2026 (reported via Bloomberg). (m.economictimes.com)
More answer engines means more places where your content can be used without a visit—making licensing strategy and content packaging more urgent.
For a broader strategic frame on how these systems reshape discovery and intent, revisit [our comprehensive guide to Gemini 3’s search shift] and map your content to where you can still own the outcome.
Actionable recommendation: Build a content monetization matrix (ad RPM vs subscription conversion vs affiliate EPC) and decide—category by category—what stays open, what gets gated, and what gets licensed.
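The matrix decision can be sketched as a simple rule set; every threshold and category below is a placeholder to adapt to your own economics:

```python
# Toy decision rules for the monetization matrix: given per-category
# ad RPM, subscription conversion rate, and affiliate EPC, pick a policy.

def access_policy(ad_rpm: float, sub_cvr: float, affiliate_epc: float) -> str:
    """Return 'open', 'gate', or 'license' for a content category (illustrative)."""
    if sub_cvr >= 0.02 or affiliate_epc >= 1.0:
        return "gate"       # proprietary value converts directly: protect it
    if ad_rpm >= 20.0:
        return "license"    # heavily used, ad-monetized: seek per-use economics
    return "open"           # top-funnel: seed brand demand and citations

# Hypothetical categories: (ad_rpm, sub_cvr, affiliate_epc).
categories = {
    "explainers":    (8.0, 0.001, 0.10),
    "benchmarks":    (15.0, 0.035, 0.50),
    "deal-roundups": (25.0, 0.002, 0.30),
}
for name, metrics in categories.items():
    print(name, "->", access_policy(*metrics))
```

The point is not these particular thresholds but forcing an explicit, category-level decision instead of a blanket open-or-blocked posture.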
Key Takeaways
- AI answer layers are a distribution regime change: They shift value from destination clicks to “answer presence,” even when publishers supply the underlying material.
- Expect visibility–traffic decoupling: TollBit data shared via Forbes suggests AI search can send 96% less referral traffic than traditional Google search for the analyzed set. (forbes.com)
- Scraping can rise as referrals fall: The same reporting highlights scraping “more than doubled,” averaging ~2 million scrapes in Q4 2024 across the analyzed set—creating cost and control pressure. (forbes.com)
- Compressible queries are the click danger zone: Definitions, simple comparisons, basic troubleshooting, and generic “best X” lists are easiest to satisfy on-SERP.
- Defensible clicks come from depth + tools + trust: Interactive assets, unique evidence, and high-stakes verification needs are harder to replace with summaries.
- Measure what the SERP is doing, not just what your pages are doing: Add CSOV and ICΔ to reporting so CTR drops trigger SERP diagnosis before content churn.
- Negotiate for attribution and economics, not just links: As per Forbes’ reporting on deals and per-scrape models, publishers need minimum standards for citation placement, formatting, and compensation. (forbes.com)
FAQ
Do AI search engines like Gemini reduce publisher traffic?
Evidence suggests meaningful reduction in referrals: TollBit data shared with Forbes indicates AI search engines send 96% less referral traffic than traditional Google search for the analyzed set. (forbes.com)
What types of content lose the most clicks to AI answers?
Compressible intents—definitions, simple comparisons, basic troubleshooting, and generic “best X” lists—are most vulnerable because AI can satisfy the query on-SERP. (forbes.com)
How can publishers measure traffic loss from AI Overviews or AI summaries?
Use query-level CTR analysis (GSC), segment queries by “AI module present,” and track ICΔ (impressions vs clicks). Complement with branded search lift and assisted conversions to capture non-click value. (forbes.com)
How do you optimize content to get cited by AI search engines and still earn clicks?
Make pages easy to cite (clear structure, quotable definitions) but worth visiting (tools, templates, unique data, decision trees). This dual strategy becomes more important as agentic systems expand. (techradar.com)
Should publishers block AI crawlers or license their content instead?
Blocking can protect content but may risk discoverability; Forbes notes publishers’ difficulty in blocking major bots without SEO consequences and highlights the emergence of licensing and per-scrape models. A selective approach—open top-funnel, gate proprietary value, pursue licensing where leverage exists—is typically more durable. (forbes.com)

Founder of Geol.ai
Senior builder at the intersection of AI, search, and blockchain. I design and ship agentic systems that automate complex business workflows. On the search side, I’m at the forefront of GEO/AEO (AI SEO), where retrieval, structured data, and entity authority map directly to AI answers and revenue. I’ve authored a whitepaper on this space and road-test ideas currently in production. On the infrastructure side, I integrate LLM pipelines (RAG, vector search, tool calling), data connectors (CRM/ERP/Ads), and observability so teams can trust automation at scale. In crypto, I implement alternative payment rails (on-chain + off-ramp orchestration, stable-value flows, compliance gating) to reduce fees and settlement times versus traditional processors and legacy financial institutions. A true Bitcoin treasury advocate. 18+ years of web dev, SEO, and PPC give me the full stack—from growth strategy to code. I’m hands-on (Vibe coding on Replit/Codex/Cursor) and pragmatic: ship fast, measure impact, iterate.

Focus areas: AI workflow automation • GEO/AEO strategy • AI content/retrieval architecture • Data pipelines • On-chain payments • Product-led growth for AI systems

Let’s talk if you want: to automate a revenue workflow, make your site/brand “answer-ready” for AI, or stand up crypto payments without breaking compliance or UX.