The Query Fan-Out Reality: Why Provider Behavior Determines Your AI Visibility
Analysis of 102,018 AI queries reveals the truth: query fan-out varies by provider. Perplexity: 70.5% single-query. ChatGPT: 32.7% single-query. This provider-specific behavior is the hidden factor determining who wins in AI search.
Last month, we helped a SaaS company discover why they ranked #1 in Perplexity but were invisible in ChatGPT.
The reason: Perplexity generated 1 search query for their topic. ChatGPT generated 8.
They optimized for the average (3-4 queries). They should have optimized for the reality: query fan-out is provider-dependent.
We decided to quantify this with real data.
The Discovery:
We analyzed 102,018 search queries from 38,418 user prompts across multiple AI providers (Sept-Nov 2025).
The results? AI search behavior isn't one thing. It's a spectrum:
- Perplexity AI: 70.5% of prompts generate exactly ONE query
- ChatGPT: Only 32.7% generate exactly ONE query
That's a 2x difference in search behavior between providers.
If you're optimizing for "average AI," you're leaving half your visibility on the table.
Executive Summary: What You Need to Know
The Provider Split:
- Perplexity: 70.5% single-query, 29.5% multi-query (avg: 2.24 queries/prompt)
- ChatGPT: 32.7% single-query, 67.3% multi-query (avg: 3.51 queries/prompt)
- Overall Average: 2.65 queries/prompt (accurate, but it hides the massive provider variation)
Why It Matters:
Multi-query prompts offer 5-10x visibility advantage, but only when providers actually fan out queries. Understanding which AI your audience uses changes everything.
The Two Laws:
- The Provider Effect: Query behavior varies 2x+ by AI platform (search-first vs conversation-first architecture)
- The Multiplier Effect: 100% coverage of query variations = 5-10x visibility (but only for providers that fan out)
What To Do:
Know your audience's AI provider → Tailor optimization strategy → Track coverage by provider.
Part 1: The Data That Changed Everything
Methodology
Dataset:
- 102,018 web search queries generated by AI systems
- 38,418 user prompts analyzed
- Two providers tracked (Perplexity 67.2%, ChatGPT 32.8%)
- Period: September-November 2025
- Source: Qwairy Search Intelligence platform
What We Tracked: Every search query AI systems generated in response to user prompts, then analyzed distribution patterns by provider, time, and query characteristics.
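The per-provider aggregation described above can be sketched in a few lines. This is a minimal illustration, not the actual Qwairy pipeline; the log format and field names are assumptions:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical query log: one (provider, prompt_id) row per generated
# search query. Format is illustrative, not the actual Qwairy schema.
query_log = [
    ("perplexity", "p1"), ("perplexity", "p2"), ("perplexity", "p2"),
    ("chatgpt", "c1"), ("chatgpt", "c1"), ("chatgpt", "c1"),
    ("chatgpt", "c2"),
]

def fanout_stats(log):
    """Per provider: single-query rate and average queries per prompt."""
    counts = defaultdict(lambda: defaultdict(int))
    for provider, prompt_id in log:
        counts[provider][prompt_id] += 1
    stats = {}
    for provider, per_prompt in counts.items():
        n = list(per_prompt.values())
        stats[provider] = {
            "single_query_rate": sum(1 for q in n if q == 1) / len(n),
            "avg_queries": mean(n),
        }
    return stats
```

Running this over the full 102,018-query log is what produces the single-query rates and averages reported below.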
Study Limitations (The Fine Print)
Before we dive in, full transparency:
- Provider Sampling Bias: Perplexity represents 67% of our dataset vs ~5% real-world market share (13.4x over-sampled due to our client base). This affects overall averages but not provider-specific patterns.
- Geographic Bias: Dataset weighted toward French/European brands (31% French, 14% English, 55% mixed/other queries)
- Provider Coverage: Analysis focused on Perplexity and ChatGPT, as these are the only providers where we can reliably capture search query fan-out data
- Time Window: Sept-Nov 2025 snapshot; patterns may shift seasonally
What This Means: Provider-specific findings (e.g., "Perplexity is 70.5% single-query") are statistically robust. Overall statistics should be interpreted with sampling bias context.
Part 2: The Provider Effect: Why Architecture Determines Behavior
Here's what we found when we split the data by AI provider:
Perplexity AI (67.2% of dataset):
| Behavior | Prompts | Percentage |
|---|---|---|
| 1 query | 18,194 | 70.5% |
| 2+ queries | 7,604 | 29.5% |
| Average | - | 2.24 queries/prompt |
ChatGPT (32.8% of dataset):
| Behavior | Prompts | Percentage |
|---|---|---|
| 1 query | 4,133 | 32.7% |
| 2+ queries | 8,487 | 67.3% |
| Average | - | 3.51 queries/prompt |

Why This Happens: Architecture Matters
Perplexity (search-first):
- Built to minimize queries and maximize precision
- Only fans out when topic is genuinely ambiguous
- Citation-focused: needs fewer queries when it finds quality sources
- User expectation: Fast, precise answers
ChatGPT (conversation-first):
- Built to explore and understand context
- Fans out by default to gather multiple perspectives
- Discovery-focused: explores solution space thoroughly
- User expectation: Comprehensive, exploratory answers
The Strategic Implication:
If your audience uses Perplexity: Win the single query. Be the #1 definitive source.
If your audience uses ChatGPT: Win all query variations. Coverage = competitive advantage.
Real Example: GDPR Compliance Agencies
User prompt: "What are the best GDPR compliance agencies in France?"
Perplexity response: 1 query
GDPR compliance agency France
ChatGPT response: 6 queries
GDPR compliance agency France
GDPR consulting services Paris
data protection services France
GDPR agency France 2025
GDPR consultant enterprise
data protection expert France
Result: Brand appearing in all 6 ChatGPT queries gets 6x the visibility vs brand appearing in 1. But for Perplexity, coverage advantage doesn't exist. Only ranking matters.
Part 3: What Triggers Query Fan-Out? (The Patterns Nobody Talks About)
We analyzed 38,418 prompts to understand what causes AI to fan out queries. The results reveal clear patterns that determine whether you get 1 query or 50+.
Pattern 1: List & Top Keywords = Massive Fan-Out
| Keyword in Prompt | Avg Queries/Prompt | Multiplier vs Baseline |
|---|---|---|
| "list/liste" | 49.01 | 13.9x |
| "top" | 8.44 | 2.4x |
| "comparison/vs" | 5.67 | 1.6x |
| "best/meilleur" | 3.71 | 1.1x |
| "how to/comment" | 3.40 | 1.0x (baseline) |
| "why/pourquoi" | 2.89 | 0.8x |
| "what is/qu'est" | 1.96 | 0.6x |

Key Finding: Prompts with "list/liste" generate 14x more queries than factual questions. "Top" prompts generate 2.4x more.
Why This Matters:
- Recommendation-seeking queries = high fan-out = multiplier effect kicks in
- Factual queries = low fan-out = ranking matters more than coverage
Strategic Implication: If your content targets "top X" or "best Y" queries, you're competing in a high fan-out environment where coverage advantage is real. Optimize for all query variations.
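The keyword-trigger analysis behind this table reduces to comparing average fan-out for prompts containing a keyword against a baseline keyword. A minimal sketch, with an illustrative `(prompt_text, n_queries)` format that is an assumption, not the study's actual data structure:

```python
def keyword_multiplier(prompts, keyword, baseline_keyword="how to"):
    """Average fan-out for prompts containing `keyword`, and its
    multiplier relative to a baseline keyword's average fan-out.
    `prompts` is a list of (prompt_text, n_queries) pairs."""
    def avg_for(kw):
        hits = [n for text, n in prompts if kw in text.lower()]
        return sum(hits) / len(hits) if hits else None

    kw_avg = avg_for(keyword)
    base_avg = avg_for(baseline_keyword)
    mult = kw_avg / base_avg if kw_avg and base_avg else None
    return kw_avg, mult

# Tiny illustrative sample (not real study data):
sample = [
    ("list of top CRM tools", 40),
    ("how to configure a CRM", 3),
    ("how to export contacts", 5),
]
avg, mult = keyword_multiplier(sample, "list")
# avg == 40.0, baseline average == 4.0, multiplier == 10.0
```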
Is my brand visible in AI search?
Track your mentions across ChatGPT, Claude & Perplexity in real-time. Join 1,500+ brands already monitoring their AI presence with complete visibility.
Pattern 2: The Short Prompt Paradox
| Prompt Length | Avg Queries/Prompt | Sample Size |
|---|---|---|
| Short (<50 chars) | 9.31 | 1,947 prompts |
| Medium (50-100) | 3.10 | 23,644 prompts |
| Long (100-200) | 3.27 | 3,228 prompts |
| Very Long (200+) | 4.07 | 30 prompts |
Counterintuitive Discovery: Short prompts generate 3x more queries than medium-length prompts.
Why? Short prompts are often:
- List-seeking ("top 10...")
- Ambiguous (AI needs to explore)
- Recommendation-heavy ("best...")
Longer prompts are more specific, giving AI clear direction = fewer queries needed.
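The length bands in the table above can be reproduced with a simple bucketing pass. A sketch, assuming the same `(prompt_text, n_queries)` illustrative format as elsewhere in this article:

```python
from collections import defaultdict

# Character-length bands matching the table above.
BANDS = [(50, "short (<50)"), (100, "medium (50-100)"), (200, "long (100-200)")]

def length_bucket(prompt):
    """Assign a prompt to its length band."""
    n = len(prompt)
    for limit, name in BANDS:
        if n < limit:
            return name
    return "very long (200+)"

def avg_fanout_by_length(prompts):
    """Average queries per prompt, grouped by length band.
    `prompts` is a list of (prompt_text, n_queries) pairs."""
    buckets = defaultdict(list)
    for text, n in prompts:
        buckets[length_bucket(text)].append(n)
    return {band: sum(v) / len(v) for band, v in buckets.items()}
```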
Pattern 3: What Triggers Extreme Fan-Out
High fan-out patterns (1,000+ queries):
Competitive B2B service queries in local markets consistently generate extreme fan-out:
- "Top 10" listicle queries
- Agency/consultant recommendations
- Tool comparisons
- Location-specific service searches
Low fan-out patterns (single query):
Informational, non-commercial, specific questions typically result in a single targeted query:
- "Why" explanatory questions
- Specific factual lookups
- Non-commercial curiosity queries
What This Means for Your Strategy
High Fan-Out Topics (top, best, agencies, tools):
- ✓ Invest in comprehensive coverage
- ✓ Create content clusters for all semantic variations
- ✓ Schema markup critical (help AI understand relationships)
- ✓ Track coverage % (missing 1 variation = 20% visibility loss)
Low Fan-Out Topics (why, what is, specific facts):
- ✓ Focus on ranking #1 for the single query
- ✓ Be the definitive source
- ✓ Don't over-invest in variations (marginal returns)
Part 4: How AI Transforms Your Prompts (The Hidden Layer)
We analyzed 102,018 generated queries to reveal how AI rewrites user prompts. Three patterns change everything:

Pattern 1: AI Auto-Adds Recency (28% of Queries)
AI adds "2025" to 28.1% of queries even when users don't mention it. The ratio is extreme: 2025 appears 184x more than 2024.
What AI Does:
- User: "best project management tools"
- AI searches: "best project management tools 2025"
Why: Architectural recency bias confirmed by Waseda University (2025): 65-89% of AI citations favor 2023-2025 content.
Action: Add current year to titles, H1s, and first paragraph. If your content says "2024" or has no year, you're invisible.
Pattern 2: AI Adds Geographic & Intent Keywords (Without Being Asked)
Real example:
- User: "Je cherche une assistance juridique dédiée aux droits des travailleurs" ("I'm looking for legal assistance dedicated to workers' rights")
- AI generates:
assistance juridique droits travailleurs + ... France
assistance juridique droits travailleurs + ... Paris
AI adds evaluative keywords too: "meilleur" appears in 16.7% of queries, "best" in 5.9%.
Most auto-added keywords:
- Geographic: "France" (13.9%), "Paris" (5.1%)
- Evaluative: "meilleur" (16.7%), "best" (5.9%), "top" (4.6%)
- Trust signals: "reviews/avis" (4.0%), "comparatif" (1.8%)
Strategy: Optimize for intent-implied keywords (best, top, local) even if users don't type them.
Pattern 3: Query Stability Varies by Provider
Critical discovery: Query stability is provider-dependent, not universal.
We analyzed 13,610 questions asked multiple times across 2.1M+ pairwise comparisons:
| Provider | Pairwise Comparisons | Avg Query Overlap | Variation Rate |
|---|---|---|---|
| Perplexity | 2,057,376 | 92.8% | 7.2% |
| ChatGPT | 128,565 | 11.0% | 89.0% |
Perplexity is deterministic: Same prompt → same query 93% of the time. The search-first architecture generates consistent, targeted queries.
ChatGPT is non-deterministic: Same prompt → different queries 89% of the time. The conversation-first architecture explores different angles each run.
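Query overlap of this kind is typically measured as pairwise set overlap (Jaccard similarity) between the query sets produced by repeated runs of the same prompt. A minimal sketch of that measurement (the exact metric used in the study is not specified, so Jaccard is an assumption):

```python
from itertools import combinations

def avg_query_overlap(runs):
    """Average pairwise Jaccard overlap between the query sets produced
    by repeated runs of the same prompt. 1.0 = fully deterministic."""
    overlaps = []
    for a, b in combinations(runs, 2):
        sa, sb = set(a), set(b)
        union = sa | sb
        overlaps.append(len(sa & sb) / len(union) if union else 1.0)
    return sum(overlaps) / len(overlaps)

# Three runs of the same prompt on a deterministic, search-first provider:
stable = [["q1"], ["q1"], ["q1"]]
# Three runs on an exploratory, conversation-first provider:
unstable = [["q1", "q2"], ["q3", "q4"], ["q1", "q5"]]
```

On the toy data, `stable` scores 1.0 and `unstable` scores about 0.11, mirroring the Perplexity/ChatGPT contrast in the table.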
Why This Matters:
The strategic implications are opposite depending on your audience's AI:
For Perplexity users:
- Query stability means ranking consistency
- Win the single query = sustained visibility
- Less need for extensive semantic coverage
For ChatGPT users:
- Query instability means unpredictable visibility
- Semantic coverage is critical (you can't predict which variation will be asked)
- Traditional keyword targeting becomes less reliable
Strategy Shift: Know your audience's AI preference. For ChatGPT-heavy audiences, cover all semantic variations. For Perplexity-heavy audiences, focus on ranking #1 for the primary query.
Part 5: The Two Laws of AI Search Visibility
Law 1: The Provider Effect
Finding: Query behavior varies by 2x+ depending on AI provider
| Provider | Single-Query Rate | Multi-Query Rate | Avg Queries/Prompt |
|---|---|---|---|
| Perplexity | 70.5% | 29.5% | 2.24 |
| ChatGPT | 32.7% | 67.3% | 3.51 |
| Difference | -37.8pp | +37.8pp | +57% |
What This Means for You:
Perplexity Optimization:
- ✓ Rank #1 for the core query
- ✓ Citation-worthy content (data, research, original stats)
- ✓ Structured content (tables, lists that are easy to parse)
- ✓ Fresh content (updated regularly with current year)
- ✗ Don't obsess over query variations (matters less)
ChatGPT Optimization:
- ✓ 100% coverage across all semantic variations
- ✓ Content clusters (pillar page + supporting pages)
- ✓ Optimize for different intents (info, comparison, commercial)
- ✓ Schema markup (help AI understand relationships)
- ✓ Multi-angle content (different ways to answer same question)
Academic Confirmation:
The ArXiv paper "Towards AI Search Paradigm" (2024) confirms that different AI architectures "dynamically adapt to the full spectrum of information needs" with fundamentally distinct search strategies.
Law 2: The Multiplier Effect (When It Matters)
Finding: Appear in all query variations = 5-10x visibility advantage (for multi-query providers)
The Math:

Scenario: "Best project management tools for remote teams"
AI Generates 5 queries:
best project management software remote teams 2025
top remote team collaboration tools
project management tools comparison remote work
remote team productivity software reviews
best remote work management platforms
Brand A (comprehensive coverage): Appears in all 5 → 5 impressions
Brand B (single-page strategy): Appears in 1 → 1 impression
Result: Brand A has 5x visibility advantage
Key Insight: Missing even one query variation costs measurable visibility: if ChatGPT generates 5 queries and you appear in only 4, you lose 20% of potential impressions.
Critical Caveat: This multiplier effect only applies to conversation-first providers like ChatGPT with 67.3% multi-query rate. For Perplexity (29.5% multi-query), coverage advantage is minimal.
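The coverage math above is a simple set ratio. A sketch, with placeholder query identifiers:

```python
def coverage(generated_queries, queries_with_brand):
    """Share of a prompt's fanned-out queries in which the brand appears."""
    generated = set(generated_queries)
    return len(set(queries_with_brand) & generated) / len(generated)

# The 5-query scenario above:
queries = ["q1", "q2", "q3", "q4", "q5"]
brand_a = coverage(queries, queries)            # appears in all 5 -> 1.0
brand_b = coverage(queries, ["q1"])             # appears in 1 -> 0.2
missing_one = coverage(queries, queries[:4])    # 4 of 5 -> 0.8 (20% loss)
```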
How to Achieve 100% Coverage:
- Map all semantic variations of your core topic
- Create content clusters: Pillar page + supporting pages for each variation
- Optimize for different intents: Informational, commercial, comparison, local
- Use schema markup to help AI understand content relationships
Part 6: Surprising Insights
Language Distribution Reflects Client Base
| Language | Percentage |
|---|---|
| French | 31.2% |
| English | 13.7% |
| Mixed/Other | 55.1% |
Insight: Our dataset reflects our French/European client base. Among clearly-detected languages, French queries dominate (31.2% vs 13.7% English). The high "Mixed/Other" category (55.1%) includes queries with minimal text or mixed-language content.
Implication: Language distribution varies by market. Track your actual query language distribution rather than assuming English dominance.
Common Mistakes (And How to Avoid Them)
❌ Mistake 1: Optimizing for "Average AI"
Wrong: "AI generates 3-4 queries, so I'll create 3-4 content pieces and call it a day."
Reality: Perplexity (2.24 avg) vs ChatGPT (3.51 avg) = 57% difference.
Right: Know which AI your audience uses. Tailor strategy to that provider's architecture.
❌ Mistake 2: Assuming All Providers Behave the Same
Wrong: "I'll optimize the same way for all AI platforms."
Reality: Search-first (Perplexity) vs conversation-first (ChatGPT) = fundamentally different behaviors.
Right: Perplexity = rank #1. ChatGPT = comprehensive coverage.
❌ Mistake 3: Ignoring the Provider Your Audience Actually Uses
Wrong: "I'll optimize for ChatGPT because it has the biggest market share."
Reality: If your B2B SaaS audience uses Perplexity heavily, your multi-query strategy is wasted effort.
Right: Analyze your AI referral traffic → optimize for your actual audience's preferred platform.
FAQ: Understanding Query Fan-Out
Q1: Why does Perplexity generate fewer queries than ChatGPT?
Architecture. Perplexity is built on search-first principles (minimize queries, maximize precision). ChatGPT is conversation-first (explore solution space). Different tools, different behaviors.
Q2: Should I optimize differently for each AI provider?
Yes. Perplexity strategy: Be the #1 definitive source for the single query. ChatGPT strategy: Achieve 100% coverage across all query variations. One-size-fits-all doesn't work.
Q3: How do I know which AI provider my audience uses?
Check your analytics for AI referral traffic. Look at referrer headers, bot user agents, and citation patterns. If you're getting traffic from Perplexity domains (perplexity.ai), that's your answer.
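A sketch of classifying referrers along these lines. The domain list is illustrative and non-exhaustive (AI platforms change their referrer domains over time), so treat it as an assumption to maintain, not a definitive mapping:

```python
from urllib.parse import urlparse

# Illustrative referrer-domain -> provider mapping (non-exhaustive).
AI_REFERRERS = {
    "perplexity.ai": "perplexity",
    "chatgpt.com": "chatgpt",
    "chat.openai.com": "chatgpt",
}

def classify_referrer(referrer_url):
    """Return the AI provider for a referrer URL, or None if it isn't one."""
    host = urlparse(referrer_url).netloc.lower()
    for domain, provider in AI_REFERRERS.items():
        if host == domain or host.endswith("." + domain):
            return provider
    return None
```

Run this over your analytics referrer log and tally the results to see which provider actually sends your audience.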
Q4: Will query fan-out patterns stay stable over time?
No. Our data shows single-query prompts declining from 65.4% (Oct) → 40.8% (Nov) in just 1 month. AI search is evolving toward more exploratory, multi-query behavior. Monitor trends quarterly.
Q5: What's more important: ranking or coverage?
Depends on the provider. Perplexity: Ranking is everything (70.5% single-query means coverage advantage is minimal). ChatGPT: Coverage creates the multiplier effect (67.3% multi-query means every variation matters).
What This Means for You
If you've been optimizing for "average" AI behavior, you've been fighting with one hand tied behind your back.
The brands winning in AI search understand one critical truth: AI search isn't one thing. It's a spectrum of behaviors determined by provider architecture.
The question isn't "How do I rank in AI?"
It's "Which AI am I optimizing for?"
Because the answer changes your entire strategy:
- Perplexity users? → Win the single query. Be definitive.
- ChatGPT users? → Win all variations. Be comprehensive.
- Both? → You need two strategies, not one.
The data is clear. The insights are actionable. The competitive advantage belongs to brands who understand that provider architecture determines search behavior.
What will you optimize for?
About This Study
Conducted by: Qwairy Research Team
Dataset: 102,018 queries from 38,418 prompts
Period: September-November 2025
Methodology: Quantitative analysis of AI-generated search queries across multiple providers with full transparency on sampling bias
Key Methodological Notes
Sampling Bias Disclosure:
- Our dataset over-represents Perplexity (67% vs ~5% market share) by 13.4x due to client base
- This affects overall averages but not provider-specific patterns
- Provider-level findings (e.g., "Perplexity is 70.5% single-query") are statistically robust
- Overall statistics presented with bias context
Outlier Sensitivity:
- 10% of prompts are statistical outliers (>50 queries)
- Removing outliers changes mean by −21.3% but median stays stable
- Our findings focus on median and percentages (robust to outliers)
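This mean-versus-median robustness check is straightforward to reproduce. A sketch on toy per-prompt counts (the real per-prompt distribution is not published here):

```python
from statistics import mean, median

def robustness_check(per_prompt_counts, outlier_threshold=50):
    """Compare mean and median with and without extreme fan-out outliers
    (prompts generating more than `outlier_threshold` queries)."""
    trimmed = [n for n in per_prompt_counts if n <= outlier_threshold]
    return {
        "mean_all": mean(per_prompt_counts),
        "mean_trimmed": mean(trimmed),
        "median_all": median(per_prompt_counts),
        "median_trimmed": median(trimmed),
    }

# Toy data: one extreme outlier drags the mean, the median barely moves.
stats = robustness_check([1, 2, 2, 3, 200])
```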
Sources & References
This study is supported by external research:
- Nectiv ChatGPT Search Study (October 2025, 8,500+ prompts)
- Waseda University AI Recency Bias Study (2025)
- ArXiv: "Towards AI Search Paradigm" (2024)
- Seer Interactive AI Bot Traffic Analysis (2025)
- Industry reports from Semrush, Ahrefs, SE Ranking (2024-2025)
Next Report: December 2025 Update (Expected January 2026)
Want to track your query fan-out coverage by provider? Qwairy monitors AI responses across Perplexity, ChatGPT, and other platforms, showing you exactly how each provider generates queries for your topics and where your coverage gaps are.