Analysis of 118K+ answers reveals dramatic differences in AI citation behavior: Perplexity averages 21.87 citations per question while ChatGPT uses 7.92, and ChatGPT is the only model citing Wikipedia significantly, at 4.8%.
Other Articles
Query Fan-Out: ChatGPT Runs 3.5x More Searches Than Perplexity (102K Queries Analyzed)
Analysis of 102,018 AI queries reveals that query fan-out varies sharply by provider: Perplexity issues a single query 70.5% of the time, ChatGPT only 32.7%. This provider-specific behavior is the hidden factor determining who wins in AI search.
We analyzed 184,128 queries on ChatGPT, Gemini, Perplexity and Claude. Here is what we learned.
The most comprehensive study yet conducted on LLM ranking factors: 184,128 queries and 1,479,145 sources analyzed across 20 AI models, including ChatGPT, Gemini, Perplexity, Claude, Mistral, DeepSeek, and Grok, to help you dominate AI-generated results.