Claude Runs on Brave Search. Your Bing Playbook Won't Get You Cited.
Claude uses Brave Search as its backend. Its citations match Brave's top results 86.7% of the time. Your ChatGPT strategy doesn't apply here.
Most brand teams split their AI search work two ways: optimize for ChatGPT (which pulls from Bing) and optimize for Google AI Overviews. Claude gets thrown in as an afterthought — “it’s basically the same, right?”
It isn’t. Claude doesn’t use Bing. It doesn’t use Google. It uses Brave Search, and its citations barely overlap with either competitor’s. If you assume the work you’ve done for ChatGPT transfers, you’re wrong roughly four times out of five.
The backend nobody optimizes for
On March 20, 2025, Anthropic launched web search for Claude.ai. TechCrunch quickly confirmed what engineers suspected from the response headers: Brave Search powers the retrieval layer. Anthropic later added Brave Search to its official subprocessor list, making the partnership formal.
Why does this matter? Because when Claude answers a question using real-time information, it isn’t running its own crawler. It’s querying Brave’s index, taking the top results, and synthesizing an answer from them. Whatever ranks on Brave has a direct shot at being cited by Claude. Whatever doesn’t, basically doesn’t exist to Claude.
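The mechanics can be sketched in a few lines. This is a hypothetical model, not Anthropic’s actual pipeline (which isn’t public); the point is that with minimal re-ranking, the backend’s organic order largely becomes the citation order:

```python
from dataclasses import dataclass

@dataclass
class SearchResult:
    rank: int      # organic position in the backend's index
    url: str
    snippet: str

def cite_from_backend(results: list[SearchResult], top_k: int = 5) -> list[str]:
    """Take the backend's top-k organic results and cite them nearly as-is.

    This models the ~87% Claude/Brave alignment: little re-ranking,
    so whatever Brave ranks is what gets cited.
    """
    top = sorted(results, key=lambda r: r.rank)[:top_k]
    return [r.url for r in top]

# Hypothetical Brave results for a buyer-intent query
brave_results = [
    SearchResult(2, "https://example-b.com/guide", "..."),
    SearchResult(1, "https://example-a.com/review", "..."),
    SearchResult(3, "https://example-c.com/faq", "..."),
]
print(cite_from_backend(brave_results, top_k=2))
# → ['https://example-a.com/review', 'https://example-b.com/guide']
```

ChatGPT’s pipeline, by contrast, would apply heavy filtering and re-ranking between the `results` list and the final citations, which is why its alignment with Bing is so much lower.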
That’s a different game than ChatGPT plays.
The 86.7% number that should reshape your strategy
Profound ran the analysis that everyone else is now citing: Claude’s cited results overlap with Brave Search’s top non-sponsored organic results 86.7% of the time (13 of the top 15 results). The statistical significance was overwhelming — p-value below 0.0001, which means this alignment isn’t noise or coincidence. Claude is essentially reading off Brave’s leaderboard.
Compare that to ChatGPT and Bing. Same study pattern, different result: ChatGPT’s citations align with Bing’s top-ranked results only 26.7% of the time. ChatGPT does heavy re-ranking, filtering, and de-duplication before deciding what to cite. Claude, at least today, mostly doesn’t.
The practical implication: ranking #1–5 on Brave is the single highest-leverage thing you can do for Claude visibility. Ranking #1–5 on Bing is only loosely correlated with ChatGPT visibility.
Why the three platforms diverge so hard
Here’s the part that’ll frustrate anyone running a unified GEO program. Claude’s search results overlap with ChatGPT’s results by roughly 20%. That means if you’re optimizing for one, you’re hitting the other by accident at best.
| Platform | Search backend | Citation alignment with backend | Where the real work lives |
|---|---|---|---|
| ChatGPT | Bing | ~27% | Re-ranking layer: entities, authority, structured answers |
| Claude | Brave Search | ~87% | The Brave index itself |
| Google AI Overviews | Google | High (same index) | Traditional SEO + AI-specific content signals |
| Perplexity | Proprietary + multiple | Variable | Citation-forward content, Reddit, real-time sources |
Three different search graphs. Three different ranking systems. One brand trying to show up in all of them. For a deeper breakdown of the user-behavior differences, see our AI Overviews vs ChatGPT vs Perplexity comparison.
We’ve written before about how grounding queries differ across platforms — that matters here too. Claude’s fan-out queries hit Brave. ChatGPT’s hit Bing. Same user question, entirely different retrieval systems.
What ranks on Brave, and why your site probably doesn’t
Brave Search is built on its own independent index — over 30 billion pages with roughly 100 million daily updates as of early 2026, according to Brave. That’s smaller than Google’s or Bing’s, but it’s not a shell. Brave crawls, ranks, and serves results without depending on other providers.
What this means for your brand:
- Brave often ranks sites that don’t rank well on Bing or Google. Smaller, independent publishers, long-tail editorial, and privacy-focused sources get more weight.
- Brave’s crawler is Brave-bot (or “Brave Search Crawler”), distinct from Googlebot and Bingbot. If your robots.txt or WAF blocks unfamiliar crawlers by default, you may be invisible to Brave without realizing it.
- Brave’s crawler obeys robots.txt directives addressed to its own user agent, not to Googlebot or Bingbot. You can’t verify Brave crawl access in Google Search Console.
If your AI visibility is weak on Claude specifically, the first diagnostic isn’t “fix my schema” — it’s “make sure Brave can actually crawl me.”
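An explicit allow rule is the cheapest insurance here. A hedged robots.txt sketch; the `Bravebot` token is an assumption, so verify the current user-agent string in Brave’s own crawler documentation before deploying:

```
# Allow Brave's crawler explicitly. ("Bravebot" is assumed here;
# confirm the exact token in Brave's crawler docs.)
User-agent: Bravebot
Allow: /

# If you run a default-deny policy like the one below, the more
# specific group above is what keeps Brave from being locked out,
# because crawlers match the most specific user-agent group.
User-agent: *
Disallow: /
```

Remember that robots.txt only covers well-behaved crawlers; WAF and bot-management rules (Cloudflare and similar) are a separate layer and need their own allowlist entry.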
A practical Claude checklist
Five things to do this quarter if Claude traction matters to you:
- Check your Brave Search rank for your category terms. Open a clean Brave browser session (or go to search.brave.com) and run the same queries you run on Google and Bing. Note the gap. If you rank in the top five on Google but on page two of Brave, that’s your problem.
- Audit robots.txt and firewall rules for Brave-bot. Specifically look for overly restrictive rules, like a blanket `User-agent: *` / `Disallow: /` block, or Cloudflare Super Bot Fight Mode blocking “likely bots.” Explicitly allow Brave’s crawler. We covered the broader AI bot gating question recently — the logic applies here too.
- Submit your site to Brave’s index. Brave accepts site submissions through its webmaster tools. This is worth five minutes, and most teams skip it entirely.
- Test actual Claude responses for brand queries. Claude with web search enabled is available on all plans globally as of 2026. Run your core buyer-intent questions and log which sources Claude cites. If the same three or four domains keep winning, those are your competitors on this platform — and they may not be your Bing or Google competitors.
- Don’t rely on “if it works for ChatGPT, it works for Claude.” The 20% overlap figure means most of your ChatGPT wins don’t transfer. Budget separate effort.
Where this goes from here
Claude is the platform that’s growing fastest among the technical and enterprise segments — developers, researchers, AI engineers. If your buyer personas include those groups, treating Claude as “ChatGPT’s quieter cousin” is a strategy error. The citation distribution is different, the backend is different, and the content signals that win are different.
Two caveats are worth naming. First, Anthropic could change search providers, add providers, or build its own index — any of which would invalidate parts of this playbook. The Brave dependency is a snapshot, not a constant. Second, the 86.7% figure came from a single study. It’s directional evidence, not gospel. Your category may show different behavior, especially for fresh or time-sensitive queries where Claude appears to re-rank more aggressively.
But the bigger point holds: AI search isn’t one market. It’s three or four markets wearing a trench coat, and the retrieval layers underneath are wildly different. Track them separately or miss in ways you can’t diagnose.
RivalHound tracks your brand’s visibility across ChatGPT, Google AI, Perplexity, and more. Start monitoring to see where you stand.