62% of AI Citations Never Say Your Brand Name
Kevin Indig's study of 3,981 domain appearances shows citations and mentions rarely co-occur. Gemini names without citing. ChatGPT cites without naming. Here's the fix.
61.7% of AI citations never say your brand name.
That’s the dominant outcome when brands appear in AI search answers, according to Kevin Indig’s analysis of 3,981 domain appearances across 115 prompts, 14 countries, and four AI engines. A source link lands in the footer of the response. The brand name never enters the sentence. Indig calls these “ghost citations,” and if you’re counting raw citations as your AI visibility KPI, most of what you’re celebrating is invisible to the reader.
The industry has spent 18 months framing citations as the new backlink. That framing is wrong. Citations and mentions are two different signals that behave inversely across platforms, and treating them as one metric is how teams end up optimizing for the wrong thing.
The ChatGPT and Gemini inversion
The platform-by-platform breakdown is where this gets strange. Two of the largest AI engines handle attribution in almost exactly opposite ways.
| Platform | Names your brand | Cites your domain | Dual-signal rate |
|---|---|---|---|
| ChatGPT | 20.7% of appearances | 87% of appearances | ~13% |
| Gemini | 83.7% of appearances | 21.4% of appearances | ~13% |
| Google AI Mode | ~17% more mentions than ChatGPT | Heavy | ~15% |
| Google AI Overviews | Moderate | Heavy | ~14% |
ChatGPT reads like an academic paper: lots of footnotes, very few names in the running text. The user sees an answer that synthesizes five sources and rarely learns which brand said what. Gemini is the mirror image. It names brands conversationally, the way a friend would, but almost never surfaces a link the user can click to verify.
Across the full dataset, only 13.2% of brand appearances produce both a citation and a mention.
That has an uncomfortable implication: the same AI visibility playbook can’t possibly work everywhere. A page engineered to be the definitive cited source for ChatGPT (deep, comprehensive, linkable) is not the same page that triggers Gemini to recall your name. Getting cited is a page-level problem. Getting named is an entity-level problem. Most teams are solving only one and wondering why half the engines ignore them.
Who gets named and who stays a ghost
The naming-vs-citation split isn’t random. Indig’s data shows a clear pattern:
- Strong consumer brands — the ones people search for by name — get named in close to 100% of the answers they appear in
- Aggregator domains like Medium, Wikipedia, and ScienceDirect get cited constantly and named almost never
That pattern tells you what drives a mention. If the AI has seen your brand discussed as an entity — reviewed, compared, debated by name across the web — it treats your brand as a first-class noun in the answer. If it has only seen your pages as useful text, it treats you as a library to pull from.
Here’s the harder truth for B2B and mid-market brands: being “helpful” on your own site is a solved problem. What’s broken is that you don’t exist as an entity in the training corpus. Your content works. Your brand name doesn’t.
Why geography flips the numbers
The same study ran queries across 14 countries, and the mention rate moved wildly by market.
Brands in India and Sweden hit roughly 50% mention rates. Brands in Italy, Brazil, and the Netherlands landed between 18% and 22%. That’s a 2.5x spread on the same kind of query.
The likely cause is training data density. Brands with heavy coverage in enthusiast forums, Reddit-equivalent communities, and local press get cemented as entities. Markets with less of that coverage produce ghost citations by default. If you’re running a single global strategy and measuring an aggregate mention rate, you’re averaging two very different problems and fixing neither.
How to turn a ghost citation into a mention
The fix is rarely another blog post. Seer Interactive’s diagnostic is cleaner than most: ghost citations are a brand entity recognition problem, not a content problem. That reframing changes what you work on.
Three interventions tend to shift the number.
1. Write your brand into the subject of its own claims. Most content presents insight in the passive voice of the category. “Best practices include…” becomes “At Acme, we recommend…” or “In Acme’s analysis of 500 deployments…”. The AI can extract an impersonal best practice and leave your name behind. It has a harder time stripping the name when the sentence structure puts you at the center.
2. Rebuild the entity graph that models read. Your brand needs to exist as a named thing in structured sources: a Wikipedia or Wikidata page, Organization schema with a canonical name that matches across properties, Author schema linking named experts to your brand, and FAQ schema where the brand appears inside the answer text, not only the metadata. Smaller brands often win here because their entity graphs are clean. Larger brands with scattered acquisitions and inconsistent naming end up with diluted signals.
3. Earn named mentions in third-party recommendation contexts. A backlink from a roundup post is worth far less than a sentence in the same post that reads “Acme is the preferred choice for mid-market teams.” The first signal teaches the model about your URL. The second teaches the model about your category positioning. You want both, but if you have to pick, the named mention is what closes the ghost citation gap.
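Intervention 2 above is the most mechanical of the three, so it is worth sketching. Below is a minimal example of the structured-data layer it describes, emitted as JSON-LD from Python. Every name in it is hypothetical ("Acme", `acme.example`, the Wikidata ID, "Jane Doe"); the point is the shape, not the values: one canonical brand name repeated consistently, `sameAs` links that tie scattered profiles to one entity, and an author record that binds a named expert to the brand.

```python
import json

# Hypothetical Organization entity. The "name" value should match the
# brand name used everywhere else (site footer, LinkedIn, press pages);
# inconsistent naming is exactly the diluted-signal problem described above.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme",
    "url": "https://acme.example",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q000000",  # placeholder Wikidata ID
        "https://www.linkedin.com/company/acme",
    ],
}

# Hypothetical Author entity linking a named expert back to the brand,
# so the model sees the person and the organization as one connected graph.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "worksFor": {"@type": "Organization", "name": "Acme"},
}

# Each dict becomes the body of a <script type="application/ld+json"> tag.
print(json.dumps(org_schema, indent=2))
print(json.dumps(author_schema, indent=2))
```

This is a sketch of the idea, not a complete entity graph; a real rollout would add FAQ schema with the brand name inside the answer text, as the list above notes.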
None of this is fast. Seer documented a case study where the client saw zero movement after 29 days, with full effect arriving around week eight. The reason is simple: training data refreshes are what move mention rates, not real-time retrieval. Teams expecting wins in two weeks abandon strategies that were working before they had a chance to show up.
What to measure this quarter
Most AI visibility dashboards still report a single visibility score that averages citations and mentions. If you only pull one change from this research, split that metric into three:
- Citation rate — how often your URL appears in source links
- Mention rate — how often your brand name appears in the answer text
- Dual-signal rate — how often both happen in the same response
Run those three numbers per platform, not as a global average. If your ChatGPT mention rate is 8% and your Gemini mention rate is 45%, a “22% average” hides a strategy problem in plain sight. We wrote about why dual-signal visibility matters for conversions last month — this is the diagnostic that tells you whether the conversion traffic is even going to arrive.
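The three-way split is simple to compute once you have an appearance log from your monitoring tool. A minimal sketch follows; the record fields (`platform`, `cited`, `mentioned`) are assumptions about what such an export contains, not any real tool's format.

```python
from collections import defaultdict

# One record per brand appearance in one AI answer. Field names are
# illustrative -- map your own tool's export onto this shape.
appearances = [
    {"platform": "chatgpt", "cited": True,  "mentioned": False},
    {"platform": "chatgpt", "cited": True,  "mentioned": True},
    {"platform": "gemini",  "cited": False, "mentioned": True},
    {"platform": "gemini",  "cited": False, "mentioned": True},
]

def visibility_split(records):
    """Citation, mention, and dual-signal rates, per platform."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["platform"]].append(r)
    report = {}
    for platform, rows in buckets.items():
        n = len(rows)
        report[platform] = {
            "citation_rate": sum(r["cited"] for r in rows) / n,
            "mention_rate": sum(r["mentioned"] for r in rows) / n,
            # Both signals in the same response -- the 13.2% case above.
            "dual_signal_rate": sum(r["cited"] and r["mentioned"] for r in rows) / n,
        }
    return report

for platform, rates in visibility_split(appearances).items():
    print(platform, rates)
```

Reporting per platform rather than averaging is the whole point: in the toy data above, ChatGPT shows a 100% citation rate against a 50% mention rate, while Gemini is the inverse, and a blended number would hide both.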
And for anyone still skeptical that brand mentions are the unit of account, Ahrefs’ study of 75,000 brands found branded web mentions correlate three times more strongly with AI visibility than backlinks. Indig’s data is the other side of the same coin: mentions aren’t just a ranking signal, they’re the thing that determines whether a user ever learns your name when an AI answers their question.
A ghost citation isn’t a loss, but it isn’t a win either. It’s traffic infrastructure — proof that the model trusts your content enough to reference it. Turning that reference into a mention is the harder work. It’s also where the next 12 months of AI search strategy will actually be fought.
RivalHound tracks your brand’s visibility across ChatGPT, Google AI, Perplexity, and more. Start monitoring to see where you stand.