The Queries That Actually Matter for AI Visibility Aren’t the Ones Humans Type
AI platforms generate their own “grounding queries” to find content. Bing just made them visible. Here’s what they reveal.
You’re optimizing your content for “best CRM for startups.” Good keyword. Decent volume. Makes sense.
But when someone asks ChatGPT that question, the model doesn’t search for “best CRM for startups.” It breaks the prompt into a series of machine-generated sub-queries: “top-rated CRM software 2026,” “CRM tool comparisons small business,” “startup CRM feature requirements,” “expert reviews CRM platforms.” These sub-queries, called grounding queries or fan-out queries, are what actually pull your content into AI responses.
You’ve been optimizing for the wrong queries. And until last month, you had no way to see the right ones.
What grounding queries are (and why they matter)
When a user asks an AI platform a question, the model doesn’t search the web the way a human would. Instead, it generates multiple machine-optimized queries designed to retrieve the best evidence for building an answer. This process, called query fan-out, happens behind the scenes on ChatGPT, Copilot, Perplexity, and Google AI Mode.
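The fan-out process can be pictured as a small sketch. This is purely illustrative: the templates and the entity passed in are assumptions for demonstration, not any platform’s actual retrieval logic.

```python
from datetime import date

def fan_out(prompt: str, topic: str) -> list[str]:
    """Hypothetical sketch: expand one user prompt into several
    machine-optimized grounding queries. Real platforms use model-driven
    decomposition; these templates are invented for illustration."""
    year = date.today().year
    return [
        f"top-rated {topic} software {year}",   # freshness-oriented evidence
        f"{topic} tool comparisons",            # comparative evidence
        f"{topic} feature requirements",        # specification evidence
        f"expert reviews {topic} platforms",    # authority evidence
    ]

queries = fan_out("best CRM for startups", "CRM")
for q in queries:
    print(q)
```

The point of the sketch is the shape of the process, not the templates: one prompt becomes several evidence-seeking queries, and your content is retrieved against those, not against the original phrasing.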
We’ve written before about how ChatGPT reads your content using a four-phase process: search trigger, result analysis, sliding window retrieval, and synthesis. Grounding queries are the search trigger phase made visible. They’re the exact phrases the AI’s retrieval system generates to find pages that can ground its response in real sources.
Here’s what makes them different from traditional search queries:
| Characteristic | User search queries | Grounding queries |
|---|---|---|
| Who creates them | Humans | AI retrieval systems |
| Format | Natural language, often short | Machine-optimized, often specific |
| Volume per prompt | One query per search | Multiple queries per prompt |
| Visibility | Always visible (Search Console) | Hidden until now |
| Optimization target | Click-through rate | Citation probability |
The distinction matters because grounding queries often use language that doesn’t match what you’d expect. A user asking “Is Salesforce worth it?” might trigger grounding queries like “Salesforce pricing enterprise 2026,” “Salesforce alternatives comparison CRM,” and “Salesforce customer satisfaction reviews.” Your content needs to answer those machine-generated queries, not just the human one.
Bing just opened the black box
On February 10, 2026, Microsoft launched the AI Performance dashboard in Bing Webmaster Tools. For the first time, publishers can see exactly which grounding queries trigger citations to their content across Copilot and Bing AI summaries.
Then on March 23, Microsoft added grounding query-to-page mapping: click any grounding query to see which of your pages got cited, or click any page to see which grounding queries triggered its citations. It works in both directions.
The dashboard tracks four things:
- Total citations across Copilot, Bing AI, and partner integrations
- Average cited pages per day from your site
- Grounding queries that triggered those citations
- Page-level citation data showing which URLs get referenced and when
No other search platform offers anything like this. Google Search Console doesn’t show AI Overview citations. OpenAI doesn’t share ChatGPT’s retrieval queries. Perplexity doesn’t expose its internal search behavior. Bing is the only platform giving publishers a direct view into how AI finds and cites their content.
What early data tells us
The reports have been live for about seven weeks now, and practitioners are already finding patterns that challenge conventional GEO wisdom.
Grounding queries don’t match your target keywords. Hive Digital reported that the grounding queries triggering their citations used different phrasing than the keywords they’d optimized for. This makes sense: AI retrieval systems are built to find answers, not match keywords. They search for the information need behind a prompt, and they express that need in their own vocabulary.
One page can be cited for dozens of different queries. Because AI breaks prompts into sub-queries, a single well-structured page might show up across many different grounding queries. Pages with broad topical authority that answer multiple related questions earn citations from more fan-out paths.
Some of your best-cited content might surprise you. Several practitioners found that pages they hadn’t prioritized for SEO were earning the most AI citations. Blog posts, help documentation, and detailed comparison pages often outperform landing pages because they contain the kind of direct, specific answers AI systems want.
A practical workflow for grounding query optimization
Here’s how to use the AI Performance dashboard to improve your citation rates. You’ll need a verified site in Bing Webmaster Tools (free, takes five minutes to set up if you don’t already have one).
Step 1: Identify your top-cited pages
Start with the Pages view in the AI Performance dashboard. Sort by citation count over the last 30 to 90 days. These are the pages AI already considers citation-worthy. They’re your foundation.
Step 2: Map grounding queries to content gaps
Click each top-cited page to see its associated grounding queries. Group these queries by intent:
- Definitional (“what is X,” “X explained”)
- Comparative (“X vs Y,” “best X alternatives”)
- Evaluative (“X reviews,” “is X worth it”)
- Procedural (“how to X,” “X setup guide”)
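If you export the grounding queries from the dashboard, the intent grouping above can be done programmatically. A minimal sketch, assuming simple keyword patterns (tune them to the phrasing you actually see in your reports; the sample queries below are invented):

```python
import re
from collections import defaultdict

# Illustrative intent patterns -- assumptions, not an official taxonomy.
INTENT_PATTERNS = {
    "definitional": re.compile(r"\bwhat is\b|\bexplained\b", re.I),
    "comparative":  re.compile(r"\bvs\b|\balternatives?\b|\bcomparison", re.I),
    "evaluative":   re.compile(r"\breviews?\b|\bworth it\b|\bratings?\b", re.I),
    "procedural":   re.compile(r"\bhow to\b|\bsetup\b|\bguide\b", re.I),
}

def group_by_intent(queries: list[str]) -> dict[str, list[str]]:
    """Bucket grounding queries by the first intent pattern they match."""
    groups = defaultdict(list)
    for q in queries:
        for intent, pattern in INTENT_PATTERNS.items():
            if pattern.search(q):
                groups[intent].append(q)
                break
        else:  # no pattern matched
            groups["other"].append(q)
    return dict(groups)

sample = [
    "Salesforce alternatives comparison CRM",
    "how to set up CRM pipeline",
    "Salesforce customer satisfaction reviews",
    "what is query fan-out explained",
]
print(group_by_intent(sample))
```

Grouping this way makes the next check (does each intent type appear near the top of the page?) a per-bucket review instead of a query-by-query one.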
Now look at your page. Does it explicitly address each intent type near the top and within major subheadings? If a page gets cited for comparison queries but buries its comparison section in paragraph fifteen, you’ve found an optimization opportunity. As we covered in our post on content placement and AI citations, 44% of citations come from the first 30% of a page. Move the relevant content up.
Step 3: Find pages that should be cited but aren’t
Check the Grounding Queries view for queries related to your business that cite competitor pages instead of yours. These are your content gaps. Either create new content targeting those queries, or restructure existing content to better match what the AI retrieval system is looking for.
Step 4: Consolidate overlapping content
If citations for a single grounding query are spread across five of your pages, that signals content fragmentation. The AI retrieval system can’t tell which page is the canonical source, so it cites whichever chunk happens to match best. Consolidating overlapping content into a single authoritative page concentrates your citation authority.
Step 5: Repeat monthly
Grounding queries shift as AI models update and as user behavior changes. Check the dashboard monthly and track whether your citation counts grow after each round of optimization.
The bigger picture: why this changes GEO strategy
Grounding queries reveal something most GEO advice misses. The five factors that drive AI citations (answering the question, proving expertise, clarity, trust signals, and quotable information) are still the right framework for what makes content citable. But grounding queries tell you something different: which questions AI is actually asking of your content.
That gap between “what we optimized for” and “what AI searched for” is where most brands lose citations. You built a page targeting “best CRM software.” The AI searched for “CRM tools with pipeline automation under $50/month.” Your page doesn’t mention pricing in the first three paragraphs. Citation lost.
This is also why the Otterly.AI citation economy report, which analyzed over a million citations, found that only 11% of domains get cited by both ChatGPT and Perplexity. Each platform generates its own grounding queries with its own retrieval logic. Content that satisfies Bing’s grounding queries for Copilot may not satisfy the fan-out queries ChatGPT sends to its search backend. Multi-platform visibility requires understanding how each platform’s retrieval system thinks about your topic, not just how users phrase their prompts.
The limits of what Bing shows you
A few caveats before you rebuild your content calendar around grounding queries.
Bing only covers Bing. The AI Performance dashboard tracks Copilot and Bing AI, not ChatGPT, Perplexity, Google AI Mode, or Gemini. Copilot is a meaningful slice of AI usage, but it’s not the whole picture. Use the grounding queries you see in Bing as directional signals for how other platforms might also decompose prompts, but don’t assume the patterns transfer exactly.
The data is still thin for small sites. If your site gets minimal Bing traffic, your AI Performance data will be sparse. You’ll need several weeks of data before patterns become reliable.
Grounding queries are a proxy, not the whole story. Seeing which queries trigger your citations tells you what AI retrieval systems look for. It doesn’t tell you why a competitor got cited instead, or how the model weighted different sources during synthesis. Grounding queries are one piece of the visibility puzzle, not the complete picture.
What to do this week
If you’re doing any kind of GEO work, set up the Bing AI Performance dashboard today. It’s free, it takes minutes, and it gives you data no other platform provides.
Then run through the workflow above for your top five pages. Compare the grounding queries you see against the keywords you’ve been optimizing for. The gap between those two lists is your biggest opportunity for improving AI visibility.
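That gap analysis can be rough-cut with token overlap. A minimal sketch, with the caveat that token overlap is a crude proxy for semantic match, and the keyword and query lists here are invented for illustration:

```python
def token_set(phrase: str) -> set[str]:
    """Lowercased word tokens of a phrase."""
    return set(phrase.lower().split())

def coverage(keyword: str, grounding_queries: list[str]) -> float:
    """Best token overlap (0 to 1) between one target keyword
    and any observed grounding query."""
    kw = token_set(keyword)
    best = 0.0
    for q in grounding_queries:
        best = max(best, len(kw & token_set(q)) / len(kw))
    return best

# Hypothetical inputs: your optimized keywords vs. queries from the dashboard.
targets = ["best CRM for startups", "CRM pricing"]
observed = [
    "CRM tools with pipeline automation",
    "startup CRM feature requirements",
]
for t in targets:
    print(t, round(coverage(t, observed), 2))
```

Keywords with low coverage scores are the gap: queries AI is actually generating that your optimized content never set out to answer.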
The brands that figure out grounding queries first will have a structural advantage. The rest will keep optimizing for queries that AI never actually searches for.
RivalHound tracks your brand’s visibility across ChatGPT, Google AI, Perplexity, and more. Start monitoring to see where you stand.