
Bing Just Gave You Free AI Citation Data. Here's What to Do With It.

Microsoft's AI Performance dashboard is the first free tool showing which pages get cited in AI answers. A practical guide to using it.

RivalHound Team
8 min read


For the past two years, AI citation monitoring has been a black box. You could ask ChatGPT about your brand and manually note whether it mentioned you, but systematic tracking required paid tools or custom scraping setups. Nobody offered a Google Search Console equivalent for AI search.

That changed on February 9, 2026, when Microsoft launched the AI Performance dashboard in Bing Webmaster Tools. Then on March 23, they added the feature everyone actually needed: query-to-page mapping, which connects the questions AI systems ask to the specific pages they cite.

It’s the first time any platform has given website owners free, structured data about how their content performs in AI-generated answers. If you haven’t set it up yet, you should. But you also need to understand what it actually tells you, and what it leaves out.

What you’re looking at

The AI Performance dashboard tracks how your pages get cited across three Microsoft surfaces: Copilot conversations, AI-generated summaries in Bing search, and select partner integrations. Four metrics sit at the top:

| Metric | What it measures |
| --- | --- |
| Total citations | How often your pages appear as sources in AI answers during the selected period |
| Average cited pages | Daily average of unique URLs from your site referenced in AI answers |
| Grounding queries | The phrases AI retrieval systems used when they pulled your content |
| Page-level activity | Which specific URLs get cited most, with trend lines over time |

The grounding queries matter most. These aren’t the questions users typed. They’re the reformulated search queries that Copilot’s retrieval system ran internally when constructing its answer. Microsoft calls this process “grounding” because it’s how the AI grounds its response in real web content rather than generating from memory alone.

Think of it this way: a user asks Copilot “what’s the best CRM for small businesses?” Copilot doesn’t just search that exact phrase. It fans out into multiple retrieval queries: “top rated CRM software small business 2026,” “CRM comparison features pricing SMB,” “best CRM reviews enterprise vs small business.” Each of those is a grounding query. If your page gets pulled by one of them, it shows up in the dashboard.
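The fan-out mechanics can be sketched in a few lines of Python. The URLs and retrieval results below are made up for illustration; the point is that one conversation produces several grounding queries, and each hit counts separately:

```python
from collections import Counter

# One user question fans out into several grounding queries.
fan_out = [
    "top rated CRM software small business 2026",
    "CRM comparison features pricing SMB",
    "best CRM reviews enterprise vs small business",
]

# Hypothetical pages each grounding query pulled as a source.
retrieved = {
    fan_out[0]: ["example.com/crm-guide", "example.com/pricing"],
    fan_out[1]: ["example.com/crm-guide"],
    fan_out[2]: ["example.com/crm-guide", "example.com/reviews"],
}

# Every appearance in a fan-out query counts as its own citation.
citations = Counter(url for urls in retrieved.values() for url in urls)
print(citations["example.com/crm-guide"])  # 3 citations from one conversation
```

One user question, one page, three citations. Keep that multiplier in mind when reading the dashboard's totals.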

How to use grounding queries for content decisions

Grounding queries function as a new kind of keyword research, but for AI retrieval rather than traditional search. Search Influence analyzed 19,717 Copilot citations across 91 days and found patterns that translate directly into content actions.

Find gaps between what AI asks and what your page answers

Pull up your most-cited pages in the dashboard. Click each one to see which grounding queries drove those citations. Now read the page itself. Does it actually answer those queries in clear, direct language?

Often it won’t. Your page about “project management software pricing” might get cited for grounding queries about “project management tool free trial comparison.” That means the AI thought your page was relevant enough to cite, but you’re not directly addressing the question users are asking through the AI. Add a section that does.
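The "does this page answer that query" check can be roughed out with simple term overlap. This is a crude sketch (real relevance is semantic, not lexical), but it is enough to triage a list of grounding queries against a page:

```python
import re

def answers_query(page_text: str, grounding_query: str, threshold: float = 0.6) -> bool:
    """Rough check: what fraction of the query's terms appear on the page?"""
    tokenize = lambda s: set(re.findall(r"[a-z0-9]+", s.lower()))
    query_terms = tokenize(grounding_query)
    overlap = len(query_terms & tokenize(page_text)) / len(query_terms)
    return overlap >= threshold

page = "Our project management software pricing starts at $10 per user."
query = "project management tool free trial comparison"
print(answers_query(page, query))  # False: most of the query's terms are missing
```

A `False` here flags a page that gets cited for a query it never directly addresses, which is exactly the gap worth filling with a new section.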

Identify topics where you’re invisible

The grounding queries you don't see matter as much as the ones you do. If you sell accounting software and your grounding queries are all about "invoicing features" but none mention "payroll integration" or "tax compliance automation," those are content gaps. Copilot's retrieval system isn't finding you for those topics, and other AI platforms most likely aren't either.
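Finding the invisible topics is a set difference. Assuming you maintain a list of topics you want to be cited for, compare it against the grounding queries the dashboard reports (the topics and queries below are illustrative):

```python
# Topics you want AI retrieval to find you for.
target_topics = ["invoicing features", "payroll integration", "tax compliance automation"]

# Grounding queries reported by the dashboard (made-up examples).
grounding_queries = [
    "accounting software invoicing features comparison",
    "best invoicing tools for small business",
]

# Topics that never appear in any grounding query are content gaps.
gaps = [t for t in target_topics if not any(t in q for q in grounding_queries)]
print(gaps)  # ['payroll integration', 'tax compliance automation']
```

Substring matching is deliberately blunt here; in practice you would match on keyword stems or embeddings, but even this version surfaces the topics where retrieval never reaches you.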

Watch for citation concentration

Search Influence's analysis revealed that a single page accounted for 69% of all citations across 86 cited pages. That kind of concentration is risky: if that one page goes stale or gets outcompeted, your entire AI visibility collapses overnight.

Check whether your citations spread across multiple pages or cluster on just a few. If they cluster, you need to build out supporting content that can earn citations independently. We’ve covered why this matters in our post on how content placement drives AI citations.
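The concentration check is one division. A sketch, with made-up per-URL counts that echo the pattern Search Influence observed:

```python
from collections import Counter

def top_page_share(citations: Counter) -> float:
    """Fraction of all citations earned by the single most-cited page."""
    (_, top_count), = citations.most_common(1)
    return top_count / sum(citations.values())

# Hypothetical citation counts per URL on your site.
counts = Counter({"/flagship-guide": 690, "/pricing": 200, "/faq": 110})
share = top_page_share(counts)
print(round(share, 2))  # 0.69 -- one page carries most of your AI visibility
```

There is no official threshold, but if one page carries well over half your citations, treat it as a single point of failure and build supporting pages around it.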

The citation inflation problem

One thing the dashboard doesn’t explain well: the numbers are bigger than they look.

When a user asks Copilot a question, the system doesn’t run one retrieval query. It fans out into three to five variants of the same question, each pulling potentially different sources. Every time your page appears as a source in one of those fan-out queries, it counts as a separate citation.

Search Influence estimated that their 19,717 citations probably represented closer to 4,000-6,000 actual user conversations. That’s not a flaw in the data — it’s how AI retrieval works. But it means you shouldn’t compare these raw numbers to your Google Search Console impressions and draw conclusions about relative traffic. The metrics measure different things.

What the fan-out pattern does tell you is which pages AI systems find consistently relevant. A page that gets cited across multiple variants of the same question is genuinely authoritative for that topic. A page that shows up in just one variant might be a marginal match.
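The deflation arithmetic is worth making explicit. Assuming each conversation fans out into three to five retrieval queries (the range described above), raw citation counts bound the number of underlying conversations like this:

```python
def estimated_conversations(raw_citations: int, fan_out_min: int = 3, fan_out_max: int = 5) -> tuple:
    """Back-of-envelope bounds on user conversations behind a raw
    citation count, assuming 3-5 retrieval queries per conversation."""
    return raw_citations // fan_out_max, raw_citations // fan_out_min

low, high = estimated_conversations(19_717)
print(low, high)  # 3943 6572 -- in line with the 4,000-6,000 estimate
```

This ignores that a single fan-out query can cite several of your pages at once, so treat the output as a ceiling-and-floor estimate, not a measurement.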

What the dashboard doesn’t tell you

The AI Performance dashboard has real blind spots, and pretending otherwise would waste your time.

The most obvious gap: no click-through data. You can see that your page was cited, but not whether anyone clicked through to read it. For traditional search, that would be a serious limitation. For AI answers, it matters less. Being cited is the visibility event itself, since 93% of AI Mode sessions end without a click. Still, you can’t measure direct traffic impact.

The bigger problem is platform coverage. The dashboard only covers Microsoft’s AI surfaces: Copilot, Bing AI summaries, and a few partner integrations. It tells you nothing about ChatGPT, Perplexity, Google AI Overviews, Claude, or Gemini. ChatGPT alone accounts for the majority of AI referral traffic. And as we’ve written about before, only 11% of websites get cited by both ChatGPT and Perplexity. Each platform has its own citation preferences and source biases. Copilot data is useful, but it’s one slice of a much larger picture.

You also can’t see how competitors perform. If your “total citations” number goes up, is that because you improved, or because the overall query volume in your category grew? Without competitive context, you can’t tell.

Finally, grounding queries are sampled, not exhaustive. For high-volume sites, the sample is probably representative. For smaller sites with fewer citations, individual data points may be noisy.

A practical setup checklist

If you haven’t activated the dashboard yet, the setup takes about ten minutes:

  1. Go to Bing Webmaster Tools and verify your site if you haven’t already (DNS, meta tag, or file verification all work)
  2. Navigate to the AI Performance section in the left sidebar
  3. Set your date range. The data goes back to late January 2026 for most sites
  4. Start with the “Pages” view to see which URLs get cited most
  5. Click into your top pages to examine grounding queries
  6. Cross-reference grounding queries with your actual content to find gaps

Schedule a weekly review. Citation patterns shift faster in AI search than in traditional search. We’ve covered the freshness dynamics in our post on content decay in AI search, and the same principle applies here — what gets cited this week might not get cited next month.
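The weekly review can be partly automated by diffing two exports of the page-level data. The CSV layout below is hypothetical (a `page` column per cited URL); adjust the field name to whatever Bing Webmaster Tools actually exports:

```python
import csv
import io

def cited_urls(export_csv: str) -> set:
    """Read the set of cited URLs from a CSV export.
    Assumes a hypothetical 'page' column; adapt to the real export format."""
    return {row["page"] for row in csv.DictReader(io.StringIO(export_csv))}

last_week = "page,citations\n/guide,12\n/pricing,4\n"
this_week = "page,citations\n/guide,15\n/faq,2\n"

lost = cited_urls(last_week) - cited_urls(this_week)
gained = cited_urls(this_week) - cited_urls(last_week)
print(sorted(lost), sorted(gained))  # ['/pricing'] ['/faq']
```

Pages in `lost` are the ones to investigate first: a URL that stopped earning citations is often a freshness or competition problem you can still fix.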

Where this fits in a broader monitoring strategy

Bing’s dashboard is a starting point, not a destination. It confirms that AI citation analytics are becoming a real discipline with real data, not just educated guesses. That matters.

But a single-platform view creates blind spots. Each AI platform weights sources differently, cites different page types, and responds to different optimization signals. Tracking Copilot citations while ignoring ChatGPT, Perplexity, and Google AI Overviews is like monitoring your rankings on Bing but not Google — technically useful, strategically incomplete.

The strongest AI visibility strategy combines the free data Microsoft now provides with broader monitoring that covers the platforms where your audience actually searches. Use the Bing dashboard to understand the mechanics of AI citation: what grounding queries look like, how fan-out works, which pages attract retrieval systems. Then apply those insights across every platform.

RivalHound tracks your brand’s visibility across ChatGPT, Google AI, Perplexity, and more. Start monitoring to see where you stand.

#Bing Webmaster Tools #AI citations #Copilot #GEO #grounding queries

Ready to Monitor Your AI Search Visibility?

Track your brand mentions across ChatGPT, Google AI, Perplexity, and other AI platforms.