Share of voice (SOV) has been a standard brand marketing metric for decades — the percentage of total advertising or media impressions in a category that belong to your brand. In the AI search era, SOV has a new and increasingly important dimension: the proportion of AI-generated responses in your category that mention your brand. This is AI Share of Voice, and it is rapidly becoming one of the most important brand health metrics in digital marketing.

"AI share of voice is to brand marketing what organic share of voice was to content marketing five years ago — the metric that separates leaders from laggards."

What AI share of voice means and why it matters

AI SOV formula: AI SOV = (number of AI responses that mention your brand / total AI responses for category queries) × 100.

For example: if you run 100 category-level prompts across five AI models (500 total responses) and your brand appears in 75 of those responses, your AI SOV is 15%. If your nearest competitor appears in 200 of those responses, their AI SOV is 40%. The gap between the two scores is your AI visibility deficit, and it represents real commercial exposure as more customers rely on AI for brand discovery.
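The formula and the worked example above can be checked in a few lines of Python. This is a minimal sketch; the function name and error handling are illustrative choices, not part of any defined methodology:

```python
def ai_sov(brand_mentions: int, total_responses: int) -> float:
    """AI Share of Voice: percentage of category responses that mention the brand."""
    if total_responses <= 0:
        raise ValueError("total_responses must be positive")
    return 100 * brand_mentions / total_responses

# Worked example from the article: 100 prompts x 5 models = 500 responses.
print(ai_sov(75, 500))   # your brand -> 15.0
print(ai_sov(200, 500))  # nearest competitor -> 40.0
```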

AI SOV matters because it is a leading indicator of AI-influenced revenue. Customers who ask AI assistants for recommendations in your category and receive a competitor's name instead of yours are more likely to consider, trial, and purchase from that competitor. As AI search grows as a discovery channel, AI SOV correlates increasingly closely with market share in discovery-heavy categories. This connects directly to why your competitors may be winning in AI search.

Defining your query universe

The foundation of AI SOV measurement is a well-defined query universe — the set of prompts that represents how your target customers query AI assistants about your category. This is the most important methodological decision in AI SOV measurement, because the query set determines what you're measuring.

A robust query universe for a B2B SaaS company might include: 20 category discovery prompts ("what's the best project management tool for a 50-person team?"), 15 problem-solution prompts ("how do I manage cross-functional projects without email chaos?"), 10 comparison prompts ("compare Notion vs Asana for agile teams"), and 5 brand-specific prompts ("what do you know about [brand]?"). Total: 50 prompts as a minimum viable query set; 100+ for statistical robustness.
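If you store the query universe as structured data, the mix above might be organised like this. The category keys are my own labels, and only one example prompt per bucket is shown; a real set would carry the full 20/15/10/5 counts:

```python
# Hypothetical query universe for a B2B SaaS brand.
# Counts in comments mirror the breakdown above (20 + 15 + 10 + 5 = 50 minimum).
query_universe = {
    "category_discovery": [   # 20 prompts in a full set
        "what's the best project management tool for a 50-person team?",
    ],
    "problem_solution": [     # 15 prompts in a full set
        "how do I manage cross-functional projects without email chaos?",
    ],
    "comparison": [           # 10 prompts in a full set
        "compare Notion vs Asana for agile teams",
    ],
    "brand_specific": [       # 5 prompts in a full set
        "what do you know about [brand]?",
    ],
}

print(sum(len(prompts) for prompts in query_universe.values()))
```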

Design your query universe by researching how people actually ask AI assistants about your category. Survey your customers, review your support ticket themes, and use keyword research tools to identify high-volume informational queries in your space. Avoid queries that are too generic (no clear commercial intent) or too specific (one-off use cases). The sweet spot is queries that represent genuine category-level consideration moments.

The measurement methodology: prompts, sampling, and scoring

Because AI responses are non-deterministic — the same prompt can produce different responses on different runs — robust AI SOV measurement requires sampling, not single-point measurement. Each prompt in your query universe should be run at least 3-5 times per AI model, and results should be aggregated to get a reliable mention rate.

For each response, record:

  • Brand mention (yes/no): Did your brand appear?
  • Position: If multiple brands were listed, in what position?
  • Sentiment score: Positive (recommended, praised), neutral (mentioned factually), or negative (mentioned with caveats or criticism).
  • Competitor mentions: Which other brands appeared in the same response?

Run the full query universe across all five major AI models: ChatGPT, Perplexity, Gemini, Claude, and Grok. Calculate separate SOV scores per model (your Perplexity SOV may differ dramatically from your Claude SOV) as well as an aggregate SOV across all models. For guidance on the mechanics of manual testing, see our article on how to check your brand's AI visibility.

Benchmark: what does good AI SOV look like?

AI SOV benchmarks vary significantly by category size, competitive intensity, and market maturity. That said, based on analysis across multiple categories, the following rough benchmarks apply:

  • Category leader: 35-60% AI SOV. If you're the dominant brand in your category, AI models should mention you in the majority of category-level queries.
  • Strong challenger: 15-35% AI SOV. Regularly mentioned, often alongside the category leader, with generally positive framing.
  • Emerging player: 5-15% AI SOV. Mentioned in some queries, particularly for specific use cases or comparisons. Significant room to grow.
  • Invisible: Below 5% AI SOV. The brand is either too new to appear in AI training data, too niche for AI models to represent, or hampered by significant entity recognition problems. Foundational GEO work is required before SOV growth is possible.

These benchmarks should be calibrated against your specific competitive set. If all brands in your category are below 15%, the category is relatively underrepresented in AI and the first brand to invest in GEO will capture disproportionate share.
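As a rough illustration, the tiers can be expressed as a threshold function. Note that the published bands overlap at their edges (35% sits in both the challenger and leader bands), so this sketch assigns boundary values to the higher tier, which is a choice of mine rather than something the benchmarks specify:

```python
def sov_tier(sov_percent: float) -> str:
    """Map an AI SOV score (0-100) to the rough benchmark tiers above."""
    if sov_percent >= 35:
        return "category leader"
    if sov_percent >= 15:
        return "strong challenger"
    if sov_percent >= 5:
        return "emerging player"
    return "invisible"

print(sov_tier(40))  # category leader
print(sov_tier(8))   # emerging player
```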

Reporting AI SOV to stakeholders

AI SOV is a relatively new metric that many marketing leaders and CFOs are unfamiliar with. The most effective way to communicate its importance is to contextualise it alongside metrics they already track. Frame AI SOV as the digital-era equivalent of traditional brand share of voice, and connect it to the revenue implications of AI-influenced discovery. A one-page monthly dashboard that shows your AI SOV versus the previous period, competitor AI SOV trends, and the estimated proportion of category queries going through AI channels provides the context executives need for buy-in.

Automated SOV tracking with Sight

Manual AI SOV measurement at the scale required for statistical robustness is extremely time-intensive. Sight automates the entire process: you define your query universe and competitor set, Sight runs your prompts across all major AI models on a regular cadence, and the results appear in your dashboard as trended SOV scores with competitive breakdowns.

Sight's AI SOV dashboard shows your overall score, per-model scores, per-query-category scores (discovery vs comparison vs problem-solving), sentiment breakdown, and competitive share of voice trends — all updated automatically. This is the equivalent of having a continuously updated rank tracker for AI search. Start tracking your AI SOV with Sight →

For a broader view of the GEO methodology and how SOV tracking fits into a complete strategy, see our GEO audit guide.