How Geol.ai measures AI visibility — every weight, every metric, fully transparent
AI visibility scoring should not be a black box. Geol.ai publishes its complete scoring methodology so you can understand exactly how your score is calculated, what drives it up or down, and how to improve it.
Our scoring system combines two complementary engines: a Quality Score Engine that evaluates 6 weighted dimensions of your content's AI-readiness, and an AI Visibility Score that measures how well AI models can actually parse and understand your pages.
The Quality Score Engine evaluates your content across 6 weighted dimensions, each measuring a distinct aspect of AI-readiness. The final score is the weighted average of all dimensions, normalized to a 0-100 scale.
Each dimension is scored independently from 0-100, then combined using its assigned weight. Here's exactly what each dimension measures and how to optimize for it; a short sketch of the weighted-average calculation follows the list.
1. Schema completeness: evaluates whether your structured data contains all required Schema.org fields for its type.
2. Data depth and volume: measures how much useful information AI models can extract from your content.
3. Content recency: evaluates freshness using temporal signals from structured data and metadata.
4. Credibility signals: evaluates author and publisher signals that AI models use to assess trustworthiness.
5. Format coverage: measures how many AI-discoverable formats your site provides across platforms.
6. Content structure: evaluates how well your content is structured for AI parsing and comprehension.
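In code, the calculation looks roughly like the sketch below. The dimension keys are shorthand for the six dimensions above, and the weights are illustrative placeholders only, not Geol.ai's published weights; the function simply takes the weighted average of six 0-100 dimension scores.

    # Minimal sketch of the weighted-average calculation described above.
    # The dimension keys and weights are illustrative placeholders, not
    # Geol.ai's published values.
    EXAMPLE_WEIGHTS = {
        "schema_completeness": 0.20,
        "data_depth": 0.20,
        "content_recency": 0.15,
        "credibility": 0.15,
        "format_coverage": 0.15,
        "content_structure": 0.15,
    }

    def quality_score(dimension_scores: dict[str, float]) -> float:
        """Combine per-dimension scores (each 0-100) into one 0-100 score."""
        total_weight = sum(EXAMPLE_WEIGHTS.values())
        weighted = sum(
            EXAMPLE_WEIGHTS[name] * dimension_scores[name]
            for name in EXAMPLE_WEIGHTS
        )
        return weighted / total_weight  # stays on the 0-100 scale

    # Example: a page strong on schema but weak on recency -> about 68.25
    print(quality_score({
        "schema_completeness": 90, "data_depth": 75, "content_recency": 40,
        "credibility": 60, "format_coverage": 55, "content_structure": 80,
    }))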
Separate from the Quality Score, the AI Visibility Score is a mechanical measurement of how well AI models can actually parse your page. It's computed from three sub-components and capped at 80 to prevent score inflation.
Why cap at 80? The hard cap prevents unrealistically high scores from purely mechanical analysis. Scores above 80 are reserved for pages that also pass AI-powered quality validation, ensuring that a high score genuinely reflects real-world AI discoverability.
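As a rough sketch, the mechanical score is a sum of sub-scores clamped at 80. The sub-component values in the example call are assumptions for illustration; only the 80-point cap comes from the methodology itself.

    # Sketch of the mechanical AI Visibility Score with its 80-point cap.
    # The example sub-scores are assumed; only the cap of 80 is from the
    # methodology above.
    MECHANICAL_CAP = 80

    def ai_visibility_score(sub_scores: tuple[float, float, float]) -> float:
        """Sum the three mechanical sub-scores and cap the result at 80."""
        return min(sum(sub_scores), MECHANICAL_CAP)

    print(ai_visibility_score((35, 30, 25)))  # 90 raw -> capped at 80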
Raw mechanical scores can be misleading — a page might check every box but still have poor quality content. Geol.ai uses an AI quality validator to calibrate scores against actual content quality.
Final scores are converted to letter grades. Because of our conservative scoring approach, achieving a B or higher indicates genuinely strong AI optimization.
A: Exceptional. Top-tier AI discoverability across all dimensions.
B: Strong. Well-optimized with minor areas for improvement.
C: Average. Functional but missing optimization opportunities.
D: Below average. Significant gaps in AI readiness.
F: Poor. Needs immediate attention for AI visibility.
Global maximum: 95. Scores of 95-100 are reserved for truly exceptional sites only.
Mechanical cap: 80. The AI Visibility Score cannot exceed 80 without AI validation.
Mismatch detection: a high mechanical score combined with low AI quality is capped at 70.
Degraded fallback: if the AI validator is down, the multiplier defaults to 0.65.
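Read together, these guardrails act as post-processing rules on the raw scores. The sketch below is one plausible reading, assuming the validator returns a 0-100 quality score that is blended with the mechanical score at equal weight; the constants 95, 80, 70, and 0.65 come from the list above, while the blending and the mismatch thresholds are assumptions.

    # One plausible reading of the guardrails above; how the mechanical
    # score and the validator output are combined is not fully specified,
    # so treat this as a sketch. Constants 95, 80, 70, and 0.65 come from
    # the list above; the blend and mismatch thresholds are assumptions.
    GLOBAL_MAX = 95
    MECHANICAL_CAP = 80
    MISMATCH_CAP = 70
    FALLBACK_MULTIPLIER = 0.65

    def calibrated_score(mechanical: float, ai_quality: float | None) -> float:
        """mechanical: 0-100 rule-based score; ai_quality: validator score or None."""
        if ai_quality is None:
            # Degraded fallback: validator unavailable, conservative
            # multiplier, and the 80-point mechanical cap stays in force.
            return min(mechanical * FALLBACK_MULTIPLIER, MECHANICAL_CAP)

        # Blend the two signals; equal weighting is an assumption.
        score = (mechanical + ai_quality) / 2

        if mechanical >= 80 and ai_quality <= 50:  # assumed mismatch thresholds
            score = min(score, MISMATCH_CAP)       # mismatch detection: cap at 70
        return min(score, GLOBAL_MAX)              # global ceiling of 95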
We believe scoring methodology should be public, auditable, and continuously improved. Here's what we commit to.
Run a free scan to see exactly how your site performs across all 6 dimensions. Every report includes per-dimension breakdowns, specific recommendations, and critical issues to fix.
Our AI monitoring system continuously tracks how 8 leading AI platforms evaluate, cite, and discuss your brand. Here's exactly how we collect, classify, and score that data.
Our automated monitoring system continuously queries 8 leading AI platforms with prompts related to your brand, industry, and competitors. Each response is captured, timestamped, and prepared for analysis.
Deduplication: Near-identical answers from the same prompt across multiple runs are collapsed using content hashing, ensuring metrics reflect unique AI assessments rather than repeated outputs.
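A minimal sketch of this deduplication step, assuming answers are keyed by prompt plus a hash of their normalized text; the exact normalization Geol.ai applies is not specified here.

    # Sketch of answer deduplication via content hashing. The normalization
    # steps (lowercasing, whitespace collapsing) are assumptions; any
    # canonicalization that maps near-identical answers to one key works.
    import hashlib
    import re

    def content_key(prompt_id: str, answer: str) -> str:
        normalized = re.sub(r"\s+", " ", answer.strip().lower())
        digest = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        return f"{prompt_id}:{digest}"

    def deduplicate(answers: list[dict]) -> list[dict]:
        """Keep one answer per (prompt, content hash) pair."""
        seen: set[str] = set()
        unique = []
        for a in answers:
            key = content_key(a["prompt_id"], a["text"])
            if key not in seen:
                seen.add(key)
                unique.append(a)
        return unique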
Our advanced AI analyzes every answer where your brand is mentioned, classifying the sentiment expressed by each AI platform. This classification powers all sentiment metrics across your monitoring dashboard.
Positive: the AI expresses a favorable view of your brand, recommends it, or praises it.
Neutral: the AI mentions your brand factually without expressing a positive or negative opinion.
Negative: the AI expresses criticism of your brand, warns against it, or identifies drawbacks.
Note: Only answers with evaluated mentions — where the AI actively assesses your brand — are included in sentiment scores. This ensures sentiment data reflects genuine AI opinions, not passing references.
Beyond sentiment, our system classifies how each AI platform references your brand. Understanding the type of mention provides deeper insight into your brand's presence in AI-generated content.
Evaluated: the AI actively assesses, reviews, or recommends your brand.
Cited: the AI references your brand as a source or authority.
Listed: the AI includes your brand in a list without detailed assessment.
Not Mentioned: the AI's response does not reference your brand.
Note: Sentiment percentages are calculated exclusively from evaluated mentions to ensure accuracy. Citation and listing data contributes to visibility metrics separately.
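In code, the sentiment calculation might look like the sketch below, which filters to evaluated mentions before computing percentages. The field names are assumptions, not Geol.ai's schema.

    # Sketch of the sentiment calculation: only answers whose mention type
    # is "evaluated" enter the denominator. Field names are assumed.
    from collections import Counter

    def sentiment_breakdown(answers: list[dict]) -> dict[str, float]:
        evaluated = [a for a in answers if a["mention_type"] == "evaluated"]
        if not evaluated:
            return {}
        counts = Counter(a["sentiment"] for a in evaluated)
        return {
            sentiment: round(100 * n / len(evaluated), 1)
            for sentiment, n in counts.items()
        }

    answers = [
        {"mention_type": "evaluated", "sentiment": "positive"},
        {"mention_type": "evaluated", "sentiment": "neutral"},
        {"mention_type": "listed",    "sentiment": "neutral"},  # excluded
        {"mention_type": "evaluated", "sentiment": "positive"},
    ]
    print(sentiment_breakdown(answers))  # {'positive': 66.7, 'neutral': 33.3}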
Our AI identifies recurring topics across all AI-generated answers mentioning your brand. Related themes are then grouped into semantic families using intelligent clustering, reducing approximately 87 raw themes to around 20 manageable families for clear, actionable insights.
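The clustering method itself is not spelled out here, so the sketch below shows one common approach (TF-IDF vectors plus agglomerative clustering) purely as an illustration of grouping raw themes into families; it is not necessarily Geol.ai's implementation.

    # Illustrative theme-to-family grouping using TF-IDF vectors and
    # agglomerative clustering; Geol.ai's actual clustering method is not
    # specified, so this is only a sketch of the general technique.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import AgglomerativeClustering

    raw_themes = [
        "pricing transparency", "pricing plans", "cost of subscription",
        "api reliability", "api uptime", "customer support quality",
    ]
    n_families = min(3, len(raw_themes))  # e.g. ~87 themes -> ~20 families

    vectors = TfidfVectorizer().fit_transform(raw_themes).toarray()
    labels = AgglomerativeClustering(n_clusters=n_families).fit_predict(vectors)

    families: dict[int, list[str]] = {}
    for theme, label in zip(raw_themes, labels):
        families.setdefault(int(label), []).append(theme)
    print(families)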
Every classification receives a confidence score reflecting how certain our AI is about the sentiment and mention type assignment. These scores help you distinguish between clear-cut assessments and edge cases.
≥ 80%: high classification confidence
50-79%: moderate classification confidence
< 50%: low classification confidence
Note: Historical answers analyzed before confidence scoring was introduced display without a confidence badge. All new analyses include confidence scores.
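A small sketch of how a confidence value maps onto these bands; the band names are descriptive labels rather than the exact badge names in the dashboard, and historical answers without a score simply return no band.

    # Sketch of mapping a classification confidence score to the bands above.
    # Band labels are descriptive; the dashboard's badge names may differ.
    def confidence_band(confidence: float | None) -> str | None:
        if confidence is None:
            return None          # historical answer without a confidence score
        if confidence >= 0.80:
            return "high"
        if confidence >= 0.50:
            return "moderate"
        return "low"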
Share of Voice (SOV) measures each brand's share of total positive sentiment across all tracked brands and competitors. This metric helps you understand your relative positive perception in AI-generated responses compared to your competitive landscape.
Formula: SOV = (Brand's Positive Mentions ÷ Total Positive Mentions across All Brands) × 100
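A direct translation of the formula, assuming the input is a map from brand name to its count of positive mentions:

    # Share of Voice: each brand's percentage of all positive mentions
    # across the tracked brands. The input shape is an assumption.
    def share_of_voice(positive_mentions: dict[str, int]) -> dict[str, float]:
        total = sum(positive_mentions.values())
        if total == 0:
            return {brand: 0.0 for brand in positive_mentions}
        return {
            brand: round(100 * count / total, 1)
            for brand, count in positive_mentions.items()
        }

    print(share_of_voice({"YourBrand": 42, "Competitor A": 30, "Competitor B": 28}))
    # {'YourBrand': 42.0, 'Competitor A': 30.0, 'Competitor B': 28.0}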