Research · March 19, 2026 · 10 min read

We Scanned 220 Websites for AI Visibility — 43% Got an F

We analyzed real scan data from 220 websites to find out how AI-ready the average site is. The results: most sites fail basic AI visibility checks, and the fixes are simpler than you think.

The bottom line: We analyzed 220 real website scans from our free AI visibility checker. The average score is 53 out of 100 — a D+ grade. 43% of sites scored an F. Most failures come from missing llms.txt files, weak structured data, and content that AI models can't extract. The good news: the top fixes take less than an hour.

Methodology

This data comes from 220 scans run through LLMGeoKit's free scanner between January and March 2026. Each scan tests 7 dimensions of AI visibility: robots.txt, structured data, metadata, content structure, llms.txt, citation signals, and extractability. Scores range from 0-100 with letter grades A through F.

1 The Overall Numbers

Across 220 scans, the picture is clear: most websites are not ready for AI assistants. ChatGPT, Gemini, Claude, and Perplexity are becoming primary discovery channels, but the average website scores barely above half marks.

  • Average AI visibility score: 53/100
  • Median score: 58
  • Sites scoring F (below 50): 43%
  • Sites scoring A (above 90): 3.2%

Only 7 of the 220 websites earned an A grade. Meanwhile, 94 sites (43%) scored below 50, earning an F. The gap between AI-optimized and AI-invisible is enormous.

2 Grade Distribution: The Bell Curve Skews Low

Here's how the 220 sites break down by letter grade:

| Grade | Score Range | Sites | Percentage |
|-------|-------------|-------|------------|
| A | 90-100 | 7 | 3.2% |
| B | 75-89 | 16 | 7.3% |
| C | 60-74 | 40 | 18.2% |
| D | 50-59 | 40 | 18.2% |
| F | 0-49 | 94 | 42.7% |

The distribution tells an important story. Only 10.5% of sites score B or above. That means roughly 9 out of 10 websites have meaningful gaps in how AI assistants can discover, understand, and recommend them.

What this means for you

If you fix even the basics — robots.txt, structured data, and metadata — you'll likely jump from the F/D cluster into C or B territory. The bar is low because almost nobody is optimizing for AI visibility yet. Early movers have a real advantage.

3 The Most Common Failures

Across all 220 scans, certain patterns repeat. These are the dimensions where sites lose the most points:

Where most sites fail

  • llms.txt — 90%+ of sites have no llms.txt file at all
  • Citation signals — Missing author names, publication dates, or canonical URLs
  • Extractability — No FAQ sections, no data tables, no definitions that AI can quote
  • Structured data — Missing or minimal JSON-LD markup

Where most sites pass

  • Robots.txt — Most sites allow crawling (though few explicitly allow AI bots)
  • Metadata — Title tags and meta descriptions are generally present
  • Content structure — Basic heading hierarchy usually exists

The pattern is clear: websites have the SEO basics covered, but the AI-specific layers are almost universally missing. llms.txt, structured citation signals, and extractable content blocks are the three biggest gaps.
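Since llms.txt is the most common gap, here is a minimal sketch following the proposed llms.txt convention (an H1 title, a blockquote summary, and H2 sections of annotated links). All names and URLs below are placeholders, not a real site:

```
# Example Co
> One-line summary of what the site offers and who it serves.

## Key pages
- [Product overview](https://www.example.com/product): What the product does and who it's for
- [Pricing](https://www.example.com/pricing): Current plans and tiers
- [Docs](https://www.example.com/docs): Setup guides and API reference
```

The file lives at the site root (`/llms.txt`), giving AI crawlers a curated map of the pages worth reading.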

4 Score Distribution by Bucket

Breaking scores into quartiles reveals where the bulk of websites cluster:

  • 0-25: 39 sites
  • 26-50: 28 sites
  • 51-75: 118 sites
  • 76-100: 35 sites

The largest cluster (118 sites) falls in the 51-75 range. These are sites that have some SEO foundation but haven't taken the AI-specific steps. They're one optimization sprint away from a meaningful jump. The 39 sites scoring 0-25 typically have fundamental crawling or structural issues that block AI assistants entirely.

5 What A-Grade Sites Do Differently

The 7 sites that scored 90+ share specific characteristics that separate them from the rest:

| Dimension | A-Grade Sites | F-Grade Sites |
|-----------|---------------|---------------|
| Robots.txt | Explicitly allow GPTBot, ClaudeBot | Default allow or block all bots |
| Structured data | 3+ schema types (Organization, Article, FAQ) | None or only basic WebSite |
| llms.txt | Present with detailed content guidance | Missing entirely |
| Citations | Author, date, canonical on every page | Missing on most pages |
| Extractability | FAQs, data tables, definitions throughout | Walls of text, no structured blocks |

The difference isn't content quality — it's content structure. A-grade sites don't necessarily have better writing. They have better markup, better metadata, and content formatted in ways that AI models can parse and quote directly.
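The robots.txt row above can be sketched concretely. GPTBot and ClaudeBot are the published user-agent tokens for OpenAI's and Anthropic's crawlers; verify the current names in each vendor's documentation before deploying:

```
# Explicitly allow common AI crawlers (tokens as published at time of writing)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /
```

An explicit Allow removes any ambiguity about whether a catch-all rule elsewhere in the file applies to AI bots.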

6 The 30-Minute Fix List

Based on the most common failures across 220 scans, here are the highest-impact fixes ranked by effort:

| Fix | Time | Expected Score Impact |
|-----|------|-----------------------|
| Add an llms.txt file | 10 min | +5-10 points |
| Add JSON-LD Organization schema | 5 min | +3-5 points |
| Add author + date to key pages | 10 min | +3-7 points |
| Add FAQ section to top landing pages | 15 min | +3-5 points |
| Explicitly allow AI bots in robots.txt | 2 min | +2-3 points |
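The Organization schema fix from the list above is a single block in the page `<head>`. A minimal sketch using placeholder values (replace every field with your own organization's details):

```html
<!-- Hypothetical example values; swap in your real name, URL, and logo -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": ["https://www.linkedin.com/company/example-co"]
}
</script>
```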

Learn how to implement each fix

We have step-by-step guides for every item on this list: creating an llms.txt file, adding JSON-LD structured data, setting up citation signals, and making content extractable.
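As a quick self-check before running a full scan, the citation-signal fix can be verified with a few lines of standard-library Python. This is a minimal sketch, not LLMGeoKit's actual scanner logic; it looks for the three signals named earlier (author meta tag, publish date, canonical link) in a page's HTML:

```python
# Minimal sketch: detect the three citation signals (author, date,
# canonical URL) in raw HTML using only the standard library.
from html.parser import HTMLParser


class CitationSignalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.signals = {"author": False, "date": False, "canonical": False}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta":
            if a.get("name") == "author":
                self.signals["author"] = True
            if a.get("property") == "article:published_time":
                self.signals["date"] = True
        if tag == "link" and a.get("rel") == "canonical":
            self.signals["canonical"] = True


def check_citation_signals(html: str) -> dict:
    """Return which citation signals are present in the given HTML."""
    parser = CitationSignalParser()
    parser.feed(html)
    return parser.signals
```

Feed it the HTML of any page; a `False` in the result is a signal worth adding.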

7 Why This Matters Now

AI assistants are becoming the first touchpoint for business decisions. When someone asks ChatGPT "What's the best tool for X?" or "Which company should I use for Y?", the AI's answer depends on what it can find, understand, and cite from the web.

43% of websites are invisible to this process. They lack the structural signals AI models need to recommend them. As AI usage grows, the gap between AI-visible and AI-invisible businesses will translate directly into lost leads, lost sales, and lost market share.

The companies that optimize now — while the bar is still low and the competition hasn't caught on — will have a compounding advantage that gets harder to close over time.

Check your score. Run a free AI visibility scan to see where your website stands across all 7 dimensions. It takes 30 seconds, no signup required.