Why It Matters
AI agents are becoming a primary way users discover and interact with content. ChatGPT, Claude, Perplexity, and Google's AI Overviews all need to read and understand your site. If your site isn't AI-ready, you're invisible to a growing segment of traffic.
The w2agent score is like a Lighthouse performance audit — but instead of measuring page speed and accessibility for browsers, it measures how well AI systems can access, understand, and represent your content.
The 6 Categories
w2agent evaluates websites across six categories, each covering a different aspect of AI readiness:
Bot Accessibility
20 points. Can AI crawlers actually reach your pages? This checks robots.txt rules, HTTP status codes, response times, and whether your server blocks known AI User-Agents like GPTBot, ClaudeBot, and PerplexityBot.
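For example, a robots.txt that explicitly welcomes the major AI crawlers might look like the sketch below (the User-Agent names are the ones these vendors publish; the path rules are illustrative):

```
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rules for everyone else
User-agent: *
Allow: /
Disallow: /admin/
```

Because a missing rule means "allowed" under the robots exclusion standard, the explicit Allow lines mainly signal intent, but they also protect AI crawlers from a broad `Disallow` under `User-agent: *`.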
Structured Data
20 points. Does your site include schema.org markup that helps AI classify content? Checks for Article, Organization, Product, FAQ, and BreadcrumbList schemas — the types AI models most frequently use.
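As an illustration, a minimal Article schema in JSON-LD (the format most parsers handle best) looks like this; all values are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Organization", "name": "Example Co" },
  "datePublished": "2025-01-15"
}
</script>
```

The block goes in the page's `<head>` or `<body>`; crawlers read it without executing any JavaScript.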
Content Quality
20 points. Are your pages substantive enough for AI to extract useful information? Checks meta descriptions, heading structure, content length, and whether pages have enough text to summarize.
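For example, this is the kind of head markup those checks look for; the title and description text are placeholders:

```html
<head>
  <title>Widget Pricing | Example Co</title>
  <!-- A descriptive meta description gives AI a ready-made one-line summary -->
  <meta name="description"
        content="Compare Example Co widget plans: features, limits, and pricing for teams of every size.">
</head>
```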
Agent Discovery
15 points. Can AI find your site's most important content? Checks for llms.txt, sitemap.xml, and whether key pages are linked from discoverable locations.
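A minimal llms.txt, placed at the site root, is plain markdown that points agents to your key pages. The sketch below follows the llms.txt proposal's structure; the site name, description, and paths are placeholders:

```markdown
# Example Co

> One-line description of what the site offers.

## Key pages

- [Docs](https://example.com/docs): product documentation
- [Pricing](https://example.com/pricing): plans and pricing
```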
Agent Protocols
15 points. Does your site support emerging AI protocols? Checks for agent-card.json, .well-known/ai-plugin.json, and other machine-readable capability descriptors.
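The exact schema depends on which protocol you target, but a capability descriptor generally names the service and what it can do. A hypothetical minimal agent-card.json might look like this; every field shown is illustrative, not a fixed specification:

```json
{
  "name": "Example Co",
  "description": "What the service does, in one sentence.",
  "url": "https://example.com",
  "capabilities": ["search", "pricing-lookup"]
}
```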
Technical
10 points. Are there technical barriers? Checks for client-side-only rendering, JavaScript-required content, excessive redirects, and slow response times.
Scoring
The overall score runs 0-100 and is the sum of the points earned in each category; the category maximums act as weights and add up to 100. Each individual check within a category contributes a specific number of points. The final score maps to a letter grade:
| Grade | Score | Meaning |
|---|---|---|
| A | 90-100 | Excellent — your site is well-prepared for AI agents |
| B | 80-89 | Good — minor improvements possible |
| C | 70-79 | Fair — several areas need attention |
| D | 60-69 | Poor — significant gaps in AI readiness |
| F | 0-59 | Failing — major barriers prevent AI access |
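The score-to-grade mapping in the table above can be sketched directly in Python:

```python
def grade(score: int) -> str:
    """Map a 0-100 score to the letter grades in the table above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    # Cutoffs from the grade table: 90 A, 80 B, 70 C, 60 D, else F
    for letter, cutoff in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if score >= cutoff:
            return letter
    return "F"
```

Note the boundaries are inclusive at the low end of each band, so a 90 is an A and an 89 is a B.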
What "Good" Looks Like
A site scoring 90+ typically has:
- ✓ An llms.txt file at the site root
- ✓ robots.txt that explicitly allows AI crawlers
- ✓ Schema.org JSON-LD on key pages
- ✓ Descriptive meta descriptions on every page
- ✓ Server-rendered HTML (not client-side-only)
- ✓ An XML sitemap with all important pages
- ✓ Fast response times (<2s for AI crawlers)
Beyond the Score
A score is just a starting point. The real value of an AI readiness audit is the specific, actionable recommendations — which files to create, which configurations to change, and which structured data to add. w2agent doesn't just score your site — it generates the files you need to fix the issues it finds.
Score Calculation Example
Here's how a real score breaks down for a typical marketing site that has llms.txt and schema.org but hasn't configured AI bot access explicitly:
| Category | Max | Earned | Issue |
|---|---|---|---|
| Bot Accessibility | 20 | 12 | No explicit AI allow rules |
| Structured Data | 20 | 16 | Missing FAQPage schema |
| Content Quality | 20 | 18 | — |
| Agent Discovery | 15 | 10 | No sitemap.xml |
| Agent Protocols | 15 | 5 | No agent-card.json |
| Technical | 10 | 9 | — |
| Total | 100 | 70 | Grade C |
This site earns a C (70): solid content and tech, but it loses points on bot access rules and missing agent protocols. Adding agent-card.json and explicit AI bot allow rules would recover up to 18 points, lifting the score to a high B within reach of an A, without changing any content.
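The breakdown above can be reproduced with a simple sum. The values below are copied from the table, and the grade cutoffs come from the scoring table earlier in the article:

```python
# (category, max points, points earned) from the breakdown table above
breakdown = [
    ("Bot Accessibility", 20, 12),
    ("Structured Data",   20, 16),
    ("Content Quality",   20, 18),
    ("Agent Discovery",   15, 10),
    ("Agent Protocols",   15,  5),
    ("Technical",         10,  9),
]

total = sum(earned for _, _, earned in breakdown)
max_total = sum(cap for _, cap, _ in breakdown)

# Grade cutoffs from the scoring table: 90 A, 80 B, 70 C, 60 D, else F
letter = next(
    (g for g, cut in (("A", 90), ("B", 80), ("C", 70), ("D", 60)) if total >= cut),
    "F",
)
print(total, "/", max_total, "->", letter)
```

Running this confirms a total of 70 out of 100, which falls in the 70-79 band and therefore grades as a C.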
Why Scores Differ by Site Type
The six categories are weighted to reflect what matters most for AI access. Sites with heavy client-side rendering (SPAs, Next.js without SSR) often fail the Technical category even if everything else is correct. E-commerce sites frequently lose points on Structured Data because product pages lack complete Schema.org Product markup. Documentation sites tend to score highest because they're designed for reading — clean HTML, good headings, and lots of linkable content.
The biggest quick win for most sites is Agent Discovery: add llms.txt and a sitemap.xml, and the 15-point category moves from ~5 to ~13 with under an hour of work. The second biggest is Bot Accessibility — fixing common blocking issues costs nothing and can recover 8-10 points immediately.
Related Articles
- What is llms.txt? — The fastest way to boost your Agent Discovery score.
- Why AI Crawlers Get Blocked — Fix Bot Accessibility issues that silently drop your score.
- Schema.org for AI — Structured data that improves your Structured Data category score.
Score your site now
Get your free w2agent score and generate the files your site needs.
Get Your Score