Is your site ready for
AI agents?
Run a comprehensive technical audit to see how LLMs, crawlers, and AI agents interact with your content. We analyze critical standards such as MCP, Markdown accessibility, and bot permissions to ensure seamless agentic interactions.
What do we check?
15 checks across 5 categories covering all the emerging standards AI agents use to discover, access, and interact with your website.
Directs AI crawlers on what they can or cannot access.
Specific allowances or blocks for well-known AI crawlers.
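As a sketch, a robots.txt that addresses well-known AI crawlers by name might look like this (the crawler tokens shown are real, but which ones you allow or block is a policy choice, and `example.com` is a placeholder):

```text
# Allow some AI crawlers explicitly
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

# Block another
User-agent: CCBot
Disallow: /

# Default rule for everyone else
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```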
Provides a structured map of your site for automated agents.
Exposes protocol endpoints via HTTP headers.
Provides an /llms.txt file containing a Markdown summary formatted specifically for Large Language Models.
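Per the llms.txt proposal, the file is plain Markdown: an H1 title, a blockquote summary, then sections of annotated links. A minimal hypothetical example (all names and URLs are placeholders):

```markdown
# Example Company

> One-paragraph summary of what the site offers, written for LLMs.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): Getting started guide
- [API Reference](https://example.com/docs/api.md): Endpoint descriptions

## Optional

- [Blog](https://example.com/blog): Product announcements
```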
Allows agents to request text-only Markdown versions of pages.
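Markdown negotiation is an emerging convention, not a finalized standard: the agent sends `Accept: text/markdown` and a supporting server responds with a Markdown rendition. A minimal probe sketch (the `url` and function names are illustrative; servers are free to ignore the header):

```python
import urllib.request


def fetch_markdown(url: str) -> tuple[str, str]:
    """Request the Markdown rendition of a page via content negotiation.

    NOTE: header-based Markdown negotiation is an emerging convention;
    many servers will simply return HTML regardless of the Accept header.
    """
    req = urllib.request.Request(url, headers={"Accept": "text/markdown"})
    with urllib.request.urlopen(req) as resp:
        content_type = resp.headers.get("Content-Type", "")
        body = resp.read().decode("utf-8", errors="replace")
    return content_type, body


def is_markdown(content_type: str) -> bool:
    """True if the server actually returned text/markdown."""
    return content_type.split(";")[0].strip().lower() == "text/markdown"
```

Checking `is_markdown` on the returned Content-Type, rather than sniffing the body, avoids false positives from HTML pages that merely contain Markdown-like text.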
Tags indicating if content can be used for AI training.
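For example, the `noai`/`noimageai` values are an opt-out convention honored by some crawlers and dataset tools rather than a formal standard; they can be declared in the page or as an HTTP response header:

```html
<!-- Opt-out convention (not a W3C standard); support varies by crawler -->
<meta name="robots" content="noai, noimageai">
<!-- Equivalent signal as an HTTP response header:
     X-Robots-Tag: noai -->
```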
Machine-readable descriptions of your server's capabilities.
Declarations of tools or actions agents can perform.
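As a sketch, an MCP server declares its tools in a `tools/list` result, each with a name, description, and a JSON Schema for its input (the `search_docs` tool below is hypothetical):

```json
{
  "tools": [
    {
      "name": "search_docs",
      "description": "Full-text search over the site's documentation.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": { "type": "string" }
        },
        "required": ["query"]
      }
    }
  ]
}
```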
A central directory of available APIs for programmatic access.
Authentication mechanisms for verified bots.
Standardized endpoints for establishing delegated access.
How protected endpoints handle automated requests.
Emerging standard for agent-driven commerce.
Universal Commerce Protocol for automated transactions.
Agent Commerce Protocol for seamless AI purchases.
How are you rated?
Your score determines your readiness level. Each level represents a milestone in AI agent compatibility.
Level 1
Basic Web Presence
Score 0–29
Level 2
Agent-Friendly
Score 30–54
Level 3
Agent-Ready
Score 55–79
Level 4
Agent-Native
Score 80–100
Frequently asked questions
As AI agents and custom assistants become the primary way users search and interact with the web, ensuring they can seamlessly discover, parse, and utilize your content is critical for future traffic and visibility.
The highest impact changes are often the simplest: configuring an explicit robots.txt for AI bots, providing structured sitemaps, and ensuring your text content is easily parsable via Markdown negotiation.
We aggregate the results of our core discovery and accessibility tests. Technical failures drop your score, while successful implementations of emerging protocols (like MCP or Web Bot Auth) boost your rating up to 'Agent-Native'.
No. All audits are executed in real-time, and the results live only in your browser session. We do not store your URL or your compatibility score on our servers.
If you want to dive deeper into Generative Engine Optimization (GEO) and ensure your site is perfectly tuned for AI, we highly recommend our comprehensive checklist.
Explore the GEO Checklist →