Check whether search engines can crawl a URL based on your robots.txt rules. Test Googlebot, Bingbot, or any custom user-agent instantly.
Enter a URL and user-agent to see crawl rules.
A single robots.txt rule can block your most important pages from search engines.
Test with Googlebot, Bingbot, Yandex, or any custom user-agent to see exactly which rules apply.
See exactly which Allow or Disallow rule matched your URL, helping you debug crawl issues fast.
Automatically detect sitemaps declared in robots.txt to ensure search engines find all your content.
View any crawl-delay directives that may limit how quickly bots crawl your site.
View the complete robots.txt file to understand all rules, not just the one that matched.
No waiting, no sign-up required. Enter a URL and get comprehensive results in seconds.
Everything you need to know about robots.txt
Robots.txt controls crawling, not indexing. A blocked URL can still appear in search results if other pages link to it, even though Google cannot crawl its content. To prevent indexing, use a noindex meta tag instead, and make sure that page is not blocked in robots.txt, or crawlers will never see the tag.
The longest matching rule wins. If Allow and Disallow rules are the same length, Allow takes precedence per Google's implementation. This tool shows you exactly which rule matched.
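The longest-match behavior described above can be sketched in a few lines of Python. This is a simplified model for illustration only (plain prefix patterns, no * or $ wildcards), not Google's actual matcher:

```python
def crawl_allowed(rules, path):
    """Return True if crawling `path` is allowed under longest-match semantics.

    rules: list of (directive, pattern) tuples, e.g. ("Disallow", "/admin/").
    Patterns are treated as plain path prefixes for simplicity.
    """
    best = None  # (pattern_length, directive) of the best match so far
    for directive, pattern in rules:
        if path.startswith(pattern):
            length = len(pattern)
            if best is None or length > best[0]:
                best = (length, directive)          # longer rule wins
            elif length == best[0] and directive == "Allow":
                best = (length, directive)          # equal length: Allow wins
    # No rule matched means the path is crawlable by default.
    return best is None or best[1] == "Allow"

rules = [("Disallow", "/shop/"), ("Allow", "/shop/sale")]
crawl_allowed(rules, "/shop/sale/item")  # True: "/shop/sale" (10 chars) beats "/shop/" (6)
crawl_allowed(rules, "/shop/cart")       # False: only "/shop/" matches
```

The tie-break in the `elif` branch is what makes an equally specific Allow rule override a Disallow rule of the same length.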
Yes. Blocking URL parameters, admin areas, and duplicate content paths helps reduce crawl waste and protects sensitive URLs. Just ensure you don't accidentally block canonical pages.
No. We fetch the robots.txt file from your domain and evaluate rules in real-time without storing your URLs or any data. Your privacy is protected.
Robots.txt can have different rules for different user-agents. A site might block Googlebot from certain paths while allowing other bots. Always test with the specific bot you care about.
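You can reproduce this kind of per-user-agent check with Python's standard urllib.robotparser. The robots.txt below is a made-up example that blocks only Googlebot; note that Python's parser does not replicate Google's longest-match semantics exactly, so treat it as a rough check rather than a definitive answer:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: Googlebot is blocked from /private/, all other bots are not.
robots_txt = """\
User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

rp.can_fetch("Googlebot", "https://example.com/private/report")  # False
rp.can_fetch("Bingbot", "https://example.com/private/report")    # True
```

Because the answer differs by bot, always run the check with the exact user-agent you care about.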
Google typically caches robots.txt for up to 24 hours. Changes may not take effect immediately. You can request a re-fetch in Google Search Console for faster updates.
Robots.txt misconfigurations can block revenue-generating pages. We audit crawl rules and ensure search engines index what matters.
Disclaimer: This tool is provided for informational and educational purposes only. Results reflect a common interpretation of the robots.txt standard and may differ from how a specific crawler actually evaluates your rules. We fetch your publicly available robots.txt file and evaluate it in real time without storing or sharing your URLs. Search engines may continue using a cached copy of robots.txt after you update it; use their official tools, such as Google Search Console, to request a re-fetch. ZIRA Software is not liable for any decisions made based on this tool's output.