GPTBot crawl spikes often trace to robots.txt not being served
Reports of GPTBot issuing thousands of requests commonly trace back to misconfigurations in which robots.txt is never actually served to the crawler. Confirm that robots.txt is reachable and returns the intended directives for the GPTBot user-agent; if issues persist, contact gptbot@openai.com. Also verify CDN, host, and caching settings so that bots receive the same robots.txt that browsers do.
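As a baseline, here is a minimal robots.txt sketch that blocks GPTBot site-wide while allowing one section. The paths are illustrative placeholders; "GPTBot" is the user-agent token OpenAI documents for this crawler, and under the longest-match rule of RFC 9309 the more specific Allow overrides the blanket Disallow:

    # Applies only to OpenAI's GPTBot user-agent token
    User-agent: GPTBot
    Disallow: /
    Allow: /blog/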
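To see what the crawler actually receives, including any CDN or cache variation keyed on user-agent, fetch robots.txt while presenting GPTBot's UA and parse it the way a compliant crawler would. The sketch below uses only the Python standard library; example.com and the exact UA string are placeholder assumptions, not values from this report:

    import urllib.request
    from urllib.robotparser import RobotFileParser

    SITE = "https://example.com"  # placeholder; use your own origin or CDN hostname
    UA = "GPTBot/1.0"             # illustrative UA string carrying the GPTBot token

    # Fetch robots.txt as the bot would, so UA-conditional edge rules are exercised.
    req = urllib.request.Request(f"{SITE}/robots.txt", headers={"User-Agent": UA})
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
        print(f"HTTP {resp.status}, {len(body)} bytes")

    # Parse the directives the way a crawler would and probe a sample path.
    parser = RobotFileParser()
    parser.parse(body.splitlines())
    print("GPTBot may fetch /:", parser.can_fetch("GPTBot", f"{SITE}/"))

Running this against both the origin and the CDN edge quickly reveals whether cached or UA-varied responses are handing bots a different robots.txt than browsers get.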
2026-01-06
Tags: gptbot, openai, robots-txt, web-crawling, rate-limiting