Alan Bleiweiss shared a screenshot of how the Google Sitemaps report in Google Search Console shows over 11,000 URLs that were blocked by robots.txt as an issue and warning. Alan asked why the new Google Index report in the new Google Search Console is not reporting these errors. … [Read more...] about New Google Index Report Doesn’t Show Blocked URLs From Sitemaps
Google's John Mueller said on Twitter that the URL inspector tool that launched about a month ago shows more recent and up-to-date information than the Google Search Console indexing reports. The indexing reports can be delayed by one to two weeks, while the URL inspector tool shows information from the last time the URL was indexed. … [Read more...] about Google Search Console URL Inspection Tool Shows More Recent Data Than Indexing Reports
It seems you are trying to block URL(s) that we see as important. We do not support these blocks through the Bing Webmaster Tools, and we recommend you add a noindex meta-tag to your URL(s). Here is a picture: … [Read more...] about Bing Webmaster Tools Won’t Let You Block Important URLs
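As an aside, Bing's suggested alternative, a noindex robots meta tag, is easy to verify programmatically. Below is a minimal Python sketch (standard library only) that checks a page for one; the URL is a placeholder, not one from the article:

```python
# Minimal sketch: check whether a page carries a "noindex" robots meta tag,
# which is what Bing recommends instead of blocking the URL outright.
# The URL below is a placeholder.
from html.parser import HTMLParser
from urllib.request import urlopen


class RobotsMetaParser(HTMLParser):
    """Records whether any <meta name="robots"/"bingbot"> declares noindex."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        content = (attrs.get("content") or "").lower()
        if name in ("robots", "bingbot") and "noindex" in content:
            self.noindex = True


url = "https://example.com/private-page"  # placeholder
html = urlopen(url).read().decode("utf-8", errors="replace")
parser = RobotsMetaParser()
parser.feed(html)
print(f"{url} noindex: {parser.noindex}")
```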
You block Googlebot from crawling your site (the most common reasons I see are improper bot-blocking settings or adding a “Disallow: /” rule to the robots.txt file). Googlebot gets a 403 error when it tries to crawl the site, or simply stops crawling because of the robots rule. After hitting the home page (or robots.txt) a few times, it gets the message and starts demoting the site’s URLs. Traffic drops dramatically within a few hours. In this case, the site’s traffic dropped about 50% within two hours and about 60% within 24 hours, and it stayed there for most of the time Googlebot was blocked. GSC showed the crawl rate dropping from about 400,000 URLs/day (it’s a 5MM-URL site) to about 11,000 URLs/day. I haven’t yet investigated how Googlebot was able to crawl 11,000 blocked URLs; that’s for another post. When you unblock Googlebot, it starts to crawl again. In this case it immediately returned to its pre-block levels, but if you don’t have a strong domain, … [Read more...] about How Long Does It Take SEO Traffic To Recover From Blocking Googlebot?
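To see whether a robots.txt rule like the “Disallow: /” described above would block Googlebot, Python's standard library can parse the file directly. A minimal sketch, with placeholder URLs (none of these details come from the post itself):

```python
# Minimal sketch: test whether robots.txt blocks Googlebot from given URLs.
# Standard library only; the site and paths are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://example.com/robots.txt")  # placeholder site
# Fetch and parse; note that this library treats a 401/403 response
# for robots.txt itself as "disallow all".
rp.read()

for url in ("https://example.com/", "https://example.com/some-page"):
    verdict = "may" if rp.can_fetch("Googlebot", url) else "may NOT"
    print(f"Googlebot {verdict} crawl {url}")
```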
There are three main components to organic search: crawling, indexing and ranking. When a search engine like Google arrives at your website, it crawls the links it finds. Information about those pages is then entered into the search engine’s index, where various factors determine which pages to retrieve, and in what order, for a particular search query. … [Read more...] about How to check which URLs have been indexed by Google using Python
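The full article walks through its own approach; as a rough sketch of one way to run this kind of indexation check from Python, the snippet below queries Google's Custom Search JSON API and looks for the exact URL among the results. The API key, engine ID, and URL are placeholders, and the custom search engine must be configured to search the entire web for the check to be meaningful:

```python
# Minimal sketch, not the article's method: probe Google's index for a URL
# via the Custom Search JSON API. API_KEY, CX, and the test URL are placeholders.
import requests

API_KEY = "YOUR_API_KEY"          # placeholder
CX = "YOUR_SEARCH_ENGINE_ID"      # placeholder


def is_indexed(url: str) -> bool:
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": url},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # Treat the URL as indexed only if it appears verbatim in the results.
    return any(item.get("link") == url for item in items)


print(is_indexed("https://example.com/some-page"))  # placeholder URL
```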