Ever wondered what would happen if you prevented Google from crawling your website for a few weeks? Technical SEO expert Kristina Azarenko has published the results of just such an experiment.
Six surprising things happened. Here's what happened when Googlebot couldn't crawl Azarenko's website from Oct. 5 to Nov. 7:
- The favicon was removed from Google Search results.
- Video search results took a big hit and still haven't recovered post-experiment.
- Positions remained relatively stable, though they were slightly more volatile in Canada.
- Traffic saw only a slight decrease.
- An increase in reported indexed pages in Google Search Console. Why? Pages with noindex meta robots tags ended up being indexed because Google couldn't crawl the site to see those tags (see the sketch after this list).
- Multiple alerts in GSC (e.g., "Indexed, though blocked by robots.txt", "Blocked by robots.txt").
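To illustrate the mechanics (a minimal sketch, not the exact configuration Azarenko used): a robots.txt like the one below stops Googlebot from crawling any URL on the site. Crawling and indexing are separate steps, so blocked URLs can still be indexed from links alone.

```txt
# Minimal sketch of a robots.txt that blocks Googlebot from crawling the whole site.
# Note: this blocks crawling, not indexing - URLs can still be indexed from external links.
User-agent: Googlebot
Disallow: /
```

That separation also explains the noindex side effect above: a directive like `<meta name="robots" content="noindex">` only takes effect if Google can fetch the page and read the tag, so while crawling is blocked the tag goes unseen and the page can be indexed anyway.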
Why we care. Testing is a vital element of SEO. Any change (intentional or unintentional) can impact your rankings, traffic and bottom line, so it's good to understand how Google might react. Also, most companies aren't in a position to run this kind of experiment themselves, so this is useful information to have.
The experiment. You can read all about it in Unexpected Results of My Google Crawling Experiment.
Another similar experiment. Patrick Stox of Ahrefs has also shared the results of blocking two high-ranking pages with robots.txt for five months. The impact on rankings was minimal, but the pages lost all their featured snippets.
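Stox's test targeted individual URLs rather than the whole site; in robots.txt terms that looks roughly like the sketch below (the paths are hypothetical placeholders, not the actual pages he blocked):

```txt
# Hypothetical example: block crawling of two specific pages only
User-agent: *
Disallow: /top-ranking-page-1/
Disallow: /top-ranking-page-2/
```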