SEO

Top 12 SEO Insights of the Year from Google’s John Mueller

For the uninitiated, Google’s Webmaster Trends Analyst John Mueller generously runs Hangouts every other week where SEO professionals can get their burning questions answered.

In 2018 alone, Mueller has provided us with hundreds of noteworthy answers that continue to inform our understanding of SEO.

Read on for 12 of Mueller’s most important insights of the year, categorized into key SEO topic areas.

Mobile-First

1. Sites Still Waiting to Be Switched over to Mobile-First Indexing Aren’t Necessarily Unprepared

Although a large proportion of sites have now been switched over to mobile-first indexing, sites that have yet to be moved over aren’t necessarily unprepared for it.

2. After a Site Has Been Switched to Mobile-First Indexing, Content Parity Won’t Be an Issue Because the Desktop Version Won’t Be Used for Indexing

Content parity across mobile and desktop versions of a site is important in order for it to be switched over to mobile-first indexing.

Once the site has been switched over, parity is less of a consideration because only the mobile version will be used for indexing.
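For sites that serve mobile and desktop from the same URLs, one way to sanity-check parity ahead of the switch is to fetch a page with a desktop and a smartphone user agent and compare what comes back. A minimal sketch in Python, assuming the requests and BeautifulSoup libraries; the URL, user-agent strings, and the 80% threshold are placeholders:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder user-agent strings; swap in current Googlebot UAs as needed.
DESKTOP_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile "
             "Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)")

def text_stats(url, user_agent):
    """Fetch a URL with the given user agent and return rough content stats."""
    html = requests.get(url, headers={"User-Agent": user_agent}, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    words = soup.get_text(separator=" ").split()
    return {
        "title": soup.title.string.strip() if soup.title and soup.title.string else "",
        "word_count": len(words),
        "headings": len(soup.find_all(["h1", "h2", "h3"])),
    }

url = "https://example.com/some-page"  # hypothetical URL
desktop, mobile = text_stats(url, DESKTOP_UA), text_stats(url, MOBILE_UA)
print("Desktop:", desktop)
print("Mobile: ", mobile)
if mobile["word_count"] < 0.8 * desktop["word_count"]:
    print("Warning: mobile version has noticeably less content than desktop.")
```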

3. Make Sure Mobile Versions Aren’t Oversimplified & Include Relevant Content That Helps Google Understand the Page

The mobile version of a site shouldn’t be oversimplified to the point that it impairs Google’s ability to understand what your site is about.

Make sure to retain relevant content and descriptive elements, and keep alt text on images in the mobile versions of pages.
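A quick way to spot-check the alt text point is to parse the mobile HTML and flag images that have no alt attribute or an empty one. A small sketch, again with requests and BeautifulSoup; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

url = "https://m.example.com/some-page"  # hypothetical mobile URL
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Flag images with no alt attribute or an empty one.
missing_alt = [img.get("src") for img in soup.find_all("img")
               if not (img.get("alt") or "").strip()]

print(f"{len(missing_alt)} image(s) missing alt text")
for src in missing_alt:
    print(" -", src)
```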

4. Content in the HTML Is Treated the Same Whether or Not It Is Visible by Default

Google doesn’t treat content in the HTML differently depending on whether it is visible on the page by default.

It is no problem to include secondary content that is hidden on page load as long as it is in the HTML.

5. Use Rel=Canonical over Noindex for Duplicated Content

When a page is noindexed, all of the signals associated with it are eventually dropped.

For duplicate content, Mueller therefore recommends using a rel=canonical pointing to the preferred version of the page, so that the signals from both versions are combined rather than dropped, as they would be with noindex.
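As an illustration, a small audit script along these lines could check whether each known duplicate URL canonicalizes to the preferred version rather than carrying a noindex. The URLs and the exact-match comparison are simplifications:

```python
import requests
from bs4 import BeautifulSoup

PREFERRED = "https://example.com/product"            # hypothetical preferred version
DUPLICATES = ["https://example.com/product?ref=ad"]  # hypothetical duplicate URLs

for url in DUPLICATES:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    robots = soup.find("meta", attrs={"name": "robots"})

    if robots and "noindex" in robots.get("content", "").lower():
        print(f"{url}: noindexed - its signals will be dropped rather than consolidated")
    elif canonical and canonical.get("href") == PREFERRED:
        print(f"{url}: canonicalizes to the preferred version - signals can be combined")
    else:
        print(f"{url}: no canonical to the preferred version found")
```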

6. Google Can Index Duplicate Pages But Only Shows the Most Relevant One

For duplicate pages that exist across different sites with no canonicalization in place, Google will index both versions of the page but will show only the most relevant one in the search results, based on relevance and personalization factors such as location.

Crawling

7. Ensure That Scripts in the Page Header Don’t Close It Prematurely

Scripts which insert non-head elements (like divs and iframes) into the page header can cause it to close prematurely.

This could mean that Googlebot isn’t able to crawl hreflang links because it assumes the head has already closed.

You can check for this potential issue by testing pages with Google’s Rich Results Test and viewing the rendered code.
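As a rough offline complement to that, you can inspect what sits inside the raw <head> and flag elements that don’t belong there. This sketch only looks at the static HTML, not the rendered DOM that the Rich Results Test shows, so it will miss elements injected by scripts at render time; the URL is a placeholder:

```python
import re
import requests

url = "https://example.com/"  # hypothetical URL
html = requests.get(url, timeout=10).text

# Grab the raw <head> section of the static HTML.
head_match = re.search(r"<head.*?>(.*?)</head>", html, re.S | re.I)
head = head_match.group(1) if head_match else ""

# Elements like <div>, <iframe>, or <img> are invalid inside <head>; when they end
# up there, parsers may treat the head as closed at that point.
invalid = re.findall(r"<(div|iframe|img|p|span)\b", head, re.I)
hreflang_links = re.findall(r"<link[^>]+hreflang=", head, re.I)

print(f"hreflang links found in <head>: {len(hreflang_links)}")
if invalid:
    print("Invalid elements inside <head> (may close it prematurely):",
          sorted(set(tag.lower() for tag in invalid)))
```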

8. Googlebot Doesn’t Use Cookies When It Returns to Crawl a Site

Googlebot won’t replay any cookies it was served when it returns to crawl a site.

If you use cookies to group users for A/B testing, make sure that Googlebot is put in the same group so that it sees a consistent version of the pages on your site.
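For example, a server-side A/B test that normally buckets visitors by cookie could pin Googlebot to one consistent variant, since it arrives without a bucket cookie on every crawl. A hypothetical sketch:

```python
import random

CONTROL = "A"

def assign_variant(cookies: dict, user_agent: str) -> str:
    """Return the A/B bucket for a request, keeping Googlebot in a stable bucket."""
    # Googlebot doesn't replay cookies between crawls, so cookie-based bucketing
    # would show it a different variant on each visit; pin it to the control group.
    if "googlebot" in user_agent.lower():
        return CONTROL
    # Returning visitors keep their existing bucket; new visitors get a random one
    # (the application would set the ab_bucket cookie on the response).
    return cookies.get("ab_bucket") or random.choice(["A", "B"])

print(assign_variant({}, "Mozilla/5.0 (compatible; Googlebot/2.1)"))  # -> "A"
```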

9. Crawl Budget Is Only an Issue for Large Sites

Googlebot is able to crawl sites with “a couple hundred thousand pages” just fine.

Only sites with a substantially higher volume of pages than that need to be concerned about crawl budget.

10. Serving a 410 Response Can Remove Pages from Google’s Index Quicker Than a 404

In the long term, a 410 and a 404 have the same impact: both remove the page from the index and cause it to be crawled less frequently.

In the short term, however, a 410 error may cause a page to be removed from the index a couple of days faster compared to a 404.
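In a framework like Flask, for example, serving a 410 for deliberately removed URLs is simply a matter of returning that status code instead of 404. A hypothetical sketch with placeholder paths:

```python
from flask import Flask, abort

app = Flask(__name__)

# Hypothetical set of URLs that have been permanently removed.
REMOVED_PATHS = {"/discontinued-product", "/old-campaign"}

@app.route("/<path:path>")
def catch_all(path):
    if f"/{path}" in REMOVED_PATHS:
        # 410 Gone: signals the page was removed deliberately and permanently,
        # which can drop it from the index a little faster than a 404.
        abort(410)
    abort(404)  # Anything else we don't recognize stays a plain 404.
```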

11. Linked Pages Blocked by Robots.txt Can Still Be Indexed

Google can still index pages that are blocked by robots.txt if they have backlinks.

Mueller recommends using the noindex tag for these pages instead.
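Note that Google has to be able to crawl a page to see a noindex directive, so the page must not stay blocked in robots.txt. One way to apply the directive is an X-Robots-Tag response header, sketched here with Flask and a placeholder route:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/internal-search")
def internal_search():
    # Serve the page normally (so Google can crawl it and see the directive),
    # but ask search engines not to index it.
    response = make_response("<html><body>Internal search results</body></html>")
    response.headers["X-Robots-Tag"] = "noindex"
    return response
```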

12. Indexing Can Be Delayed If Canonical Tag Points to Redirect Instead of the Preferred Version

Make sure that canonical tags point to the preferred version of a page rather than to a page that redirects to the preferred version.

Canonicalizing to a redirect may mean that it takes longer for Google to index the preferred version of the page.
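A quick audit can catch this by requesting each page’s canonical target without following redirects and flagging anything that doesn’t answer with a 200. A sketch with a placeholder URL list:

```python
import requests
from bs4 import BeautifulSoup

PAGES = ["https://example.com/category/widgets"]  # hypothetical URLs to audit

for url in PAGES:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    canonical = soup.find("link", rel="canonical")
    if not canonical or not canonical.get("href"):
        print(f"{url}: no canonical tag found")
        continue

    target = canonical["href"]
    # Don't follow redirects: we want to know what the canonical URL itself returns.
    status = requests.head(target, allow_redirects=False, timeout=10).status_code
    if 300 <= status < 400:
        print(f"{url}: canonical points to {target}, which redirects ({status})")
    else:
        print(f"{url}: canonical -> {target} ({status}) OK")
```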
