Internal links on a website are a critical organic ranking factor for Google. Links help Google both discover pages and assign rankings based on the quantity and placement of links. A page with 100 internal links pointing to it is presumably a higher priority than a page with a single link.
But neither purpose — discovery and rankings — is possible if Googlebot cannot crawl the links. Crawling can fail in three primary ways:
- Links behind JavaScript. Google can usually crawl and render links in JavaScript, such as tabs and collapsible sections. But not always, especially when links appear only after the JavaScript executes, such as on a click or scroll.
- Links on a desktop version but not on mobile. Google indexes a site’s mobile version by default. However, mobile sites are often downsized desktop versions with far fewer links, preventing Google from discovering and indexing the excluded pages.
- Links with a nofollow attribute or meta tag. Google says it may follow links with nofollow attributes, but there’s no way to know whether that happened. And the nofollow meta tag prevents crawling only if Googlebot honors it. Moreover, many site owners are unaware of active nofollow attributes or meta tags, especially if they use a plugin such as Yoast, which adds them with a single click.
Even if a page is indexed, you can never be sure the links to or from that page are crawlable and thus pass link equity.
Here are three ways to ensure Googlebot can crawl links on your website.
Tools to Inspect Links
Google’s text cache. The text-only version of Google Cache represents how Google sees a page with CSS and JavaScript turned off. It is not how Google indexes a page, since Google now renders pages much as humans see them.
Thus a page’s text cache is a stripped-down version. Still, it’s the most reliable way to tell if Google can crawl your links. If those links are in the text-only cache, Google can crawl them.
Beyond the text-only view, Google Cache also shows the full indexed version of a page. It’s a handy way of identifying missing elements on the mobile version.
Many search optimizers ignore Google Cache. That’s a mistake. All essential ranking elements are there. There’s no other way to ensure Google has that key info.
To access any page’s text-only version of Google Cache, search Google for cache:[full-URL] and click “Text-only version.”
Not all pages appear in Google Cache. If a page is absent, use “URL Inspection” in Search Console or browser extensions for details on how Google renders it.
‘URL Inspection’ in Search Console shows any page as Google understands it. Enter the URL and then click “View crawled page.”
From there, copy the HTML that Google uses to read the page. Paste that HTML in a document such as Google Docs and search (CTRL+F on Windows or CMD+F on Mac) for the linking URLs you are verifying. If the URLs are in the HTML code, Google can see them.
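The manual CTRL+F step can be scripted. Here is a minimal sketch, assuming you have pasted the crawled HTML into a string; the function names and sample URLs are hypothetical. It checks whether the target URLs appear as actual link hrefs, not merely somewhere in the markup:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects every href value found in <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def find_links(html, targets):
    """Return the subset of target URLs that appear as link hrefs."""
    parser = LinkCollector()
    parser.feed(html)
    return [url for url in targets if url in parser.hrefs]

# Inline HTML standing in for the copied "View crawled page" source.
html = '<p><a href="/pricing">Pricing</a> <span data-url="/blog">Blog</span></p>'
print(find_links(html, ["/pricing", "/blog"]))  # only /pricing is a real link
```

Distinguishing hrefs from plain text matters: a URL stored in a data attribute or script variable, as in the /blog example above, is not a crawlable link.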
Browser extensions. Once you confirm Google can see the links, make sure they are crawlable. Reviewing the code will identify both the nofollow attribute and the meta tag. Firefox can display a page’s source HTML natively: press CTRL+U on Windows or CMD+U on Mac, then search the code for “nofollow.”
The NoFollow browser extension, available for Firefox and Chrome, highlights nofollow links as a page loads, whether the directive comes from an attribute or a meta tag.
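The same check can be done with a few lines of code. This sketch, which assumes the page’s HTML is already in a string, flags both forms: the rel="nofollow" attribute on individual links and a page-wide robots meta tag:

```python
from html.parser import HTMLParser

class NofollowChecker(HTMLParser):
    """Flags nofollow links and a page-wide robots nofollow meta tag."""
    def __init__(self):
        super().__init__()
        self.nofollow_links = []    # hrefs carrying rel="nofollow"
        self.meta_nofollow = False  # True if <meta name="robots"> says nofollow

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "nofollow" in (a.get("rel") or "").lower():
            self.nofollow_links.append(a.get("href"))
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "nofollow" in (a.get("content") or "").lower():
                self.meta_nofollow = True

# Hypothetical page markup showing both nofollow forms.
html = """<meta name="robots" content="noindex,nofollow">
<a href="/partners" rel="nofollow">Partners</a>
<a href="/about">About</a>"""

checker = NofollowChecker()
checker.feed(html)
print(checker.meta_nofollow)   # True: the whole page is nofollow
print(checker.nofollow_links)  # ['/partners']
```

Note the asymmetry: the meta tag applies to every link on the page, while the rel attribute applies only to the individual link that carries it.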
Not Definitive
None of these methods definitively reveals whether the links impact rankings. Google’s algorithm is highly sophisticated and assigns meaning and weight to links as it chooses, including ignoring them. Nonetheless, accessing and crawling links is Google’s first step.