How To Analyse Link Position
This tutorial explains how to view and analyse the position of links found in a crawl using the Screaming Frog SEO Spider, as well as how to configure the results, and bulk export the data for deeper insight.
First, let’s quickly summarise what we mean by link position.
What Is Link Position?
Link position refers to the location of a link on a page, such as in the navigation, sidebar, main content or footer.
The SEO Spider classifies every link’s position on a page based upon the HTML, using semantic HTML5 elements such as header, nav and footer (or well-named non-semantic elements, such as div id=“nav”) to determine the different parts of a web page and the position of links within them.
The SEO Spider does not render the web page to analyse where a link appears, so the classification relies on logical and well-named HTML.
Some sites don’t use semantic HTML5 elements or have easily identifiable HTML, so you’re able to configure the link position classification based upon each site’s unique set-up. More on that later.
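To illustrate the idea, here’s a minimal sketch in Python of classifying links by their nearest semantic ancestor, using only the standard library’s html.parser. This is illustrative only: it is not how the SEO Spider itself is implemented, and the class/id matching for non-semantic elements is a simplified assumption.

```python
from html.parser import HTMLParser

# Default classifications for semantic ancestors. A simplified sketch
# of the concept, not the SEO Spider's actual implementation.
SEMANTIC = {"nav": "Navigation", "header": "Header",
            "aside": "Aside", "footer": "Footer"}

class LinkPositionParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []   # (tag, classification name) for each open element
        self.links = []   # (href, position) for each link found

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        name = tag
        # Treat well-named non-semantic elements, e.g. <div id="nav">,
        # like their semantic equivalents.
        if tag == "div":
            hint = ((attrs.get("id") or "") + " " + (attrs.get("class") or "")).lower()
            for key in SEMANTIC:
                if key in hint:
                    name = key
                    break
        self.stack.append((tag, name))
        if tag == "a" and "href" in attrs:
            position = "Content"  # default when no semantic ancestor is found
            for _, ancestor in reversed(self.stack):
                if ancestor in SEMANTIC:
                    position = SEMANTIC[ancestor]
                    break
            self.links.append((attrs["href"], position))

    def handle_endtag(self, tag):
        # Pop back to the matching open tag (tolerant of minor nesting issues)
        for i in range(len(self.stack) - 1, -1, -1):
            if self.stack[i][0] == tag:
                del self.stack[i:]
                break

html_doc = """
<body>
  <nav><a href="/home">Home</a></nav>
  <main><p>Read our <a href="/guide">guide</a>.</p></main>
  <footer><a href="/privacy">Privacy</a></footer>
</body>
"""
parser = LinkPositionParser()
parser.feed(html_doc)
print(parser.links)  # [('/home', 'Navigation'), ('/guide', 'Content'), ('/privacy', 'Footer')]
```

Note how the link inside main falls back to ‘Content’, while links inside nav and footer pick up their semantic ancestor.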
Why Is Link Position Useful?
1) Identify & Fix Links
Knowing exactly where a link sits on a page makes it quicker to locate and fix. It can also be helpful when prioritising which links to fix, for example a sitewide link in the main navigation might be deemed a bit more important than an in-content link on an obscure page.
2) Improve Internal Linking
As SEOs, when we think of links we almost immediately think of their SEO value. However, understanding the position of links can also help improve internal linking for the user, by linking from locations on pages where it makes sense for a better experience.
By analysing click behaviour in analytics, you can compare it against link position in a crawl to help prioritise important pages in highly clicked positions for users, not just search engines.
For search engines, Google’s Reasonable Surfer Model considers the amount of PageRank a link might pass along based upon the probability that someone might click on a link.
The amount of PageRank that flows through a link is based upon different features associated with a link. One of the features outlined in the patent is link position –
“the position of the link (measured, for example, in a HTML list, in running text, above or below the first screenful viewed on an 800 x 600 browser display, side (top, bottom, left, right) of document, in a footer, in a sidebar, etc.);”
The continuation Reasonable Surfer model patent again discusses link position as a feature.
“For example, model generating unit 410 may generate a rule that indicates that a link positioned under the “More Top Stories” heading on the cnn.com website has a high probability of being selected.”
If the above is used by Google, it’s reasonable to suggest a footer link might be considered less likely to be clicked on than a link at the top of the main body content. Therefore, the amount of PageRank that flows through it ‘might’ be a little less.
While obsessing over how much weight each link position might carry is a step too far, from a user and practical perspective, you may just want to analyse and link more frequently to certain pages from within content, without the ‘noise’ of sifting through all the site wide links.
Therefore, you can identify ‘inlinks’ to a page that are only from in body content for example, ignoring any links in the main navigation or footer, for better internal link analysis and linking.
In extreme circumstances, link position could help raise a larger flag over problematic internal linking, where there are algorithms to stop abuse, such as the footer link penalty we’ve seen historically.
How To Analyse Link Position
To get started, you’ll need to download the SEO Spider, which is free in lite form for up to 500 URLs. You can download it via the buttons in the right-hand sidebar. A licence isn’t required to analyse link position.
1) Crawl The Website
Type or copy in the website you wish to crawl in the ‘Enter URL to spider’ box and hit ‘Start’.
Wait for the crawl progress bar to reach 100%, or you can start analysing in real-time. However, not all link data, such as ‘inlinks’ will be known until the crawl completes.
2) Highlight URLs In The Top Window
Click on a single URL, or highlight multiple URLs by holding down ‘control’ on Windows or the ‘command’ key on macOS, in the top window (in any tab) for the URLs you wish to analyse for inlink position.
3) Click ‘Inlinks’ To View Internal Links To URLs
The lower window ‘inlinks’ tab will show all links found in the crawl to the URLs highlighted in the master view. Filter the link type to ‘Hyperlink’ to only show links within anchor tags.
Scroll through to see which pages (‘From’) link to the URLs highlighted in the master view (‘To’).
There are lots of columns in the inlinks tab which include more granular data around each link, including anchor text, alt text (if an image is hyperlinked), whether the link is followed, rel and target attributes, status code, path type, and link position.
4) Scroll To View Link Position Column
While still viewing the inlinks tab, scroll to the right to view the ‘link position’ column, which highlights exactly where each link can be found.
You can click any URL in any tab or filter in the top window. For example, ‘Response Codes > Client Error (4XX)’ shows a list of URLs that return errors, such as broken links. You can click on a broken URL and view the source page, as well as the anchor text and link position, so you know where to fix it.
This example shows a 404 on https://www.screamingfrog.co.uk/brightonseo-crawling-clinic-2019/ to https://www.brightonseo.com/training/screaming-frogs-seo-spider-training-course/ as an in ‘content’ link within the blog post, with the anchor text ‘few places left’.
The default link position classifications include –
- Navigation – Links contained within the main nav element, usually the main menu.
- Header – Links contained within the header, usually at the top of the page.
- Aside – Links contained outside of the main content, often used for call out boxes and sidebars.
- Footer – Links contained within the footer at the bottom of the page.
- Content – Links contained within the main body content of the page.
The classification is performed by searching each link’s ‘link path’ (as an XPath) for known semantic substrings. The link path column can provide even greater context into exactly where each link sits within the HTML.
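As a rough sketch of this substring approach (in Python, with illustrative substrings rather than the tool’s actual search terms), classification amounts to checking the link path against known terms in order:

```python
# Illustrative (substring, position) pairs checked in order -- a
# sketch of the concept, not the SEO Spider's real search terms.
POSITIONS = [
    ("nav", "Navigation"),
    ("header", "Header"),
    ("aside", "Aside"),
    ("footer", "Footer"),
]

def classify_link_path(link_path):
    """Return the position of the first semantic substring found in
    the XPath link path, falling back to 'Content' when none match."""
    for substring, position in POSITIONS:
        if substring in link_path:
            return position
    return "Content"

print(classify_link_path("/html/body/footer/div/p/a"))    # Footer
print(classify_link_path("/html/body/main/article/p/a"))  # Content
```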
The filter can be used to only view links to a page from a specific link position. For example, if you only want to view in content links to a page or a group of pages (excluding any links from the navigation, or footer etc), use the filter on the right-hand side for ‘Content’.
This will exclude any other link positions and just show in content inlinks.
You’re also able to view the link position of ‘outlinks’ from any page or group of pages in the same way, using the lower window ‘outlinks’ tab.
5) Bulk Export Inlinks & Link Position
To bulk export internal link data including link position, just use the ‘export’ button on the inlinks tab. This will include all link data for the URLs highlighted in the top window.
You can also export the same data by right-clicking on URLs in the top window and using ‘Export > Inlinks’.
Finally, to export all inlink or outlink data for every URL in the crawl, use the ‘Bulk Export > Links > All Inlinks / All Outlinks’ export.
Little warning, this file can be huge! If you’ve performed a large crawl and there are lots of sitewide links, this export will be large. If you have 10k pages, each with 200 sitewide links, there will be a minimum of 2m (10,000 x 200) links in the export.
How To Configure Link Position
While links are generally well classified, classification won’t always be perfect, as websites don’t always use semantic elements or descriptive non-semantic HTML. So you’re able to configure the link position classification to improve analysis (this requires a licence). This allows you to use a substring of the link path of any link to classify it.
To customise link position, click ‘Config > Custom > Link Positions’. The default link position set-up uses the following search terms to classify links.
The Screaming Frog website is a good example of where link classification can be improved. It has mobile menu links outside the nav element that are classified as ‘content’ links. This is ‘incorrect’, as they are simply an additional sitewide navigation for mobile. However, because they are not within a nav element, and are not well named (such as having ‘nav’ in their class name), they are classed as within the content.
The ‘mobile-menu__dropdown’ class name (which is in the link path as shown above) can be used to define its correct link position using the Link Positions feature.
When the site is re-crawled, these links will then be correctly attributed as a sitewide navigation link.
This process can be used for any link type, so you can essentially ‘tag’ links based upon their XPath substrings.
The search terms used for link position classification are based upon order of precedence. As ‘Content’ is set as ‘/’ and will match any Link Path, it should always be at the bottom of the configuration.
In the above example, the ‘mobile-menu__dropdown’ class name was added as a link position for ‘Navigation’, and moved above ‘Content’, using the ‘Move Up’ button to take precedence in classification.
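The precedence idea can be sketched as an ordered, first-match rule list, where the ‘mobile-menu__dropdown’ term has to sit above the catch-all ‘Content’ rule to take effect. This is illustrative Python only, not the tool’s internals, and the default rule terms shown are simplified assumptions:

```python
def classify(link_path, rules):
    """Return the position of the first rule whose term appears in
    the link path; rules are checked in order, so first match wins."""
    for term, position in rules:
        if term in link_path:
            return position
    return "Unknown"

# Simplified default rules: the catch-all '/' matches every link
# path, so 'Content' must always stay at the bottom.
default_rules = [
    ("nav", "Navigation"),
    ("header", "Header"),
    ("footer", "Footer"),
    ("/", "Content"),
]

mobile_link = "/html/body/div[@class='mobile-menu__dropdown']/ul/li/a"
print(classify(mobile_link, default_rules))  # 'Content' (misclassified)

# Insert the custom term above 'Content' so it takes precedence.
custom_rules = default_rules[:-1] + [
    ("mobile-menu__dropdown", "Navigation"),
    ("/", "Content"),
]
print(classify(mobile_link, custom_rules))  # 'Navigation'
```

This mirrors what the ‘Move Up’ button does in the UI: it changes the order in which the search terms are tried, so a more specific term wins before the catch-all matches.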
You’re able to disable ‘Link Positions’ classification, which means the XPath of each link is not stored and the link position is not determined. This can help save memory and speed up the crawl.
This tutorial will hopefully help you analyse links and improve internal linking using link position within the SEO Spider.
Alternatively, please contact us via support and we can help.