Screaming Frog SEO Spider Tool

The SEO Spider is a small desktop program you can install locally on your PC, Mac or Linux machine, which crawls websites’ links, images, CSS, scripts and apps from an SEO perspective.

SEO Spider Tool

The Screaming Frog SEO Spider is a website crawler that allows you to crawl websites’ URLs and fetch key onsite elements to analyse from an SEO perspective. Download it for free, or purchase a licence for additional features.

Compare Versions Free Download

What can you do with the SEO Spider Tool?

The SEO Spider is lightweight, flexible and can crawl extremely quickly, allowing you to analyse the results in real time. It gathers key onsite data to allow SEOs to make informed decisions. Some of the common uses include –

Audit Redirects

Find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.

Discover Duplicate Content

Discover exact duplicate URLs with an MD5 hash check, find partially duplicated elements such as page titles, descriptions or headings, and identify low-content pages.
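The exact-duplicate check described above can be sketched in a few lines of Python. This is an illustration of the hashing idea rather than the tool's internals, and the URLs and page bodies are made up:

```python
import hashlib

def find_exact_duplicates(pages):
    """Group URLs whose response bodies share an MD5 hash.

    `pages` maps URL -> raw HTML; in a real crawl the bodies
    would come from HTTP responses.
    """
    by_hash = {}
    for url, body in pages.items():
        digest = hashlib.md5(body.encode("utf-8")).hexdigest()
        by_hash.setdefault(digest, []).append(url)
    # Only hashes shared by two or more URLs indicate exact duplicates.
    return [urls for urls in by_hash.values() if len(urls) > 1]

pages = {
    "/a": "<html><body>Hello</body></html>",
    "/b": "<html><body>Hello</body></html>",  # exact copy of /a
    "/c": "<html><body>Different</body></html>",
}
print(find_exact_duplicates(pages))  # [['/a', '/b']]
```

Because MD5 matches only byte-identical bodies, near-duplicates (same title or heading, different boilerplate) need the separate element-level checks mentioned above.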

Extract Data with XPath

Collect any data from the HTML of a web page using CSS Path, XPath or regex. This might include social meta tags, additional headings, prices, SKUs or more!
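The extraction idea works like this sketch, which queries a made-up product page with an XPath-style expression and a regex. Note that real HTML is rarely well-formed XML, so a production crawler would use a proper HTML parser; Python's stdlib `ElementTree` (limited XPath subset) is used here only to keep the example self-contained:

```python
import re
import xml.etree.ElementTree as ET

# A made-up, well-formed product page; a crawl would fetch this per URL.
html = """
<html>
  <head>
    <meta property="og:title" content="Blue Widget" />
  </head>
  <body>
    <span class="sku">SKU-12345</span>
    <span class="price">£19.99</span>
  </body>
</html>
"""

root = ET.fromstring(html)

# XPath-style queries for structured data such as social meta tags and SKUs.
og_title = root.find(".//meta[@property='og:title']").get("content")
sku = root.find(".//span[@class='sku']").text

# Regex extraction for data without convenient markup, e.g. prices.
price = re.search(r"£\d+\.\d{2}", html).group(0)

print(og_title, sku, price)  # Blue Widget SKU-12345 £19.99
```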

Review Robots & Directives

View URLs blocked by robots.txt, meta robots or X-Robots-Tag directives such as ‘noindex’ or ‘nofollow’, as well as canonicals and rel=“next” and rel=“prev”.
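Checking which URLs robots.txt blocks can be reproduced with Python's stdlib `urllib.robotparser`. The robots.txt content and URLs below are illustrative; a crawler would fetch the site's real /robots.txt:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt content.
robots_txt = """\
User-agent: *
Disallow: /private/
Disallow: /tmp/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch() applies the rules for the given user-agent to each URL.
for url in ("https://example.com/", "https://example.com/private/page"):
    allowed = rp.can_fetch("MyCrawler", url)
    print(url, "->", "allowed" if allowed else "blocked")
```

Meta robots and X-Robots-Tag directives work differently: they arrive in the page's HTML or HTTP response headers, so the URL must actually be fetched before they can be evaluated.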

Generate XML Sitemaps

Quickly create XML Sitemaps and Image XML Sitemaps, with advanced configuration over URLs to include, last modified, priority and change frequency.
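The sitemap structure being generated follows the sitemaps.org protocol; a minimal generator looks like this sketch (the URLs and values are placeholders):

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(entries):
    """Build a minimal sitemap.xml string from
    (loc, lastmod, changefreq, priority) tuples."""
    ET.register_namespace("", NS)  # emit as the default namespace
    urlset = ET.Element(f"{{{NS}}}urlset")
    for loc, lastmod, changefreq, priority in entries:
        url = ET.SubElement(urlset, f"{{{NS}}}url")
        ET.SubElement(url, f"{{{NS}}}loc").text = loc
        ET.SubElement(url, f"{{{NS}}}lastmod").text = lastmod
        ET.SubElement(url, f"{{{NS}}}changefreq").text = changefreq
        ET.SubElement(url, f"{{{NS}}}priority").text = priority
    return ET.tostring(urlset, encoding="unicode")

sitemap_xml = build_sitemap([
    ("https://example.com/", "2024-01-01", "weekly", "1.0"),
    ("https://example.com/about", "2024-01-01", "monthly", "0.5"),
])
print(sitemap_xml)
```

The `lastmod`, `changefreq` and `priority` children correspond to the advanced configuration options mentioned above.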

Integrate with Google Analytics

Connect to the Google Analytics API and fetch user data such as sessions, bounce rate, conversions, goals, transactions and revenue for landing pages against the crawl.

Features

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Discover Duplicate Pages
  • Generate XML Sitemaps
  • Crawl Limit
  • Crawl Configuration
  • Save Crawls & Re-Upload
  • Custom Source Code Search
  • Custom Extraction
  • Google Analytics Integration
  • Search Console Integration
  • Rendering (JavaScript)
  • Free Technical Support

Price per licence

Licences last 1 year. After that you will be required to renew your licence.

Free Version

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Discover Duplicate Pages
  • Generate XML Sitemaps
  • Crawl Limit - 500 URLs
  • Crawl Configuration
  • Save Crawls & Re-Upload
  • Custom Source Code Search
  • Custom Extraction
  • Google Analytics Integration
  • Search Console Integration
  • Rendering (JavaScript)
  • Free Technical Support

Paid Version

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Discover Duplicate Pages
  • Generate XML Sitemaps
  • Crawl Limit - Unlimited*
  • Crawl Configuration
  • Save Crawls & Re-Upload
  • Custom Source Code Search
  • Custom Extraction
  • Google Analytics Integration
  • Search Console Integration
  • Rendering (JavaScript)
  • Free Technical Support

£99.00 Per Year

Purchase licence

* The maximum number of URLs you can crawl is dependent on memory. Please see our FAQ.

Used By

Some of the biggest brands & agencies use our software.


Featured In

The SEO Spider is regularly featured in top publications.


The SEO Spider Tool Crawls & Reports On...

The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs with thousands of users worldwide. A quick summary of some of the data collected in a crawl includes –

  1. Errors – Client errors such as broken links & server errors (No responses, 4XX, 5XX).
  2. Redirects – Permanent or temporary redirects (3XX responses).
  3. Blocked URLs – View & audit URLs disallowed by the robots.txt protocol.
  4. External Links – All external links and their status codes.
  5. Protocol – Whether the URLs are secure (HTTPS) or insecure (HTTP).
  6. URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
  7. Duplicate Pages – Hash value / MD5 checksum check for exact duplicate pages.
  8. Page Titles – Missing, duplicate, over 65 characters, short, pixel width truncation, same as h1, or multiple.
  9. Meta Description – Missing, duplicate, over 156 characters, short, pixel width truncation or multiple.
  10. Meta Keywords – Mainly for reference, as they are not used by Google, Bing or Yahoo.
  11. File Size – Size of URLs & images.
  12. Response Time.
  13. Last-Modified Header.
  14. Page Depth Level.
  15. Word Count.
  16. H1 – Missing, duplicate, over 70 characters, multiple.
  17. H2 – Missing, duplicate, over 70 characters, multiple.
  18. Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc.
  19. Meta Refresh – Including target page and time delay.
  20. Canonical link element & canonical HTTP headers.
  21. X-Robots-Tag.
  22. rel=“next” and rel=“prev”.
  23. Rendering – Crawl JavaScript frameworks like AngularJS by rendering content & executing JS with our in-built Chromium library.
  24. AJAX – Select to obey Google’s now deprecated AJAX Crawling Scheme.
  25. Inlinks – All pages linking to a URI.
  26. Outlinks – All pages a URI links out to.
  27. Anchor Text – All link text. Alt text from images with links.
  28. Follow & Nofollow – At page and link level (true/false).
  29. Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
  30. User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
  31. Configurable Accept-Language Header – Supply an Accept-Language HTTP header to crawl locale-adaptive content.
  32. Redirect Chains – Discover redirect chains and loops.
  33. Custom Source Code Search – The SEO Spider allows you to find anything you want in the source code of a website, whether that’s Google Analytics code, specific text or other code.
  34. Custom Extraction – You can collect any data from the HTML of a URL using XPath, CSS Path selectors or regex.
  35. Google Analytics Integration – You can connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
  36. Google Search Console Integration – You can connect to the Google Search Analytics API and collect impression, click and average position data against URLs.
  37. XML Sitemap Generator – You can create an XML sitemap and an image sitemap using the SEO Spider.
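The redirect chain and loop discovery in the list above amounts to following each URL through the 3XX responses a crawler records. A small sketch, with a hypothetical redirect map standing in for real Location headers:

```python
def trace_redirects(start, redirects, max_hops=10):
    """Follow a URL through a redirect map, reporting the chain and any loop.

    `redirects` maps source URL -> target URL (as a crawler would record
    from 3XX Location headers); URLs absent from the map respond 200.
    """
    chain = [start]
    seen = {start}
    url = start
    while url in redirects and len(chain) <= max_hops:
        url = redirects[url]
        if url in seen:
            return chain + [url], True  # loop detected
        chain.append(url)
        seen.add(url)
    return chain, False

# Hypothetical crawl data: /old -> /interim -> /new (chain), /a <-> /b (loop).
redirects = {"/old": "/interim", "/interim": "/new", "/a": "/b", "/b": "/a"}
print(trace_redirects("/old", redirects))  # (['/old', '/interim', '/new'], False)
print(trace_redirects("/a", redirects))    # (['/a', '/b', '/a'], True)
```

Chains longer than one hop waste crawl budget and dilute link equity, which is why surfacing them is useful in a migration audit.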

About The Tool

The Screaming Frog SEO Spider allows you to quickly crawl, analyse and audit a site from an onsite SEO perspective. It’s particularly good for analysing medium to large sites, where manually checking every page would be extremely labour intensive (or impossible!) and where you can easily miss a redirect, meta refresh or duplicate page issue. You can view, analyse and filter the crawl data as it’s gathered and updated continuously in the program’s user interface.

The SEO Spider allows you to export key onsite SEO elements (URL, page title, meta description, headings etc) to Excel so it can easily be used as a base for SEO recommendations. Our video above provides a demonstration of what the SEO tool can do.

 

Crawl 500 URLs For Free

The ‘lite’ version of the tool is free to download and use. However, this version is restricted to crawling a maximum of 500 URLs in a single crawl, and it does not give you full access to the configuration, saved crawls, custom source code search, custom extraction or Google Analytics integration features. You can crawl 500 URLs from the same website, or as many websites as you like, as many times as you like, though!

For just £99 per year you can purchase a licence, which removes the 500 URL crawl limit, allows you to save crawls, and opens up the spider’s configuration options, custom source code search, custom extraction and Google Analytics integration features.

Alternatively, hit the ‘buy a licence’ button in the SEO Spider to buy a licence after downloading and trialling the software.

FAQ & User Guide

By default the SEO Spider crawls sites like Googlebot (it obeys allow and disallow directives and supports wildcards like Googlebot), but presents its own user-agent, ‘Screaming Frog SEO Spider’, and will obey directives specific to that user-agent in robots.txt. If there are no specific directives, it will crawl your site like Googlebot while still presenting its own UA.
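For example, in an illustrative robots.txt like the one below, the SEO Spider would obey the block naming its own user-agent rather than the Googlebot block, because per the robots.txt convention the most specific matching User-agent group wins:

```
User-agent: Googlebot
Disallow: /private/

# The SEO Spider follows this group instead, since it names its own UA.
User-agent: Screaming Frog SEO Spider
Disallow: /staging/
```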

For more guidance and tips on how to use the Screaming Frog SEO crawler, please see our FAQ and user guide.

 

Updates

Keep updated with future releases of the SEO Spider by subscribing to our RSS feed or following us on Twitter @screamingfrog.

 

Support & Feedback

If you have any technical problems, feedback or feature requests for the SEO Spider, then please just contact us via our support. We regularly update the SEO Spider and currently have lots of new features in development!

Back to top