The Screaming Frog SEO Spider is a small desktop program you can install locally on your PC, Mac or Linux machine which spiders websites’ links, images, CSS, scripts and apps from an SEO perspective. It fetches key onsite elements for SEO, presents them in tabs by type and allows you to filter for common SEO issues, or slice and dice the data how you see fit by exporting it to Excel. You can view, analyse and filter the crawl data as it’s gathered and updated continuously in the program’s user interface.
The Screaming Frog SEO Spider allows you to quickly analyse, audit and review a site from an onsite SEO perspective. It’s particularly good for analysing medium to large sites where manually checking every page would be extremely labour intensive (or impossible!) and where you can easily miss a redirect, meta refresh or duplicate page issue.
The spider allows you to export key onsite SEO elements (URL, page title, meta description, headings etc) to Excel so it can easily be used as a base for SEO recommendations. Our video below provides a demonstration of what the tool can do -
The Screaming Frog SEO Spider Tool Reports On The Following
A quick summary of some of the data collected -
- Errors – Client & server errors (No responses, 4XX, 5XX)
- Redirects – (3XX, permanent or temporary)
- External Links – All followed links and their subsequent status codes
- URI Issues – Non-ASCII characters, underscores, uppercase characters, dynamic URIs, URIs over 115 characters
- Duplicate Pages – MD5 hash checks to identify pages with duplicate content
- Page Title – Missing, duplicate, over 65 characters, short, same as h1, or multiple
- Meta Description – Missing, duplicate, over 156 characters, short, or multiple
- Meta Keywords – Mainly for reference as it’s only (barely) used by Yahoo.
- H1 – Missing, duplicate, over 70 characters, multiple
- H2 – Missing, duplicate, over 70 characters, multiple
- Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir etc
- Meta Refresh – Including target page and time delay
- Canonical link element & canonical HTTP headers
- rel=“next” and rel=“prev”
- File Size
- Page Depth Level
- Inlinks – All pages linking to a URI
- Outlinks – All pages a URI links out to
- Anchor Text – All link text. Alt text from images with links
- Follow & Nofollow – At link level (true/false)
- Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters
- User-Agent Switcher – Crawl as Googlebot, Bingbot, or Yahoo! Slurp
- Custom Source Code Search – The spider allows you to find anything you want in the source code of a website, whether that’s analytics code, specific text or other markup. (Please note – This is not a data extraction or scraping feature yet.)
- XML Sitemap Generator – You can create a basic XML sitemap using the SEO spider.
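The duplicate page check above relies on content checksums: pages whose HTML hashes to the same MD5 value are flagged as duplicates. A minimal sketch of the idea in Python (the URLs and HTML below are illustrative only, not the tool’s own code):

```python
import hashlib

def page_hash(html: str) -> str:
    """Return an MD5 checksum of a page's HTML."""
    return hashlib.md5(html.encode("utf-8")).hexdigest()

# Hypothetical crawl results: URL -> fetched HTML
pages = {
    "http://www.example.com/": "<html><body>Home</body></html>",
    "http://www.example.com/index.html": "<html><body>Home</body></html>",
    "http://www.example.com/about/": "<html><body>About us</body></html>",
}

# Group URLs by checksum; any group with more than one URL is a duplicate set
by_hash = {}
for url, html in pages.items():
    by_hash.setdefault(page_hash(html), []).append(url)

duplicates = [urls for urls in by_hash.values() if len(urls) > 1]
```

Here the homepage and /index.html hash identically, so they would be reported together as duplicate content.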
It is an SEO auditing tool built by real SEOs. Check out what SEOs have said about the SEO Spider!
Download Now – For Free!
By downloading, installing and using the Screaming Frog SEO Spider, you agree to the terms and conditions.
The Screaming Frog SEO Spider Is Free For Up To 500 URLs
The standard ‘Lite’ version of the tool is completely free. However, this version is limited to crawling a maximum of 500 URIs and does not give you full access to the spider’s configuration options, the saving of crawls or the custom source code search feature. You can crawl 500 URLs from as many websites as you like, as many times as you like, though!
For just £99 per annum you can purchase an individual licence, which removes the 500 URI crawl limit and opens up the spider’s configuration options and custom source code search feature. Alternatively, hit the ‘buy a licence’ button in the spider to buy a licence after downloading and testing the software.
How The Screaming Frog SEO Spider Works
By default the SEO spider crawls sites like Googlebot does (it obeys allow and disallow directives, with wildcard support, like Googlebot) but presents its own user-agent, ‘Screaming Frog SEO Spider’, and will obey directives addressed specifically to that user-agent in robots.txt.
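For example, a robots.txt file like the following (an illustrative file, not one from the tool’s documentation) allows general bots everywhere but blocks the SEO Spider from /private/, because the spider obeys the directives addressed to its own user-agent:

```
# General bots may crawl everything
User-agent: *
Disallow:

# Directives addressed to the SEO Spider's own user-agent take precedence for it
User-agent: Screaming Frog SEO Spider
Disallow: /private/
```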
If there are no directives for the spider, it will crawl your site like Googlebot while still presenting its own UA. For more tips -
- Please read our user guide and FAQ as they are very comprehensive. Please also watch the video above which provides an overview of the tool.
- The tool spiders (only) the subdomain you enter and treats other subdomains it encounters as external links.
- The tool spiders from the directory path onwards. Hence, if you only want to crawl the blog of your website, which sits in its own subfolder/subdirectory, simply spider the URI with the file path, for example – http://www.example.com/blog/. If you have a more complicated set-up, like subdomains and subfolders, you can specify both. For example – http://de.example.com/uk/ to spider the .de subdomain and UK subfolder etc.
- Intelligent Spidering – If you have a site with thousands of pages (you’ll ideally need the premium version!), it’s best to spider the site in sections using the above method. This avoids information overload, speeds up the crawl and uses less memory. Under ‘configuration’ you can also choose to include or exclude crawling of URLs or files (such as images, JS, CSS etc). If you have a big site you should increase the SEO spider’s memory allocation.
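The include/exclude configuration described above is essentially pattern filtering of discovered URLs. A rough sketch of the idea in Python (the patterns and URLs here are hypothetical examples, not the tool’s own syntax):

```python
import re

# Hypothetical filters: crawl only the blog, skip common image files
include = [re.compile(r"^http://www\.example\.com/blog/")]
exclude = [re.compile(r"\.(jpg|png|gif)$")]

def should_crawl(url: str) -> bool:
    """A URL is crawled if it matches an include pattern and no exclude pattern."""
    if not any(p.search(url) for p in include):
        return False
    return not any(p.search(url) for p in exclude)

urls = [
    "http://www.example.com/blog/post-1",
    "http://www.example.com/blog/header.png",
    "http://www.example.com/shop/item-2",
]
to_crawl = [u for u in urls if should_crawl(u)]
```

Filtering this way keeps the crawl focused on one section of the site, which is what makes crawling a large site in chunks practical.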
SEO Spider Bugs, Feedback & Liability
If you have any problems with the spider please see our support page.
Using the Screaming Frog SEO spider is entirely at your own risk; we will not be held responsible or liable for any costs, damages or actions arising from its use – that is entirely down to the user. Please read our terms and conditions before use.
Keep updated with future releases of the tool by checking for updates in the interface, subscribing to the Screaming Frog Blog RSS feed, signing up to our e-mail list above or following us on Twitter @screamingfrog.