SEO Spider General

Installation

The Screaming Frog SEO Spider can be downloaded by clicking on the appropriate download button for your operating system and then running the installer. The SEO Spider is available for Windows, Mac and Linux.

The minimum specification is a machine able to run Java 8 with at least 512MB of RAM. The number of URLs you can crawl is based upon how much memory you can allocate to the tool.

We generally recommend the SEO Spider is used on 64-bit machines with 8GB of RAM or more.

For more details on how to install Java, view our Java installation guide.

Crawling

The Screaming Frog SEO Spider is free to download and use for crawling up to 500 URLs at a time. For £149 per annum you can purchase a licence which removes the 500 URL crawl limit and opens up the Spider’s configuration options.

In regular crawl mode, the SEO Spider will crawl the subdomain you enter and treat all other subdomains it encounters as external links by default. In the licensed version of the software, you can adjust the configuration to crawl all subdomains of a website.

One of the most common uses of the SEO Spider is to find errors on a website, such as broken links, redirects and server errors. Please read our guide on how to find broken links, which explains how to view the source of 404 errors, and export the source data in bulk.

For better control of your crawl, use the URI structure of your website, the SEO Spider’s configuration options such as crawling only HTML (rather than images, CSS, JS etc.), the exclude function, the include function, or alternatively change the mode of the SEO Spider and upload a list of URIs to crawl (as discussed further below in this guide).
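
As a quick illustration of the include and exclude functions (the patterns below are purely hypothetical – both functions take regular expressions, one per line, matched against full URLs), an include rule of:

.*blog.*

would restrict the crawl to URLs containing ‘blog’, while an exclude rule of:

.*\?sort=.*

would stop any URLs containing a ‘sort’ parameter from being crawled.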

Crawling A Sub Folder

The SEO Spider tool crawls from the sub folder path forwards by default, so if you wish to crawl a particular sub folder on your site, simply enter the URI with its path. For example, if it’s a blog, it might be – https://www.screamingfrog.co.uk/blog/, like our own blog. By entering this directly into the SEO Spider, it will crawl all URI contained within the /blog/ sub directory.

You may notice some URLs which are not within the /blog/ sub folder are crawled as well by default. This is due to the ‘check links outside of start folder’ configuration. This configuration allows the SEO Spider to focus its crawl within the /blog/ directory, but still crawl links that are not within this directory when they are linked to from within it. However, it will not crawl any further onwards. This is useful as you may wish to find broken links that sit within the /blog/ sub folder, but don’t have /blog/ within the URL structure. To only crawl URLs with /blog/, simply untick this configuration.

Please note that if there isn’t a trailing slash on the end of the sub folder, for example ‘/blog’ instead of ‘/blog/’, the SEO Spider won’t currently recognise it as a sub folder and crawl within it. If the trailing slash version of a sub folder redirects to a non-trailing slash version, then the same applies.

To crawl this sub folder, you’ll need to use the include feature and input the regex of that sub folder (.*blog.* in this example). If you have a more complicated set-up, like subdomains and sub folders, you can specify both. For example – http://de.example.com/uk/ to spider the ‘de’ subdomain and ‘/uk/’ sub folder etc.

Crawling A List Of URLs

As well as crawling a website by entering a URL and clicking ‘Start’, you can switch to list mode and either paste or upload a list of specific URLs to crawl.

This can be particularly useful for site migrations when auditing redirects for example. I recommend reading our ‘How To Audit Redirects In A Site Migration‘ guide.

Crawling Larger Websites

If you wish to perform a particularly large crawl, we recommend increasing the RAM memory allocation in the SEO Spider first.

If you receive the ‘you are running out of memory for this crawl’ warning, then you will need to save the crawl, increase the RAM allocation, then open and resume the saved crawl. The number of URLs the SEO Spider can crawl is down to the amount of memory available on the machine and how much of it is allocated to the tool.

For really large crawls, you may wish to consider breaking up crawls into smaller sections and using the configuration to control your crawl. Some options include –

  • Crawling a section of the site at a time, such as a subdomain or a sub folder.
  • Using the include and exclude functions to limit the crawl to the URLs you require.
  • Only crawling HTML, by unticking the crawling of images, CSS, JS and SWF in the configuration.

These should all help save memory and focus the crawl on the important areas you require.

Saving & uploading crawls

In the licensed version of the tool you can save your crawls and open them back into the SEO Spider. The files are saved as a .seospider file type specific to the Screaming Frog SEO Spider.

You can save crawls part way through by stopping the SEO Spider and selecting ‘File > Save’.

To open a crawl, simply double click on the relevant .seospider file, choose ‘File > Open’ or choose one of your recent crawls under ‘File > Open Recent’. You can then resume the crawl if it was saved part way through.

Please note, saving and opening crawls can take a number of minutes, depending on the size of the crawl and amount of data.

Default configuration

In the licensed version of the tool you can save a default crawl configuration.

To save the current configuration as default choose ‘File > Default Config > Save Current Configuration As Default’.

To reset back to the original SEO Spider default configuration choose ‘File > Default Config > Clear Default Configuration’.

Exporting

The export function in the top window section works on your current view in the top window. Hence, if you are using a filter and click ‘export’, it will only export the data contained within the filtered option.

There are three main methods to export data –

  • Exporting Top Window Data – Simply click the ‘export’ button in the top left hand corner to export data from the top window tabs.
  • Exporting Lower Window Data (URL Info, In Links, Out Links, Image Info) – To export any of this data, simply right click on the URL that you wish to export data from in the top window, then click ‘export’ and either ‘URL Info’, ‘In Links’, ‘Out Links’ or ‘Image Info’.
  • Bulk Export – This is located under the top level menu and allows bulk exporting of data. You can export all instances of a link found in a crawl via the ‘all in links’ option, or export all in links to URLs with specific status codes such as 2XX, 3XX, 4XX or 5XX responses. For example, selecting the ‘Client Error 4XX In Links’ option will export all in links to all error pages (such as 404 error pages). You can also export all image alt text, all images missing alt text and all anchor text across the site.

You can also view our video guide about exporting from the SEO Spider.

Bulk Export Options

  • All Inlinks: Links to every page the SEO Spider crawled. This will contain every link to every URI shown under the Response Codes tab in the All filter.
  • All OutLinks: All links the SEO Spider saw during crawling. This will contain every link contained in every URI in the Response Codes tab in the All filter.
  • All Anchor Text: All HREF links to URIs in the All filter in the Response Codes tab.
  • 2XX/3XX/4XX/5XX Inlinks: All links to URIs in the corresponding response code filter and tab (for example, ‘Client Error 4XX Inlinks’).
  • All Image Alt Text: All links to all Images in the All filter in the Images tab.
  • Images Missing Alt Text: All IMG links to images in the Missing Alt Text filter in the Images tab.

Robots.txt

The Screaming Frog SEO Spider is robots.txt compliant. It obeys robots.txt in the same way as Google.

It will check the robots.txt of the subdomain(s) and follow (allow/disallow) directives specifically for the Screaming Frog SEO Spider user-agent; if none exist, it will follow directives for Googlebot, and then for ALL robots. By default it currently follows any directives for Googlebot. Hence, if certain pages or areas of the site are disallowed for Googlebot, the SEO Spider will not crawl them either. The tool also supports URL matching of file values (wildcards * and $), just like Googlebot.

You can choose to ignore the robots.txt (it won’t even download it) in the paid (licensed) version of the software by selecting ‘Configuration > Spider > Ignore robots.txt’.

You can also view URLs blocked by robots.txt under the ‘Response Codes’ tab and ‘Blocked by Robots.txt’ filter. This will also show the matched robots.txt line of the disallow against each blocked URL.

A few things to remember here  –

  • The SEO Spider only follows one set of user agent directives, as per the robots.txt protocol. Hence, priority is given to the Screaming Frog SEO Spider UA if directives exist for it. If not, the SEO Spider will follow commands for the Googlebot UA, or lastly the ‘ALL’ or global directives.
  • To reiterate the above, if you specify directives for the Screaming Frog SEO Spider or Googlebot, then the ALL (or ‘global’) bot commands will be ignored. If you want the global directives to be obeyed, then you will have to include those lines under the specific UA section for the SEO Spider or Googlebot.
  • If you have conflicting directives (i.e. an allow and a disallow for the same file path) then a matching allow directive beats a matching disallow if it contains equal or more characters in the command – see the example below.
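
As a brief sketch of how these rules interact (the paths here are purely hypothetical):

User-agent: Screaming Frog SEO Spider
Disallow: /search/
Allow: /search/products/

User-agent: *
Disallow: /

With a robots.txt like this, the SEO Spider obeys its own user-agent section and ignores the global ‘Disallow: /’, so it can crawl the site apart from /search/ URLs – except those under /search/products/, where the matching allow wins because it contains more characters than the matching disallow.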

User agent

The SEO Spider obeys robots.txt protocol. Its user agent is ‘Screaming Frog SEO Spider’ so you can include the following in your robots.txt if you wish to block it –

User-agent: Screaming Frog SEO Spider

Disallow: /

Alternatively, if you wish to exclude certain areas of your site specifically for the SEO Spider, simply use the usual robots.txt syntax with our user-agent. Please note – there is an option to ‘ignore robots.txt’, the use of which is entirely the responsibility of the user.

Memory

The Screaming Frog SEO Spider allocates 512MB of RAM as standard. If you are crawling particularly large sites, you will need to increase the memory allocation of the SEO Spider.

There is not a set number of URLs the SEO Spider can crawl at the standard memory allocation, as it is dependent on the complexity of the site and a number of other factors. Generally speaking, with the standard memory allocation of 512MB the SEO Spider can crawl between 5,000 and 100,000 URIs of a site. If you have received the following ‘high memory usage’ warning message when performing a crawl –

[Screenshot: ‘high memory usage’ warning message]

Or if you are experiencing a slowdown in a crawl, or of the program itself on a large crawl, this will be due to reaching the memory allocation.

This is warning you that the SEO Spider has reached the current memory allocation and it needs to be increased to crawl more URLs. To do this, you should save the crawl via the ‘File > Save’ menu. You can then follow the instructions below to increase your memory allocation, before opening the saved crawl and resuming it again.

Increasing memory on Windows 32 & 64-bit

First of all, if you have a 64-bit machine, please ensure you download and install the 64-bit version of Java, or you will not be able to allocate any more memory than a 32-bit machine and you’ll receive a ‘Could not create the Java virtual machine’ error message on start-up.
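
If you’re unsure which version of Java is installed, one quick way to check (a general Java tip, rather than an SEO Spider feature) is to run the following in a command prompt:

java -version

The output for a 64-bit installation will typically mention a ‘64-Bit Server VM’, whereas a 32-bit installation will not.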

To update the memory, simply navigate to the folder the SEO Spider is installed in (the default is C:\Program Files\Screaming Frog SEO Spider). There should be four files: two application files (install & uninstall), a .jar file, and then the file we need to edit, which is a configuration (.ini) file called ‘ScreamingFrogSEOSpider.l4j.ini’.

If you open this file in Notepad you will notice it has a line ‘-Xmx512M’, which reflects the total memory assigned to the SEO Spider (512MB).

When editing the file, please ensure that it is saved as a .ini file and not a .l4j file, as otherwise the updated settings will not be read. If known file extensions are hidden in Windows, then Notepad++ may automatically save this as a .l4j file and not an .ini file unless specified.

The default number is ‘512M’, which we recommend increasing to as much RAM as you have available on your machine. To double the memory you can simply replace ‘512’ with ‘1024’ (please leave the -Xmx and M text, so 1,024 would appear as -Xmx1024M in the file). To allocate 6GB, replace ‘512M’ with ‘6g’; 8GB would be ‘8g’ and 16GB ‘16g’, for example. Here is a screenshot of around 9GB –

[Screenshot: the memory allocation line in ScreamingFrogSEOSpider.l4j.ini set to around 9GB]

Please note, this is RAM (rather than hard disk space). As explained above, if you received a ‘Could not create the Java virtual machine’ error message after increasing memory –

[Screenshot: ‘Could not create the Java virtual machine’ error message]

Then it will be due to either –

1) You’re using the 32-bit version of Java. You need to manually choose and install the 64-bit version of Java. If you already have the 64-bit version of Java, then uninstall all versions of Java and reinstall the 64-bit version again manually.

2) You have allocated more memory than you actually have available. If this happens, edit the file again to a more realistic memory allocation based on your machine’s available memory.

You can read about memory limits for Windows here, but essentially 32-bit Windows machines are limited to 4GB of RAM. This generally means the maximum memory allocation will be between 1,024MB and 1,500MB, as this is all that will actually be available.

For 64-bit machines, you will be able to allocate significantly more, obviously dependent on how much memory your machine has. The SEO Spider is built for 32 and 64-bit machines, but please remember to install the 64-bit version of Java to be able to allocate as much memory as your system will allow on a 64-bit machine.

If Windows will not allow you to edit the file directly (you will probably need administration rights), try copying the file to your desktop, editing and then pasting back into the folder and replacing the original file.

You can verify your settings have taken effect by following the guide here.

If changing the ScreamingFrogSEOSpider.l4j.ini file has no effect, check you are not overriding it via the _JAVA_OPTIONS environment variable. Details on viewing and editing environment variables can be found here.
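
For example, to quickly check whether this variable is set (again, a general tip rather than an SEO Spider feature), you can run the following in a command prompt:

echo %_JAVA_OPTIONS%

If the variable is set, its value (such as another -Xmx setting) will be printed and will override the .ini file; if it isn’t set, the command simply echoes the text back unchanged.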

Increasing memory on Mac OS X

If you are using SEO Spider version 2.40 then please follow these instructions.

Open a ‘Terminal’ (found in the ‘Utilities’ folder in the ‘Applications’ folder, or directly using spotlight and typing: ‘terminal’) and type:

defaults write uk.co.screamingfrog.seo.spider Memory 1g

This allocates 1GB of memory to the SEO Spider. To allocate 8GB:

defaults write uk.co.screamingfrog.seo.spider Memory 8g

You can also specify a memory figure in megabytes, using the m suffix:

defaults write uk.co.screamingfrog.seo.spider Memory 2048m

These memory settings will be remembered over upgrades and you’ll only need to do it once. You can view the currently assigned memory value by issuing the read variant of the defaults command:

defaults read uk.co.screamingfrog.seo.spider Memory

Which will return something like:

8g

By default no value is set, so you will get a message like this:

The domain/default pair of (uk.co.screamingfrog.seo.spider, Memory) does not exist

and the SEO Spider will use 512MB.

You can verify your settings have taken effect by following the guide here.

Increasing memory on Mac OS X 10.7.2 and earlier

If you’re not using SEO Spider version 2.40, then please follow these instructions.

Open ‘Finder’ and navigate to the ‘Applications’ folder, probably listed under ‘Favourites’, as below. Select ‘Screaming Frog SEO Spider’, right click and choose ‘Show Package Contents’.

[Screenshot: ‘Show Package Contents’ in Finder]

Then expand the ‘Contents’ folder, select ‘Info.plist’, right click and choose ‘Open With’ and then ‘Other’.

In the resulting prompt menu, choose ‘TextEdit’.

Now find the following section and change the values appropriately (around line 30):

VMOptions
Edit the value below to change memory settings – -Xmx1024M for 1GB etc
-Xmx512M

Choose ‘File’ then ‘Save’, then ‘TextEdit’ and ‘Quit TextEdit’. Then re-launch the SEO Spider and your new memory settings will now be active.

You can verify your settings have taken effect by following the guide here.

Increasing memory on Ubuntu

To increase the memory for crawling larger websites, change the number in the ~/.screamingfrogseospider file. If this file does not exist, it will be created when the SEO Spider runs. The default contents of the file is: -Xmx512M.

Note: To avoid any typos, please copy and paste the examples below.

To amend this file, open a terminal and type the following to allocate 1GB of memory.

echo "-Xmx1g" > ~/.screamingfrogseospider

2GB can be allocated as follows:

echo "-Xmx2g" > ~/.screamingfrogseospider

You can also allocate memory in megabytes, rather than gigabytes. Here we’re allocating 1.5 gigabytes (1,500MB):

echo "-Xmx1500M" > ~/.screamingfrogseospider
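
You can check the current contents of the file at any time with:

cat ~/.screamingfrogseospider

which should print the single -Xmx line you have set.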

You can verify your settings have taken effect by following the guide here.

Checking memory allocation

After updating your memory settings you can verify the changes have taken effect by going to ‘Help > Debug’ and looking at the Memory line.

The SEO Spider uses 512MB by default, so the line will look something like this:

Memory: Used 41MB Free 250 MB Total 292MB Max 455 MB Using 9%

The Max figure will always be a little less than the amount allocated. Allocating 2GB will look like this:

Memory: Used 33MB Free 263 MB Total 296MB Max 1820 MB Using 1%

Please note, the figures shown here aren’t exact, as the VM overhead varies between operating system and Java version.

Cookies

By default the Screaming Frog SEO Spider does not accept cookies, just like search engine bots.

However, under ‘Configuration > Spider’ in the ‘Advanced’ tab, there is an option to accept cookies. This is useful for crawling sites that require the client to accept cookies in order to be crawled.

Cookies are stored per crawl and shared between crawler threads. Cookies are not stored when a crawl is saved, so resuming crawls from a saved .seospider file will not maintain the cookies used previously. Cookies are reset at the start of a new crawl. It is not possible to modify or set cookies manually.

XML sitemap creation

The Screaming Frog SEO Spider allows you to create an XML sitemap or a specific image XML sitemap, located under ‘Sitemaps’ in the top level navigation.

[Screenshot: the ‘Sitemaps’ menu in the top level navigation]

The ‘Create XML Sitemap’ feature allows you to create an XML Sitemap with all HTML pages that returned a 200 response in a crawl, as well as PDFs and images. The ‘Create Images Sitemap’ option is a little different to the ‘Create XML Sitemap’ option with ‘images’ included: it contains all images with a 200 response, and ONLY the pages that have images on them.

If you have over 49,999 URLs the SEO Spider will automatically create additional sitemap files and a sitemap index file referencing the sitemap locations. The SEO Spider conforms to the standards outlined in the sitemaps.org protocol.
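
As a rough illustration (the file names here are hypothetical), a sitemap index file follows the standard sitemaps.org format and simply lists the locations of the individual sitemap files:

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap_1.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap_2.xml</loc>
  </sitemap>
</sitemapindex>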

Read our detailed tutorial on how to use the SEO Spider as an XML Sitemap Generator, or continue below for a quick overview of each of the XML Sitemap configuration options.

Adjusting Pages To Include

By default, only HTML pages with a ‘200’ response from a crawl will be included in the sitemap, so no 3XX, 4XX or 5XX responses. Pages which are ‘noindex’, ‘canonicalised’ (the canonical URL is different to the URL of the page), paginated (URLs with a rel=“prev”) or PDFs are also not included as standard, but this can be adjusted within the XML Sitemap ‘pages’ configuration.

[Screenshot: XML Sitemap ‘pages’ configuration options]

If you have crawled URLs which you don’t want included in the XML Sitemap export, then simply highlight them in the user interface, right click and ‘remove’ before creating the XML sitemap. Alternatively, you can export the ‘internal’ tab to Excel, filter and delete any URLs that are not required, and re-upload the file in list mode before exporting the sitemap. Or simply block them via the exclude feature or robots.txt before a crawl.

Last Modified

It’s optional whether to include the ‘lastmod’ attribute in an XML Sitemap, so this is also optional in the SEO Spider. This configuration allows you to either use the server response, or a custom date for all URLs.

[Screenshot: XML Sitemap ‘lastmod’ configuration]

Change Frequency

It’s optional whether to include the ‘changefreq’ attribute, and the SEO Spider allows you to configure this based on the ‘last modification header’ or ‘level’ (depth) of the URLs. The ‘calculate from last modified header’ option means that if the page has been changed in the last 24 hours it will be set to ‘daily’; if not, it’s set to ‘monthly’.

[Screenshot: XML Sitemap ‘changefreq’ configuration]

Images

It’s entirely optional whether to include images in the XML sitemap. If the ‘include images’ option is ticked, then all images under the ‘Internal’ tab (and under ‘Images’) will be included by default. As shown in the screenshot below, you can also choose to include images which reside on a CDN and appear under the ‘external’ tab within the UI.

[Screenshot: XML Sitemap ‘images’ configuration options]

Typically images like logos or social profile icons are not included in an image sitemap, so you can also choose to only include images with a certain number of source attribute references to help exclude these. Often images like logos are linked to sitewide, while images on product pages for example might only be linked to once or twice. There is an ‘IMG Inlinks’ column in the ‘Images’ tab which shows how many times an image is referenced, to help decide the number of ‘inlinks’ which might be suitable for inclusion.
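
To tie the options above together, a single URL entry in the generated sitemap might look something like the hand-written sketch below, which follows the sitemaps.org format and Google’s image sitemap extension (the URLs are hypothetical, and the image namespace would be declared on the enclosing urlset element):

<url>
  <loc>https://www.example.com/blog/example-post/</loc>
  <lastmod>2016-01-01</lastmod>
  <changefreq>monthly</changefreq>
  <image:image>
    <image:loc>https://www.example.com/images/example-photo.jpg</image:loc>
  </image:image>
</url>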

Reports

There’s a variety of reports which can be accessed via the ‘reports’ top level navigation. These include the following –

Crawl Overview Report

This report provides a summary of the crawl, including data such as the number of URLs encountered, those blocked by robots.txt, the number crawled, the content type, response codes etc. The ‘Total URI Description’ provides information on what the ‘Total URI’ column number refers to for each individual line, to (try and) avoid any confusion.

Redirect Chains Report

This report maps out chains of redirects, the number of hops along the way and will identify the source, as well as if there is a loop.

The redirect chain report can also be used in list mode, alongside the ‘Always follow redirects‘ option which is very useful in site migrations. When you tick this box, the SEO Spider will continue to crawl redirects in list mode and ignore crawl depth, meaning it will report back upon all hops until the final destination. Please see our guide on auditing redirects in a site migration.

Please note – If you only perform a partial crawl, or some URLs are blocked via robots.txt, you may not receive all response codes for URLs in this report.

Canonical Errors Report

This report highlights errors and issues with canonicals. In particular, this report will show any canonicals which have no response, or a 3XX redirect, 4XX or 5XX error (anything other than a 200 ‘OK’ response). This report also provides data on any URLs which are discovered only via a canonical and are not linked to from the site (shown in the ‘unlinked’ column when ‘true’).

Insecure Content Report

The insecure content report will show any secure (HTTPS) URLs which have insecure elements on them, such as internal HTTP links, images, JS, CSS, SWF or external images on a CDN, social profiles etc. When you’re migrating a website to secure (HTTPS) from non secure (HTTP), it can be difficult to pick up all insecure elements and this can lead to warnings in a browser –

[Screenshot: Firefox insecure content warning]

Here’s a quick example of how a report might look (with insecure images in this case) –

[Screenshot: insecure content report showing insecure images]

This report does not at this time consider canonicals, so if an HTTPS URL has an HTTP canonical, this will not be included in this report. However, these can be seen as usual under the ‘canonicalised’ filter in the ‘Directives’ tab.

SERP Summary Report

This report allows you to quickly export URLs, page titles and meta descriptions with their respective character lengths and pixel widths. This report can also be used as a template to re-upload back into the SEO Spider in ‘SERP’ mode.

GA & GSC Not Matched

The ‘GA & GSC Not Matched’ report provides a list of URLs collected from the Google Analytics API and the Google Search Console (Search Analytics API), that were not matched against URLs discovered within the crawl. Hence, this report will be blank, unless you have connected to Google Analytics or Search Console and collected data during a crawl.

The ‘source’ column shows exactly which API the URL was discovered via, but was not matched against a URL in the crawl. These include –

  • GA – The URL was discovered via the Google Analytics API.
  • GSC – The URL was discovered in Google Search Console, by the Search Analytics API.
  • GA & GSC – The URL was discovered in both Google Analytics & Google Search Console.

This report can include any URLs returned by Google Analytics for the query you select in your Google Analytics configuration. Hence, this can include logged in areas, or shopping cart URLs, so often the most useful data for SEOs is returned by querying the landing page path dimension and ‘organic traffic’ segment. This can then help identify –

  • Orphan Pages – These are pages that are not linked to internally on the website, but do exist. These might just be old pages, those missed in an old site migration or pages just found externally (via external links, or referring sites). This report allows you to browse through the list and see which are relevant and potentially upload via list mode.
  • Errors – The report can include 404 errors, which sometimes include the referring website within the URL as well (you will need the ‘all traffic’ segment for these). This can be useful for chasing up websites to correct external links, or just 301 redirecting the URL which errors, to the correct page! This report can also include URLs which might be canonicalised or blocked by robots.txt, but are actually still indexed and delivering some traffic.
  • GA or GSC URL Matching Problems – If data isn’t matching against URLs in a crawl, you can check to see what URLs are being returned via the GA or GSC API. This might highlight any issues with the particular Google Analytics view, such as filters on URLs like ‘extended URL’ hacks etc. For the SEO Spider to return data against URLs in the crawl, the URLs need to match up. So changing to a ‘raw’ GA view, which hasn’t been touched in any way, might help.

Crawl Path Report

This report is not under the ‘reports’ drop down in the top level menu; it’s available upon right-click of a URL in the top window pane, within the ‘export’ option. For example –

[Screenshot: the crawl path report export option]

This report shows you the path the SEO Spider crawled to discover the URL which can be really useful for deep pages, rather than viewing ‘inlinks’ of lots of URLs to discover the original source URL (for example, for infinite URLs caused by a calendar).

The crawl path report should be read from bottom to top. The first URL at the bottom of the ‘source’ column is the very first URL crawled (with a ‘0’ level). The ‘destination’ shows which URLs were crawled next, and these make up the following ‘source’ URLs for the next level (1) and so on, upwards. The final ‘destination’ URL at the very top of the report will be the URL of the crawl path report!
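
As a simplified, hypothetical illustration, a crawl path report for a page two levels deep might contain rows along these lines:

Level 1 – Source: https://www.example.com/blog/ – Destination: https://www.example.com/blog/deep-page/
Level 0 – Source: https://www.example.com/ – Destination: https://www.example.com/blog/

Reading from the bottom, the crawl started at the homepage (level 0), which linked to /blog/, which in turn linked to /blog/deep-page/ – the URL the report was run for.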

Command line & scheduling

You can use the command line to start a crawl. Please see our post How To Schedule A Crawl By Command Line In The SEO Spider for more information on scheduling a crawl.

Supplying no arguments starts the application as normal. Supplying a single argument of a file path tries to load that file in as a saved crawl. Supplying the following:

--crawl http://www.example.com/

starts the Spider and immediately triggers the crawl of the supplied domain. This switches the Spider to crawl mode if it’s not the last used mode, and uses your default configuration for the crawl.

Note: If your last used mode was not crawl, “Ignore robots.txt” and “Limit Search Depth” will be overwritten.

Windows

Open a command prompt (Start button, then search programs and files for ‘Windows Command Processor’)

Move into the SEO Spider directory:

cd "C:\Program Files\Screaming Frog SEO Spider"

To start normally:

ScreamingFrogSEOSpider.exe

To open a crawl file (Only available to licensed users):

ScreamingFrogSEOSpider.exe C:\tmp\crawl.seospider

To auto start a crawl:

ScreamingFrogSEOSpider.exe --crawl http://www.example.com/

Mac OS X

Open a terminal, found in the Utilities folder in the Applications folder, or directly using spotlight and typing: ‘terminal’.

To start normally:

open "/Applications/Screaming Frog SEO Spider.app"

To open a saved crawl file:

open "/Applications/Screaming Frog SEO Spider.app" /tmp/crawl.seospider

To auto start a crawl:

open "/Applications/Screaming Frog SEO Spider.app" --args --crawl http://www.example.com/

Linux

The following commands are available from the command line:

To start normally:

screamingfrogseospider

To open a saved crawl file:

screamingfrogseospider /tmp/crawl.seospider

To auto start a crawl:

screamingfrogseospider --crawl http://www.example.com/
