Every URL discovered in a crawl is classified as either 'Indexable' or 'Non-Indexable'.
'Indexable' means a URL that can be crawled, responds with a '200' status code and is permitted to be indexed.
'Non-Indexable' is a URL that can't be crawled, doesn't respond with a '200' status code, or has an instruction not to be indexed.
Every non-indexable URL has an 'Indexability Status' associated with it, which quickly explains why it isn't indexable.
Non-indexable URLs can include the following –
This is triggered by a local font issue, normally caused by having duplicate Arial fonts installed.
To investigate, open the Font Book application and go to "Edit->Look for Enabled Duplicates..." to remove any duplicates. After resolving these, try restarting the SEO Spider. If you still have an issue, go back to Font Book and take a look at your Arial fonts – are there any messages about them needing repair? If so, repair them and restart the SEO Spider. If the issue persists, go to "File->Restore Standard Fonts...". The fonts removed by this will go into a separate folder in Font Book, so you'll be able to add them back in as needed.
If you wish to export data in list mode in the same order it was uploaded, then use the ‘Export’ button which appears next to the ‘upload’ and ‘start’ buttons at the top of the user interface.
The data in the export will be in the same order and include all of the exact URLs in the original upload, including duplicates or any fix-ups performed.
The most common reasons for this are:
If you are receiving the following error when trying to connect to Google Analytics or Search Console:
It means that the libraries Google provide for the SEO Spider to connect to GA/GSC can't validate accounts.google.com.
We've seen this a number of times from customers that have proxies configured that are modifying SSL traffic. This error means the library has detected an insecure connection - we are not able to remove this security, it's there for a reason.
We have also seen this error being triggered by Covenant Eyes and Kaspersky. It is recommended that you contact your IT department and have them add an exception for accounts.google.com.
The SEO Spider finds pages by scanning the HTML code of the entered starting URL for
<a href> links, which it will then crawl to find more links. Therefore to find a page there must be a clear linking path to it from the starting point of a crawl for the SEO Spider to follow.
If there is a clear path, then these links or the pages the links are on must exist in a way the SEO Spider either cannot 'see' or crawl. Hence please make sure of the following:
When the licence expires, the SEO Spider returns to the restricted free lite version. The Spider’s configuration options are unavailable, there is a 500 URI maximum crawl limit and previously saved crawls cannot be opened.
To remove the crawl limit, use all the features and configuration options, and open saved crawls again, simply purchase a licence upon expiry.
In the same way as the free 'lite' version, there are no restrictions on the number of websites you can crawl with a licence. Licences are, however, individual per user. If you have five members of the team who would like to use the licenced version, you will need five licences.
The SEO Spider stores the licence in a file called licence.txt in the user's home directory, in a '.ScreamingFrogSEOSpider' folder. You can see this location by going to Help->Debug and looking at the line labelled "Licence File". Please check the following to resolve this issue:
The user.home value supplied to Java is incorrect. Ideally your IT team should fix this. As a workaround you can add:
-Duser.home=DRIVE_LETTER:\path\to\new\directory
to the ScreamingFrogSEOSpider.l4j.ini file that controls memory settings.
Yes, please take a note of your licence key (you can find this under 'Licence' and 'Enter Licence...' in the software), then uninstall the SEO Spider on the old computer, before installing and entering your licence on the new machine. If you experience any issues during this move, please contact our support.
As standard you download the lite version of the tool which is free. However, without a licence the SEO Spider is limited to crawling a maximum of 500 URIs each crawl. The configuration options of the Spider and the custom source code search feature are also only available in the licensed version.
For £149 per annum you can purchase a licence which opens up the Spider’s configuration options and removes restrictions on the 500 URI maximum crawl. A licence is required per individual using the tool. When the licence expires, the SEO Spider returns to the restricted free lite version.
We accept PayPal and most major credit and debit cards. The price of the SEO Spider is in pound sterling (GBP). If you are outside of the UK, please take a look at the current exchange rate to work out the cost. (The automatic currency conversion will be dependent on the current foreign exchange rate and perhaps your card issuer). We do not accept cheques (or checks!)
Yes, if you are not in the UK. To do this you must have a valid VAT number and enter it on the Billing page during checkout: select 'business' and enter your VAT number. Your VAT number will be checked against the VIES system and VAT removed if it is valid. The VIES system does go down from time to time, so if this happens please try again later. Unfortunately we cannot refund VAT once a purchase has been made.
Absolutely! If you are not completely satisfied with the SEO Spider you purchased from this website, you can get a full refund if you contact us within 14 days of purchasing the software. To obtain a refund, please follow the procedure below.
Contact us via email@example.com or support and provide the following information:
The software needs to be downloaded from our website; the licence key is delivered electronically by email.
We do not offer discounted rates for resellers. The price is £149 (GBP) per year, per user.
No, we only sell in GBP.
This will generally be due to the SEO Spider reaching its memory limit. Please read how to increase memory.
Connection refused is displayed in the Status column when the SEO Spider's connection attempt has been refused at some point between the local machine and the website. If this happens for all sites consistently, then it is an issue with the local machine/network. Please check the following:
Connection timeout occurs when the SEO Spider struggles to receive an HTTP response at all and the request times out. It can often be due to a slow responding website or server when under load, or it can be due to network issues. We recommend the following –
When a website requires cookies this often appears in the SEO Spider as if the starting URL is redirecting to itself or to another URL and then back to itself (any necessary cookies are likely being dropped along the way). This can also be seen when viewing in a browser with cookies disabled:
The easiest way to work around this issue is to first load up the page using forms based authentication.
‘Configuration > Authentication > Forms Based’
Select ‘Add’, then enter the URL that is redirecting, and wait for the page to load before clicking ‘OK’.
The SEO Spider's in-built Chromium browser will then have accepted the cookies, and you should now be able to crawl the site normally.
A secondary method to bypass this kind of redirect is to ensure the ‘Allow Cookies’ configuration is set:
'Configuration > Spider > Advanced > Allow Cookies'
To bypass the redirect behaviour, as the SEO Spider only crawls each URL once, a parameter must be added to the starting URL:
A URL rewriting rule that removes this parameter when the spider is redirected back to the starting URL must then be added:
Configuration > URL Rewriting > Remove Parameters
The SEO Spider should then be able to crawl normally from the starting page, now that it has any required cookies.
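As an illustration of what a remove-parameters rewrite rule does conceptually (this is a sketch using Python's standard library, not the SEO Spider's implementation; the URL and parameter name are made up):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def remove_parameter(url, param):
    """Sketch: strip a single query parameter from a URL, mimicking
    the effect of a 'Remove Parameters' URL rewriting rule."""
    parts = urlsplit(url)
    # Keep every query pair except the one being removed.
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k != param]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(remove_parameter("https://example.com/?cookies=yes&page=2", "cookies"))
# https://example.com/?page=2
```

With the parameter stripped on redirect, the rewritten URL matches the original starting URL again, so the redirect loop is broken.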
If the site or URL in question has page titles and meta descriptions, but one (or both!) are not showing in the SEO Spider this is generally due to the following reasons -
1) The SEO Spider reads up to a maximum of 20 meta tags. So, if there are over 20 meta tags and the meta description is after the 20th meta tag, it will be ignored.
The SEO Spider will check links to PDF documents. These URLs can be seen under the PDF filter in the Internal and External tabs. It does not parse PDF documents to find links to crawl.
First, check whether the Spider is still crawling the site and, if so, what the URLs it has been finding look like. The kind of URLs the Spider has been finding will explain why the crawl percentage is not increasing:
This is because Windows Defender runs a security scan on it, which can take up to a couple of minutes. Unfortunately, Google Chrome gives no indication that the scan is running when downloading the file. Internet Explorer does give an indication of this, and Firefox does not scan at all. If you go directly to your downloads folder and run the installer from there, you don't have to wait for the security scan to run.
Yes, by issuing the following command:
By default this will install the SEO Spider to:
C:\Program Files (x86)\Screaming Frog SEO Spider
You can choose an alternative location by using the following command:
ScreamingFrogSEOSpider-VERSION.exe /S /D=C:\My Folder
From version 2.50 the SEO Spider requires a version of Java not supported by this version of macOS. This means older 32-bit Macs (the last of which we understand were made 8-9 years ago) will not be able to use the latest version of the SEO Spider. Newer 64-bit Macs which haven’t yet updated their version of macOS will need to update their OS before installing Java.
We do still support version 2.40 for macOS versions below 10.7.3 (and 32-bit Macs), which can be downloaded here. This version has considerably fewer features than the current version, as described in our release history.
Unfortunately we are at the mercy of Oracle to update their Mac look and feel to more closely match the new style introduced in macOS Yosemite. There is a Java bug related to this at JDK-8052173. This will be updated in a future Java release.
Feedback is welcome, please just follow the steps on the support page to submit feedback. Please note we will try to read all messages but might not be able to reply to all of them. We will update this FAQ as we receive additional questions and feedback.
You cannot use the configuration options in the lite version of the tool. You will need to buy a licence to open up this menu, you can do this by clicking the ‘buy a licence’ option in the Spider’s interface under ‘licence’.
You can bulk export data via the ‘bulk export’ option in the top level navigation menu. Simply choose the ‘images missing alt text’ option to export all references of images without alt text. Please see more on exporting in our user guide.
You can simply view URLs blocked via robots.txt in the UI (within the ‘Internal’ and ‘Response Codes’ tabs for example). Ensure you have the ‘Show internal URLs blocked by robots.txt’ configuration ticked under 'Configuration > Robots.txt > Settings'.
You can view external URLs blocked by robots.txt within the 'External' and 'Response Codes' tabs by ticking the ‘Show External URLs blocked by robots.txt’ configuration under 'Configuration > Robots.txt > Settings'.
Disallowed URLs will appear with a ‘status’ as ‘Blocked by Robots.txt’ and there’s a ‘Blocked by Robots.txt’ filter under the ‘Response Codes’ tab, where these can be viewed.
The ‘Blocked by Robots.txt’ filter also displays a ‘Matched Robots.txt Line’ column, which provides the line number and disallow path of the robots.txt entry that’s excluding each URL. If multiple lines in robots.txt block a URL, the SEO Spider will just report on the first encountered, similar to Google within Search Console.
Please see our guide on using the SEO Spider as a robots.txt tester.
If you’re using the older 2.40 Mac version of the SEO Spider, you can view the ‘Total Blocked by robots.txt’ for a crawl on the right-hand side of the user interface in the ‘Summary’ section of the overview tab. This count includes both internal and external URLs. Currently, there isn’t a way of seeing which URLs have been blocked in the user interface. However, it is possible to get this information from the SEO Spider log file, after a crawl. Each time a URL is blocked by robots.txt, it will be reported like this:
2015-02-18 08:56:09,652 [RobotsMain 1] INFO - robots.txt file prevented the spider of 'http://www.example.com/page.html', reason 'Blocked by line 2: Disallow: http://www.example.com/'. You can choose to ignore robots.txt files in the Spider configuration.
You can view the log file(s) by either going to the location shown for ‘Log File’ under Help->Debug, or downloading and unzipping the log files from Help->Debug->Save Logs.
The SEO Spider supports two forms of authentication, standards based which includes basic and digest authentication, and web forms based authentication.
The spider obeys the robots.txt protocol. Its user agent is ‘Screaming Frog SEO Spider’, so you can include the following in your robots.txt if you wish the Spider not to crawl your site:

User-agent: Screaming Frog SEO Spider
Disallow: /

Please note – There is an option to ‘ignore’ robots.txt and change user-agent, which is down to the responsibility of the user entirely.
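As a sketch of how such a rule behaves (using Python's standard urllib.robotparser rather than the SEO Spider's own parser; the URLs are made up):

```python
import urllib.robotparser

# robots.txt content blocking only the SEO Spider's user-agent.
rules = """User-agent: Screaming Frog SEO Spider
Disallow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The SEO Spider's user-agent is disallowed everywhere...
print(rp.can_fetch("Screaming Frog SEO Spider", "https://example.com/page.html"))  # False
# ...while other robots are unaffected by this record.
print(rp.can_fetch("SomeOtherBot", "https://example.com/page.html"))  # True
```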
There are a number of reasons why the number of URLs found in a crawl might not match the number of results indexed in Google (via a site: query), or why errors reported in the SEO Spider might not match those in Google Search Console.
First of all, crawling and indexing are quite separate, so there will always be some disparity. URLs might be crawled, but it doesn’t always mean they will actually be indexed in Google. This is an important area to consider, as there might be content in Google’s index which you didn’t know existed, or no longer want indexed for example. Equally, you may find more URLs in a crawl than in Google’s index due to directives used (noindex, canonicalisation) or even duplicate content, low site reputation etc.
Secondly, the SEO Spider only crawls internal links of a website at that moment of time of the crawl. Google (more specifically Googlebot) crawls the entire web, so not just the internal links of a website for discovery, but also external links pointing to a website.
Googlebot’s crawl is also not a snapshot in time; it spans the duration of a site’s lifetime from when it’s first discovered. Therefore, you may find old URLs in its index (perhaps from discontinued products, or an old section of the site which still serves a 200 ‘OK’ response) that aren't linked to anymore, or content that is only linked to via external sources. The SEO Spider won’t be able to discover URLs which are not linked to internally, like orphan pages or URLs only accessible by external links.
There are other reasons as well, these may include –
Yes. There are two ways you can do this:
1) Open up multiple instances of the SEO Spider, one for each domain you want to crawl. Mac users check here.
2) Use list mode (Mode->List). Remove the search depth limit (Configuration->Spider->Limits, untick “Limit Search Depth”), untick “Ignore robots.txt” (Configuration->Robots.txt->Settings), then upload your list of domains to crawl.
Canonicalised, robots.txt blocked, noindex and paginated URIs are not included in the sitemap by default. You may choose to include these in your sitemap by ticking the appropriate checkbox(es) in the 'Pages' tab when you export the sitemap.
Please read our user guide on XML Sitemap Creation.
If you are using a regex like .* that contains a greedy quantifier, you may end up matching more than you want. The solution is to use a lazy quantifier (.*?) instead.

For example, if you are trying to extract the id from the following JSON:

{"id":"007", "name":"James Bond"}

using "id":"(.*)" you will get:

007", "name":"James Bond

If you use "id":"(.*?)" you will extract:

007
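The greedy vs lazy behaviour can be demonstrated with Python's re module (the JSON snippet is just an illustrative example):

```python
import re

json_text = '{"id":"007", "name":"James Bond"}'

# Greedy: (.*) consumes as much as possible, backtracking only to the
# LAST closing quote, so it overshoots past the id value.
greedy = re.search(r'"id":"(.*)"', json_text).group(1)
print(greedy)  # 007", "name":"James Bond

# Lazy: (.*?) matches as little as possible, stopping at the FIRST
# closing quote after the id value.
lazy = re.search(r'"id":"(.*?)"', json_text).group(1)
print(lazy)  # 007
```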
The URLs in your chosen Google Analytics view have to match the URLs discovered in the SEO Spider crawl exactly, for data to be matched and populated accurately. If they don’t match, then GA data won’t be able to be matched and won’t populate. This is the single most common reason.
If Google Analytics data does not get pulled into the SEO Spider as you expected, then analyse the URLs under ‘Behaviour > Site Content > Landing Pages’ and ‘Behaviour > Site Content > All Pages’ depending on which dimension you choose in your query. Try clicking on the URLs to open them in a browser to see if they load correctly.
You can also export the ‘orphan pages’ report, which shows a list of URLs returned from the Google Analytics & Search Analytics (from Search Console) APIs for your query that didn’t match URLs in the crawl. Check the URLs with source as ‘GA’ for Google Analytics specifically (those marked as ‘GSC’ are from Google Search Console’s Search Analytics). The URLs here need to match those in the crawl for the data to be matched accurately.
If they don’t match, then the SEO Spider won’t be able to match up the data accurately. We recommend checking your default Google Analytics view settings (such as ‘default page’) and filters such as ‘extended URL’ hacks, which all impact how URLs are displayed and hence matched against a crawl. If you want URLs to match up, you can often make the required amends within Google Analytics or use a ‘raw’ unedited view (you should always have one of these ideally).
Please note – There are some very common scenarios where URLs in Google Analytics might not match URLs in a crawl, so we cover these by matching trailing and non-trailing slash URLs and case sensitivity (upper and lowercase characters in URLs). Google doesn’t pass the protocol (HTTP or HTTPS) via their API, so we also match this data automatically as well.
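A rough sketch of the kind of normalisation described above (this is illustrative only, not the SEO Spider's actual matching logic; the URLs are made up):

```python
def normalise(url):
    """Sketch: treat trailing/non-trailing slash and upper/lowercase
    variants of a URL as equivalent for matching purposes."""
    return url.lower().rstrip("/")

crawl_url = "https://example.com/Shop/"
ga_url = "https://example.com/shop"

# Both variants normalise to the same key, so the GA data can be matched.
print(normalise(crawl_url) == normalise(ga_url))  # True
```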
When using Database Storage mode the SEO Spider monitors how much disk space you have and will automatically pause if you have less than 5GB remaining. If you receive this warning you can free up some disk space to continue the crawl.
If you are unable to free up any disk space, you can either configure the SEO Spider to use another drive with more space by going to Configuration->System->Storage and selecting a folder on another disk, or switch to Memory Storage by going to Configuration->System->Storage and selecting Memory Storage. Changing either of these settings requires a restart, so if you'd like to continue the current crawl you will have to save it and reload it after restarting.
No, we do not have an affiliate program for the SEO Spider software at this time.
If you don't have an internal SSD and you'd like to crawl large websites using database storage mode, then an external SSD can help.
There are a few things to remember with this set-up. It's important to ensure your machine has USB 3.0 and your system supports UASP mode. Most new systems with USB 3.0 hardware support this automatically. When you connect the external SSD, ensure you connect it to the USB 3.0 port, otherwise reading and writing will be slow.
USB 3.0 ports generally have a blue inside (as recommended in their specification), but not always; and you will typically need to connect a blue ended USB cable to the blue USB 3.0 port. Simple!
After that, you need to switch to database storage mode ('Configuration > System > Storage'), and then select the database location on the external SSD (the 'D' drive in the example below). You will then need to restart the SEO Spider, before beginning the crawl.
This is normally triggered by some third-party software, such as a firewall or antivirus. Please try disabling this or adding an exception. The exception you need to add varies depending on what operating system you are using:
You can prevent this initialisation happening by going to Configuration->System->Embedded Browser.
This is most likely a Java 8 bug relating to fonts. To resolve this please open the Font Book application, choose "File->Restore Standard Fonts" then try to start the SEO Spider. The fonts that are removed will go into a separate folder in Font Book so you'll be able to add them back in one by one to find the offending font.
After allowing the SEO Spider access to your Google account, you should be redirected to a confirmation screen. However, if you receive an error instead, there are a few things to check:
In short: for crawls under 100-200k URLs, a 64-bit OS and 8GB of RAM should be sufficient. To be able to crawl millions of URLs, an SSD and 16GB of RAM are recommended.
Hard Disk: We highly recommend having an SSD and switching the SEO Spider to database storage mode to crawl large websites.
Memory: The SEO Spider stores all crawl data in memory by default, but it can be configured to store data within a database to crawl more URLs. The more memory you have allocated, the more URLs you will be able to crawl in both regular memory storage mode and database storage mode. To be able to allocate more than 1GB of memory you need a 64-bit operating system. Most PCs purchased in the last five years will be running a 64-bit OS. So the most important thing is to make sure you have plenty of memory available. Each website is unique in terms of how much memory it requires, so we cannot give exact figures on how much memory is required to crawl a certain number of URLs. As a very rough guide, a 64-bit machine with 8GB of RAM will generally allow you to crawl about 200,000 URLs in memory storage mode. In database storage mode, this should allow you to crawl approx. 5 million URLs.
CPU: The speed of a crawl will normally be limited by the website itself, rather than the SEO Spider, as most sites limit the number of concurrent connections they will accept from a single IP. When crawling hundreds of thousands of URLs some operations will be limited by CPU, such as sorting and searching, so a fast CPU will help minimise these slowdowns.
The spider is opening off screen, possibly due to a multi-monitor setup that has recently changed. To move the spider onto the active monitor, use Alt + Tab to select the spider, then hold down the Windows key and use the arrow keys to move the Spider window into view.
The SEO Spider runs from the machine it is installed on, so the IP address is simply that of this machine/network. You can find out what this is by typing “IP Address” into Google.
The local port used for the connection will be from the ephemeral range. The port being connected to will generally be port 80, the default http port or port 443, the default https port. Other ports will be connected to if the site being crawled or any of its links specify a different port. For example: http://www.example.com:8080/home.html
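The scheme-default vs explicit-port behaviour can be sketched with Python's urllib.parse (illustrative only; the URLs are the examples above):

```python
from urllib.parse import urlsplit

def port_for(url):
    """Return the port a crawler would connect to for a given URL:
    the explicit port if specified, otherwise the scheme default."""
    parts = urlsplit(url)
    return parts.port or (443 if parts.scheme == "https" else 80)

print(port_for("http://www.example.com/home.html"))       # 80
print(port_for("https://www.example.com/home.html"))      # 443
print(port_for("http://www.example.com:8080/home.html"))  # 8080
```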
Licences are individual per user. A single licence key is for a single assigned user. If you have five people from your team who wish to use the SEO Spider, you will require five user licences.
Discounts are available for 5 users or more, as shown in our pricing.
Please see section 3 of our terms and conditions for full details.
If the SEO Spider says your ‘licence key is invalid’, then please check the following, as the licence keys we provide always work.
Licence keys are displayed on screen when you check out, sent in an email with the subject "Screaming Frog SEO Spider licence details" and are available at any time by logging into your account.
If you have lost your licence key or invoice from the 22nd of September 2014 onwards, please login to your account to retrieve the details.
If you have lost your account password, then simply request a new password via the form.
If you purchased a licence before the 22nd of September 2014, then please contact firstname.lastname@example.org with your username or e-mail you used to pay for the premium version.
Simply click on the ‘buy a licence’ option in the SEO Spider ‘licence’ menu or visit our purchase a licence page directly.
You can then create an account & make payment. When this is complete, you will be provided with your licence key to open up the tool & remove the crawl limit. If you have just purchased a licence and have not received it, please check your spam / junk folder. You can also view your licence(s) details and invoice(s) by logging into your account.
Please note, the account login has only been active from the 22nd of September 2014. If you purchased before this date, it won’t be available and you can contact us for any information.
No, the Screaming Frog SEO Spider is a separate product to the Log File Analyser. They have different licences, which will need to be purchased individually. You can purchase a Log File Analyser licence here.
Yes, please see our SEO Spider licence page for more details on discounts.
If you have just purchased a licence and have not received your licence, please check your spam / junk folder. Licences are sent immediately upon purchase. You can also view your licence(s) details and invoice(s) by logging into your account.
There are a few reasons this could happen:
Resellers can purchase an SEO Spider licence online on behalf of a client. Please be aware that licence usernames are automatically generated from the account name entered during checkout. If you require a custom username, then please request a PayPal invoice in advance.
For resellers who are unable to purchase online with PayPal or a credit card and encumber us with admin such as vendor forms, we reserve the right to charge an administration fee of £50.
There is no part number or SKU.
Screaming Frog is a UK based company, so this is not applicable.
This could be for a number of reasons:
X-Robots-Tag in the HTTP header. These can be seen in the “Directives” tab in the “Nofollow” filter. To ignore the NoFollow directive go to Configuration -> Spider -> and tick "Follow Internal 'No Follow'" and recrawl.
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
This will generally be due to the SEO Spider reaching its memory limit. Please read how to increase memory.
Connection error, or connection timeout is a message when there is an issue in receiving a response at all. This is generally due to network issues or proxy settings. Please check that you can connect to the internet. If you have changed the SEO Spider proxy settings (under configuration, proxy), please ensure that these are correct (or they are switched off).
A 403 Forbidden status code occurs when a web server denies access to the SEO Spider’s request for some reason.
If this happens consistently and you can see the website in a browser, it could be the web server behaves differently depending on User Agent. In the premium version try adjusting the User Agent setting under Configuration->HTTP Header->User Agent. For example, try crawling as a bot, such as ‘Googlebot Regular’, or as a browser, such as ‘Chrome’.
If this happens intermittently during a crawl, it could be due to the speed the Spider is requesting pages overwhelming the server. In the premium version of the SEO Spider you can reduce the speed of requests. If you are running the ‘lite’ version you may find that right clicking the URL and choosing re-spider will help.
The SEO Spider HTTP request is often different to a traditional browser and other tools, so you can sometimes experience a different response than if you visit the page or use a different tool to check the response.
The SEO Spider simply reports on the response given to it by the server when it makes a request, which won’t be incorrect, but can differ from what might be experienced elsewhere. Some of the common factors that can cause servers to give a different response, that are configurable in the SEO Spider are -
The SEO Spider determines the character encoding of a web page from the “charset=” parameter in the HTTP Content-Type header, e.g.:

Content-Type: text/html; charset=UTF-8

You can see this in the SEO Spider’s interface in the ‘Content’ columns (in various tabs). If this is not present in the HTTP header, the SEO Spider will then read the first 2048 bytes of the HTML page to see if there is a charset within the HTML.

For example –

<meta http-equiv="Content-Type" content="text/html; charset=windows-1255">

If neither is present, the SEO Spider assumes the page is UTF-8.
The Spider does log any character encoding issues. If there is a specific page that is causing problems, perform a crawl of only that page by setting the maximum number of URLs to crawl to 1, then crawling the URL. You may see a line like this in the trace.txt log file (located at C:\Users\Yourprofile\.ScreamingFrogSEOSpider\trace.txt):

20-06-12 20:32:50 INFO seo.spider.net.InputStreamWrapper:logUnsupportedCharset Unsupported Encoding ‘windows-‘ reverting to ‘UTF-8’ on page ‘http://www.example.com’ java.io.UnsupportedEncodingException: windows-‘

This could be an error on the site or you may need to install an additional language pack.
The solution is to specify the format of the data, either via the Content-Type field of the accompanying HTTP header, or by ensuring the charset parameter in the source code is within the first 2048 bytes of the HTML, within the head element.
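The detection order described above can be sketched as follows (a simplified illustration, not the SEO Spider's actual implementation):

```python
import re

def detect_charset(content_type_header, html_bytes):
    """Sketch of charset detection: HTTP header first, then the first
    2048 bytes of the HTML, then fall back to UTF-8."""
    # 1) 'charset=' parameter in the HTTP Content-Type header.
    if content_type_header:
        m = re.search(r'charset=([\w-]+)', content_type_header, re.I)
        if m:
            return m.group(1)
    # 2) charset declared within the first 2048 bytes of the HTML.
    head = html_bytes[:2048].decode("ascii", errors="ignore")
    m = re.search(r'charset=["\']?([\w-]+)', head, re.I)
    if m:
        return m.group(1)
    # 3) Otherwise, assume UTF-8.
    return "UTF-8"

print(detect_charset("text/html; charset=windows-1255", b""))  # windows-1255
print(detect_charset(None, b'<meta charset="utf-8">'))         # utf-8
print(detect_charset(None, b"<html></html>"))                  # UTF-8
```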
There are generally two reasons for this:
This means the crawl did not save completely, which is why it can’t be opened. EOF stands for ‘end of file’, which means the SEO Spider was unable to read to the expected end of the file. This can be due to the SEO Spider crashing during a save, which is normally caused by running out of memory. It can also happen if you exit the SEO Spider during a save, or your machine crashes, for example. Unfortunately there is no way to open or retrieve the crawl data, as it’s incomplete and therefore lost. Please also consider increasing your memory allocation, which will help reduce any problems saving a crawl in the future.
Please note Include/Exclude are case sensitive so any functions need to match the URL exactly as it appears.
Functions will only be applied to URLs that have not yet been discovered by the Spider. Any URLs that have already been discovered and queued for crawling will not be affected, hence it is recommended the crawl is restarted between updates to ensure the results are accurate.
Functions will not be applied to the starting URL of a crawl or URLs in list mode.
.* is the regex wildcard, matching any sequence of characters.
Try running the file as administrator by right clicking the installer and choosing “Run as administrator”. Alternatively log in to an administrator account. You may need to request assistance from your IT department depending on your company setup.
If you get a message every time you start up that looks like this, then you are most likely running macOS Yosemite (10.10.x), which has a bug in its Java Runtime. Installing this patch from Apple will resolve the issue.
To open additional instances of the SEO Spider open a Terminal and type the following:
open -n /Applications/Screaming\ Frog\ SEO\ Spider.app/
Please follow the steps on the support page so we can help you as quickly as possible. Please note, we only offer full support for premium users of the tool although we will generally try and fix any issues.
The SEO Spider runs on Windows, Mac and Linux. It’s a Java application and requires a Java 8 runtime environment or later to run. You can check here to see the system requirements to run Java. You can download the SEO Spider for free and try it.
Mac: If you are using macOS 10.7.2 or lower please see this FAQ.
Linux: We provide an Ubuntu package for Linux. If you would like to run the SEO Spider on a non-Debian based distribution please extract the jar file from the .deb and run it manually.
Windows: The SEO Spider can also be run on the server variants and Windows 10. From version 9.0 onwards, the SEO Spider doesn't run on Windows XP.
Please note that the rendering feature is not available on older operating systems.
You can bulk export data via the 'bulk export' option in the top level navigation menu. Simply choose the 'all images' option to export all images and associated alt text found in your crawl. Please see more on exporting in our user guide.
The Screaming Frog SEO Spider is robots.txt compliant. It checks robots.txt in the same way as Google: it will check the robots.txt of the (sub)domain and follow directives for all robots, and specifically any for Googlebot. The tool also supports URL matching of file values (wildcards * / $) like Googlebot. Please see the above document for more information, or our robots.txt section in the user guide. You can turn this feature off in the premium version.
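As a simplified sketch of how those wildcard rules work (not the full robots.txt specification), a pattern's * matches any sequence of characters, a trailing $ anchors the end of the URL, and everything else is a prefix match:

```python
import re

def robots_pattern_to_regex(pattern: str) -> str:
    # '*' matches any character sequence; a trailing '$' anchors the end of the URL.
    # Simplified illustration of the documented wildcard rules, not the full spec.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return regex

def is_disallowed(url_path: str, disallow: str) -> bool:
    # robots.txt rules are prefix matches, so only the start of the path is anchored
    return re.match(robots_pattern_to_regex(disallow), url_path) is not None

print(is_disallowed("/private/file.pdf", "/*.pdf$"))      # True
print(is_disallowed("/private/file.pdf?x=1", "/*.pdf$"))  # False: '$' requires the URL to end in .pdf
```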
The SEO Spider uses a configurable hybrid storage engine, which enables it to crawl millions of URLs. However, it does require configuration (explained below) and the correct hardware.
By default the SEO Spider will crawl using RAM, rather than saving to disk. This has advantages, but it cannot crawl at scale without lots of RAM allocated.
In standard memory storage mode there isn't a set number of pages it can crawl; it depends on the complexity of the site and the user's machine specifications. The SEO Spider sets a maximum memory of 1GB for 32-bit and 2GB for 64-bit machines, which enables it to crawl between 5k and 100k URLs of a site.
You can increase the SEO Spider's memory allocation and crawl hundreds of thousands of URLs purely using RAM. A 64-bit machine with 8GB of RAM will generally allow you to crawl a couple of hundred thousand URLs, if the memory allocation is increased.
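Recent versions expose this setting in the user interface; on older Windows installs it was set by raising the Java maximum heap (-Xmx) flag in the launcher's configuration file. As a hedged illustration only (the exact filename and location can vary by version, so check the user guide for yours):

```ini
; ScreamingFrogSEOSpider.l4j.ini in the install directory — example only
; Raise the Java maximum heap to 8GB:
-Xmx8g
```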
The SEO Spider can be configured to save crawl data to disk, which enables it to crawl millions of URLs. However, we recommend using this option with a Solid State Drive (SSD), as hard disk drives are significantly slower at writing and reading data. This can be configured by selecting ‘Database Storage’ mode (under ‘Configuration > System > Storage’).
As a rough guide, an SSD and 8GB of RAM in database storage mode should allow the SEO Spider to crawl approx. 5 million URLs.
Please see our guide on crawling large websites for more information.
The 'Completed' URI total is the number of URIs the SEO Spider has encountered. This is the total URIs crawled, plus any 'Internal' and 'External' URIs blocked by robots.txt.
Depending on the settings in the robots.txt section of the 'Configuration > Spider > Basic' menu, these blocked URIs may not be visible in the SEO Spider interface.
If the 'Respect Canonical' or 'Respect Noindex' options in the 'Configuration > Spider > Advanced' tab are checked, then these URIs will count towards the 'Total Encountered' (Completed Total) and 'Crawled' totals, but will not be visible within the SEO Spider interface.
The ‘Response Codes’ Tab and Export will show all URLs encountered by the Spider except those hidden by the settings detailed above.
We cannot see what you are crawling or the data you have crawled. All crawl data is stored locally on your machine; you crawl from your machine and we don't see it.
Google APIs use the OAuth 2.0 protocol for authentication and authorisation, and obviously, the data provided via Google Analytics and other APIs is only accessible locally on your machine.
The software does not contain any spyware, malware or adware (as verified by Softpedia).
First of all, the free 'lite' version is restricted to a 500 URL crawl limit, and obviously a website might be significantly larger. If you have a licence, the main reason an SEO Spider crawl might discover more or fewer links (and indeed broken links etc.) than another crawler is simply down to the different default configuration set-ups of each crawler.
By default the SEO Spider will respect robots.txt, respect 'nofollow' on internal and external URLs, and crawl canonicals. Other crawlers sometimes don't respect these by default, which is why there might be differences. These can all be adjusted to your own preferences within the configuration.
While crawling more URLs might seem to be a good thing, actually it might be completely unnecessary and a waste of time and effort. So please choose wisely what you want to crawl.
We believe the SEO Spider is the most advanced crawler available, and it will often find more URLs than other crawlers, as it crawls canonicals and AJAX in a similar way to Googlebot, which other crawlers might not do as standard or within their current capability. There are other reasons as well; these may include –
Read our 'How To Create An XML Sitemap' tutorial, which explains how to generate an XML Sitemap, include or exclude pages or images, and runs through all the configuration settings available.
If you want all the H1s from the following HTML:
Then we can use:
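As a hedged stand-in for the idea, the Python sketch below collects every H1 heading from a hypothetical page using only the standard library, similar to what a Custom Extraction expression such as the XPath //h1 would return (the markup is an invented example, not the original snippet):

```python
from html.parser import HTMLParser

# Hypothetical page markup standing in for the original example
html = """
<html><body>
  <h1>First heading</h1>
  <p>Intro paragraph</p>
  <h1>Second heading</h1>
</body></html>
"""

class H1Collector(HTMLParser):
    """Collects the text content of every <h1> element."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1s = []
    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
            self.h1s.append("")
    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False
    def handle_data(self, data):
        if self.in_h1:
            self.h1s[-1] += data

parser = H1Collector()
parser.feed(html)
print(parser.h1s)  # ['First heading', 'Second heading']
```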
There are a number of reasons why you might be experiencing slow crawl rate or slow down of the SEO Spider. These include –
There are a number of reasons why data fetched via the Google API into the SEO Spider might be different to the data reported within the Google Analytics interface. First of all, we recommend triple checking that you're viewing the exact same account, property, view, segment, date range, metrics and dimensions. LandingPagePath and PagePath will of course provide very different results, for example! If data still doesn't match, then there are some common reasons why –
We don't have a Chromebook version of the SEO Spider. However, you can install Crouton, set up Ubuntu and download and install the Ubuntu version of the SEO Spider.
Please note, Chromebooks are not very powerful and are generally limited to 4GB of RAM. This means memory is restricted, and the number of URLs that can be crawled will also be limited. You can read more about SEO Spider memory in our user guide.
The image sitemap protocol requires the HTML page the image is referenced on to be included in the sitemap. A list of images alone does not contain this information, so a sitemap cannot be generated.
Details on Google's requirements for image sitemaps can be seen at https://support.google.com/webmasters/answer/178636.
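To illustrate why the page URL is required, a single entry in Google's documented image sitemap format nests each image location under the page it appears on (the example.com URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <!-- The HTML page the image is referenced on — this is what list mode lacks -->
    <loc>https://example.com/page.html</loc>
    <image:image>
      <image:loc>https://example.com/photo.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```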