SEO Spider FAQ

Why Do I Receive An Error When Granting Access To My Google Account?

After allowing the SEO Spider access to your Google account you should be redirected to a confirmation screen in your browser. If you instead receive an error, there are a few things to check:

  • Is there any security software running on your machine preventing the SEO Spider from listening on the port specified in the URL? The port is the number after localhost: in the address bar (for example, 63212).
  • Is your browser sending the request, intended for localhost, to a proxy instead? You can sometimes tell this if the failure screen mentions the name of a proxy server, such as Squid for example.

Back to top

Why Does My Connection To Google Analytics Fail?

If you are receiving an error when trying to connect to Google Analytics or Search Console, it usually means that the libraries Google provides for the SEO Spider to connect to GA/GSC can't validate accounts.google.com.

We've seen this a number of times from customers with proxies configured that modify SSL traffic. The error means the library has detected an insecure connection; we are not able to remove this security check, as it's there for a reason.

We have also seen this error being triggered by Covenant Eyes and Kaspersky. It is recommended that you contact your IT department and have them add an exception for accounts.google.com.

Back to top

Why is the SEO Spider not finding a particular page or set of pages?

The SEO Spider finds pages by scanning the HTML code of the entered starting URL for <a href> links, which it will then crawl to find more links. Therefore, to find a page there must be a clear linking path to it from the starting point of the crawl for the SEO Spider to follow. If there is a clear path and a page is still not found, then the links or the pages they are on must exist in a way the SEO Spider either cannot 'see' or is not allowed to crawl. Please check the following:

  • The link is an HTML anchor tag. The SEO Spider does not execute JavaScript in the standard configuration, so links that exist only in JavaScript will not be ‘seen’ or crawled. If the site is built in a JavaScript framework, or has dynamic content, adjust the rendering configuration to 'JavaScript' under 'Configuration > Spider > Rendering tab > JavaScript' to crawl the website.
  • The link and linking page do not have a ‘nofollow’ directive preventing the SEO Spider from following the link. By default the SEO Spider obeys ‘nofollow’ directives unless the Follow internal nofollow option is checked.
  • The expected page(s) are on the same subdomain as your starting page. By default links to different subdomains are treated as external unless the Crawl all subdomains option is checked.
  • If the expected page(s) are in a different subfolder to the starting point of the crawl, the Crawl outside start folder option is checked.
  • The linking page(s) are not blocked by Robots.txt. By default the robots.txt is obeyed, so any links on a blocked page will not be seen unless the Ignore robots.txt option is checked. If the site uses JavaScript and the rendering configuration is set to 'JavaScript', ensure JS and CSS are not blocked by robots.txt (see the example after this list).
  • You do not have an Include or Exclude function set up that is limiting the crawl.
  • Ensure category pages (or similar) were not temporarily unreachable during the crawl, giving a connection timeout, server error etc. preventing linked pages from being discovered.
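Relating to the robots.txt point above, here is a minimal, illustrative sketch (the directory names are placeholders) of how Allow rules that are more specific than a matching Disallow can keep JS and CSS crawlable when JavaScript rendering is enabled:

User-agent: *
Disallow: /scripts/
Allow: /scripts/*.js$
Disallow: /styles/
Allow: /styles/*.css$

The more specific Allow rules take precedence over the broader Disallow rules, in the same way Googlebot evaluates them.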

Back to top

What happens when the licence expires?

When the licence expires, the SEO Spider returns to the restricted free lite version. The Spider’s configuration options are unavailable, there is a 500 URI maximum crawl limit and previously saved crawls cannot be opened.

To remove the crawl limit, use all the features and configuration options, and open saved crawls again, simply purchase a licence upon expiry.

Back to top

What additional features does a licence provide?

A licence removes the 500 URI crawl limit, allows you to save and upload crawls, opens up all the configuration options and the custom source code search, custom extraction, Google Analytics integration, Google Search Console integration and JavaScript rendering features. We also provide support for technical issues related to the SEO Spider for licensed users.

In the same way as the free ‘lite’ version, there are no restrictions on the number of websites you can crawl with a licence. Licences are, however, individual per user. If you have five members of the team who would like to use the licensed version, you will need five licences.

Back to top

Can I use my licence on more than one device?

Yes. The licence allows you to install the SEO Spider on multiple computers. However, licences are individual per user.

Please see section 3 of our terms and conditions for full details.

Back to top

Why can’t my Licence Key be saved (Unable to update licence file)?

The SEO Spider stores the licence in a file called licence.txt in the user's home directory, in a ‘.ScreamingFrogSEOSpider’ folder. You can see this location by going to Help->Debug and looking at the line labelled “Licence File”. Please check the following to resolve this issue:

  • Ensure you are able to create the licence file in the correct location.
  • If you are using a Mac, see the answer to this stackoverflow question.
  • If you are using Windows, it could be that the default user.home value supplied to Java is incorrect. Ideally your IT team should fix this. As a workaround you can add:
    -Duser.home=DRIVE_LETTER:\path\to\new\directory\
    to the ScreamingFrogSEOSpider.l4j.ini file that controls memory settings.
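As a minimal sketch of what that file might then contain (the drive letter, path and memory value below are placeholders only), each setting goes on its own line:

-Xmx2048M
-Duser.home=D:\SEOSpiderHome\

The -Xmx line is the existing memory setting the file is normally used for; the -Duser.home line is the workaround described above.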

Back to top

Is it possible to move my licence to a new computer?

Yes, please take a note of your licence key (you can find this under ‘Licence’ and ‘Enter Licence...’ in the software), then uninstall the SEO Spider on the old computer, before installing and entering your licence on the new machine. If you experience any issues during this move, please contact our support.

Back to top

How do I renew my licence?

Login to your existing account and purchase another licence upon expiry. Licences do not auto renew – so if you do not want to renew your licence you will not be charged and need to take no action.

Back to top

How much does the Screaming Frog SEO Spider cost?

As standard you download the lite version of the tool which is free. However, without a licence the SEO Spider is limited to crawling a maximum of 500 URIs each crawl. The configuration options of the Spider and the custom source code search feature are also only available in the licensed version.

For £149 per annum you can purchase a licence which opens up the Spider’s configuration options and removes restrictions on the 500 URI maximum crawl. A licence is required per individual using the tool. When the licence expires, the SEO Spider returns to the restricted free lite version.

Back to top

What payment methods do you accept & from which countries?

We accept PayPal and most major credit and debit cards. The price of the SEO Spider is in pound sterling (GBP). If you are outside of the UK, please take a look at the current exchange rate to work out the cost. (The automatic currency conversion will be dependent on the current foreign exchange rate and perhaps your card issuer). We do not accept cheques (or checks!)

Back to top

I’m a business in the EU, can I pay without VAT?

Yes, if you are not in the UK. To do this you must have a valid VAT number and enter this on the Billing page during checkout. Select business and enter your VAT number. Your VAT number will be checked against the VIES system and VAT removed if it is valid. The VIES system does go down from time to time, so if this happens please try again later. Unfortunately we cannot refund VAT once a purchase has been made.

Back to top

Do you have a refund policy?

Absolutely! If you are not completely satisfied with the SEO Spider you purchased from this website, you can get a full refund if you contact us within 14 days of purchasing the software. To obtain a refund, please follow the procedure below.

Contact us via support@screamingfrog.co.uk or our support page and provide the following information:

  • Your contact information (last name, first name and email address).
  • Your order number.
  • Your reason for the refund. If there's an issue, we can generally help.
  • For downloaded items, please provide proof that the software has been uninstalled from all your computers and will no longer be installed or used (screenshots will suffice).
If you have purchased your item by credit card the refund is re-credited to the account associated with the credit card used for the order.

If you have purchased your item by PayPal the refund is re-credited to the same PayPal account used to purchase the software.

If you have purchased your item using any other payment method, we will issue the refund by BACS, once approved by our Financial Department.

For any questions concerning this policy, please contact us at support.

Back to top

How is the software delivered?

The software needs to be downloaded from our website, and the licence key is delivered electronically by email.

Back to top

What is the reseller price?

We do not offer discounted rates for resellers. The price is £149 (GBP) per year, per user.

Back to top

Where can I get licensing terms?

Licensing details can be found here.

Back to top

Can I get a quote in a currency other than GBP?

No, we only sell in GBP.

Back to top

Why won’t the SEO Spider crawl my website?

This could be for a number of reasons:

  • The site is blocked by robots.txt. The 'status code' column in the internal tab will be a '0' and the 'status' column for the URL will say 'Blocked by Robots.txt'. You can configure the SEO Spider to ignore robots.txt under 'Configuration > Robots.txt > Settings'.
  • The site behaves differently depending on User-Agent. Try changing the User-Agent under Configuration->HTTP Header->User Agent.
  • The site requires JavaScript. Try looking at the site in your browser with JavaScript disabled after clearing your cache. The SEO Spider does not execute JavaScript by default, however it does have JavaScript rendering functionality in the paid version of the tool. If the site is built in a JavaScript framework, or has dynamic content, adjust the rendering configuration to 'JavaScript' under 'Configuration > Spider > Rendering tab > JavaScript' to crawl it. Remember to ensure JS and CSS files are not blocked by robots.txt. Please see our guide on how to crawl JavaScript websites.
  • The site requires Cookies. Can you view the site with cookies disabled in your browser after clearing your cache? Licensed users can enable cookies by going to Configuration->Spider and ticking “Allow Cookies” in the “Advanced” tab.
  • The ‘nofollow’ attribute is present on links not being crawled. There is an option in Configuration->Spider under the “Basic” tab to follow ‘nofollow’ links.
  • The page has a page level ‘nofollow’ attribute. This could be set by either a meta robots tag or an X-Robots-Tag in the HTTP header. These can be seen in the “Directives” tab in the “Nofollow” filter. To ignore the NoFollow directive go to Configuration -> Spider and tick "Follow Internal 'No Follow'", then recrawl.
  • The website is using framesets. The SEO Spider does not crawl the frame src attribute.
  • The website requires an Accept-Language header (Configuration->HTTP Header->Accept Language).
  • The Content-Type header did not indicate the page is HTML. This is shown in the Content column and should be either text/html or application/xhtml+xml. JavaScript rendering mode will additionally inspect the page content to see if it's specified, eg:

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
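If you want to check what the server itself is returning for the Content-Type header, a quick way (illustrative only, using an example URL) is to request just the headers from the command line:

curl -I http://www.example.com/

and confirm the response contains a header such as:

Content-Type: text/html; charset=UTF-8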

Back to top

Why does the SEO Spider freeze?

This will generally be due to the SEO Spider reaching its memory limit. Please read how to increase memory.

Back to top

Why do I get a “Connection Refused” response when connecting to a secure site?

You may get connection refused on sites that use stronger crypto algorithms than are supported by default in Java. Before going any further, check you have the latest version of Java installed as this fixes a lot of issues connecting to secure sites. If the problem persists, read on.

You will see a “Connection Refused” in the Status column on the SEO Spider interface. The log file will show a line like this:

2015-01-19 09:10:03,218 [SpiderWorker 1] WARN - IO Exception for url: 'https://www.example.com/' reason: 'javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure'

You can view the log file(s) by either going to the location shown for ‘Log File’ under Help->Debug, or downloading and unzipping the log files from Help->Debug->Save Logs.

Due to import restrictions Java cannot supply this stronger crypto support by default. You can, however, install the Java higher strength crypto support by downloading the following Security Fix.

If you download, unzip and follow the instructions in the README.txt file you should be able to crawl your site successfully. Note that you can find where your <java-home> directory is set to by starting the SEO Spider and going to Help->Debug and looking at the Java section.

On OS X: Open a new Finder window, choose Go->Go To Folder... then enter: /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security/. This is the location to copy the local_policy.jar and US_export_policy.jar into from the downloaded jce_policy-8.zip file.

For more background information see here and scroll down to the section “Adding stronger algorithms: JCE Unlimited Strength”.

Back to top

Why do I get a “Connection Error” response?

A connection error, or connection timeout, is shown when no response is received at all. This is generally due to network issues or proxy settings. Please check that you can connect to the internet. If you have changed the SEO Spider proxy settings (under Configuration, Proxy), please ensure that these are correct (or that they are switched off).

Back to top

Why do I get a “403 Forbidden” error response?

A 403 Forbidden status code occurs when a web server denies access to the SEO Spider’s request for some reason.

If this happens consistently and you can see the website in a browser, it could be the web server behaves differently depending on User Agent. In the premium version try adjusting the User Agent setting under Configuration->HTTP Header->User Agent. For example, try crawling as a bot, such as ‘Googlebot Regular’, or as a browser, such as ‘Chrome’.

If this happens intermittently during a crawl, it could be due to the speed the Spider is requesting pages overwhelming the server. In the premium version of the SEO Spider you can reduce the speed of requests. If you are running the ‘lite’ version you may find that right clicking the URL and choosing re-spider will help.

Back to top

Why Am I Experiencing A Different Response In A Browser?

The SEO Spider’s HTTP request is often different to a traditional browser’s, so you can sometimes experience a different response than if you visit the page in a browser. Some of the common reasons and things to check include -

  • User-Agent - The SEO Spider uses its own user-agent by default, as do browsers. You can find the User-Agent configuration under ‘Configuration > HTTP Header > User-Agent’. If you adjust this to a browser user-agent (Chrome etc), you may experience a different response.
  • Cookies - By default the SEO Spider doesn't accept cookies (similar to Google). However, browsers do. If you disable cookies in your browser, you may see that the page doesn't load anymore, issues session IDs into the URL, or redirects to itself. You can 'allow cookies' under 'Configuration > Spider > Advanced'.
  • JavaScript - Browsers will execute JavaScript, and by default the SEO Spider does not. So you may experience anything from small changes in page content, to much larger differences if the site is built using a JavaScript framework, or even being redirected to a new location entirely in a browser. Similar to Google, the SEO Spider can render web pages and crawl them after JavaScript has come into play. You can turn this on by navigating to 'Configuration > Spider > Rendering' and choosing 'JavaScript Rendering'. The 'rendered page' tab at the bottom will help debug any differences between what the SEO Spider can see in comparison to a browser. If your site is built using a JavaScript framework, then please read our 'How To Crawl JavaScript Websites' guide.
  • Accept-Language Header - Your browser will supply an accept language header with your language. Similar to Googlebot, the SEO Spider doesn't supply an Accept-Language header for requests by default. However, you can adjust the Accept-Language configuration under 'Configuration > HTTP Header > Accept-Language'.
  • Speed - Servers can respond differently when under stress and load. Their responses can be less stable. We recommend reducing the crawl speed and seeing if the responses then change, and using WireShark to verify responses independently.

Back to top

Why is the character encoding incorrect?

The SEO Spider determines the character encoding of a web page by the “charset=” parameter in the HTTP Content-Type header, e.g.:

“text/html; charset=UTF-8”

You can see this in the SEO Spider’s interface in the ‘Content’ columns (in various tabs). If this is not present in the HTTP header, the SEO Spider will then read the first 2048 bytes of the HTML page to see if there is a charset within the HTML.

For example –

<meta http-equiv="Content-Type" content="text/html; charset=windows-1255">

If no charset is found in either location, the SEO Spider assumes the page is UTF-8.

The Spider does log any character encoding issues. If there is a specific page that is causing problems, perform a crawl of only that page by setting the maximum number of URLs to crawl to 1, then crawling the URL. You may see a line like this in the trace.txt log file (located at C:\Users\Yourprofile\.ScreamingFrogSEOSpider\trace.txt):

20-06-12 20:32:50 INFO seo.spider.net.InputStreamWrapper:logUnsupportedCharset Unsupported Encoding 'windows-' reverting to 'UTF-8' on page 'http://www.example.com' java.io.UnsupportedEncodingException: windows-'. This could be an error on the site or you may need to install an additional language pack.

The solution is to specify the format of the data via either the Content-Type field of the accompanying HTTP header, or by ensuring the charset parameter in the source code appears within the first 2048 bytes of the HTML, inside the head element.
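For example (an illustrative snippet only), placing the declaration near the very top of the head element keeps it comfortably within the first 2048 bytes:

<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<title>Example Page</title>
</head>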

Back to top

Why is the SEO Spider not finding images?

There are generally two reasons for this:

  • The images are loaded using JavaScript. Try viewing the page in your browser with JavaScript disabled to see if this is the case. The SEO Spider does not execute JavaScript by default. If the site is built in a JavaScript framework, or has dynamic content, adjust the rendering configuration to 'JavaScript' under 'Configuration > Spider > Rendering tab > JavaScript' to crawl it. Remember to ensure JS and CSS files are not blocked.
  • The images are blocked by robots.txt. You can either ignore robots.txt or customise the robots.txt to allow crawling.

Back to top

Why do I get a ‘Project open failed java.io.EOFException’ when attempting to open a saved crawl?

This means the crawl did not save completely, which is why it can’t be opened. EOF stands for ‘end of file’, which means the SEO Spider was unable to read to the expected end of the file. This can be due to the SEO Spider crashing during the save, which is normally caused by running out of memory. It can also happen if you exit the SEO Spider during the save, or if your machine crashes, for example. Unfortunately there is no way to open or retrieve the crawl data, as it’s incomplete and therefore lost. Please consider increasing your memory allocation, which will help reduce any problems saving a crawl in the future.

Back to top

Why isn’t my Include/Exclude function working?

Please note Include/Exclude are case sensitive, so any functions need to match the URL exactly as it appears.

Functions will only be applied to URLs that have not yet been discovered by the Spider. Any URLs that have already been discovered and queued for crawling will not be affected, hence it is recommended the crawl is restarted between updates to ensure the results are accurate.

Functions will not be applied to the starting URL of a crawl or URLs in list mode.

.* is the regex wildcard.
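For example (illustrative patterns only, using an example domain), an Exclude of:

https://www.example.com/do-not-crawl/.*

will prevent anything in that section from being crawled, while an Exclude of:

.*\?sort=.*

will exclude any URL containing a sort parameter. Both are matched, case sensitively, against the full URL.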

Back to top

Why do I get “error opening file for writing” when installing?

Try running the file as administrator by right clicking the installer and choosing “Run as administrator”. Alternatively log in to an administrator account. You may need to request assistance from your IT department depending on your company setup.

Back to top

Why does the SEO Spider quit unexpectedly on startup?

If the SEO Spider quits with an error message every time you start it up, you are most likely running OS X Yosemite (10.10.x), which has a bug in its Java Runtime. Installing this patch from Apple will resolve the issue.

Back to top

How can I open multiple instances of the SEO Spider?

To open additional instances of the SEO Spider, open a Terminal and type the following:

open -n /Applications/Screaming\ Frog\ SEO\ Spider.app/

Back to top

How do I submit a bug / receive support?

Please follow the steps on the support page so we can help you as quickly as possible. Please note, we only offer full support for premium users of the tool although we will generally try and fix any issues.

Back to top

What operating systems does the SEO Spider run on?

The SEO Spider runs on Windows, Mac and Linux. It’s a Java application and requires a Java 8 runtime environment or later to run. You can check here to see the system requirements to run Java. You can download the SEO Spider for free and try it.

Mac: If you are using OS X 10.7.2 or lower please see this FAQ.

Linux: We provide an Ubuntu package for Linux. If you would like to run the SEO Spider on a non-Debian based distribution please extract the jar file from the .deb and run it manually.

Windows: The SEO Spider can also be run on the server variants and Windows 10.

Please note that the rendering feature is not available on older operating systems.

Back to top

How do I bulk export all image alt text?

You can bulk export data via the ‘bulk export’ option in the top level navigation menu. Simply choose the ‘all images’ option to export all images and associated alt text found in our crawl. Please see more on exporting in our user guide.

Back to top

How does the Spider treat robots.txt?

The Screaming Frog SEO Spider is robots.txt compliant. It checks robots.txt in the same way as Google, so it will check the robots.txt of the (sub)domain and follow directives for all robots, and specifically any for Googlebot. The tool also supports URL matching of file values (wildcards * / $) like Googlebot. Please see our robots.txt section in the user guide for more information. You can turn this feature off in the premium version.
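As an illustrative example of the matching supported (the paths below are placeholders), directives such as the following would be interpreted by the SEO Spider in the same way as Googlebot, including the * and $ wildcards:

User-agent: *
Disallow: /checkout/
Disallow: /*?sessionid=
Allow: /*.pdf$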

Back to top

How many URI can the Spider crawl?

The SEO Spider cannot crawl an unlimited number of URLs; it is restricted by the memory allocated. There is not a set number of pages it can crawl, as this is dependent on the complexity of the site and a number of other factors. Generally speaking, with the standard memory allocation of 512MB, the SEO Spider can crawl between 5K-50K URLs of a site. However, you can increase the SEO Spider’s memory allocation and crawl into the hundreds of thousands of URLs. As a very rough guide, a 64-bit machine with 8GB of RAM will generally allow you to crawl a couple of hundred thousand URLs.

We recommend crawling large sites in sections. You can use the configuration menu to just crawl HTML (rather than images, CSS or JS), avoid crawling external links, exclude certain sections of the site or only 'include' others. Alternatively, if you have a nicely structured IA you can crawl by directory (/holidays/, /blog/ etc). Please see our 'crawling larger websites' section in the user guide. The tool was not built to crawl entire sites with many hundreds of thousands or millions of pages, as it currently uses RAM rather than a hard disk database for speed and flexibility.
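For example (an illustrative pattern only, using an example domain), to crawl just one directory of a large site you could set the Include function to:

http://www.example.com/holidays/.*

and only URLs matching that pattern will be crawled.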

Back to top

Why does the URI completed total not match what I export?

The ‘Completed’ URI total is the number of URIs the SEO Spider has encountered. This is the total URIs crawled, plus any ‘Internal’ and ‘External’ URIs blocked by robots.txt.

Depending on the settings in the robots.txt section of the ‘Configuration > Spider > Basic’ menu, these blocked URIs may not be visible in the SEO Spider interface.

If the ‘Respect Canonical’ or ‘Respect Noindex’ options in the ‘Configuration > Spider > Advanced’ tab are checked, then these URIs will count towards the ‘Total Encountered’ (Completed Total) and ‘Crawled’ totals, but will not be visible within the SEO Spider interface.

The ‘Response Codes’ Tab and Export will show all URLs encountered by the Spider except those hidden by the settings detailed above.

Back to top

Do you collect data & can you see the websites I am crawling?

No. The Screaming Frog SEO Spider does not communicate any data back to us. All data is stored locally on your machine in its memory. The software does not contain any spyware, malware or adware (as verified by Softpedia) and it does not ‘phone home’ in any way. You crawl from your machine and we don’t see it! Google APIs use the OAuth 2.0 protocol for authentication and authorisation, and the data provided via Google Analytics is only accessible locally on your machine. We don’t (and technically can’t!) see or store any data ourselves.

Back to top

Why does the number of URLs crawled (or errors discovered) not match another crawler?

First of all, the free ‘lite’ version is restricted to a 500 URL crawl limit, and obviously a website might be significantly larger. If you have a licence, the main reason an SEO Spider crawl might discover more or fewer links (and indeed broken links etc) than another crawler is simply down to the different default configuration set-ups of each.

By default the SEO Spider will respect robots.txt, respect ‘nofollow’ on internal and external URLs and crawl canonicals. Other crawlers sometimes don’t respect these by default, hence why there might be differences. These can all be adjusted to your own preferences within the configuration.

While crawling more URLs might seem to be a good thing, actually it might be completely unnecessary and a waste of time and effort. So please choose wisely what you want to crawl.

We believe the SEO Spider is the most advanced crawler available, and it will often find more URLs than other crawlers as it crawls canonicals and AJAX in a similar way to Googlebot, which other crawlers might not do as standard, or within their current capability. There are other reasons as well; these may include –

  • User-agent, speed or time of the crawl may play a part.
  • Some other crawlers may use XML sitemaps for discovery and crawling. The SEO Spider does not currently crawl XML sitemaps by default; you currently have to upload them in list mode. The reason we decided against crawling XML sitemaps by default is that a sitemap shouldn’t make up for a site’s architecture. If a page is not linked to in the site’s internal link structure, and only in an XML sitemap, the sitemap will help it be discovered and indexed, but the chances are it won’t perform very well organically. This is because it won’t be passed any real PageRank, like a proper internal link. So we believe it’s useful to analyse websites via the natural crawling and indexing process of internal links to get a better idea of a site’s set-up. There are some scenarios where it does make sense to crawl XML sitemaps though, and we may make this possible in the future as an option.
  • Some other crawlers might crawl analytics landing pages, or top pages from Google Search Console. Again, this is not the natural crawling and indexing process, but it might be something we consider in the future.

Back to top

How do I create an XML sitemap?

Read our ‘How To Create An XML Sitemap‘ tutorial, which explains how to generate an XML Sitemap, include or exclude pages or images and runs through all the configuration settings available.

Back to top

How can I extract all tags matching my XPath?

From version 6.0, by default the SEO Spider will collect all XPath values, without the need to use multiple extractors and index selectors. Please read our web scraping guide for more details and XPath examples.
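For example (an illustrative extractor only), a custom extraction set to XPath with the expression:

//h3

will now return every h3 heading found on a page, rather than just the first match.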

Back to top

How do I extract multiple matches of a regex?

If you want all the H1s from the following HTML:
<html>
<head>
<title>2 h1s</title>
</head>
<body>
<h1>h1-1</h1>
<h1>h1-2</h1>
</body>
</html>

Then we can use: <h1>(.*?)</h1>

Back to top

Why am I experiencing slow down?

There are a number of reasons why you might be experiencing slow crawl rate or slow down of the SEO Spider. These include –

  • If you’re performing a large crawl, you might be reaching the 512MB memory capacity of the Spider. Learn how to increase the Spider’s memory to crawl more.
  • Slow response of the site or server (or specific directives for hitting them too hard)
  • Internet connection
  • Problems with the site you are crawling
  • Large pages or files
  • Crawling a large number of URIs

Back to top

Do you have an affiliate program?

No, we do not have an affiliate program for the SEO Spider software at this time.

Back to top

Why do the results change between crawls?

The most common reasons for this are:

  • Crawl settings are different, which can lead to different pages being crawled or different responses being given, leading to different results.
  • The site has changed, meaning the different elements of the crawl are reported differently.
  • The SEO Spider receives different responses, with specific URLs timing out or giving server errors. This could mean fewer pages are discovered overall, as well as results being inconsistent between crawls. Remember to double check under 'Response Codes > No Responses' and right click to 're-spider' any URLs that might have intermittent issues (such as timing out or server errors).

Another point that could affect crawl results is the order in which pages are found. If cookies are allowed, a page that drops a cookie that leads to certain URLs being treated differently (such as redirecting to a different language version after a language selector is used) could lead to wildly different results, depending on which cookies are picked up and when during the crawl. In these situations multiple crawls may need to be undertaken, excluding particular sections so that only a single cookie behaviour is set at a time.

Back to top

Why does the spider show in the task bar but not on screen?

The Spider is opening off screen, possibly due to a multi-monitor setup that has recently changed. To move the Spider onto the active monitor, use Alt + Tab to select it, then hold down the Windows key and use the arrow keys to move the Spider window into view.

Back to top

What IP address and ports does the SEO Spider use?

The SEO Spider runs from the machine it is installed on, so the IP address is simply that of this machine/network. You can find out what this is by typing “IP Address” into Google.

The local port used for the connection will be from the ephemeral range. The port being connected to will generally be port 80, the default http port or port 443, the default https port. Other ports will be connected to if the site being crawled or any of its links specify a different port. For example: http://www.example.com:8080/home.html

Back to top

How many users are permitted to use one licence?

Licences are individual per user. A single licence key is for a single assigned user. If you have five people from your team who wish to use the SEO Spider, you will require five user licences.

Discounts are available for 5 users or more, as shown in our pricing.

Please see section 3 of our terms and conditions for full details.

Back to top

Why is my Licence Key saying it’s invalid?

If the SEO Spider says your ‘licence key is invalid’, then please check the following, as the licence keys we provide always work.

  • Ensure you are using the username we provided for your licence key, as this isn't always the same as your account username (only lowercase characters, only alphanumeric characters, no spaces, and/or added number suffix). This is by far the most common issue we see.
  • Copy and paste the username and licence key, they are not designed to be entered manually.
  • Please also double check you have inserted the provided ‘Username’ in the ‘Username’ field and the provided ‘Licence Key’, in the ‘Licence Key’ field.
  • Ensure you are not entering a Log File Analyser licence into the SEO Spider.
  • Ensure you are not entering an SEO Spider licence into the Log File Analyser.

If your licence key still does not work, then please contact support with the details.

Back to top

I have lost my licence or invoice, how do I get another one?

If you have lost a licence key or invoice issued from the 22nd of September 2014 onwards, please login to your account to retrieve the details.

If you have lost your account password, then simply request a new password via the form.

If you purchased a licence before the 22nd of September 2014, then please contact support@screamingfrog.co.uk with the username or e-mail address you used to pay for the premium version.

Back to top

How do I buy a licence?

Simply click on the ‘buy a licence’ option in the SEO Spider ‘licence’ menu or visit our purchase a licence page directly.

You can then create an account & make payment. When this is complete, you will be provided with your licence key to open up the tool & remove the crawl limit. If you have just purchased a licence and have not received it, please check your spam / junk folder. You can also view your licence(s) details and invoice(s) by logging into your account.

Please note, the account login has only been active from the 22nd of September 2014. If you purchased before this date, it won’t be available and you can contact us for any information.

Back to top

Will an SEO Spider licence work in the Log File Analyser?

No, the Screaming Frog SEO Spider is a separate product to the Log File Analyser. They have different licences, which will need to be purchased individually. You can purchase a Log File Analyser licence here.

Back to top

Do you offer discounts on bulk licence purchases?

Yes, please see our SEO Spider licence page for more details on discounts.

Back to top

I have purchased a licence, why have I not received it?

If you have just purchased a licence and have not received your licence, please check your spam / junk folder. Licences are sent immediately upon purchase. You can also view your licence(s) details and invoice(s) by logging into your account.

Back to top

Why is my credit card payment being declined?

There are a few reasons this could happen:

  • Incorrect card details: Double check you have filled out your card details correctly.
  • Incorrect billing address: Please check the billing address you provided matches the address of the payment card.
  • Blocked by payment provider: Please contact your card issuer. Screaming Frog does not have access to failure reasons. It’s quite common for a card issuer to block international purchases.

Back to top

Do you work with resellers?

Resellers can purchase an SEO Spider licence online on behalf of a client. Please be aware that licence usernames are automatically generated from the account name entered during checkout. If you require a custom username, then please request a PayPal invoice in advance.

For resellers who are unable to purchase online with PayPal or a credit card and encumber us with admin such as vendor forms, we reserve the right to charge an administration fee of £50.

Back to top

What is the part number?

There is no part number or SKU.

Back to top

Where can I get company information?

On our contact page.

Back to top

Where can I get Form W-9 information?

Screaming Frog is a UK based company, so this is not applicable.

Back to top

Why won’t the SEO Spider start?

This is nearly always due to an out of date version of Java. If you are running the PC version, please make sure you have the latest version of Java and that you choose to install the 64-bit version if you're on a 64-bit machine (which is very likely today).

If you are running the Mac version, please make sure you have the most up-to-date version of the OS, which will update Java, or download Java manually.

Please uninstall, then reinstall the SEO Spider and try again.

Back to top

Why am I experiencing slow down or hanging upon exports & saving crawls?

This will generally be due to the SEO Spider reaching its memory limit. Please read how to increase memory.

Back to top

Why am I experiencing a ‘Could not create the Java virtual machine’ message on start up?

If you have just increased your memory allocation or updated Java and now receive a ‘Could not create the Java virtual machine’ error message, it will be due to one of the following reasons –

  • 1) You’re using the 32-bit version of Java (which is the most common reason). You need to manually choose and install the 64-bit version of Java. If you already have the 64-bit version of Java, then uninstall all versions of Java and reinstall the 64-bit version again manually.
  • 2) You have allocated more memory than you actually have available. Please check how much RAM you have available and lower it accordingly. If you have an older 32-bit machine, you probably won’t be able to increase memory much more than 1,024MB anyway.
  • 3) You have a typo in your memory setting - double check against the examples in the file.
Please note, this is covered in the memory section of the user guide as well.

Back to top

Why do I get a “Connection Refused” response?

Connection refused is displayed in the Status column when the SEO Spider's connection attempt has been refused at some point between the local machine and the website. If this happens for all sites consistently then it is an issue with the local machine/network. Please check the following:

  • You can view websites in your browser.
  • Make sure you have the latest version of Java installed.
  • Software such as ZoneAlarm, anti-virus or firewall protection software (such as the premium version of Avira Antivirus) is not blocking your machine/the SEO Spider from making requests. The SEO Spider needs to be trusted / accepted. We recommend your IT team is consulted on what might be the cause.
  • The proxy is not accidentally ‘on’, under Configuration->Proxy. Ensure the box is not ticked, or the proxy details are accurate and working.
  • If you are trying to crawl a secure site (https://) please see here.
If this is preventing you from crawling at all on a particular site, please try the following:
  • Changing the User Agent under Configuration->HTTP Header->User Agent.
If this is happening intermittently during a crawl then please try the following:
  • Adjusting the crawl speed / number of threads under Configuration->Speed.
  • In the ‘lite’ version where you cannot control the speed, try right clicking on the URL and choosing re-spider.

Back to top

Why do I get a “Connection Timeout” response?

Connection timeout occurs when the SEO Spider struggles to receive an HTTP response at all and the request times out. It can often be due to a slow responding website or server when under load, or it can be due to network issues. We recommend the following –

  • Ensure you can view the website (or any websites) in your browser and check their loading time for any issues. Hard refresh your browser to ensure you’re not seeing a cached version.
  • Increase the default response timeout configuration of 10 seconds, up to 20 or 30 seconds if the website is slow responding.
  • Decrease the speed of the crawl in the SEO Spider configuration to decrease load on any servers struggling to respond. Try 1 URL per second for example.
  • Ensure the proxy settings are not enabled accidentally and if enabled that the details are accurate.
  • Ensure that ZoneAlarm, anti-virus or firewall protection software (such as the premium version of Avira Antivirus) is not blocking your machine from making requests. The SEO Spider needs to be trusted / accepted. We generally recommend your IT team, who know your systems, are consulted on what might be the cause.

Back to top

Why do I get a “503 Service Unavailable” error response?

A 503 Service Unavailable status code occurs when a web server is temporarily unable or unwilling to handle the SEO Spider’s request, often due to load or rate limiting.

If this happens consistently and you can see the website in a browser, it could be the web server behaves differently depending on User Agent. In the premium version try adjusting the User Agent setting under Configuration->HTTP Header->User Agent. For example, try crawling as a bot, such as ‘Googlebot’, or as a browser, such as ‘Chrome’.

If this happens intermittently during a crawl, it could be due to the speed the Spider is requesting pages overwhelming the server. In the premium version of the SEO Spider you can reduce the speed of requests. If you are running the ‘lite’ version you may find that right clicking the URL and choosing re-spider will help.

Back to top

Why Do URLs Redirect to Themselves?

When a website requires cookies, this often appears in the Spider as if the starting URL is redirecting to itself, or to another URL and then back to itself (any necessary cookies are likely being dropped along the way). This can also be seen when viewing the site in a browser with cookies disabled.

​​For the spider to be able to crawl websites like this the ‘Allow Cookies’ option must first be set:

Configuration > Spider > Advanced > Allow Cookies

To bypass the redirect behaviour, as the Spider only crawls each URL once, a parameter must be added to the starting URL:

http://www.example.com/?rewrite-me

​ ​A URL rewriting rule that removes this parameter when the spider is redirected back to the starting URL must then be added:

Configuration > URL Rewriting > Remove Parameters​
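For example (using the illustrative parameter above), adding rewrite-me to the 'Remove Parameters' list tells the Spider to strip ?rewrite-me from any URLs it encounters, so the redirect back to the clean starting URL can be crawled as normal.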

The Spider should then be able to crawl normally from the starting page, now that it has any required cookies.

Back to top

Why are page titles &/or meta descriptions not being displayed/displayed incorrectly?

If the site or URL in question has page titles and meta descriptions, but one (or both!) are not showing in the SEO Spider, this is generally due to invalid HTML markup between the opening html element and the closing head element. The HTML markup between these elements in the source code has to be valid, without errors, for page titles and meta descriptions to be parsed and collected by the SEO Spider.

The SEO Spider reads up to a maximum of 20 meta tags. So if there are over 20 meta tags and the meta description is after the 20th meta tag, it will be ignored.

The SEO Spider does not execute JavaScript by default. Modifications to any HTML elements via JavaScript will not be seen by the SEO Spider. If the site uses JavaScript, amend the rendering configuration to 'JavaScript' under 'Configuration > Spider > Rendering tab > JavaScript' to crawl it. Remember to ensure JS and CSS files are not blocked.

We recommend validating the html using the free W3C markup validation tool. A really nice feature here is the ‘Show Source’ button, which can be very insightful to identify specific errors.

We recommend fixing any html markup errors and then crawling the URL(s) again for these elements to be collected.

Back to top

Does the SEO Spider crawl PDFs?

The SEO Spider will check links to PDF documents. These URLs can be seen under the PDF filter in the Internal and External tabs. It does not parse PDF documents to find links to crawl.

Back to top

Why won’t my crawl complete?

First, ensure the Spider is still crawling the site and, if so, look at what the URLs it has been finding look like. The pattern of URLs the Spider has been finding will explain why the crawl percentage is not increasing:

  • URLs seem normal – The Spider keeps finding new URLs on a very large site. Consider splitting the crawl up into sections.
  • Many similar URLs with different parameters – The Spider keeps finding the same URLs with different parameters, possibly from faceted navigation. Try setting the query string limit to 0 (Configuration->Spider, “Limit Number of Query Strings” in the “Limits” tab).
  • There are many long URLs with parts that repeat themselves – There is a relative linking error where the Spider keeps finding URLs that cause a never ending loop. Use the exclude feature to exclude the offending URLs (see the examples after this list).
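For example (illustrative patterns only), excludes such as:

.*\?colour=.*
.*\?price=.*

would stop faceted navigation parameters from being crawled, while an exclude such as:

.*/news/news/.*

would stop a relative linking loop that keeps repeating a path segment.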

Back to top

Why does the Installer take a while to start?

Because Windows Defender is running a security scan on it, this can take up to a couple of minutes. Unfortunately when downloading the file using Google Chrome it gives no indication that it is running the scan. Internet Explorer does give an indication of this, and Firefox does not scan at all. If you go directly to your downloads folder and run the installer from there you don’t have to wait for the security scan to run.

Back to top

Can I do a silent install?

Yes, by issuing the following command:

ScreamingFrogSEOSpider-VERSION.exe /S

By default this will install the SEO Spider to:
C:\Program Files (x86)\Screaming Frog SEO Spider

You can choose an alternative location by using the following command:

ScreamingFrogSEOSpider-VERSION.exe /S /D=C:\My Folder

Back to top

Do you support Macs below OS X Version 10.7.3 (& 32-Bit Macs)?

From version 2.50 the SEO Spider requires a version of Java not supported by this version of OS X. This means older 32-bit Macs (the last of which we understand were made 8-9 years ago) will not be able to use the latest version of the SEO Spider. Newer 64-bit Macs which haven’t yet updated their version of OS X will need to update their OS before installing Java.

We do still support version 2.40 for OS X versions below 10.7.3 (and 32-bit) Macs which can be downloaded here. This version has considerably less features than the current version, as described in our release history.

Back to top

The Spider GUI doesn’t have the latest flat style used in Yosemite

Unfortunately we are at the mercy of Oracle to update their Mac look and feel to more closely match the new style introduced in Mac OS X Yosemite. There is a Java bug related to this at JDK-8052173. This will be updated in a future Java release.  

Back to top

How do I provide feedback?

Feedback is welcome, please just follow the steps on the support page to submit feedback. Please note we will try to read all messages but might not be able to reply to all of them. We will update this FAQ as we receive additional questions and feedback.

Back to top

How do I use the configuration options?

You cannot use the configuration options in the lite version of the tool. You will need to buy a licence to open up this menu, which you can do by clicking the ‘buy a licence’ option in the Spider’s interface under ‘Licence’.

Back to top

What do each of the configuration options do?

Please read our user guide, specifically the configuration options section.

Back to top

How do I bulk export all images missing alt text?

You can bulk export data via the ‘bulk export’ option in the top level navigation menu. Simply choose the ‘images missing alt text’ option to export all references of images without alt text. Please see more on exporting in our user guide.

Back to top

How is the response time calculated?

It is calculated from the time it takes to issue an HTTP request and get the full HTTP response back from the server. The figure displayed on the SEO Spider interface is in seconds. Note that this figure may not be 100% reproducible as it depends very much on server load and client network activity at the time the request was made. This figure does not include the time taken to download additional resources when in JavaScript rendering mode. Each resource appears separately in the user interface with its own individual response time.

Back to top

How do I increase memory?

Please see the how to increase memory section in our user guide.

Back to top

Where can I see the pages blocked by robots.txt?

You can simply view URLs blocked via robots.txt in the UI (within the ‘Internal’ and ‘Response Codes’ tabs for example). Ensure you have the ‘Show internal URLs blocked by robots.txt’ configuration ticked under the ‘Configuration > Spider > Basic’ tab.

Disallowed URLs will appear with a ‘status’ as ‘Blocked by Robots.txt’ and there’s a ‘Blocked by Robots.txt’ filter under the ‘Response Codes’ tab, where these can be viewed.

The ‘Blocked by Robots.txt’ filter also displays a ‘Matched Robots.txt Line’ column, which provides the line number and disallow path of the robots.txt entry that’s excluding each URL. If multiple lines in robots.txt block a URL, the SEO Spider will just report on the first encountered, similar to Google within Search Console.

Please see our guide on using the SEO Spider as a robots.txt tester.

If you’re using the older 2.40 Mac version of the SEO Spider, you can view the ‘Total Blocked by robots.txt’ for a crawl on the right-hand side of the user interface in the ‘Summary’ section of the overview tab. This count includes both internal and external URLs. Currently, there isn’t a way of seeing which URLs have been blocked in the user interface. However, it is possible to get this information from the SEO Spider log file, after a crawl. Each time a URL is blocked by robots.txt, it will be reported like this:

2015-02-18 08:56:09,652 [RobotsMain 1] INFO - robots.txt file prevented the spider of 'http://www.example.com/page.html', reason 'Blocked by line 2: Disallow: http://www.example.com/'. You can choose to ignore robots.txt files in the Spider configuration.

You can view the log file(s) by either going to the location shown for ‘Log File’ under Help->Debug, or downloading and unzipping the log files from Help->Debug->Save Logs.

Back to top

Can the SEO Spider crawl staging or development sites that are password protected or behind a login?

The SEO Spider supports two forms of authentication, standards based which includes basic and digest authentication, and web forms based authentication.

Basic & Digest Authentication

There is no set-up required for basic and digest authentication, it is detected automatically during a crawl of a page which requires a login. If you visit the website and your browser gives you a pop-up requesting a username and password, that will be basic or digest authentication. If the login screen is contained in the page itself, this will be a web form authentication, which is discussed in the next section.

Often sites in development will also be blocked via robots.txt, so make sure this is not the case, or use the ‘ignore robots.txt’ configuration. Then simply insert the staging site URL and crawl; a pop-up box will appear, just like it does in a web browser, asking for a username and password. Enter your credentials and the crawl will continue as normal. You cannot pre-enter login credentials – they are entered when URLs that require authentication are crawled. This feature does not require a licence key.

Web Form Authentication

There are other web forms and areas which require you to log in with cookies for authentication in order to view or crawl them. The SEO Spider allows users to log in to these web forms within the SEO Spider’s built in Chromium browser, and then crawl the site. This feature requires a licence to use.

To log in, simply navigate to ‘Configuration > Authentication’ then switch to the ‘Forms Based’ tab, click the ‘Add’ button, enter the URL for the site you want to crawl, and a browser will pop up allowing you to log in.

Please read about crawling web form password protected sites in our user guide before using this feature. Some websites may also require JavaScript rendering to be enabled when logged in for them to be crawled.

Please note – This is a very powerful feature, and should therefore be used responsibly. The SEO Spider clicks every link on a page; when you’re logged in that may include links to log you out, create posts, install plugins, or even delete data.

Back to top

How do I block the SEO Spider from crawling my site?

The Spider obeys the robots.txt protocol. Its user agent is ‘Screaming Frog SEO Spider’, so you can include the following in your robots.txt if you wish the Spider not to crawl your site:

User-agent: Screaming Frog SEO Spider
Disallow: /

Please note – There is an option to ‘ignore’ robots.txt and change the user-agent, which is entirely the responsibility of the user.

Back to top

Why does the number of URLs crawled not match the number of results indexed in Google or errors reported within Google Search Console?

There are a number of reasons why the number of URLs found in a crawl might not match the number of results indexed in Google (via a site: query), or why errors reported in the SEO Spider might not match those in Google Search Console.

First of all, crawling and indexing are quite separate, so there will always be some disparity. URLs might be crawled, but it doesn’t always mean they will actually be indexed in Google. This is an important area to consider, as there might be content in Google’s index which you didn’t know existed, or no longer want indexed for example. Equally, you may find more URLs in a crawl than in Google’s index due to directives used (noindex, canonicalisation) or even duplicate content, low site reputation etc.

Secondly, the SEO Spider only crawls internal links of a website at that moment of time of the crawl. Google (more specifically Googlebot) crawls the entire web, so not just the internal links of a website for discovery, but also external links pointing to a website.

Googlebot’s crawl is also not a snapshot in time; it takes place over the duration of a site’s lifetime from when it’s first discovered. Therefore, you may still find old URLs in their index (perhaps from discontinued products or an old section of the site which still serves a 200 ‘OK’ response) that aren’t linked to anymore, or content that is only linked to via external sources. The SEO Spider won’t be able to discover URLs which are not linked to internally, such as orphan pages or URLs only accessible via external links.

There are other reasons as well, these may include –

  • The set-up of the SEO Spider crawl. By default the SEO Spider will respect robots.txt, respect ‘nofollow’ of internal and external URLs & crawl canonicals, but not execute JavaScript. So please check your configuration. Please remember, Google may have previously been able to access URLs which are now blocked, nofollowed etc.
  • The SEO Spider does not execute JavaScript by default. If the site is built in a JavaScript framework, or has dynamic content, adjust the rendering configuration to 'JavaScript' under 'Configuration > Spider > Rendering tab > JavaScript' to crawl it. Remember to ensure JS and CSS files are not blocked.
  • Google include URLs which are blocked via robots.txt in their search results number. Don’t forget, robots.txt just stops a URL from being crawled, it doesn’t stop the URL from being indexed and appearing in Google.
  • Google crawls XML sitemaps. The SEO Spider does not currently crawl XML sitemaps by default; you currently have to upload them in list mode. The reason we decided against crawling XML sitemaps by default is that a sitemap shouldn’t make up for a site’s architecture. If a page is not linked to in the site’s internal link structure, and only in an XML sitemap, the sitemap will help it be discovered and indexed, but the chances are it won’t perform very well organically. This is because it won’t be passed any real PageRank, like a proper internal link. So we believe it’s useful to analyse websites via the natural crawling and indexing process of internal links to get a better idea of a site’s set-up. There are some scenarios where it does make sense to crawl XML sitemaps though, and we may make this possible in the future as an option.
  • Google’s results number via a site: query can be pretty unreliable!
  • Google’s error reporting can be pretty slow and outdated!

Back to top

Can I crawl more than one site at a time?

Yes. There are two ways you can do this:

1) Open up multiple instances of the SEO Spider, one for each domain you want to crawl. Mac users check here.

2) Use list mode (Mode->List). Remove the search depth limit (Configuration->Spider->Limits, untick “Limit Search Depth”), untick “Ignore robots.txt” (Configuration->Spider->Basic), then upload your list of domains to crawl.

Back to top

Why is my sitemap missing some URIs?

Canonicalised, robots.txt blocked, noindex and paginated URIs are not included in the sitemap by default. You may choose to include these in your sitemap by ticking the appropriate checkbox(es) in the 'Pages' tab when you export the sitemap.

Please read our user guide on XML Sitemap Creation.

Back to top

Why is my regex extracting more than expected?

If you are using a regex like .* that contains a greedy quantifier you may end up matching more than you want. The solution to this is to use a regex like .*?.

For example if you are trying to extract the id from the following JSON:

"agent": { "id":"007", "name":"James Bond" }

Using "id":"(.*)" you will get:

007", "name":"James Bond

If you use "id":"(.*?)" you will extract:

007

Back to top

Why doesn’t GA data populate against my URLs?

The URLs in your chosen Google Analytics view have to match the URLs discovered in the SEO Spider crawl exactly, for data to be matched and populated accurately. If they don’t match, then GA data won’t be able to be matched and won’t populate. This is the single most common reason.

If Google Analytics data does not get pulled into the SEO Spider as you expected, then analyse the URLs under ‘Behaviour > Site Content > Landing Pages’ and ‘Behaviour > Site Content > All Pages’ depending on which dimension you choose in your query. Try clicking on the URLs to open them in a browser to see if they load correctly.

You can also export the ‘GA & GSC Not Matched’ report which shows a list of URLs returned from the Google Analytics & Search Analytics (from Search Console) APIs for your query that didn’t match URLs in the crawl. Check the URLs with source as ‘GA’ for Google Analytics specifically (those marked as ‘GSC’ are Google Search Analytics, from Google Search Console). The URLs here need to match those in the crawl for the data to be matched accurately.

If they don’t match, then the SEO Spider won’t be able to match up the data accurately. We recommend checking your default Google Analytics view settings (such as ‘default page’) and filters such as ‘extended URL’ hacks, which all impact how URLs are displayed and hence matched against a crawl. If you want URLs to match up, you can often make the required amends within Google Analytics or use a ‘raw’ unedited view (you should always have one of these ideally).

Please note – There are some very common scenarios where URLs in Google Analytics might not match URLs in a crawl, so we cover these by matching trailing and non-trailing slash URLs and case sensitivity (upper and lowercase characters in URLs). Google doesn’t pass the protocol (HTTP or HTTPS) via their API, so we also match this data automatically as well.  

Back to top

Why doesn’t the GA API data in the SEO Spider match what’s reported in the GA interface?

There are a number of reasons why data fetched via the Google API into the SEO Spider might be different to the data reported within the Google Analytics interface. First of all, we recommend triple checking that you’re viewing the exact same account, property, view, segment, date range, metrics and dimensions. LandingPagePath and PagePath will of course provide very different results, for example! If the data still doesn’t match, then there are some common reasons why –

  • The Google API can just return slightly different metrics – We’ve tested this, and sometimes the data from the API can just be a little different to what’s reported in the interface.
  • We use default sampling, and your settings in Google Analytics might be different.
  • We use ga:hostname dimension and a ga:hostname==www.yourdomain.co.uk filter, to remove other domains which might be using the same GA tracking code as your core domain. Google does not do this by default in the interface, so landing page sessions for your homepage, might be inflated for example.

We actually recommend using the Google Analytics API query explorer and viewing the data that comes back, with similar query parameters to those we use as default (obviously using the account, property and view of the site you’re testing). You should see that the data returned via the API matches pretty closely to what is reported within the SEO Spider.
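As an illustrative sketch only (the account ID, date range and hostname below are placeholders, and the exact metrics depend on your configuration), a Core Reporting API query of this shape reflects the defaults described above:

ids=ga:12345678
start-date=30daysAgo
end-date=yesterday
metrics=ga:sessions
dimensions=ga:landingPagePath,ga:hostname
filters=ga:hostname==www.yourdomain.co.uk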

Back to top

Can The SEO Spider Work On A Chromebook?

We don't have a Chromebook version of the SEO Spider. However, you can install Crouton, set up Ubuntu and download and install the Ubuntu version of the SEO Spider.

Please note, Chromebooks are not very powerful and are generally limited to 4GB of RAM. This will mean memory is restricted, and the number of URLs that can be crawled will also be limited. You can read more about SEO Spider memory in our user guide.

Back to top