This section covers some frequently asked questions about the Screaming Frog SEO Spider. The FAQ includes -
1) What Additional Features Does A Licence Provide?
2) How Do I Buy A Licence?
3) How Much Does The SEO Spider Cost?
4) Do You Offer Discounts On Bulk Licence Purchases?
5) What Payment Methods Do You Accept & From Which Countries?
6) Can I Use My Licence On More Than One Device?
7) Do You Work With Resellers?
8) I Have Purchased A Licence, Why Have I Not Received It?
9) Why Is My Licence Key Saying It’s Invalid?
10) I Have Lost My Licence, How Do I Get Another One?
11) Is It Possible To Move My Licence To A New Computer?
12) How Do I Renew My Licence?
13) Why Won’t The SEO Spider Start?
14) Why Won’t The SEO Spider Crawl My Website?
15) Why Am I Experiencing Slow Down?
16) Why Am I Experiencing Slow Down Or Hanging Upon Exports & Saving Crawls?
17) Why Does The SEO Spider Freeze?
18) Why Do I Get A “Connection Refused” Response?
19) Why Do I Get A “Connection Error” Response?
20) Why Do I Get A “Connection Timeout” Response?
21) Why Do I Get A “403 Forbidden” Error Response?
22) Why Is The Character Encoding Incorrect?
23) Why Are Page Titles &/Or Meta Descriptions Not Being Displayed?
24) How Do I View Alt Text Of Images Hosted On A CDN?
25) Why Can’t I View The Graphs?
26) Why Do I Receive A Warning Saying Chrome Does Not Support Java 7 On Mac OS X?
27) Do You Support Macs below OS X Version 10.7.3 (& 32-Bit Macs)?
28) Why Does The SEO Spider User Interface Run Slowly On My MacBook Pro?
29) How Do I Submit A Bug / Receive Support?
30) How Do I Provide Feedback?
31) How Do I Use The Configuration Options?
32) How Do I Check For Broken Links (404 Errors)?
33) What Do Each Of The Configuration Options Do?
34) How Do I Bulk Export All Inlinks To 3XX, 4XX (404 error etc) or 5XX pages?
35) How Do I Bulk Export All Images Missing Alt Text?
36) How Do I Bulk Export All Images?
37) What’s The Difference Between ‘Crawl Outside Of Start Folder’ & ‘Check Links Outside Folder’ Options?
38) How Do I Increase Memory?
39) How Does The Spider Treat Robots.txt?
40) How Many URI Can The Spider Crawl?
41) Why Does The URI Completed Total Not Match What I Export?
42) Can The SEO Spider Crawl Password Protected Sites Or Those Behind A Login?
43) How Do I Block The SEO Spider From Crawling My Site?
44) Do You Collect Data & Can You See The Websites I Am Crawling?
45) Why Does The Number of URLs Crawled Not Match The Number Of Results Indexed In Google Or Errors Reported Within Google Webmaster Tools?
46) Why Does The Number of URLs Crawled (Or Errors Discovered) Not Match Another Crawler?
We will be adding to the FAQ as we get more feedback.
1) What Additional Features Does A Licence Provide?
A licence removes the 500 URI crawl limit, allows you to save and upload crawls, opens up all the configuration options and the custom source code search feature. We also provide support for technical issues related to the SEO spider for licensed users.
In the same way as the free ‘lite’ version, there are no restrictions on the number of websites you can crawl with a licence. Licences are, however, individual per user. If you have five members of the team who would like to use the licenced version, you will need five licences.
2) How Do I Buy A Licence?
Simply click on the ‘buy a licence’ option in the SEO Spider ‘licence’ menu. This will take you to the page where you can make payment via PayPal. You will then be sent a licence key via e-mail immediately upon payment. If you have just purchased a licence and have not received it, please check your spam / junk folder. Alternatively, you can visit the SEO Spider licence page.
3) How Much Does The Screaming Frog SEO Spider Cost?
As standard you download the lite version of the tool which is free. However, without a licence the SEO spider is limited to crawling a maximum of 500 URIs each crawl. The configuration options of the spider and the custom source code search feature are also only available in the licensed version.
For £99 per annum you can purchase a licence, which opens up the spider’s configuration options and removes the 500 URI crawl limit. A licence is required per individual using the tool. When the licence expires, the SEO spider returns to the restricted free lite version.
4) Do You Offer Discounts On Bulk Licence Purchases?
Yes, please see our SEO spider licence page for more details on discounts.
5) What Payment Methods Do You Accept & From Which Countries?
We use PayPal as our payment system, so you can make payment via a PayPal account, or simply by credit or debit card. In some countries, PayPal no longer accepts Amex cards, so please use another card or method if you experience difficulties. If none of the above methods are suitable, then please contact us and we can set you up to pay via bank transfer.
We accept payments from most countries worldwide. The price of the SEO spider is in pound sterling (GBP), so if you are outside of the UK, please take a look at the current exchange rate to work out the cost. (The automatic currency conversion will depend on the current foreign exchange rate and perhaps your card issuer.)
We do not accept cheques (or checks!)
6) Can I Use My Licence On More Than One Device?
The licence allows you to install 1 instance of the Software on one desktop computer and 1 portable (laptop) computer on the condition that only the single authorised licence holder will operate the products. Licences are individual per user. Please see section 3 of our terms and conditions for full details.
7) Do You Work With Resellers?
Resellers can purchase an SEO spider licence online on behalf of a client. We do not offer discounted rates for resellers. Please be aware that licence usernames are automatically generated from PayPal details. If you require a custom username, then please request a PayPal invoice in advance.
For resellers who are unable to purchase online with PayPal or a credit card and encumber us with admin such as vendor forms, we reserve the right to charge an administration fee of £50. To answer common reseller questions, the software needs to be downloaded from our website, the licence key is delivered electronically by email, there is no part number or SKU and the price is GBP at £99 per year per user.
8) I Have Purchased A Licence, Why Have I Not Received It?
If you have just purchased a licence and have not received it, please check your spam / junk folder. Licences are sent immediately upon purchase. However, please check your payment method: if you have paid via an e-cheque, then the licence will only be sent once it has cleared. PayPal explains this as well.
9) Why Is My Licence Key Saying It’s Invalid?
Please ensure you have copied your licence details into the username and licence key fields correctly, without any additional blank spaces on the end. If your licence key still does not work, then please contact support with the details.
10) I Have Lost My Licence, How Do I Get Another One?
If you have lost your licence or can’t remember it, please contact support[at]screamingfrog.co.uk with your username or the e-mail address you used to pay for the premium version.
11) Is It Possible To Move My Licence To A New Computer?
Yes, please take a note of your licence key (you can find this under ‘licence’ and ‘enter licence key’ in the software), then uninstall the SEO spider on the old computer, before installing and entering your licence on the new machine. If you experience any issues during this move, please contact our support.
12) How Do I Renew My Licence?
At the moment the best way is to simply purchase another licence upon expiry.
13) Why Won’t The SEO Spider Start?
This is nearly always due to an out of date version of Java. If you are running the PC version, please make sure you have the latest version of Java. If you are running the Mac version, please make sure you have the most up to date version of the OS which will update Java. Please uninstall, then reinstall the spider and try again.
14) Why Won’t The SEO Spider Crawl My Website?
15) Why Am I Experiencing Slow Down?
There are a number of reasons why you might be experiencing slow crawl rate or slow down of the spider. These include -
16) Why Am I Experiencing Slow Down Or Hanging Upon Exports & Saving Crawls?
This will generally be due to the SEO spider reaching its memory limit. Please read how to increase memory.
17) Why Does The SEO Spider Freeze?
This will generally be due to the SEO spider reaching its memory limit. Please read how to increase memory.
18) Why Do I Get A “Connection Refused” Response?
Connection refused or connection error is a message from the server when it has refused an http request from the SEO spider.
For individual site cases this is generally due to too many concurrent requests at a rate that the site/server does not allow.
In the premium version of the SEO spider you can reduce the speed of requests and hence avoid the connection refused errors. In the free ‘lite’ version of the tool, you can try the ‘re-spider’ option upon right clicking the URL to re-try crawling it.
If you’re experiencing a ‘connection refused’ error on every website you attempt to crawl, then there is a high possibility that something is blocking the SEO spider from making requests. Please ensure –
19) Why Do I Get A “Connection Error” Response?
A connection error, or connection timeout, is a message shown when there is an issue receiving a response at all.
This is generally due to network issues or proxy settings.
Please check that you can connect to the internet. If you have changed the SEO spider proxy settings (under configuration, proxy), please ensure that these are correct (or they are switched off).
20) Why Do I Get A “Connection Timeout” Response?
Connection timeout occurs when the SEO Spider struggles to receive an http response at all and the request times out. It can often be due to a slow responding website or server when under load, or it can be due to network issues. We recommend the following –
21) Why Do I Get A “403 Forbidden” Error Response?
The 403 forbidden status code occurs when your server denies access to the SEO spider’s request for some reason. This is generally down to too many concurrent requests from the same IP (i.e. the SEO spider is crawling the website faster than the server likes).
In the premium version of the SEO spider you can reduce the speed of requests and hence avoid the 403 forbidden errors.
This can also be due to the user-agent of the requests, so we also recommend changing the user-agent to Googlebot, or other user-agents, in the premium version of the SEO spider.
22) Why Is The Character Encoding Incorrect?
The SEO spider determines the character encoding of a web page by the “charset=” parameter in the http Content-Type header, eg:
Content-Type: text/html; charset=UTF-8
You can see this in the SEO spider’s interface in the ‘Content’ columns (in various tabs). If this is not present in the http header, the SEO spider will then read the first 512 bytes of the html page to see if there is a charset within the html.
For example -
<meta http-equiv="Content-Type" content="text/html; charset=windows-1255">
If no charset is found there either, we continue assuming the page is UTF-8.
The spider does log any character encoding issues. If there is a specific page that is causing problems, perform a crawl of only that page by setting the maximum number of URLs to crawl to 1, then crawling the URL. You may see a line in the trace.txt log file (the location is C:\Users\Yourprofile\.ScreamingFrogSEOSpider\trace.txt):
20-06-12 20:32:50 INFO seo.spider.net.InputStreamWrapper:logUnsupportedCharset Unsupported Encoding ‘windows-’ reverting to ‘UTF-8’ on page ‘http://www.example.com’ java.io.UnsupportedEncodingException: windows-. This could be an error on the site or you may need to install an additional language pack.
The solution is to specify the format of the data, either via the Content-Type field of the accompanying http header, or by ensuring the charset parameter in the source code is within the first 512 bytes of the html, inside the head element.
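The detection order described above (http Content-Type header first, then the first 512 bytes of the html, then a UTF-8 fallback) can be sketched in a few lines. This is an illustrative approximation, not the SEO spider’s actual code:

```python
import re

def detect_charset(content_type_header, html_bytes):
    """Approximate the described logic: Content-Type header first,
    then a charset declared in the first 512 bytes of the HTML,
    else fall back to UTF-8. (Illustrative sketch only.)"""
    # 1) charset= parameter in the HTTP Content-Type header
    if content_type_header:
        match = re.search(r'charset=([\w-]+)', content_type_header, re.I)
        if match:
            return match.group(1)
    # 2) charset declared within the first 512 bytes of the HTML
    head = html_bytes[:512].decode('ascii', errors='ignore')
    match = re.search(r'charset=["\']?([\w-]+)', head, re.I)
    if match:
        return match.group(1)
    # 3) nothing found: assume UTF-8
    return 'UTF-8'
```

A page whose header says `charset=windows-1255` would therefore be decoded as windows-1255 even if the meta tag disagreed, since the header is checked first.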
23) Why Are Page Titles &/Or Meta Descriptions Not Being Displayed?
If the site or URL in question has page titles and meta descriptions, but one (or both!) are not showing in the SEO Spider, this is generally due to invalid html mark-up between the opening html element and the closing head element. The html mark-up between these elements in the source code has to be valid, without errors, for page titles and meta descriptions to be parsed and collected by the SEO Spider.
We recommend validating the html using the free W3C mark-up validation tool. A really nice feature here is the ‘Show Source’ button, which can be very insightful for identifying specific errors.
We recommend fixing any html mark-up errors and then crawling the URL(s) again for these elements to be collected.
24) How Do I View Alt Text Of Images Hosted On A CDN?
A content delivery network (or CDN) will operate from either an external domain or a sub domain and hence, if one is being used to serve images, they will typically appear under the ‘External’ tab, rather than the ‘Images’ tab. Please ensure robots.txt is not blocking the SEO Spider from crawling the CDN. To view alt text of the images, you can still use the ‘image info’ tab in the lower window pane.
To export all image alt text, simply use the ‘bulk export’ and ‘all in links’ export. When you have the data in a spreadsheet, simply filter the ‘type’ column to ‘IMG’ and the destination URL to ‘does not contain’ ‘yourwebsite.com’. This will then display all images on the CDN and their alt text.
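The same spreadsheet filter can be applied in a few lines of Python if you prefer. The column names (‘Type’, ‘Destination’, ‘Alt Text’) reflect a typical ‘all in links’ export but should be checked against your own file, and the inline sample data below is purely hypothetical:

```python
import csv
import io

# A tiny inline sample standing in for the exported 'All In Links' CSV.
# In practice you would open the file you saved from the 'bulk export' menu.
export = io.StringIO(
    "Type,Destination,Alt Text\n"
    "AHREF,http://yourwebsite.com/page,\n"
    "IMG,http://yourwebsite.com/logo.png,Logo\n"
    "IMG,http://cdn.example.net/hero.jpg,Hero banner\n"
)

# Keep only image links whose destination is not on your own domain,
# i.e. images served from a CDN or other external host.
cdn_images = [row for row in csv.DictReader(export)
              if row['Type'] == 'IMG'
              and 'yourwebsite.com' not in row['Destination']]

for row in cdn_images:
    print(row['Destination'], '->', row['Alt Text'])
```

The two conditions mirror the manual filter: ‘type’ equals ‘IMG’ and destination ‘does not contain’ your own domain.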
25) Why Can’t I View The Graphs?
If you’re using Ubuntu and are unable to view the graphs, this is because the SEO Spider makes use of the JavaFX library for its graphing function, which requires Java 7 from Oracle to be installed. This is an optional step, but required if you would like the spider to display graphs/charts etc. Please see our Java 7 Installation Guide.
26) Why Do I Receive A Warning Saying Chrome Does Not Support Java 7 On Mac OS X?
This warning is about running Java in your web browser, not JavaScript. If you download Java 7 and need to run Java content (Java Applets for example) you will have to visit those web sites using either Safari or Firefox. Google are working on a 64-bit version of Chrome for Mac OS X that will alleviate this issue.
27) Do You Support Macs below OS X Version 10.7.3 (& 32-Bit Macs)?
Version 2.50 requires Java 7 to run, which is only available from version 10.7.3 and above. This means older 32-bit Macs (the last of which we understand were made 7-8 years ago) will not be able to use the latest version of the SEO Spider, nor will newer 64-bit Macs which haven’t updated their OS X.
We do still support version 2.40 for OS X versions below 10.7.3 (and 32-bit Macs), which can be downloaded here. The only difference is the graphs, which require Java 7.
28) Why Does The SEO Spider User Interface Run Slowly On My MacBook Pro?
There is a bug in the Java graphics library (JavaFX) that the SEO Spider uses for the graphs. This only affects the latest MacBook Pro – the Late 2013 model with the Intel Iris Pro GPU. A bug has been raised with Oracle, who are looking into this; we currently don’t have any information on when a fix will be released. We will update this FAQ when we have that information.
In the meantime there are two workarounds available:
1) Set the SEO Spider to be opened in “Low Resolution Mode”. You can read more about how to do this on the Apple website.
2) We do still support version 2.40, which you can download and use; it simply doesn’t have the graphs.
29) How Do I Submit A Bug / Receive Support?
Please follow the steps on the support page so we can help you as quickly as possible. Please note, we only offer full support for premium users of the tool although we will generally try and fix any issues.
30) How Do I Provide Feedback?
Feedback is welcome, please just follow the steps on the support page to submit feedback. Please note we will try to read all messages but might not be able to reply to all of them. We will update this FAQ as we receive additional questions and feedback.
31) How Do I Use The Configuration Options?
You cannot use the configuration options in the lite version of the tool. You will need to buy a licence to open up this menu, which you can do by clicking the ‘buy a licence’ option in the spider’s interface under ‘licence’.
32) How Do I Check For Broken Links (404 Errors)?
Read our ‘How To Find Broken Links Using The SEO Spider’ tutorial, which explains how to find broken links, view the source of the errors and export them in bulk.
33) What Do Each Of The Configuration Options Do?
34) How Do I Bulk Export All Inlinks To 3XX, 4XX (404 error etc) or 5XX pages?
You can bulk export data via the ‘bulk export’ option in the top level navigation menu. You can then choose to export all links discovered or all in links to specific status codes such as 2XX, 3XX, 4XX or 5XX responses. For example, selecting the ‘Client Error 4XX In Links’ option will export all in links to all error pages (such as 404 error pages). Please see more on exporting in our user guide.
35) How Do I Bulk Export All Images Missing Alt Text?
You can bulk export data via the ‘bulk export’ option in the top level navigation menu. Simply choose the ‘images missing alt text’ option to export all references of images without alt text. Please see more on exporting in our user guide.
36) How Do I Bulk Export All Image Alt Text?
You can bulk export data via the ‘bulk export’ option in the top level navigation menu. Simply choose the ‘all in links’ option to export all images and associated alt text found in the crawl. This export actually includes data for all link instances found in the crawl, so please filter for images using the ‘type’ column in Excel. Please see more on exporting in our user guide.
37) What’s The Difference Between ‘Crawl Outside Of Start Folder’ & ‘Check Links Outside Folder’?
‘Crawl outside of start folder’ under the ‘include’ feature means you can crawl the entire website from anywhere within it. As an example, if you start a crawl at www.example.com/example/ it will crawl the whole website. So this just provides nice flexibility on where you start, which helps if some (sometimes poor!) set-ups have ‘homepages’ as sub folders.
The ‘check links outside of folder’ option is different. It provides the ability to crawl ‘within’ a sub folder, but still see details on any URLs that they link out to which are outside of that sub folder. But it won’t crawl any further than this! An example –
Say you started a crawl at www.example.com/example/ and it linked to www.example.com/different/, which returns a 404 page.
If you unticked the ‘check links outside of folder’ option, it wouldn’t crawl this 404 page as it sits outside the start folder. With it ticked, this page will be included under the ‘internal’ tab as a 404.
We felt users sometimes need to know about potential issues which start within the start folder, but which link outside, while at the same time not needing to crawl the entire website! This option provides that flexibility.
38) How Do I Increase Memory?
39) How Does The Spider Treat Robots.txt?
The Screaming Frog SEO Spider is robots.txt compliant. It checks robots.txt in the same way as Google. So it will check robots.txt of the (sub) domain and follow directives for all robots and specifically any for Googlebot. The tool also supports URL matching of file values (wildcards * / $) like Googlebot. Please see the above document for more information or our robots.txt section in the user guide. You can turn this feature off in the premium version.
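As a rough illustration of how wildcard matching of file values works, a robots.txt path pattern can be translated to a regular expression, with ‘*’ matching any sequence of characters and a trailing ‘$’ anchoring the match to the end of the URL. This is a simplified sketch of the matching rules, not the SEO Spider’s actual implementation:

```python
import re

def robots_pattern_matches(pattern, path):
    """Match a robots.txt path value against a URL path the way the
    wildcard rules describe: '*' matches any run of characters and a
    trailing '$' anchors the pattern to the end of the URL.
    (Illustrative sketch only.)"""
    anchored = pattern.endswith('$')
    if anchored:
        pattern = pattern[:-1]
    # Escape every literal character; expand '*' to '.*'
    regex = ''.join('.*' if ch == '*' else re.escape(ch) for ch in pattern)
    regex = '^' + regex + ('$' if anchored else '')
    return re.match(regex, path) is not None
```

So a rule like `Disallow: *.pdf$` would match `/files/guide.pdf` but not `/files/guide.pdf?dl=1`, because the `$` anchors the match to the end of the URL.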
40) How Many URI Can The Spider Crawl?
The spider cannot crawl an unlimited number of URI; it is restricted by the memory allocated. There is not a set number of pages it can crawl, as it is dependent on the complexity of the site and a number of other factors. Generally speaking, with the standard memory allocation of 512MB the spider can crawl between 10K and 100K URI of a site. You can increase the SEO spider’s memory.
We recommend crawling large sites in sections. You can use the configuration menu to just crawl html (rather than images, CSS or JS) or exclude certain sections of the site. Alternatively if you have a nicely structured IA you can crawl by directory (/holidays/, /blog/ etc). The tool was not built to crawl entire sites with hundreds of thousands of pages to pick up every single issue as it currently uses RAM rather than a hard disk database.
41) Why Does The URI Total Not Match What I Export?
No single tab allows you to export all URI. The completed total will include both internal and external URI, for example. The completed URI total is also the number of URI the spider has encountered, so this might include URI not shown in the interface, such as those ignored due to robots.txt, or external links with ‘nofollow’.
42) Can The SEO Spider Crawl Sites That Are Password Protected Or Behind a Login?
The SEO spider supports basic and digest authentication; upon crawling a website that uses either, a pop-up box will appear asking for a username and password.
43) How Do I Block The SEO Spider From Crawling My Site?
The spider obeys robots.txt protocol. Its user agent is ‘Screaming Frog SEO Spider’ so you can include the following in your robots.txt if you wish the spider not to crawl your site -
User-agent: Screaming Frog SEO Spider
Disallow: /
Please note – there is an option to ‘ignore’ robots.txt and to change the user-agent; use of these is entirely the responsibility of the user.
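You can sanity-check a blocking rule like the one above with Python’s standard library robots.txt parser. This is just an illustration of the directive’s effect on the ‘Screaming Frog SEO Spider’ user-agent, not how the SEO Spider itself reads robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks only the SEO Spider's user-agent.
robots_txt = """\
User-agent: Screaming Frog SEO Spider
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The SEO Spider's user-agent is blocked from every path...
blocked = parser.can_fetch('Screaming Frog SEO Spider', '/any/page.html')
# ...while other agents, such as Googlebot, remain free to crawl.
allowed = parser.can_fetch('Googlebot', '/any/page.html')
```

Here `blocked` comes back `False` and `allowed` comes back `True`, confirming the rule only affects the named user-agent.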
44) Do You Collect Data & Can You See The Websites I Am Crawling?
No. The Screaming Frog SEO Spider does not communicate any data back to us. All data is stored locally on your machine in its memory. The software does not contain any spyware, malware or adware (as verified by Softpedia) and it does not ‘phone home’ in any way. You crawl from your machine and we don’t see it!
45) Why Does The Number of URLs Crawled Not Match The Number Of Results Indexed In Google Or Errors Reported Within Google Webmaster Tools?
There are a number of reasons why the number of URLs found in a crawl might not match the number of results indexed in Google (via a site: query), or why errors reported in the SEO Spider might not match those in Google Webmaster Tools.
First of all, crawling and indexing are quite separate, so there will always be some disparity. URLs might be crawled, but it doesn’t always mean they will actually be indexed in Google. This is an important area to consider, as there might be content in Google’s index which you didn’t know existed, or no longer want indexed for example. Equally, you may find more URLs in a crawl than in Google’s index due to directives used (noindex, canonicalisation) or even duplicate content, low site reputation etc.
Secondly, the SEO Spider only crawls the internal links of a website at the moment in time of the crawl. Google (more specifically Googlebot) crawls the entire web, so not just the internal links of a website for discovery, but also external links pointing to a website. Googlebot’s crawl is also not a snapshot in time; it runs over the duration of a site’s lifetime from when it’s first discovered. Therefore, you may find old URLs (perhaps from discontinued products, or an old section of the site, which still serve a 200 ‘OK’ response) or content that is only linked to via external sources in their index. The SEO Spider won’t be able to discover URLs which are not linked to internally, like orphan pages or URLs only accessible by external links.
There are other reasons as well; these may include –
46) Why Does The Number of URLs Crawled (Or Errors Discovered) Not Match Another Crawler?
First of all, the free ‘lite’ version is restricted to a 500 URL crawl limit and obviously a website might be significantly larger. If you have a licence, the main reason an SEO Spider crawl might discover more or fewer links (and indeed broken links etc) than another crawler is simply down to the different default configuration set-ups of each.
By default the SEO Spider will respect robots.txt, respect ‘nofollow’ of internal and external URLs & crawl canonicals. But other crawlers sometimes don’t respect these by default, hence why there might be differences. Obviously these can all be adjusted to your own preferences within the configuration.
While crawling more URLs might seem to be a good thing, actually it might be completely unnecessary and a waste of time and effort. So please choose wisely what you want to crawl.
We believe the SEO Spider is the most advanced crawler available, and it will often find more URLs than other crawlers, as it crawls canonicals and AJAX just like Googlebot, which other crawlers might not have as standard, or within their current capability.
There are other reasons as well; these may include –