SEO Spider Configuration


Spider Basic Tab

These settings can be accessed from the menu under Configuration->Spider.

Check Images – Untick this box if you do not want to crawl images. (Please note, we check the link but don’t crawl the content.) This prevents the spider checking images linked to using the image tag (img src=”image.jpg”). Images linked to via any other means will still be checked, for example, using an anchor tag (a href=”image.jpg”).
 
Check CSS – Untick this box if you do not want to crawl CSS. 

Check JavaScript – Untick this box if you do not want to crawl JavaScript. 

Check SWF – Untick this box if you do not want to crawl flash files. 

Check External Links – Untick this box if you do not want to crawl any external links. 

Check Links Outside Of Start Folder – Untick this box if you do not want to check links outside of the sub folder you start from. This option provides the ability to crawl within a start sub folder, but still check links that those URLs point to which are outside of the start folder.

Follow Internal or External ‘nofollow’ – By default the spider will not crawl internal or external links with the ‘nofollow’ attribute or external links from pages with the meta nofollow tag. If you would like the spider to crawl these, simply tick the relevant option. 

Crawl All Subdomains – By default the SEO spider will only crawl the subdomain you crawl from and treat all other subdomains encountered as external sites. To crawl all subdomains of a root domain, use this option.  

Crawl Outside Of Start Folder – By default the SEO spider will only crawl the subfolder (or sub directory) you crawl from forwards. However, if you wish to start a crawl from a specific sub folder, but crawl the entire website, use this option. 

Crawl Canonicals – By default the SEO spider will crawl canonicals (canonical link elements or HTTP header) and use the links contained within for discovery. If you do not wish to crawl canonicals, then please untick this box. Please note that canonicals will still be reported and referenced in the SEO Spider, but they will not be crawled for discovery.

Ignore robots.txt – By default the spider will obey the robots.txt protocol and will not crawl a site if it’s disallowed via robots.txt. However, this option allows you to ignore the protocol, which is the responsibility of the user. Note that with this option enabled the SEO spider will not even download the robots.txt file, so ALL robots directives will be ignored.

Spider Limits Tab

Limit Search Total – The free version of the software will crawl a maximum of 500 URIs. This limit is removed in the licensed version of the tool, but you can enter any number here for greater control over the number of pages you wish to crawl.

Limit Search Depth – You can choose how deep the spider crawls a site (in terms of links away from your chosen start point). 

Limit Max URL Length To Crawl – Control the length of URLs that the SEO Spider will crawl.  

Limit Max Folder Depth – Control the number of folders (or sub directories) the SEO Spider will crawl.  

Limit Number of Query Strings – Control the number of query string parameters (?x=) the SEO Spider will crawl.  
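
For example, a hypothetical URL such as ‘www.example.com/page.php?page=2&sort=price’ contains two query string parameters (‘page’ and ‘sort’), so setting this limit to ‘1’ would prevent it from being crawled.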

Spider Advanced Tab

Allow Cookies – By default the SEO spider does not accept cookies, just like a search bot. However, you can choose to accept cookies by ticking this box.

Request Authentication – By default the SEO spider will show a login box when a requested URL requires authentication. This option can be switched off.

Pause On High Memory Usage – The SEO spider will automatically pause when a crawl has reached its memory allocation and display a ‘high memory usage’ message. However, you can choose to turn this safeguard off completely.

Always Follow Redirects – This feature allows the SEO Spider to follow redirects until the final redirect target URL in list mode, ignoring crawl depth. This is particularly useful for site migrations, where URLs may perform a number of 3XX redirects before they reach their final destination. To view the chain of redirects, we recommend using the ‘redirect chains’ report.

Respect noindex – This option means URLs with ‘noindex’ will not be reported in the SEO Spider.  

Respect Canonical – This option means URLs which have been canonicalised to another URL will not be reported in the SEO Spider.

Response Timeout – By default the SEO spider will wait 10 seconds to get any kind of HTTP response from a URL. You can increase the length of waiting time, which is useful for very slow websites.

5XX Response Retries – This option provides the ability to automatically re-try 5XX responses. Often these responses can be temporary, so re-trying a URL may provide a 2XX response.  

Max Redirects To Follow – This option provides the ability to control the number of redirects the SEO Spider will follow.  

Spider Preferences Tab

Page Title & Meta Description Width – This option provides the ability to control the character and pixel width limits in the SEO Spider filters in the page title and meta description tabs. For example, changing the minimum pixel width default number of ‘200’, would change the ‘Below 200 Pixels’ filter in the ‘Page Titles’ tab. This allows you to set your own character and pixel width based upon your own preferences.  

Other – These options provide the ability to control the character length of URLs, h1, h2 and image alt text filters in their respective tabs. You can also control the max image size.  

URL Rewriting

The URL rewriting feature allows you to rewrite URLs on the fly. For the majority of cases, the ‘remove parameters’ and common options (under ‘options’) will suffice. However, we do also offer an advanced regex replace feature which provides further control.

Remove Parameters

This feature allows you to automatically remove parameters in URLs. This is extremely useful for websites with session IDs or lots of parameters which you wish to remove. For example –

If the website has session IDs which make the URLs appear something like this: ‘example.com/?sid=random-string-of-characters’, then to remove the session ID you just need to add ‘sid’ (without the apostrophes) within the ‘parameters’ field in the ‘remove parameters’ tab.

[Screenshot: URL Rewriting configuration]

The SEO spider will then automatically strip the session ID from the URL. You can test to see how a URL will be rewritten by our SEO spider under the ‘test’ tab.
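
For instance, with ‘sid’ in the parameters field, a hypothetical URL would be rewritten as follows –

Before: www.example.com/category.html?sid=random-string-of-characters
After: www.example.com/category.html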

[Screenshot: URL rewriting test tab]

Options

Common options are included under this section. The ‘lowercase discovered URLs’ option does exactly what it says: it converts all URLs crawled into lowercase, which can be useful for websites with case sensitivity issues in URLs.
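
For instance, with this option ticked, a hypothetical URL would be rewritten as follows –

Before: www.example.com/About-Us/
After: www.example.com/about-us/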

Regex Replace

This advanced feature runs against each URL found during a crawl. It replaces each substring of a URL that matches the regex with the given replace string. The “Regex Replace” feature can be tested in the “Test” tab of the “URL Rewriting” configuration window.

Examples are:

1) Changing all links to example.com to be example.co.uk

Regex: \.com
Replace: .co.uk

(The dot is escaped so the regex matches a literal full stop, rather than any character.)

2) Changing all links containing ‘page=’ followed by a number to a fixed number, e.g.

www.example.com/page.php?page=1
www.example.com/page.php?page=2
www.example.com/page.php?page=3
www.example.com/page.php?page=4

To make all these go to www.example.com/page.php?page=1

Regex: page=\d+
Replace: page=1

3) Removing the www. domain from any URL by using an empty Replace. If you want to remove a query string parameter, please use the ‘Remove Parameters’ feature – regex is not the correct tool for this job!

Regex: www\.
Replace:
 

Include

This feature allows you to control which URL paths the SEO spider will crawl via regex. It narrows the default search by only crawling the URLs that match the regex, which is particularly useful for larger sites or sites with less intuitive URL structures. Matching is performed on the URL-encoded version of the URL.

The page that you start the crawl from must have an outbound link which matches the regex for this feature to work. (Obviously if there is not a URL which matches the regex from the start page, the SEO spider will not crawl anything!).
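
For instance, to only crawl URLs containing ‘blog’ on a hypothetical site, the regex would simply be –

.*blog.*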

 

Exclude

This allows you to exclude URLs from a crawl by supplying a list of regular expressions (regex). The exclude list is applied to new URLs discovered during the crawl; it is not applied to the initial URL(s) supplied in crawl or list mode. Changing the exclude list during a crawl will only affect newly discovered URLs from then on – it will not be applied retrospectively to the list of pending URLs. Matching is performed on the URL-encoded version of the URL.

Here are some common examples –
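
To exclude a specific page (a hypothetical URL):
http://www.example.com/do-not-crawl-this-page.html

To exclude a sub folder:
http://www.example.com/do-not-crawl-this-folder/.*

To exclude all URLs containing a particular parameter, such as ‘price’:
.*\?price.*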

You can also view our video guide about the exclude feature in the SEO Spider.

Speed

This feature allows you to control the speed of the spider, either by number of concurrent threads or by URLs requested per second.

When reducing speed, it’s always easiest to control the crawl rate with the ‘Max URI/s’ option, which sets the maximum number of URL requests per second. For example, the screenshot below would mean crawling at 1 URL per second –

[Screenshot: SEO Spider speed configuration]

The ‘Max Threads’ option can simply be left alone.

Increasing the number of threads allows you to significantly increase the speed of the SEO spider.

However, please use this responsibly, as setting a high number of threads will increase the number of HTTP requests made to the server and can impact a site’s response times. We recommend agreeing a crawl rate with the webmaster first, monitoring response times and adjusting the speed if there are any issues.

 

User Agent

The user-agent switcher has inbuilt preset user agents for Googlebot, Bingbot, Yahoo! Slurp, various browsers and more. This feature also has a custom user-agent setting which allows you to specify your own user agent.
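
For instance, a custom user-agent identifying your own crawler might look like this (a hypothetical example) –

Mozilla/5.0 (compatible; MyAuditBot/1.0; +http://www.example.com/bot.html)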

Details on how the SEO Spider handles robots.txt can be found here.

 

Custom Search

The SEO Spider allows you to find anything you want in the source code of a website. The custom regex search feature checks the source code of every page you crawl for whatever you wish to find. There are ten filters in total under the ‘custom’ configuration menu, which allow you to input your regex and find pages that either ‘contain’ or ‘do not contain’ your chosen input. You cannot ‘scrape’ or extract data from HTML elements using this feature at the moment.

The pages that do or do not contain these can be found under the ‘custom’ tab, using the filter number which matches the one in your configuration. For example, you may wish to choose ‘contains’ for a phrase like ‘Out of Stock’, as you want to find any pages which display it. When searching for something like Google Analytics code, it makes more sense to choose the ‘does not contain’ filter to find pages that do not include the code (rather than just listing all those that do!). For example –

[Screenshot: custom source code search]

In the example above, any pages with ‘out of stock’ on them would appear in the custom tab under filter 1. Any pages on which the spider could not find the Analytics UA number would be listed under filter 2.

Please remember – the custom search checks the HTML source code of a website, which might not be the same as the text rendered in your browser. Hence, please ensure you are searching for the correct query in the source code.

 

Custom Extraction

The custom extraction feature allows you to collect any data from the HTML of a URL. Extraction is performed on the static HTML returned by internal HTML pages with a 2xx response code. The SEO Spider supports XPath, CSS Path and regex to perform data extraction.

When using XPath or CSS Path to collect HTML, you can choose whether to extract the full HTML element, its inner HTML or the text content.

You’ll receive a tick next to your regex, XPath or CSS Path if the syntax is valid. If you’ve made a mistake, a red cross will remain in place!

The results of the data extraction appear under the ‘custom’ tab and ‘extraction’ filter. They are also included as columns within the ‘Internal’ tab.

Some extraction examples include the following –

Google Analytics ID

["'](UA-.*?)["']

[Screenshot: Google Analytics ID extraction]

The data extracted is –

[Screenshot: Google Analytics ID extracted]

Additional Headings

By default, the SEO Spider only collects h1s and h2s. However, if you would like to collect h3s, the XPath to collect the first couple of h3s in the code would be –

(//h3)[1]
(//h3)[2]

(The parentheses ensure these select the first and second h3 in the whole document, rather than the first h3 within each parent element.)

[Screenshot: h3 extraction]

And the first couple of h3s on our site are as follows –

[Screenshot: h3s extracted]

Mobile Annotations

If you wanted to pull mobile annotations from a website, you might use an XPath such as –

//link[contains(@media, '640') and @href]/@href
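
This would match a typical mobile annotation such as the following (a hypothetical example) and return the href value –

<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page.html">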

[Screenshot: mobile annotations extraction]

Which for the Huffington Post would return –

[Screenshot: mobile annotations extracted]

Hreflang

You will need to count how many hreflang annotations there are on a page first, before compiling this XPath. However, to collect the first couple, the XPath would be –


(//*[@hreflang])[1]
(//*[@hreflang])[2]

etc.

[Screenshot: hreflang custom extraction]

The above will collect the entire HTML element, with the link and hreflang value.

[Screenshot: hreflang extracted]

So, perhaps you want just the hreflang values; in that case, you can specify the attribute by appending /@hreflang.


(//*[@hreflang])[1]/@hreflang
(//*[@hreflang])[2]/@hreflang

Which would simply return the language value, like ‘en-GB’ for example.

Social Meta Tags

You may wish to extract social meta tags, such as Facebook Open Graph tags or Twitter Cards. For example, the XPath would be –


//meta[starts-with(@property, 'og:title')][1]/@content
//meta[starts-with(@property, 'og:description')][1]/@content
//meta[starts-with(@property, 'og:type')][1]/@content

etc

Schema

You may wish to collect the types of various Schema on a page, so the set-up might be –


(//*[@itemtype])[1]/@itemtype
(//*[@itemtype])[2]/@itemtype

etc
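
For a hypothetical page marked up with schema.org microdata, this might return values such as –

http://schema.org/Organization
http://schema.org/WebPage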

[Screenshot: schema extraction]

Email Addresses

Perhaps you want to collect email addresses from your website or other websites; the XPath might be something like –


//a[starts-with(@href, 'mailto')][1]
//a[starts-with(@href, 'mailto')][2]

etc

[Screenshot: email address extraction]

From our website, this would return the two email addresses we have in the footer on every page –

[Screenshot: email address extracted]

 

Proxy

This feature (Configuration->Proxy) allows you to configure the SEO spider to use a proxy server. You will need to configure the address and port of the proxy in the configuration window. To disable the proxy server, untick the ‘Use Proxy Server’ option.

 

Google Analytics Integration

You can connect to the Google Analytics API and pull in data directly during a crawl. The SEO Spider can fetch user and session metrics, as well as goal conversions and ecommerce (transactions and revenue) data for landing pages, so you can view your top performing pages when performing a technical or content audit.

If you’re running an AdWords campaign, you can also pull in impressions, clicks, cost and conversion data, and the SEO Spider will match your destination URLs against the site crawl, too. You can also collect other metrics of interest, such as AdSense data (ad impressions, clicks, revenue etc.), site speed or social activity and interactions.

To set this up, start the SEO Spider and go to ‘Configuration > API Access > Google Analytics’.

[Screenshot: Google Analytics configuration]

Then you just need to connect to a Google account (which has access to the Analytics account you wish to query) by granting the ‘Screaming Frog SEO Spider’ app permission to access your account to retrieve the data. Google APIs use the OAuth 2.0 protocol for authentication and authorisation. The SEO Spider will remember any Google accounts you authorise within the list, so you can ‘connect’ quickly upon starting the application each time.

[Screenshot: Google Analytics set-up]

Once you have connected, you can choose the relevant Google Analytics account, property, view, segment and date range!

[Screenshot: Google Analytics view]

Then simply select the metrics that you wish to fetch! The SEO Spider currently allows you to select up to 20, which we might extend further. If you keep the number of metrics to 10 or below with a single dimension (as a rough guide), then it will generally be a single API query per 10k URLs, which makes it super quick –

[Screenshot: Google Analytics metrics]

By default the SEO Spider collects the following 10 metrics –

  1. Sessions
  2. % New Sessions
  3. New Users
  4. Bounce Rate
  5. Page Views Per Session
  6. Avg Session Duration
  7. Page Value
  8. Goal Conversion Rate
  9. Goal Completions All
  10. Goal Value All

You can read more about the definition of each metric from Google.

You can also set the dimension of each individual metric against either page path or landing page, or both, which are quite different (and both useful depending on your scenario and objectives).

[Screenshot: Google Analytics dimensions]

There are scenarios where URLs in Google Analytics might not match URLs in a crawl, so we cover these by matching trailing and non-trailing slash URLs and case sensitivity (upper and lowercase characters in URLs). Google doesn’t pass the protocol (HTTP or HTTPS) via their API, so we also match this automatically.

If you have hundreds of thousands of URLs in GA, you can choose to limit the number of URLs to query, which is by default ordered by sessions to return the top performing page data.

[Screenshot: Google Analytics settings]

When you hit ‘start’ to crawl, the Google Analytics data will be fetched and displayed in the respective columns within the ‘Internal’ and ‘Analytics’ tabs. There’s a separate ‘Analytics’ progress bar in the top right, and when this has reached 100%, crawl data will start appearing against URLs. The more URLs you query, the longer this process can take, but generally it’s extremely quick.

[Screenshot: Google Analytics data in full]

There are currently three filters under the ‘Analytics’ tab which allow you to filter the Google Analytics data –

No GA Data

As an example for our own website, we can see there is ‘no GA data’ for blog category pages and a few old blog posts, as you might expect (the query was landing page, rather than page). Remember, you may see pages appear here which are ‘noindex’ or ‘canonicalised’, unless you have ‘respect noindex’ and ‘respect canonicals’ ticked in the advanced configuration tab.

If GA data does not get pulled into the SEO Spider as you expected, then analyse the URLs in GA under ‘Behaviour > Site Content > All Pages’ and ‘Behaviour > Site Content > Landing Pages’, depending on which dimension you chose in your query. The URLs here need to match those in the crawl for the data to be matched accurately; if they don’t, the SEO Spider won’t be able to match up the data.

We recommend checking your default Google Analytics view settings (such as ‘default page’) and filters which all impact how URLs are displayed and hence matched against a crawl. If you want URLs to match up, you can often make the required amends within Google Analytics.

 

Mode

Spider Mode

This is the default mode of the SEO Spider. In this mode the SEO Spider will crawl a website, gathering links and classifying URLs into the various tabs and filters. Simply enter the URL of your choice and click ‘start’.

List Mode

In this mode you can check a predefined list of URLs. This list can come from a variety of sources – .txt, .xls, .xlsx, .csv or .xml files. The files will be scanned for http:// or https:// prefixed URLs; all other text will be ignored. For example, you can directly upload an AdWords download and all URLs will be found automatically.

If you’re performing a site migration and wish to test URLs, we highly recommend using the ‘always follow redirects‘ configuration so the SEO Spider finds the final destination URL. The best way to view these is via the ‘redirect chains’ report.

List mode changes the crawl depth setting to zero, which means only the uploaded URLs will be checked. If you want to check links from these URLs, adjust the crawl depth to 1 or more in the “Limits” tab in Configuration->Spider.

SERP Mode

In this mode you can upload page titles and meta descriptions directly into the SEO Spider to calculate pixel widths (and character lengths!). There is no crawling involved in this mode, so they do not need to be live on a website.

This means you can export page titles and descriptions from the SEO Spider, make bulk edits in Excel (if that’s your preference, rather than in the tool itself) and then upload them back into the tool to understand how they may appear in Google’s SERPs.

Under ‘reports’, we have a new ‘SERP Summary’ report which is in the format required to re-upload page titles and descriptions. We simply require three headers for ‘URL’, ‘Title’ and ‘Description’.

For example –

[Screenshot: SERP snippet upload format]
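
A minimal upload file (with hypothetical values) would look like this –

URL,Title,Description
https://www.example.com/,Example Page Title,An example meta description for the page.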

You can upload in a .txt, .csv or Excel file.
