Screaming Frog SEO Spider Update – Version 2.40
I am excited to announce version 2.40 of the Screaming Frog SEO Spider, codenamed internally as ‘Duck Cam’. Thanks again to everyone for their continued feedback, suggestions and support.
Let’s get straight to it; the updated version includes the following new features –
1) SERP Snippets Now Editable
First of all, the SERP snippet tool we released in our previous version has been updated extensively to include a variety of new features. The tool now allows you to preview SERP snippets by device type (whether it’s desktop, tablet or mobile) which all have their own respective pixel limits for snippets. You can also bold keywords, add rich snippets or description prefixes like a date to see how the page may appear in Google.
You can read more about this update and changes to pixel width and SERP snippets in Google in our new blog post.
The largest update is that the tool now allows you to edit page titles and meta descriptions directly in the SEO Spider as well. This subsequently updates the SERP snippet preview and the table calculations, letting you know the number of pixels you have before a word is truncated. It also updates the text in the SEO Spider itself and will be remembered automatically, unless you click the ‘reset title and description’ button. You can make as many edits to page titles and descriptions as you like, and they will all be remembered.
This means you can also export the changes you have made in the SEO Spider and send them over to your developer or client to update in their CMS. This feature means you don’t have to try and guesstimate pixel widths in Excel (or elsewhere!) and should provide greater control over your search snippets. You can quickly filter for page titles or descriptions which are over pixel width limits, view the truncations and SERP snippets in the tool, make any necessary edits and then export them. (Please remember, a truncated word is still counted algorithmically by Google.)
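To illustrate the idea behind a pixel-width check, here is a minimal sketch. The per-character widths and the 512px title limit are illustrative assumptions only; Google renders snippets with real font metrics, which is exactly why the SEO Spider measures pixels rather than counting characters.

```python
# Rough sketch of a SERP pixel-width check. The character widths and the
# 512px title limit below are illustrative assumptions, not the SEO
# Spider's actual values -- real rendering uses proper font metrics.

# Approximate per-character widths in pixels; unlisted characters fall
# back to a default width.
CHAR_WIDTHS = {'i': 5, 'l': 5, 'j': 5, 'f': 6, 't': 6, 'r': 7, ' ': 6,
               'm': 16, 'w': 14, 'W': 17, 'M': 15}
DEFAULT_WIDTH = 10
TITLE_LIMIT_PX = 512  # assumed desktop title limit

def pixel_width(text):
    """Sum approximate per-character pixel widths for a string."""
    return sum(CHAR_WIDTHS.get(ch, DEFAULT_WIDTH) for ch in text)

def truncates(title, limit=TITLE_LIMIT_PX):
    """Return True if the title would exceed the pixel limit."""
    return pixel_width(title) > limit

print(pixel_width("Screaming Frog SEO Spider"))
print(truncates("Short title"))  # well under the limit
```

The point of the sketch is simply that two titles with the same character count can have very different pixel widths (compare a run of ‘i’s with a run of ‘W’s), which is why character-count rules of thumb in Excel fall short.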
2) SERP Mode For Uploading Page Titles & Descriptions
You can now switch to ‘SERP mode’ and upload page titles and meta descriptions directly into the SEO Spider to calculate pixel widths. There is no crawling involved in this mode, so they do not need to be live on a website.
This means you can export page titles and descriptions from the SEO Spider, make bulk edits in Excel (if that’s your preference, rather than in the tool itself) and then upload them back into the tool to understand how they may appear in Google’s SERPs.
Under ‘reports’, we have a new ‘SERP Summary’ report which is in the format required to re-upload page titles and descriptions. We simply require three headers for ‘URL’, ‘Title’ and ‘Description’.
The tool will then upload these into the SEO Spider and run the calculations without any crawling.
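As a sketch of the upload format, a file with the three required headers could be generated like this (the file name and the example rows are hypothetical, and the standard CSV layout is an assumption about the expected file type):

```python
import csv

# Write a CSV in the three-header format the SERP Summary report uses:
# 'URL', 'Title' and 'Description'. The rows below are hypothetical.
rows = [
    {"URL": "https://example.com/",
     "Title": "Example Home",
     "Description": "A short example description."},
    {"URL": "https://example.com/about",
     "Title": "About Us",
     "Description": "Who we are and what we do."},
]

with open("serp-summary.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["URL", "Title", "Description"])
    writer.writeheader()
    writer.writerows(rows)
```

A file like this is also what you would get back after bulk-editing titles and descriptions in Excel, ready to re-upload for the pixel calculations.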
3) Crawl Overview Right Hand Window Pane
We received a lot of positive responses to our crawl overview report when it was released last year. However, we felt that it was a little hidden away, so we have introduced a new right hand window which includes the crawl overview report by default. This overview pane updates alongside the crawl, which means you can see at a glance which tabs and filters are populated during the crawl, along with their respective percentages.
This means you don’t need to click through the tabs and filters to uncover issues; you can simply browse and click on them directly as they arise. The ‘Site structure’ tab provides more detail on crawl depth and the most linked-to pages without needing to export the ‘crawl overview’ report or sort the data. The ‘response times’ tab provides a quick overview of response times for the SEO Spider’s requests. This new window pane will be updated further in the next few weeks.
You can choose to hide this window, if you prefer the older format.
4) Ajax Crawling #!
Some of you may remember an older version of the SEO Spider which had an iteration of Ajax crawling, which was removed in a later version. We have redeveloped this feature so the SEO Spider can now crawl Ajax as per Google’s Ajax crawling scheme, also sometimes (annoyingly) referred to as hashbang URLs (#!).
There is also an Ajax tab in the UI, which shows both the ugly and pretty URLs, with filters for hash fragments. Some pages may not use hash fragments (such as a homepage), so the ‘fragment’ meta tag can be used to recognise an Ajax page. In the same way as Google, the SEO Spider will then fetch the ugly version of the URL.
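For reference, Google’s Ajax crawling scheme maps a ‘pretty’ #! URL to its ‘ugly’ equivalent by moving the hash fragment into an `_escaped_fragment_` query parameter; pages opting in via the ‘fragment’ meta tag get an empty escaped fragment. A minimal sketch of that mapping:

```python
from urllib.parse import quote

def ugly_url(pretty_url):
    """Map a pretty URL to its ugly _escaped_fragment_ equivalent,
    following Google's Ajax crawling scheme."""
    if "#!" not in pretty_url:
        # Pages with no hash fragment (e.g. a homepage) that opt in via
        # <meta name="fragment" content="!"> get an empty escaped fragment.
        sep = "&" if "?" in pretty_url else "?"
        return pretty_url + sep + "_escaped_fragment_="
    base, fragment = pretty_url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    # Special characters in the fragment are percent-encoded.
    return base + sep + "_escaped_fragment_=" + quote(fragment, safe="")

print(ugly_url("https://example.com/#!key=value"))
# https://example.com/?_escaped_fragment_=key%3Dvalue
```

This is the same translation the SEO Spider (and Googlebot) performs before fetching: the ugly URL is what is actually requested from the server, which then returns an HTML snapshot of the Ajax page.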
5) Canonical Errors Report
Under the ‘reports‘ menu, we have introduced a ‘canonical errors’ report which includes any canonicals which have no response, are a 3XX redirect or a 4XX or 5XX error.
This report also provides data on any URLs which are discovered only via a canonical and are not linked to from the site (i.e. there are no HTML anchors to the URL). This report will hopefully help save time, so canonicals don’t have to be audited separately via list mode.
Other Smaller Updates
We have also made a large number of other updates, which include the following –
- A ‘crawl canonicals‘ configuration option (which is ticked by default) has been included, so the user can decide whether they want to actually crawl canonicals or just reference them.
- Added new Googlebot for Smartphones user-agent and retired the Googlebot-Mobile for Smartphones UA. Thanks to Glenn Gabe for the reminder.
- The ‘Advanced Export’ has been renamed to ‘Bulk Export‘. ‘XML Sitemap‘ has been moved under a ‘Sitemaps’ specific navigation item.
- Added a new ‘No Canonical’ filter to the directives tab which helps you view any HTML pages or PDFs without a canonical.
- Improved performance of .xlsx file writing to be close to that of .csv and .xls.
- ‘Meta data’ has been renamed to ‘Meta Robots’.
- The SEO Spider now always supplies the Accept-Encoding header to work around several sites that are 404 or 301’ing based on it not being there (even though it’s not actually a requirement…).
- Allow user to cancel when uploading in list mode.
- Provide feedback in stages when reading a file in list mode.
- Maxed out Excel lines-per-sheet limits for each format (65,536 for .xls and 1,048,576 for .xlsx).
- The lower window ‘URL info’ tab now contains much more data collected about the URL.
- ‘All links’ in the ‘Advanced Export’ has been renamed to ‘All In Links’ to provide further clarity.
- The UI has been lightened and there’s a little more padding now.
- Fixed a bug where empty alt tags were not being picked up as ‘missing’. Thanks to the quite brilliant Ian Macfarlane for reporting it.
- Fixed a bug where some URLs errored upon upload in list mode. Thanks again to Fili for that one.
- Fixed a bug in the custom filter export caused by the file name including a colon by default. Oops!
- Fixed a bug with images disappearing in the lower window pane, when clicking through URLs.
I believe that’s everything! I really hope you like all the new features and improvements listed above. We still have so much planned and in our development queue, so there is plenty more to come as well.
As always, thank you all for your on-going support and feedback. Please do let us know about any bugs, issues or if there are any other features you’d like to see in the tool. Thanks all.