Over 300 SEO issues, warnings and opportunities

The Screaming Frog SEO Spider identifies over 300 SEO issues, warnings and opportunities that can be seen in the ‘Issues’ tab of the app.

  • Issues are errors or problems that should ideally be fixed.
  • Warnings are not necessarily an issue, but should be checked – and potentially fixed.
  • Opportunities are ‘potential’ areas for optimisation and improvement.

Priorities are based upon the potential impact that may require more attention, drawn from broadly accepted SEO best practice, rather than upon definitive action. They are not hard rules for what should be prioritised in your SEO strategy, or what must be ‘fixed’ in your SEO audit, as they lack context.

Issues provide direction to users who can make sense of the data and interpret it into appropriately prioritised actions relevant to each unique business, website and its objectives. The full list of issues is below.

Types of Issues




Issue Priorities




Response Codes

HTTP response status codes indicate whether an HTTP request made during a crawl has been successfully completed. Find issues related to URLs that are blocked from being crawled, return no response, redirect, or return a client or server error.
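
As a minimal sketch, the response categories above can be expressed as a small status-code classifier. The bucket names here are illustrative, not Screaming Frog's internal labels:

```python
def classify_status(code):
    """Bucket an HTTP status code into a crawl-report category."""
    if code is None:
        return "no response"      # connection refused, timed out, etc.
    if 200 <= code < 300:
        return "success"
    if 300 <= code < 400:
        return "redirection"
    if 400 <= code < 500:
        return "client error"     # e.g. 404 Not Found
    if 500 <= code < 600:
        return "server error"     # e.g. 503 Service Unavailable
    return "unknown"

print(classify_status(301))  # redirection
print(classify_status(404))  # client error
```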


Security

Website security is important to protect users and reduce risk from common threats. Find issues related to basic security best practices, such as HTTPS, mixed content, and HTTP security headers.
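
A hypothetical check along these lines: flag responses that are missing a few widely recommended security headers. The header list below is an assumption drawn from common best practice, not an exhaustive audit:

```python
# Widely recommended HTTP security headers (an illustrative subset).
RECOMMENDED_HEADERS = [
    "strict-transport-security",   # enforce HTTPS (HSTS)
    "content-security-policy",     # mitigate XSS and mixed content
    "x-content-type-options",      # block MIME-type sniffing
]

def missing_security_headers(response_headers):
    """Return recommended headers absent from a response's header dict."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED_HEADERS if h not in present]

headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=63072000",
}
print(missing_security_headers(headers))
# ['content-security-policy', 'x-content-type-options']
```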


URL

Ensuring a website has logical and relevant URLs is vital for users and search engines in understanding website structure and the content of a page. Find issues related to non-optimal formats, or URLs that shouldn't be discoverable.

  • Over 115 Characters
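
A sketch of URL checks of this kind, using the 115-character threshold mentioned above; the other two checks (uppercase characters, query parameters) are common examples of non-optimal formats:

```python
from urllib.parse import urlparse

def url_issues(url, max_length=115):
    """Return a list of illustrative URL issues for a single URL."""
    issues = []
    if len(url) > max_length:
        issues.append("over 115 characters")
    parsed = urlparse(url)
    if any(c.isupper() for c in parsed.path):
        issues.append("uppercase")
    if parsed.query:
        issues.append("parameters")
    return issues

print(url_issues("https://example.com/Shop/widgets?sort=price"))
# ['uppercase', 'parameters']
```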

Page Titles

Relevant and descriptive page titles are essential, as they help both users and search engines understand the purpose of a page. Find issues related to missing, duplicate, long or even multiple page titles.

  • Outside <head>
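
The missing/duplicate/long/multiple checks above can be sketched over a set of crawled pages. The 60-character limit here is a common rule of thumb for display width, not a fixed standard:

```python
from collections import Counter

def title_issues(pages, max_length=60):
    """pages maps URL -> list of <title> texts found on that page."""
    counts = Counter(titles[0] for titles in pages.values() if titles)
    report = {}
    for url, titles in pages.items():
        issues = []
        if not titles:
            issues.append("missing")
        else:
            if len(titles) > 1:
                issues.append("multiple")
            if len(titles[0]) > max_length:
                issues.append("long")
            if counts[titles[0]] > 1:
                issues.append("duplicate")
        report[url] = issues
    return report

pages = {"/a": ["Widgets"], "/b": ["Widgets"], "/c": []}
print(title_issues(pages))
# {'/a': ['duplicate'], '/b': ['duplicate'], '/c': ['missing']}
```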

Meta Description

Meta descriptions can be used in search engine result snippets, so writing a good meta description can be helpful for users and drive more clicks to a website. Find issues related to missing, duplicate, long or even multiple meta descriptions.

  • Outside <head>
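
The "Outside <head>" check above can be sketched with the standard-library HTML parser: record each meta description found, along with whether the parser was still inside <head> at the time:

```python
from html.parser import HTMLParser

class MetaDescriptionFinder(HTMLParser):
    """Record each meta description and whether it sat inside <head>."""

    def __init__(self):
        super().__init__()
        self.in_head = False
        self.found = []   # (content, was_inside_head) pairs

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "head":
            self.in_head = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.found.append((attrs.get("content", ""), self.in_head))

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

html = ("<html><head><title>t</title></head>"
        "<body><meta name='description' content='Late'></body></html>")
finder = MetaDescriptionFinder()
finder.feed(html)
print(finder.found)  # [('Late', False)] — the description sits outside <head>
```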


H1

Headings help provide structure and organisation to a web page, and can allow users and search engines to better understand the content. The h1 should describe the main title and purpose of the page. Find issues related to missing, duplicate, long or non-sequential h1s.

  • Over 70 Characters
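
A sketch of the missing/multiple/long checks for h1s, again with the standard-library parser and the 70-character threshold stated above:

```python
from html.parser import HTMLParser

class H1Collector(HTMLParser):
    """Collect the text content of every h1 element on a page."""

    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.h1s = []

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
            self.h1s.append("")

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.h1s[-1] += data

def h1_issues(html, max_length=70):
    collector = H1Collector()
    collector.feed(html)
    issues = []
    if not collector.h1s:
        issues.append("missing")
    if len(collector.h1s) > 1:
        issues.append("multiple")
    if any(len(h) > max_length for h in collector.h1s):
        issues.append("long")
    return issues

print(h1_issues("<h1>Welcome</h1><h1>Again</h1>"))  # ['multiple']
```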


H2

Headings are titles and subtitles within the copy of a page that guide users and search engines to better understand the content. The h2 heading is often used to describe sections within a document and act as a signpost for the user. Find issues related to missing, duplicate, long or non-sequential h2s.

  • Over 70 Characters


Content

Ensuring your web pages deliver the best on-page content is vital to satisfy users and for SEO. Find issues related to exact and near duplicate content, low content, spelling, grammar and readability.
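
One common way to detect near-duplicate content, sketched here with word shingles and Jaccard similarity; the shingle size and any pass/fail threshold you apply to the score are assumptions, not the tool's exact algorithm:

```python
def shingles(text, k=3):
    """Return the set of k-word shingles in a text."""
    words = text.lower().split()
    if len(words) < k:
        return {tuple(words)} if words else set()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b):
    """Jaccard similarity of two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

a = "our red widgets are the best value red widgets available today"
b = "our red widgets are the best value blue widgets available today"
print(round(similarity(a, b), 2))  # 0.5
```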


Images

Imagery is crucial in delivering rich web experiences, whether that's to support branding, selling products or impactful visuals. Find issues related to large images, missing alt text, incorrectly sized images and cumulative layout shift.

  • Missing Alt Text
  • Missing Alt Attribute
  • Background Images


Canonicals

Rel="canonical" can be used to help reduce duplicate content, and provide a hint to search engines about which version of a URL should be indexed in the search results. Find issues related to pages missing canonicals, canonicalised pages, non-indexable canonicals and more.
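
A sketch of classifying a page's canonical state: missing, self-referencing, or canonicalised to a different URL. URL comparison here is an exact string match, which real crawlers would normalise first:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Capture the href of the last rel="canonical" link element."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonical = attrs.get("href")

def canonical_status(page_url, html):
    finder = CanonicalFinder()
    finder.feed(html)
    if finder.canonical is None:
        return "missing canonical"
    if finder.canonical == page_url:
        return "self-referencing"
    return f"canonicalised to {finder.canonical}"

html = '<head><link rel="canonical" href="https://example.com/a"></head>'
print(canonical_status("https://example.com/a?page=2", html))
# canonicalised to https://example.com/a
```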


Directives

Robots directives using either the robots meta tag or X-Robots-Tag HTTP header give search engine crawlers instructions about how to crawl and index web pages. Find issues related to noindex, nofollow and none directives.

  • Outside <head>
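
Both the meta robots tag and the X-Robots-Tag header carry comma-separated directive tokens, so a sketch of parsing them looks the same for either source:

```python
def parse_robots(value):
    """Split a robots directive string into normalised tokens."""
    return {token.strip().lower() for token in value.split(",") if token.strip()}

def is_noindex(directives):
    # 'none' is shorthand for 'noindex, nofollow'
    return bool(directives & {"noindex", "none"})

meta = parse_robots("NOINDEX, nofollow")
print(sorted(meta))                               # ['nofollow', 'noindex']
print(is_noindex(meta))                           # True
print(is_noindex(parse_robots("index, follow")))  # False
```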


Hreflang

Hreflang lets search engines know about multiple versions of a page for different languages and regions, to enable them to return the appropriate version to users in search. Find issues related to non-200 hreflang URLs, missing return links and incorrect language or region codes.

  • Missing Self Reference
  • Missing X-Default


JavaScript

Search engines today are typically able to render web pages to crawl and index pages that use JavaScript. However, JavaScript-rich websites that rely on client-side rendering can be more fragile. Find issues related to blocked resources, identifying JS content, links and key elements.


AMP

AMP is an open-source HTML framework that was created to help produce fast-loading pages optimised for mobile. There are specific requirements for AMP and its setup. Find issues related to the SEO setup, as well as AMP specification issues identified using the official AMP validator.


Structured Data

Structured data provides search engines with explicit clues about the meaning of pages and their components and can enable special search result features and enhancements in Google. Find issues related to specifications and Google’s rich result feature requirements.

  • Validation Warnings
  • Rich Result Validation Warnings


XML Sitemaps

XML Sitemaps should be up to date, error free, and include indexable, canonical versions of URLs to help search engines crawl and index the URLs that are important for a website. Find issues related to URLs not in the XML Sitemap, orphan URLs and non-indexable URLs.

  • URLs In Multiple Sitemaps
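
A minimal sketch of reading the URLs out of an XML Sitemap with the standard library, ready to compare against the crawled, indexable URL set:

```python
import xml.etree.ElementTree as ET

# The official sitemap protocol namespace.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Return the <loc> values listed in a sitemap's <url> entries."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", NS)]

sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""
print(sitemap_urls(sitemap))
# ['https://example.com/', 'https://example.com/about']
```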


Mobile

Websites must provide a friendly and rich experience on mobile devices, as well as desktop. Test web pages using Lighthouse to identify mobile usability issues related to viewport, tap targets, content sizing and more.

  • Mobile Alternate Link


Analytics

The Google Analytics API can pull data directly into the crawl for additional insight in technical and content audits. Find issues related to orphan URLs only discovered in Google Analytics, non-indexable URLs with GA data and more.

  • Orphan URLs

Search Console

The Google Search Console API can pull click data directly into the crawl for additional insight, and query the URL Inspection API to gather indexing data in bulk. Find issues related to orphan URLs, URLs without search analytics data, URLs not indexed and more.


Validation

Valid HTML helps crawlers parse and understand web pages accurately, and errors can impact crawling and indexing. Find issues that can prevent search bots from parsing and understanding a page reliably.

  • High Carbon Rating
