The canonical href is the exact URL declared in a page’s rel="canonical" tag. It tells search engines which version of a page should be treated as the preferred one when similar or duplicate URLs exist. Because this check stores the precise canonical URL from the HTML, it is a high-value signal. Even a small change […]
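One way to capture this signal is to parse the page HTML for the first rel="canonical" link element. The sketch below uses Python's standard html.parser; the sample markup, URL, and class name are illustrative, not taken from any particular tool.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "link" and self.canonical is None:
            d = dict(attrs)
            # Note: a real crawler would also handle multi-token rel values
            # like rel="canonical alternate"; this sketch checks exact match.
            if d.get("rel", "").lower() == "canonical":
                self.canonical = d.get("href")

html = '<head><link rel="canonical" href="https://example.com/page/"></head>'
parser = CanonicalFinder()
parser.feed(html)
print(parser.canonical)  # https://example.com/page/
```

Storing this exact string on each crawl makes it easy to diff: any change, even a trailing slash or protocol switch, shows up immediately.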
Robots.txt allowed for CSS
The robots.txt allowed for CSS check shows whether the CSS files needed by a page can be crawled under the site’s robots.txt rules. This is important because search engines often need access to CSS to render the page properly, understand its layout, and assess how the content is presented. A page can be technically accessible […]
Robots.txt allowed for page URL
The robots.txt allowed for page URL check shows whether a page’s URL is currently permitted to be crawled under the site’s robots.txt rules. This is a critical signal because robots.txt can block search engines from accessing a page before they even reach its content. When this value changes, it deserves immediate attention. A page may […]
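Both robots.txt checks, for the page URL itself and for the CSS it depends on, can be reproduced with Python's standard urllib.robotparser. The rules and URLs below are made-up examples for illustration.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks a private section
# but explicitly allows the assets directory.
rules = """\
User-agent: *
Disallow: /private/
Allow: /assets/
"""

rp = RobotFileParser()
rp.parse(rules.strip().splitlines())

# The page URL check: is this page crawlable at all?
print(rp.can_fetch("*", "https://example.com/private/page"))     # False

# The CSS check: can the stylesheet the page needs be fetched?
print(rp.can_fetch("*", "https://example.com/assets/site.css"))  # True
```

Running both checks against the live robots.txt on every crawl catches the common failure mode where a rules change accidentally blocks rendering resources while the page itself stays allowed, or vice versa.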
X-Robots-Tag noindex flag
The X-Robots-Tag noindex flag shows whether a page or file includes a noindex directive in its HTTP response headers. This is a high-impact signal because it tells search engines not to index that resource, even if the content itself looks perfectly normal. Unlike a meta robots tag in the page HTML, this instruction is delivered […]
X-Robots-Tag raw value
The X-Robots-Tag raw value is the exact indexing and crawling instruction sent in the HTTP response header rather than in the page HTML. It can control whether search engines index a page, follow its links, or apply other restrictions before they even process the visible content. Because this is a header-level signal, it is especially […]
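A minimal sketch of splitting a raw X-Robots-Tag value into individual directives so the noindex flag can be checked. The header value shown is an example, and user-agent-scoped forms such as "googlebot: noindex" are deliberately not handled here.

```python
def parse_x_robots_tag(header_value):
    """Split a comma-separated X-Robots-Tag header into normalized directives."""
    return [d.strip().lower() for d in header_value.split(",") if d.strip()]

# Example raw value as it might appear in an HTTP response header.
raw = "noindex, nofollow, max-snippet:50"
directives = parse_x_robots_tag(raw)

print(directives)               # ['noindex', 'nofollow', 'max-snippet:50']
print("noindex" in directives)  # True
```

Keeping both the raw string and the derived noindex flag is useful: the flag answers the urgent question, while the raw value shows exactly which other directives changed alongside it.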
Meta robots nofollow flag
The meta robots nofollow flag shows whether a page includes a nofollow directive in its robots meta tag. This directive tells search engines not to follow links found on that page in the normal way. That makes it an important signal to monitor. A page may still be live, indexable, and visible to users, yet […]
Meta robots noindex flag
The meta robots noindex flag shows whether a page includes a noindex directive in its robots meta tag. This is one of the most important page-level indexing signals because it tells search engines not to keep that page in their search results. When this value changes, it deserves close attention. A page can still load […]
Meta robots raw value
The meta robots raw value is the exact instruction set placed in a page’s robots meta tag. It tells search engines how the page should be treated, including whether it should be indexed, whether links should be followed, and whether certain search features are allowed. Because this check stores the full raw string, it helps […]
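The three meta robots checks above (noindex flag, nofollow flag, and raw value) can all be derived from one parse of the page head. This sketch uses Python's standard html.parser; the sample markup and class name are hypothetical.

```python
from html.parser import HTMLParser

class MetaRobotsFinder(HTMLParser):
    """Captures the content attribute of a <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.raw = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and self.raw is None:
            d = dict(attrs)
            if d.get("name", "").lower() == "robots":
                self.raw = d.get("content", "")

html = '<head><meta name="robots" content="noindex, nofollow"></head>'
finder = MetaRobotsFinder()
finder.feed(html)

# Derive the individual flags from the stored raw value.
flags = [f.strip().lower() for f in finder.raw.split(",")]
print("noindex" in flags, "nofollow" in flags)  # True True
```

Storing the full raw string rather than just the flags means a change from, say, "noindex" to "noindex, nosnippet" is still visible in a diff.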
Redirect target URL
The redirect target URL is the final destination a page sends users and search engines to when that page redirects. Monitoring that destination is especially important during migrations, URL changes, and site restructuring, because even a small change in the target can send traffic to the wrong place. This matters for SEO because redirects are […]
Redirect chain
A redirect chain is the full sequence of URLs a request passes through before it reaches the final destination. Instead of going straight from the requested page to the live content, the browser or crawler may be sent through several redirect steps first. That matters because redirects are not just a yes-or-no issue. The exact […]
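Both redirect checks amount to walking the chain hop by hop and recording every URL visited: the last entry is the redirect target, and the length of the list is the chain depth. The sketch below simulates HTTP Location headers with a plain dict so it runs without network access; the URLs and helper name are hypothetical.

```python
def follow_redirects(start_url, redirect_map, max_hops=10):
    """Walk a chain of redirects, returning every URL visited in order.

    redirect_map stands in for HTTP Location headers: url -> target,
    with no entry meaning the URL serves content directly.
    """
    chain = [start_url]
    url = start_url
    for _ in range(max_hops):
        target = redirect_map.get(url)
        if target is None:
            return chain          # reached the final destination
        if target in chain:
            raise ValueError("redirect loop at " + target)
        chain.append(target)
        url = target
    raise ValueError("too many redirects")

# Example: http -> https, then old path -> new path.
hops = {
    "http://example.com/old": "https://example.com/old",
    "https://example.com/old": "https://example.com/new",
}
chain = follow_redirects("http://example.com/old", hops)
print(chain[-1])        # https://example.com/new  (the redirect target URL)
print(len(chain) - 1)   # 2  (number of redirect steps in the chain)
```

A real crawler would issue HEAD or GET requests with redirects disabled and read each Location header, but the loop detection and hop cap shown here carry over unchanged.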
