Whether you are optimizing your own website or working on a client’s, it’s imperative that site owners attempt to simulate search engine bot activity, specifically with regard to site crawling. As a proactive digital marketer, you have probably reviewed (and maybe even tried using) site crawl tools in the past. So, let’s say you’ve done the heavy lifting: you downloaded the latest version of Screaming Frog, read through the extensive user guide, and properly configured the tool to crawl key elements of the site in question. Now that you have the results, what should you do with them? Below are three data points from simulated crawls that I review to glean architectural SEO action items worth addressing.
Malformed URL Structures
Upon completion of a full crawl, my first review is of webpage URL structures. I scan for outliers that could be causing page duplication. A common offender is malformed internal hyperlinks. More specifically, when a website’s internal hyperlinks mix uppercase and lowercase versions of the same path, the best-case scenario is that an excessive number of 301 redirects are triggered. In the worst case, two versions of every webpage exist across the entire site. The result, unfortunately, is a dilution of SEO equity and wasted search engine crawl budget.
Simply scroll through the “Address” column within the “Internal – All” portion of the crawl and keep an eye out for URL inconsistencies. Make note of each offending case and then review the “Inlinks” tab to identify the source of the issue.
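For larger crawls, scanning the “Address” column by eye gets tedious. A minimal sketch of automating the check, assuming you have a plain list of crawled URLs (for example, copied from a Screaming Frog export); the URLs and the `find_case_variants` helper below are illustrative, not part of any tool:

```python
from collections import defaultdict
from urllib.parse import urlsplit

def find_case_variants(urls):
    """Group URLs that differ only by letter case in host or path."""
    groups = defaultdict(set)
    for url in urls:
        parts = urlsplit(url)
        # Lowercase host and path to build a case-insensitive key
        key = (parts.netloc.lower(), parts.path.lower())
        groups[key].add(url)
    # Keep only keys that were crawled under more than one casing
    return [sorted(group) for group in groups.values() if len(group) > 1]

# Hypothetical addresses from an "Internal - All" export
crawled = [
    "https://example.com/Products/widgets",
    "https://example.com/products/widgets",
    "https://example.com/about",
]
for variants in find_case_variants(crawled):
    print(variants)
```

Each printed group is a set of addresses that should almost certainly resolve to a single canonical URL; the “Inlinks” tab then tells you which pages link to the stray casing.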
If you return to a finished crawl and find an inordinate number of discovered URLs, you might have an issue with spider traps. For reference, a spider trap is an architectural SEO pitfall in which search engine bots get caught in an infinite (or excessively large) loop of crawled webpages.
E-commerce websites are most susceptible, particularly those employing faceted navigation. Faceted navigation grants website visitors the ability to refine the products displayed on a given webpage by applying “filters.” To do so, dynamic parameters are appended to the page’s URL. This becomes a crawl budget problem when preventative SEO directives are not applied.
So, if you find that you crawled over 300k URLs for a 150-page website, start down this rabbit hole. If faceted navigation is the root issue, look into applying the proper crawl directive (robots.txt disallow, robots meta tag with a value of noindex, canonical tag, etc.) to corral crawls of your site.
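One quick way to spot a faceted-navigation trap in a crawl is to count how many parameterized URLs collapse onto each parameter-free path. This is a rough sketch, assuming a list of crawled URLs; the shop URLs and the `parameter_variants` helper are hypothetical:

```python
from collections import Counter
from urllib.parse import urlsplit

def parameter_variants(urls):
    """Count how many crawled URLs carry query parameters, per base path."""
    counts = Counter()
    for url in urls:
        parts = urlsplit(url)
        if parts.query:  # only parameterized URLs inflate the crawl
            counts[parts.path] += 1
    return counts

# Hypothetical faceted-navigation URLs from a crawl
crawled = [
    "https://shop.example.com/shoes?color=red",
    "https://shop.example.com/shoes?color=red&size=9",
    "https://shop.example.com/shoes?size=9",
    "https://shop.example.com/shoes",
]
for path, n in parameter_variants(crawled).most_common():
    print(f"{path}: {n} parameterized variants")
```

A path with hundreds or thousands of parameterized variants is a strong candidate for a robots.txt disallow pattern or a canonical tag pointing at the filter-free page.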
Title Tags and Meta Descriptions
Yes, it’s straightforward. But this easy-win opportunity is often overlooked when performing a simulated site crawl. Not only are the title tag and meta description fields directly tied to click-through rates, but title tags also offer an opportunity to embed targeted keywords directly within a page’s HTML elements.
First things first when reviewing title tags and meta descriptions: optimize these fields for webpages that are ranking, or are meant to rank, within SERPs. Prioritize based on traffic generated, write unique copy, and ensure that a title and meta description exist for every page on your site. To review these fields within your Screaming Frog crawl, just scroll through the right-hand “Overview” section to “Page Titles” and “Meta Description.” For best-practice advice, check out Moz’s SEO fundamentals pages.
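The missing-and-duplicate check above can be scripted as well. A minimal sketch, assuming rows of `(url, title, meta_description)` pulled from a crawl export; the pages shown are made up for illustration:

```python
from collections import Counter

# Hypothetical rows from a crawl export: (url, title, meta_description)
pages = [
    ("https://example.com/", "Home | Example", "Welcome to Example."),
    ("https://example.com/blog", "Home | Example", ""),
    ("https://example.com/about", "About | Example", "Who we are."),
]

# Pages with empty fields
missing_titles = [url for url, title, desc in pages if not title.strip()]
missing_descriptions = [url for url, title, desc in pages if not desc.strip()]

# Titles reused across more than one page
title_counts = Counter(title for _, title, _ in pages if title.strip())
duplicate_titles = [t for t, n in title_counts.items() if n > 1]

print("Missing titles:", missing_titles)
print("Missing descriptions:", missing_descriptions)
print("Duplicate titles:", duplicate_titles)
```

The duplicate list is usually the most actionable: every title flagged there belongs to at least two pages competing for the same SERP snippet.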
While not exhaustive, the three points detailed above should give you a solid start in analyzing your website crawl. If you have questions or issues using Screaming Frog, you can contact their team here. Also, feel free to leave comments or questions below!