Focusing on great content for your website but failing on technical SEO is like putting Fernando Alonso in the 2015 McLaren F1 car: you have a great asset, but it is held back by technical issues!
In this post, I discuss four technical SEO issues that go unnoticed by most companies.
Redirects are part and parcel of having an evolving website. You want to ensure that both search engines and users do not have a bad experience and therefore you add in redirects to the most relevant page, and quite right too.
But what happens more often than some people realise is that the page you are redirecting to has already been redirected itself, creating a redirect chain. This is common in both eCommerce and editorial content, but it can be solved relatively easily.
The problem is that you are potentially losing link authority you may have gained from pages you redirected two or three iterations ago. I appreciate that Matt Cutts has said all link value is passed through redirects, but I am a big believer that the more redirects a URL goes through, the more value is lost.
To see if you have any redirect chains on your website, all you need to do is fire up Screaming Frog and run a crawl. On completion of the crawl, go to the menu and select Reports > Redirect Chains.
This will provide you with an XLS of all the redirects and redirect chains that are currently live on the website. The next step will be to start cleaning these up. I have seen some good gains in traffic by changing a redirect chain into a one-to-one redirect.
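As a sketch of the clean-up step, the snippet below collapses a redirect mapping (for example, one parsed out of the Screaming Frog export) so that every source URL points straight at its final destination. The function name and the example URLs are my own, not from any particular tool.

```python
def flatten_redirects(redirects):
    """Collapse redirect chains so every source points at its final target.

    `redirects` maps source URL -> destination URL, e.g. parsed from a
    redirect-chains export.
    """
    flattened = {}
    for source in redirects:
        seen = {source}
        target = redirects[source]
        # Follow the chain until we reach a URL that no longer redirects
        # (the `seen` check guards against redirect loops).
        while target in redirects and target not in seen:
            seen.add(target)
            target = redirects[target]
        flattened[source] = target
    return flattened

# Example: /old-page -> /newer-page -> /final-page becomes two direct hops.
chain = {
    "/old-page": "/newer-page",
    "/newer-page": "/final-page",
}
print(flatten_redirects(chain))
# {'/old-page': '/final-page', '/newer-page': '/final-page'}
```

The flattened mapping is what you would then implement as one-to-one 301 redirects on the server.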
I come across this issue all the time, yet nobody seems to be solving it. It is not difficult to plan for when you are creating an eCommerce website, or to fix once the site has been built, but people are still not dealing with layered navigation.
For those not sure what I mean by layered navigation, I am talking about the filtering system you see on most, if not all, eCommerce product listing pages: the navigation that allows you to filter down by brand, size, colour, reviews, etc.
This, alongside product pages, is one of the most common causes of duplicate content on eCommerce websites. If you run an eCommerce store, nine times out of ten a site: search in Google will show far more pages indexed than you would expect. That is likely to be down to issues with layered navigation.
Giving the user the flexibility to be granular with their filtering is great from a user perspective, and one that I fully support. However, these filtered pages need to be handled correctly.
Here are three examples of issues you will find with layered navigation and how they could be solved.
Product listing pages:
If you provide the user with the functionality to change the number of products that are being viewed within the listing, then you need to ensure that only a single URL is being indexed.
The most common way of handling this is by adding in the rel=canonical tag. The only question you need to ask yourself is which page do you want to be indexed? On most eCommerce solutions you have the following options:
12 (default view)
Depending on the speed of your website, I would rel=canonical either to the default view or to the view-all page, but I would definitely have one. If you do not include a rel=canonical tag, then all of these pages will be indexed for every single variation of filter you can imagine on your website. That is a lot of extra pages!
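As an illustration, assuming a hypothetical listing URL and that you canonicalise to the default view, the tag in the page `<head>` might look like this:

```html
<!-- On hypothetical variants such as /shoes/?show=24 or /shoes/?show=48,
     point search engines back to the default view -->
<link rel="canonical" href="http://www.domain.com/shoes/" />
```

Every variation of the listing then consolidates its indexing signals into the one URL you chose.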
You do not want or need all of your filter options to be dynamic. You would expect brand terms, for example, to be static URLs rather than dynamic ones. There are likely to be other filter options, and this does depend on the website you are working on, but keyword research can help you decide.
However, when allowing users to filter by items such as colour, size, price and review, you are likely to want these URLs to be dynamic, with a rel=canonical tag added.
www.domain.com/product/brand/ – This is fine to be kept as it is.
www.domain.com/product/brand/?=colour – This should have a canonical tag pointing back to the static brand page: <link rel="canonical" href="http://www.domain.com/product/brand/" />
www.domain.com/product/brand/?=colour&?=size – This should have the same canonical tag added to it.
www.domain.com/product/brand/?=colour&?=size&?=review – This should also have the same canonical tag added to it.
*Note: All eCommerce sites are different, and keyword research should be carried out to determine which types of pages are delivered by static and dynamic URLs.
Pagination:
This can be handled in two ways: either canonicalising all pages to a single page, usually the View All page, or using the rel=next/prev markup that is available.
Which option you take depends very much on the speed of your website and the number of products you have available. Google prefers to surface the View All page, so if there are fewer than ten pages I like to rel=canonical to that page. However, if there are consistently more than ten pages, I implement rel=next/prev tags to indicate to the search engines that the pages are part of the same paginated series.
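As a sketch, with a hypothetical three-page category, the middle page would carry both tags in its `<head>` (first and last pages carry only one each):

```html
<!-- Hypothetical page 2 of a three-page /shoes/ series -->
<link rel="prev" href="http://www.domain.com/shoes/?page=1" />
<link rel="next" href="http://www.domain.com/shoes/?page=3" />
```

This tells search engines how the series fits together without forcing everything through a single View All page.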
When did you last honestly look at your robots.txt? Have you ever looked at it? If not, you are not alone: a lot of people have not. The robots.txt file provides the ideal way to restrict search engines from crawling content or elements they do not need to see.
It is important that the robots.txt file is understood and used to full effect. Disallowing rogue folders and files can have a serious impact on the way your website is crawled.
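A minimal sketch of what such a file might contain, with paths that are purely hypothetical and would need to match your own site structure:

```
# Keep crawlers out of areas that add no search value
User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /search/
```

Before adding rules like these, test them against real URLs, as one overly broad Disallow line can block whole sections of the site from being crawled.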
I attended a conference recently where the presenter asked how many of us were using schema markup; only four people raised their hands. Four people out of a room of nearly 200. I was astonished.
For eCommerce it is essential, and I cannot recommend it enough to my clients. Not only have we entered the world of structured data, where we need to give the search engines context about what we are trying to say, but at present it still differentiates your website in the SERPs.
There is a range of schema markup available well beyond products, so 'I don't work on an eCommerce store' is no excuse. To find out more, take a look at http://www.schema.org/, and if you are looking for help creating your markup, here is another handy tool: http://schema-creator.org/.
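For eCommerce, a minimal JSON-LD Product block gives this kind of context; the product name, price and availability below are placeholders, not real data:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org/",
  "@type": "Product",
  "name": "Example Running Shoe",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "GBP",
    "price": "59.99",
    "availability": "http://schema.org/InStock"
  }
}
</script>
```

It is worth validating markup like this with a structured data testing tool before rolling it out across templates.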
If you take only a couple of recommendations away from this post, I would strongly recommend you solve your layered navigation issues and implement schema markup where possible.
Do you often miss these four technical SEO features? Are there others that you feel get missed when auditing a website from a technical perspective? I would love to hear your feedback in the comments below or on Twitter @danielbianchini.
Daniel Bianchini is the Director of Services at White.net, a creative digital marketing agency based in Oxford, UK. Having been in digital marketing since leaving University, Daniel has worked in-house at Dixons Stores Group (Dixons Carphone), with many leading UK brands and helped start a digital marketing agency based in Hertfordshire.