Learn

Schema Structured Data

Schema structured data adds snippets of information to search results in addition to the usual meta title and meta description. The additional data can take the form of reviews, site links, prices, and images, to name just a few. Structured snippets can make an organic listing stand out in the Google search results. They can […]

Read more
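As a quick sketch of what such markup can look like (not taken from the full article; the product name and rating figures are invented examples), review data is often added as JSON-LD inside the page's HTML:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Example Product",
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.5",
        "reviewCount": "27"
      }
    }
    </script>

Google can read this markup and may show the star rating beneath the listing's title and description.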

Canonicalization For SEO: Canonical URLs

What is canonicalization? Canonicalization is the process of defining a dominant, or ‘canonical’, version of a page and pointing all other URLs with the same content to that default version. Without canonicalization in place, Google will crawl this content on separate URLs and consider it duplicate content. By using canonicalization for […]

Read more
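As a minimal sketch (example.com is a placeholder), the canonical version is declared with a link element in the head of every duplicate page, here consolidating a sorted product listing onto its default URL:

    <!-- On https://example.com/shoes?sort=price and any other variant -->
    <link rel="canonical" href="https://example.com/shoes">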

Image SEO: How to Optimise Your Images for SEO

When you think of optimising your website for SEO, you think of keyword-rich content and a top-class user experience. While these factors are the basis of a well-optimised site, it is shocking how few webmasters optimise their images for SEO. SEO-friendly images aren’t only for the benefit of search engines, but […]

Read more
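As a brief illustration (the filename and alt text are invented examples), an SEO-friendly image pairs a descriptive filename with a descriptive alt attribute, and explicit dimensions help the browser lay the page out without shifting content:

    <img src="/images/red-leather-hiking-boots.jpg"
         alt="Red leather hiking boots on a mountain trail"
         width="800" height="600">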

What is Duplicate Content & How To Avoid It!

To offer the best search experience to a user, Google tries its best to filter out as much duplicate content as possible. Why? Because we aren’t interested in seeing multiple search result pages with similar or identical text. Duplicate content is similar or identical content which appears in multiple locations on the Internet […]

Read more
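One common source of duplicate content is the same site resolving on both the www and non-www host. A minimal sketch of one possible fix, assuming an Apache server and using example.com as a placeholder, is a site-wide 301 redirect in .htaccess:

    # Redirect every non-www request to the www version of the same URL
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]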

What is a Title Tag & How To Optimise Your Title For SEO

A title tag is most commonly used to give a name to a document or webpage. Not only does it give the reader a brief insight into what the page will contain, but the title tag also appears in search engines such as Google on the search results pages. Title tags can be optimised for search engines […]

Read more
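As a minimal sketch (the keyword and brand name are invented), the title tag lives in the page's head; a common convention is to lead with the page's main keyword and end with the brand:

    <head>
      <title>Title Tag Optimisation for SEO | Example Brand</title>
    </head>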

Robots.txt Vs Meta Robots Tag: Which is Best?

The purpose of a robots.txt file, which implements the robots exclusion protocol, is to give webmasters control over which pages robots (commonly called spiders) can crawl on their site. A typical robots.txt file, placed at the root of your site, should include your sitemap’s URL and any other directives you wish to put in […]

Read more
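As a side-by-side sketch of the two approaches (the domain and paths are placeholders): a robots.txt at the site root controls crawling, while a meta robots tag in an individual page's head controls indexing:

    # robots.txt — stop all spiders crawling the /admin/ section
    User-agent: *
    Disallow: /admin/
    Sitemap: https://www.example.com/sitemap.xml

    <!-- Meta robots tag on a single page: allow crawling, block indexing -->
    <meta name="robots" content="noindex, follow">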