November 21, 2022

Why is Duplicate Content Bad for SEO?

If you’re interested in improving SEO on your website, you may have come across the term “duplicate content” – but do you know what it actually means, or how it can affect the way your web pages appear in search results? In this article, Wildcat Digital demystifies duplicate content for SEO and offers a few simple solutions that could help you to avoid any negative repercussions.

So, why is duplicate content bad for SEO? Whilst there are no official penalties for duplicate content, it can still have negative SEO repercussions. Duplication confuses search engines – they can’t tell which instance of the same content is most relevant for a given search query, which can lead to decreased rankings, traffic and conversions.

Read on to learn more about duplicate content, how it impacts SEO, if Google penalises duplicate content, and how you can fix it. 


Why is Duplicate Content Bad?

Duplicate content is exactly what it sounds like – content that appears on the internet more than once; either on the same domain, a subdomain, or across the internet as a whole. From a user perspective, duplicate content is repetitive and may not be overly useful, but from an SEO perspective, it can cause ranking issues as “appreciably similar” content can make it difficult for Google to decide which version is more relevant for a given search query.

As such, site owners may experience a drop in rankings and traffic, which stems from three main problems:

1. Search engines don’t know which version (or versions) of the content to include in, or exclude from, their index.
2. Search engines don’t know whether to direct ranking signals – such as trust, authority and link equity – to one page or to split them across the duplicate versions.
3. Search engines don’t know which version to rank for a given search query, so the wrong or less useful version may end up appearing in results.

Learn more about Google ranking factors in our recent blog where we discuss an array of factors that have the potential to impact SEO, both positively and negatively.

Types of Duplicate Content 

Now we know why duplicate content is bad for SEO, it’s time to learn more about the different types of duplicate content. Some are more obvious and easy to spot than others, but it’s important to check for them regardless (or have our SEO experts check for you!).

Scraped or Copied Content

Scraped and copied content is blatant plagiarism. Sometimes small chunks of text are copied, whilst other times, entire pages (or even multiple pages) are scraped. Plagiarism is, of course, bad on a legal and moral level, but it’s also bad for both SEO and user experience, as explained above.

Unsourced Quotes, Facts and Figures

Similarly, using unsourced quotes, facts and figures is plagiarism and, depending on how it is presented, may be considered duplicate content. If using such content, always clearly source your material and endeavour to rewrite the text in your own words where possible. 

URL Variations

URL variations – such as parameters, tracking and analytics codes, session IDs and printer-friendly versions of webpages – all have the potential to cause duplicate content issues, as they create multiple, almost identical URLs that serve essentially the same content.
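
For example (using the placeholder domain example.com), all of the following addresses could return exactly the same product page, yet each one is technically a different URL that search engines may treat as a separate – and duplicate – page:

    https://example.com/products/red-shoes
    https://example.com/products/red-shoes?utm_source=newsletter
    https://example.com/products/red-shoes?sessionid=12345
    https://example.com/products/red-shoes?print=true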

Often, this is caused by a misunderstanding of how URLs actually work, whether on the part of whoever manages your site’s content or of a developer who doesn’t prioritise SEO – the two often quite literally speak a different language!

HTTP vs. HTTPS & WWW vs. non-WWW Pages

If a website has separate versions of webpages (www.page.com vs page.com – with and without the www. prefix), with the same content, these are effectively duplicate pages and therefore duplicate content. 

The same applies to sites that serve pages over both http:// and https://. For reference, HTTPS is simply HTTP with added encryption and verification (via an SSL/TLS certificate), making pages more secure – ideal for eCommerce sites or any site that captures personal information.
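
To illustrate (again using a placeholder example.com domain), the following four addresses can all serve exactly the same homepage, and without redirects or canonical tags in place, search engines may treat each one as a separate, duplicate page:

    http://example.com/
    http://www.example.com/
    https://example.com/
    https://www.example.com/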

How Much Duplication is Ok?

Major search engines haven’t published an exact definition of duplicate content, or a threshold for how similar two pages need to be before they are treated as duplicates (which only adds to the list of problems we outlined earlier). However, SEO experts across the industry have attempted to put a figure on it – a common rule of thumb is that copy should be at least 30% different from other content to avoid being treated as duplicate.

A simple Google search for “duplicate content checker” or “plagiarism checker” should provide you with a quick and easy way to ensure that your content isn’t duplicated.

Does Google Penalise Duplicate Content?

Strictly speaking, Google doesn’t penalise sites for duplicate content – you won’t see a notification in Google Search Console telling you that you have been penalised for duplication. Even Google stresses that there is no such thing as a duplicate content penalty. 

However, that isn’t to say that there aren’t negative repercussions associated with duplicate content. As we mentioned earlier, duplicate content confuses the algorithm and forces it to decide which iteration of the same content to rank – often the wrong, or plagiarised, version.

How to Fix Duplicate Content for SEO

In theory, once you’ve found duplicate content it’s easy to fix – you just need to make the content unique, right? In reality, it can be more complicated than it looks and, if you’re completely new to the world of SEO, it may be beneficial to enlist the help of experienced SEO Professionals like our team at Wildcat Digital. 

Rewrite and Replace with Unique Content

If you’ve checked your site for duplicate content (excluding URLs), and found instances of duplication, it is relatively simple to fix this. You just need to rewrite your content. This may take time and effort, but having unique content that is directly relevant to your business on your site is not only useful for SEO, but for user experience and for encouraging conversions.

Canonicalisation

Fixing duplicate URLs is a bit more difficult and requires canonicalisation – telling search engines which version of a page is the master (canonical) version that should appear in search results. There are a few ways to do this.

301 Redirects

301 redirects are often the best way to fix URL duplication – they literally redirect the duplicated page to the one that you want to appear in search results. This stops the duplicate pages from competing with each other and also creates a stronger relevancy and popularity signal that will positively impact the master page.
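
How you set up a 301 redirect depends on how your site is built – many CMS platforms offer redirect plugins, and developers can configure redirects at server level. As a rough sketch, on an Apache server a rule like the following in the site’s .htaccess file would permanently redirect a duplicate URL to the preferred version (the paths and domain shown are purely illustrative):

    # Send visitors and search engines from the duplicate URL to the master page with a 301 (permanent) status
    Redirect 301 /duplicate-page/ https://www.example.com/preferred-page/

Because the redirect returns a “permanent” status code, search engines will, over time, consolidate the duplicate URL’s signals into the destination page.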

Rel=”Canonical”

Rel=”canonical” tags tell search engines to treat a page as a duplicate of another, but to attribute any value from links, content, metrics and “ranking power” to the specified canonical URL.
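
In practice, this is a small snippet of HTML placed in the <head> of the duplicate page, pointing at the version you want to appear in search results (the URL below is purely illustrative):

    <link rel="canonical" href="https://www.example.com/preferred-page/" />

Unlike a 301 redirect, the duplicate page remains accessible to visitors – the tag simply tells search engines which URL should receive the credit.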

Meta Robots Noindex

Noindex tags work by telling search engine crawlers to exclude the page from their index, ensuring that it cannot appear in search results and therefore cannot compete with the master page. Noindexing is particularly useful in cases of pagination.
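
The tag itself is a single line of HTML placed in the <head> of the page you want kept out of the index:

    <meta name="robots" content="noindex">

One caveat: search engines can only obey the tag if they are allowed to crawl the page, so don’t also block the page in robots.txt or they may never see the noindex instruction.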

Google Search Console Preferred Domain and Parameter Handling

In cases of www vs. non-www or http:// vs. https:// duplication, older versions of Google Search Console allowed you to set a preferred domain and to specify whether Googlebot should crawl your site’s various URL parameters differently. Google has since retired both the preferred domain setting and the URL Parameters tool, so canonical tags and redirects are now the more reliable way to manage these duplication issues.

In any case, be aware that Search Console settings only ever apply to Google and have no effect on other search engines.

Get Help from the SEO Experts

If after reading through this article, you’re still unsure about how to deal with duplicate content for SEO on your site, get in touch with the Content SEO experts at Wildcat Digital today. As part of your SEO campaign with us, we’ll audit your site for such issues as duplicate content and work to provide a fix quickly and easily with no hassle on your side.

We’ll also perform a number of other audits on your website to see how it currently performs and to identify any high-priority tasks that could have a quick, positive impact on your Google rankings.

Learn more about our SEO services today, or get in touch with us to arrange a free consultation to see what we can do for you.

Post by

Chloe Robinson
