Duplicate Content: Why Once Is Enough


Search engines, like dinner guests, have limited use for repetition. No one wants to hear the same story told twice, and search engines are no more willing to rank duplicate content favorably.

What exactly is duplicate content?

Duplicate content is content that exactly or substantially mirrors other content, whether it appears on the same domain or on a different one. Most of the time, duplicate content is not meant to deceive viewers, but it often dilutes the authenticity of an online business.

Why duplicate content is bad news

In some cases, content is deliberately duplicated across domains to manipulate search engine ranking and garner more traffic. Deceptive practices can result in poor user experience and turn off potential customers or clients. When a visitor sees the same content repeated within a set of search results, they are likely to leave the site and never return.

Google does a thorough job of indexing pages with distinct information. If Google perceives duplicate content intended to manipulate rankings and deceive users, it will make corrective adjustments to the indexing and ranking of the sites involved. As a result, the ranking of both sites may suffer -- or worse, both sites could be removed from Google's index, which means they will not appear in search results at all.

Things you can do to prevent duplicate content on your site

Building a website takes time, energy, and financial resources. You want people and search engines to find your site easily. Once your website is found, you'd like viewers to find it useful and informative. Showing duplicate content on multiple pages diminishes a user's experience.

Fortunately, there are specific things you can do when developing your website to avoid creating duplicate content.

  • Use top-level domains whenever possible to handle country-specific content. Many readers will know that http://www.mocksite.uk contains United Kingdom-centric content, but they may not recognize the same about URLs like http://www.mocksite.com/uk or http://uk.mocksite.com. Canonicalization is the process of picking the best URL when several choices are available.
  • Keep your internal linking consistent. Don't use multiple variations to link to the same place -- for example, http://www.mocksite.com/page/, http://www.mocksite.com/page, and http://www.mocksite.com/page/index.htm.
  • If your site has a printer-friendly version of a page that is identical to the viewable version of the same page, you can keep the duplicate out of search results by adding a noindex meta tag to the printer version (see the sketch after this list).
  • Use 301 redirects if you've restructured your site. A 301 redirect, set up in your .htaccess file on Apache servers, permanently forwards users, Googlebot, and other spiders to the page where you really want them (an example snippet follows this list).
  • If you syndicate your content on other sites, do it carefully. Google will always show the version it finds to be the most appropriate for users in each given search -- regardless of whether it's your preferred version. Make sure each site syndicating your content includes a link back to your original article. And remember, you can always request that anyone using your syndicated material use the noindex meta tag to prevent search engines from indexing their version of the content, too.
  • Use Google Webmaster Tools to define how your site is indexed. You can tell Google your preferred domain (for example, http://www.mocksite.com or http://mocksite.com).
  • Don't repeat lengthy boilerplate copyright text at the bottom of every page. Instead, include a brief summary of the content you wish to display, then add a link to a specific page with more details. Also, try Google's parameter handling tool to see how Google treats URL parameters.
  • No one likes to land on an empty page, so try to avoid placeholders where possible. Don't publish incomplete pages or pages under construction. If you do create placeholder pages, use a noindex meta tag to block these pages from being indexed.
  • Make sure you're familiar with how content is displayed on your website. Blogs, forums, and related systems often show the same content in multiple formats. For example, a blog post may appear on your home page, on an archive page, or on a page with other entries using the same label.
  • Lastly, if you have many pages that are similar, consider developing each page uniquely, or consolidate the similar pages into one. For example, if you have a travel site with separate pages for Manhattan and Brooklyn, but much of the same information appears on both, you could merge the two pages, or expand each page with unique content relevant to each borough.
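
To make the noindex tip above concrete, here is a minimal sketch of a printer-friendly page that opts out of indexing. The file and page content are hypothetical; only the meta tag in the head matters.

   <!-- Hypothetical printer-friendly copy of a page on mocksite.com. -->
   <!-- The "noindex" value tells search engine crawlers not to add this
        duplicate version of the page to their index. -->
   <html>
     <head>
       <title>Mock Article (printer-friendly)</title>
       <meta name="robots" content="noindex">
     </head>
     <body>
       ...same article text as the viewable version...
     </body>
   </html>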
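
Likewise, here is a rough sketch of the 301 redirects mentioned above, written as Apache .htaccess rules (the paths and hostnames are made-up examples reusing the mocksite domain):

   # Permanently forward a retired URL to its replacement (uses mod_alias).
   Redirect 301 /old-page.htm http://www.mocksite.com/new-page/

   # Permanently forward the bare domain to the preferred www host (uses mod_rewrite).
   RewriteEngine On
   RewriteCond %{HTTP_HOST} ^mocksite\.com$ [NC]
   RewriteRule ^(.*)$ http://www.mocksite.com/$1 [R=301,L]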

Google no longer recommends blocking crawler access to duplicate content on your website with a robots.txt file. If search engines can't crawl pages with duplicate content, they can't automatically detect that these URLs point to the same content and will therefore have to treat them as separate, unique pages. A better solution is to allow search engines to crawl these URLs, but mark them as duplicate pages by using the rel="canonical" link element, the URL parameter handling tool, or 301 redirects. In cases where duplicate content leads to too much crawling of your website, you can adjust the crawl rate settings in Webmaster Tools.
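
As a rough illustration of the rel="canonical" link element just mentioned (the URLs reuse the mocksite examples from above), each duplicate URL would carry a single line in its head pointing at the preferred version:

   <!-- Placed in the <head> of duplicate URLs such as
        http://www.mocksite.com/page/index.htm; it tells search engines
        which single version of the page to index and rank. -->
   <link rel="canonical" href="http://www.mocksite.com/page">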

Remember, duplicate content on a website is not grounds for punitive action from Google unless it appears that the duplicate content is intended to deceive or manipulate search engine results.



