Posts posted by Tempo

  1. I found this content in my language and it seems relevant to me:

    - Duplicate content
    Risk: Search engines hate duplicate content because it prevents them from knowing which of the identical pages to index and rank. In terms of linking, he adds that "there is a loss: if ten links point to two identical pages, it is less rewarding than if they all point to a single page with unique content."

    Detection and tools: To detect duplicate content manually, take a short excerpt from the page and search for it in quotation marks on Google ("your excerpt") to see whether it appears on several pages. "Software like Killer Duplicate pushes the tests further by automating the verification," says Aurélien Bardon, founder of the Aseox agency.

    Solution: Either rewrite or delete the content of the duplicate page, or use a canonical tag to deindex the page that matters least for SEO (a minimal canonical check is sketched after this list).

    - Slow page loading
    Risk: Loading time is a ranking criterion for Google, especially since the Speed Update of July 2018, which penalizes sites that are too slow on mobile.

     

    - The bot could not crawl the page
    Risk: If Googlebot cannot crawl a URL, it can never be indexed. "If the URL generated significant traffic, then after several failed crawls Google will degrade its positioning ... often very strongly," says the founder of Aseox.

    Detection: The site:exampledomain.com command returns a sample of indexed URLs. Searching for the exact URL on Google also lets you check whether it is indexed; if a URL is indexed, it has been crawled. One common cause of a crawl failure, a robots.txt rule blocking Googlebot, can be checked with the sketch after this list.

     

    - Title tag / Description empty, missing or duplicate
    Risk: The title tag sits in the HTML head of the page and is displayed in the browser tab and as the clickable title on the search engine's SERP. It is also a source of information about the page's content for crawlers.

    Detection and tools: Since Google's new Search Console no longer surfaces this information, you need to crawl your site with tools like Screaming Frog, MyRankingMetrics, OnCrawl or Botify (a small title/description check is sketched after this list).

     

    - Bad pages in the XML sitemap
    Risk: Errors in XML sitemaps have limited impact; their purpose is simply to help indexing robots find all the important pages of a website, including recent ones.

    Detection: In Search Console, the "Sitemaps" tab helps you understand the errors Google encountered. A direct sitemap check is sketched after this list.
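
    The canonical solution mentioned under "Duplicate content" can be spot-checked in a few lines. A minimal sketch, assuming Python 3 and the third-party requests package; the URL is a placeholder, not taken from the article:

    ```python
    # Minimal sketch: report the rel="canonical" target declared by a page.
    # Two duplicate pages should point their canonical at the same URL so
    # that links consolidate on a single page.
    from html.parser import HTMLParser

    import requests  # third-party: pip install requests


    class CanonicalParser(HTMLParser):
        """Collects the href of a <link rel="canonical"> tag, if present."""

        def __init__(self):
            super().__init__()
            self.canonical = None

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "link" and attrs.get("rel", "").lower() == "canonical":
                self.canonical = attrs.get("href")


    def canonical_of(url):
        """Fetch the page and return its declared canonical URL, or None."""
        parser = CanonicalParser()
        parser.feed(requests.get(url, timeout=10).text)
        return parser.canonical


    if __name__ == "__main__":
        # Placeholder URL.
        page = "https://example.com/some-page"
        print(page, "->", canonical_of(page))
    ```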
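
    For "The bot could not crawl the page", one common cause (though not the only one) is a robots.txt rule blocking Googlebot. A minimal sketch using only the Python standard library; the URL is again a placeholder:

    ```python
    # Minimal sketch: check whether robots.txt allows Googlebot to fetch a URL.
    # A disallow rule is one common reason a bot cannot crawl a page; server
    # errors are another and are not covered here.
    from urllib import robotparser
    from urllib.parse import urlparse


    def crawlable_by_googlebot(url):
        parts = urlparse(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()  # downloads and parses robots.txt
        return rp.can_fetch("Googlebot", url)


    if __name__ == "__main__":
        # Placeholder URL.
        url = "https://example.com/some-page"
        print(url, "crawlable by Googlebot:", crawlable_by_googlebot(url))
    ```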
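
    For "Title tag / Description empty, missing or duplicate", crawlers like Screaming Frog run this check site-wide; as a small illustration of the same idea, here is a sketch (Python 3 plus requests) that flags empty or duplicate titles and meta descriptions on a handful of URLs. The URL list is a placeholder:

    ```python
    # Minimal sketch: flag empty, missing or duplicate <title> and meta
    # description values across a small list of URLs.
    from collections import defaultdict
    from html.parser import HTMLParser

    import requests  # third-party: pip install requests


    class TitleDescriptionParser(HTMLParser):
        """Extracts the <title> text and the meta description of a page."""

        def __init__(self):
            super().__init__()
            self.title = ""
            self.description = ""
            self._in_title = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "title":
                self._in_title = True
            elif tag == "meta" and attrs.get("name", "").lower() == "description":
                self.description = attrs.get("content") or ""

        def handle_endtag(self, tag):
            if tag == "title":
                self._in_title = False

        def handle_data(self, data):
            if self._in_title:
                self.title += data


    def audit(urls):
        titles = defaultdict(list)        # title text -> URLs using it
        descriptions = defaultdict(list)  # description text -> URLs using it
        for url in urls:
            parser = TitleDescriptionParser()
            parser.feed(requests.get(url, timeout=10).text)
            title = parser.title.strip()
            description = parser.description.strip()
            if not title:
                print(f"{url}: empty or missing <title>")
            if not description:
                print(f"{url}: empty or missing meta description")
            titles[title].append(url)
            descriptions[description].append(url)
        for text, pages in titles.items():
            if text and len(pages) > 1:
                print(f"duplicate title {text!r} on: {', '.join(pages)}")
        for text, pages in descriptions.items():
            if text and len(pages) > 1:
                print(f"duplicate description on: {', '.join(pages)}")


    if __name__ == "__main__":
        # Placeholder URLs.
        audit(["https://example.com/", "https://example.com/about"])
    ```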
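
    For "Bad pages in the XML sitemap", besides the "Sitemaps" tab in Search Console, a sitemap can be checked directly. A minimal sketch (Python 3 plus requests, standard-library XML parsing) that lists sitemap entries not answering with HTTP 200; the sitemap URL is a placeholder:

    ```python
    # Minimal sketch: fetch an XML sitemap and report URLs that do not
    # answer with HTTP 200 (broken or redirected entries are "bad pages").
    import xml.etree.ElementTree as ET

    import requests  # third-party: pip install requests

    SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"


    def check_sitemap(sitemap_url):
        root = ET.fromstring(requests.get(sitemap_url, timeout=10).content)
        for loc in root.iter(f"{SITEMAP_NS}loc"):
            url = loc.text.strip()
            status = requests.head(url, timeout=10, allow_redirects=False).status_code
            if status != 200:
                print(f"{status} {url}")


    if __name__ == "__main__":
        # Placeholder sitemap URL.
        check_sitemap("https://example.com/sitemap.xml")
    ```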

  2. Happy to know there is a new version. Thanks @ClicShopping.
    Yes, the map can be useful for knowing where the customer comes from.
    I like the dashboard because it gives you a good and quick view of your e-commerce activities.
