
Analysis of pages that have lost search traffic: identify problems and bring visitors back

Have you ever completely lost Google traffic on a project into which you have poured a lot of sweat and blood? Not yet?

Let me reassure you right away: although a sharp drop in traffic is extremely unpleasant both emotionally and financially, you can almost always deal with it. There are few incurable cases; at worst, you can take the mistakes into account and start over on a new domain. Moreover, a sharp drop in traffic is usually caused by reasons that lie on the surface. Almost any SEO specialist can find them, and there are plenty of articles and reports on the topic.

 

Hidden threat: pages that have lost search traffic

So today we will talk about something else: not emergency treatment but prevention. Any project can lose traffic at the micro level, when individual pages stop bringing visitors, while total traffic keeps growing steadily thanks to other documents.

Most large sites (a couple of hundred URLs or more) gradually accumulate problem pages. You definitely need to find them and understand what happened. Why "definitely"? It's simple:

– firstly, it is usually much easier to win back lost traffic than to attract new traffic;

– secondly, minor troubles with individual pages often point to more serious threats to the site, which means they help you diagnose problems at an early stage and avoid large losses;

– thirdly, it keeps you in touch with the niche and helps you make better decisions about the site's development.

Generally speaking, tracking problem pages of various types is a routine task for an SEO specialist. It belongs in the standard set of parameters that must be monitored.

 

How to find the URLs that stopped bringing visitors from search

When I was seriously involved in SEO consulting a few years ago, I was surprised by the lack of tools on the market that would automate this task, so I had to write my own script that worked with Metrica statistics. If you use my service, it will do everything for you: the second section of the audit is called "Lost traffic" and contains lists of pages that have lost traffic from Google. Just download them and proceed to analyzing the problems (see below for how to find and eliminate the causes of the drop).

Of course, there is nothing complicated about building these lists (the service isn't called "no tambourine required" for nothing). You can make them manually, although it is a bit of a chore. Here are the instructions.

 

1. Download the entry pages report for the last month

The "Entry pages" report is a standard one in Metrica:

 

2. In the same report, change the period to a longer one and download it again

For example, take 12 months. Now we need to filter out pages that receive too little traffic anyway. After all, if attendance drops from 2 visitors per year to 0 per month, that still means little; but if 200 people came to a specific URL from Google over the year and 0 came in the last month, then something is wrong.

Depending on the size of the site, the minimum annual traffic worth taking into account will differ. For a small project you can use 25 visits, for an average one 50. Specify this limit in the report settings:
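If you prefer to do this filtering with a script rather than in the report interface, here is a minimal sketch of the logic. It assumes you have exported the yearly entry-pages report to a CSV file; the file name, the "url" and "visits" column names, and the threshold are illustrative placeholders, not part of any particular export format.

```python
import csv

MIN_ANNUAL_VISITS = 50  # 25 for a small project, 50 for an average one

def pages_with_enough_traffic(path, min_visits=MIN_ANNUAL_VISITS):
    """Return the set of URLs whose annual visits reach the threshold."""
    urls = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Column names are assumptions about the exported report.
            if int(row["visits"]) >= min_visits:
                urls.add(row["url"])
    return urls

yearly_urls = pages_with_enough_traffic("entry_pages_year.csv")
print(f"{len(yearly_urls)} pages passed the traffic threshold")
```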

 

3. Compare the two lists

We need to find the URLs that appear in the "last 12 months" list but are missing from the "last month" list.

You can do this in Excel (search the web for how). Personally, I am not comfortable working in it and don't consider Excel a universal working tool, so I made a special online service that solves the problem in one click (free, no registration). Simply paste one list into the right field, the other into the left field, and press the "Find difference" button.
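If you would rather not use Excel or an online service, the same comparison is a few lines of Python. A minimal sketch, assuming you have saved the two URL lists as plain text files with one URL per line; the file names are placeholders.

```python
# Find URLs present in the yearly list but absent from the monthly list.
def read_urls(path):
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

year_urls = read_urls("urls_year.txt")    # entry pages over 12 months
month_urls = read_urls("urls_month.txt")  # entry pages over the last month

lost_traffic = sorted(year_urls - month_urls)
for url in lost_traffic:
    print(url)
```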

Now repeat the procedure for any other search engine you track. So, we have lists of URLs in hand that have lost search visitors. What do we do next?

 

How to find the reason the traffic disappeared and win it back

Of course, there can be many reasons for losing traffic, and not all of them indicate problems or require action. Let's consider the most common options, structuring the analysis so that the simplest checks come first (why do the complicated work if it turns out not to be needed?).

So, hypotheses:

 

Change of page address or technical error

To begin, check the response code the server returns for the page. This can be done, for example, in Screaming Frog (the free version is sufficient); a script sketch also follows after the list of codes below:

You can also use the trial version of the parser from Visual SEO Studio, which I wrote about in a post about free programs for site analysis.

What codes can we get? In most cases it will be one of these:

– 200 (page available): keep working and consider the other hypotheses.

– 301/302 (redirect): the page address was probably changed intentionally. It is worth checking that everything is in order on the page the redirect points to.

– 404 (page not found): a very common case. Since the page once brought good traffic, the deletion most likely happened by mistake. Pages returning 404 cannot stay in the search engines' index for long or bring traffic, so it is worth restoring the page.

– 500/504 (server error): a clear sign of technical problems. You need to figure out the cause and make sure the document returns code 200 with the correct content.
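If you prefer to batch-check response codes without a crawler, a small script will do. A sketch assuming the requests library is installed and that lost_traffic.txt holds the URLs from the previous step, one per line.

```python
import requests

def check_status(urls, timeout=10):
    """Return {url: HTTP status code or error string}."""
    results = {}
    for url in urls:
        try:
            # HEAD is enough for the status code; some servers require GET instead.
            resp = requests.head(url, allow_redirects=False, timeout=timeout)
            results[url] = resp.status_code
        except requests.RequestException as exc:
            results[url] = f"error: {exc}"
    return results

with open("lost_traffic.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

for url, status in check_status(urls).items():
    print(status, url)
```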

 

A ban on indexing

Another common reason is an accidental ban on indexing the page. Go back to the program you used to check the server response and look for the following in the results table:

– the results of the robots.txt check (in Screaming Frog, the "Status" field);

– the meta robots check results;

– the canonical link.

 

If the page:

– is blocked in robots.txt,

– or has a noindex meta tag,

– or has a canonical address different from its own address,

then the problem is most likely that the site settings prevent the document from being indexed. You need to find out whether the ban was intentional or accidental. If it was accidental, change the settings so that search engines can add the problematic URL to the search results.
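These three checks can also be scripted. A rough sketch, assuming requests is installed; the meta robots and canonical checks use simple regular expressions rather than a full HTML parser, which is enough for a quick diagnosis but not bulletproof (for example, it assumes the name/rel attribute comes before content/href).

```python
import re
import requests
from urllib.parse import urlparse, urljoin
from urllib.robotparser import RobotFileParser

def indexing_report(url):
    """Check robots.txt, the meta robots tag and the canonical link for one URL."""
    parsed = urlparse(url)
    robots = RobotFileParser(urljoin(f"{parsed.scheme}://{parsed.netloc}", "/robots.txt"))
    robots.read()
    allowed = robots.can_fetch("*", url)

    html = requests.get(url, timeout=10).text
    noindex = bool(re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex', html, re.I))
    canonical_match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html, re.I)
    canonical = canonical_match.group(1) if canonical_match else None

    return {
        "allowed_by_robots_txt": allowed,
        "meta_noindex": noindex,
        "canonical": canonical,
        "canonical_differs": canonical is not None
                             and canonical.rstrip("/") != url.rstrip("/"),
    }

# Hypothetical example URL.
print(indexing_report("https://example.com/some-page/"))
```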

 

Irrelevant or poor-quality content

If you still haven't found the reason the page stopped bringing visitors, get ready for a slightly more complex analysis. You need to understand whether the content on the page is capable of bringing visitors at all. It is impossible to give an unambiguous instruction here: a lot depends on the niche and the specifics of the site.

 

Here are some general directions for analysis:

1. Does the page contain content that satisfies some user need (learning new information, ordering a product or service, etc.)?

2. Is there enough content to adequately answer a searcher's query?

3. Is the page free of duplication with other pages (even partial duplication counts)?

4. Are the basic technical optimization requirements met (the text content is present directly in the page's HTML code, and the page has a correct title and h1)? A quick check is sketched below the list.
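For point 4, the quickest sanity check is to look at the raw HTML the server returns, before any JavaScript rendering. A minimal sketch assuming requests is installed; it only verifies that a title and an h1 are present in the source, nothing more.

```python
import re
import requests

def basic_onpage_check(url):
    """Confirm that <title> and <h1> exist in the raw HTML of the page."""
    html = requests.get(url, timeout=10).text
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    h1 = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.I | re.S)
    return {
        "title": title.group(1).strip() if title else None,
        "h1": re.sub(r"<[^>]+>", "", h1.group(1)).strip() if h1 else None,
    }

# Hypothetical example URL.
print(basic_onpage_check("https://example.com/some-page/"))
```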

Finally, the page itself may be in perfect order but contain content that nobody is currently searching for. Say it is devoted to New Year salads (searched for once a year) or to old news (it once collected event traffic and is now simply out of date). Usually there is no point in doing anything with such pages. On the other hand, if genuinely high-quality content is just gathering dust in the archive, you can try to pick more relevant keywords for it and slightly shift the emphasis toward them.

For example: we have a news item with the headline "Abnormal tick activity in 2012", and inside it an expert comment on how to protect yourself from bites. We change the headline to "Preventing tick bites: how to protect yourself during an encephalitis outbreak", make sure the tips are laid out conveniently on the page, and steadily receive traffic for the informational keyword.

 

Missing from the index

Even when the technical settings do not interfere with indexing and the page contains high-quality content, its presence in the index is not guaranteed. So the last step is to check whether the page is in the search results:

on Google, via the info: operator (put the URL being checked after the operator).

If the page is not in the index, we make sure the robot can easily find it:

– add links from other pages of the site;

– add links from Twitter (see the instructions on speeding up indexing with its help);

– submit the page for recrawling via the URL inspection tool in Google Search Console (https://www.google.com/webmasters/tools/submit-url);

– add the page to the sitemap (a minimal generation sketch follows below).
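For the last point, if the lost pages are missing from your sitemap, you can generate a simple sitemap file that includes them. A minimal sketch of the standard sitemap XML format; the input file name and output path are placeholders, and a real site would normally merge this into its existing sitemap generation.

```python
from xml.sax.saxutils import escape

def write_sitemap(urls, path="sitemap.xml"):
    """Write a minimal sitemap.xml listing the given URLs."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        lines.append(f"  <url><loc>{escape(url)}</loc></url>")
    lines.append("</urlset>")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

with open("lost_traffic.txt", encoding="utf-8") as f:
    write_sitemap([line.strip() for line in f if line.strip()])
```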

Of course, there can be more exotic problems that do not fit into this scheme. If you run into one, you can write to me and I will try to figure out the situation.

 

Instead of a conclusion: once again, why go through all this trouble?

As you can see, the task requires a lot of effort, especially if performed completely manually. Although most of the operations are simple, a detailed analysis takes a fair amount of time. Is the game worth the candle?

Definitely worth it!

Do you think the shark photo at the beginning of this article is there for nothing? Analyzing problem pages is one of the ways to catch serious issues with a site's technical side. Tips from this tool have uncovered, for example:

– a 500-page section of the site blocked in robots.txt;

– an incorrectly implemented change of site structure (the old addresses returned 404 instead of redirecting, so all the accumulated ranking signals were wasted) and, as a result, a serious drop in positions;

– broken code (for a series of URLs, the pages showed technical PHP error messages instead of content).

But even the small fish, that is, returning traffic to individual pages, is a worthwhile goal with a good cost/benefit ratio.

Happy fishing, and success to your sites!