Disavowed: Secrets of Google’s Most Mysterious Tool


Posted by Cyrus Shepard

To many webmasters, Google’s Disavow Tool seems a lifesaver. If you’ve suffered a Google penalty or been plagued by shady link building, simply upload a file of backlinks you want to disavow, and BOOM – you’re back in good graces. Traffic city!

Or nothing happens at all.

Few Google products have produced more fear, rumors, and speculation. No one outside Google knows exactly how it works, and fewer still understand how to succeed with it. To better understand it, I used the tool myself to disavow thousands of links, and I talked with dozens of SEOs who used it in attempts to recover from Google penalties.

How Dangerous Is Disavow?

When you first log into the Disavow Tool, Google does everything in its power, including some scary messaging, to dissuade you from actually using it.

Do Not Disavow

What’s the worst that could happen?

To find out how much damage I could do, I performed an experiment: Disavowing every link pointing to my website. Over 35,000 of them.

In this case, no reconsideration request was filed. Would the disavow tool work on its own?

Experiment

Disavow 35,000 Links to a Single Website

URL: http://cyrusshepard.com

Process:

  1. Download all links from Google Webmaster Tools
  2. Upload 35,000 properly formatted links to Google's Disavow Tool
  3. Wait 2 Months

Results:


After 2 months, nothing happened. No drop in traffic.

The evidence suggests one of three possibilities:

  1. You must file a reconsideration request after disavowing your links, or…
  2. The disavow tool has built-in safeguards to protect you from disavowing good links, or…
  3. It takes longer than 2 months for Google to process all the links.

We’ve heard conflicting accounts from Googlers as to whether the tool works automatically or whether you must file a reconsideration request for it to work. The data implies the latter, although some SEOs say they’ve seen results from using the Disavow without filing a reconsideration request.

Google also states they reserve the right to ignore your disavowed links if they think you made a mistake, much like rel=”canonical”.

Best Advice: Safeguards or not, you might still shoot yourself in the foot. Be careful disavowing links!

Can You Use Disavow for Penguin?

Can you use the Disavow Tool if you haven't received a manual penalty? For example, will it work for Penguin?

The answer: maybe.

Here's a reminder: Google updates like Panda and Penguin are part of Google's overall algorithm. They automatically affect your rankings without direct human intervention. On the other hand, a manual penalty is often applied when you violate Google's guidelines. These can be both applied and lifted manually by Google's Webspam team.

Google representatives, including Matt Cutts, have gone on record to say the Disavow Tool could be used to help if you’ve been hit by Penguin (an algorithmic action), but they also suggest that this applies to links that violate Google’s Quality Guidelines.

Penguin and Google’s Unnatural Link Warnings often go hand in hand. So if you were hit by one, you are often hit by the other. Conversely, certain SEOs have claimed benefits from using the disavow on sites that were not penalized.

Best Advice: If you’ve been hit with a manual penalty, you need to file a reconsideration request if using the Disavow Tool. If you haven't been manually penalized, the benefits of using the tool are inconclusive.

Pro Tips for Reconsideration Requests

1. Remove First, Disavow Last

Google wants you to remove links first. Disavow is a last resort.

100% accuracy isn’t required, but effort counts.

Google’s Webspam team keeps a historical index of your backlink profile, so that when you file a reconsideration request they can see the links you’ve worked to remove.

2. Gather Your Links

You can use any source you want, but Google recommends downloading your Latest Links report directly from Webmaster Tools.

3. Find the Bad Links

You can do this two ways, with either automatic tools or manual analysis. Realistically, you should use both.

Best Manual Analysis Resource:

Best Link Removal Research Tools:

Link Removal Resources
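If you want a quick first pass before the manual review, a short script can group your exported backlinks by linking domain; domains that account for an outsized share of your links (sitewide footers, blog networks) usually deserve a closer look. This is only a rough Python sketch, and links.txt is a hypothetical filename for whatever export you gathered in step 2:

  from collections import Counter
  from urllib.parse import urlparse

  # links.txt: one backlink URL per line (hypothetical filename for your Webmaster Tools export)
  with open("links.txt", encoding="utf-8") as f:
      urls = [line.strip() for line in f if line.strip()]

  # Count how many backlinks come from each linking domain
  domains = Counter(urlparse(url).netloc.lower() for url in urls)

  # Print the 50 most common linking domains for manual review
  for domain, count in domains.most_common(50):
      print(count, domain)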

4. Outreach, Outreach, Outreach

Next, you’re going to send emails to get those links removed. Lots of emails.

Resources for Link Removal Outreach:

5. Trust in Google Docs

When you document your efforts, don’t submit random links to the Webspam team; they may not click on them. By sharing all your evidence via Google Docs, you provide a level of protection that helps ensure the Webspam team sees your evidence.

6. When in Doubt, Disavow Entire Domains

Google’s Disavow Tool gives you 2 options when disavowing links: individual URLs or entire domains.

Many webmasters fail at their reconsideration requests the first time because they miss too many links. The fear is that you’ll disavow something valuable, but if you’ve been rejected time and time again, this one change often leads to success.

Here’s a screenshot from Dr. Pete’s post showing both formats.

Disavow Format
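In case the screenshot doesn't come through, here is a rough sketch of the two formats (the domains below are placeholders, not taken from the post). Lines starting with # are comments, full URLs disavow individual pages, and the domain: prefix disavows every link from a site:

  # Individual URLs
  http://spam-site.example.com/page-with-my-link.html
  http://spam-site.example.com/another-page.html

  # Entire domains
  domain:spam-site.example.com
  domain:link-network.example.net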

Best Advice: If you are rejected after disavowing individual URLs, try disavowing entire domains.

7. Formatting Counts

Google rejects many disavow files because of bad formatting, but webmasters usually never know. Guidelines state the file type should be .txt only and “must be encoded UTF-8 or 7-bit ASCII.”
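If you assemble the file by hand in a word processor, curly quotes and odd encodings can sneak in. Here's a minimal Python sketch that writes a correctly encoded file from a list of domains you've already decided to disavow (bad-domains.txt is a hypothetical filename):

  # bad-domains.txt: one domain per line that you have already decided to disavow
  with open("bad-domains.txt", encoding="utf-8") as f:
      domains = sorted({line.strip().lower() for line in f if line.strip()})

  # Write a .txt file encoded as UTF-8, as the guidelines require
  with open("disavow.txt", "w", encoding="utf-8") as out:
      out.write("# Domains disavowed after removal requests failed\n")
      for domain in domains:
          out.write("domain:" + domain + "\n")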

8. Bonus: Extra "Removed" Links with Screaming Frog

Google’s link index of your site is rarely up to date; it undoubtedly includes links that no longer exist. To find dead links quickly, download a complete file of your latest links from Google Webmaster Tools and run them through Screaming Frog (use List Mode) or another crawler of your choice.

When finished, take any links that return a 404 and export them to a spreadsheet. Be sure to include these dead links as "Removed" when you submit your reconsideration request to Google; otherwise Google may not know they are gone.
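If you'd rather script this step than run a crawler, here's a rough Python sketch; it assumes the same links.txt export as above and that the linking servers answer HEAD requests sensibly:

  import requests

  # links.txt: one backlink URL per line (hypothetical filename)
  with open("links.txt", encoding="utf-8") as f:
      urls = [line.strip() for line in f if line.strip()]

  dead = []
  for url in urls:
      try:
          status = requests.head(url, allow_redirects=True, timeout=10).status_code
      except requests.RequestException:
          status = None  # unreachable hosts can be reviewed separately
      if status == 404:
          dead.append(url)

  # Save the dead links so you can list them as "Removed" in your reconsideration request
  with open("removed-links.txt", "w", encoding="utf-8") as out:
      out.write("\n".join(dead))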

Conclusion

The Disavow Tool is useful, but damn tricky.

Someday, perhaps Google can move away from tools like the Disavow. Today, even good SEOs struggle to keep up with what's considered a good link or a bad one, and Google continually cracks down on what it considers a “bad link.”

For successful marketers, it’s much more fulfilling to build new links than to disavow old ones.

I suppose that's Google's point, no?

Penalty Lifted

Additional Resources:



SEOmoz Daily SEO Blog

25 Killer Combos for Google’s Site: Operator


Posted by Dr. Pete

There’s an app for everything – the problem is that we’re so busy chasing the newest shiny toy that we rarely stop to learn to use simple tools well. As a technical SEO, one of the tools I seem to never stop finding new uses for is the site: operator. I recently devoted a few slides to it in my BlueGlassX presentation, but I realized that those 5 minutes were just a tiny slice of all of the uses I’ve found over the years.

People often complain that site:, by itself, is inaccurate (I’ll talk about that more at the end of the post), but the magic is in the combination of site: with other query operators. So, I’ve come up with two dozen killer combos that can help you dive deep into any site.

1. site:example.com

Ok, this one’s not really a combination, but let’s start with the basics. Paired with a root domain or sub-domain, the [site:] operator returns an estimated count of the number of indexed pages for that domain. The “estimated” part is important, but we’ll get to that later. For a big picture, I generally stick to the root domain (leave out the “www”, etc.).

Each combo in this post will have a clickable example (see below). I'm picking on Amazon.com in my examples, because they're big enough for all of these combos to come into play:

You’ll end up with two bits of information: (1) the actual list of pages in the index, and (2) the count of those pages (circled in purple below):

Screenshot - site:amazon.com

I think we can all agree that 273,000,000 results is a whole lot more than most of us would want to sort through. Even if we wanted to do that much clicking, Google would stop us after 100 pages. So, how can we get more sophisticated and drill down into the Google index?

2. site:example.com/folder

The simplest way to dive deeper into this mess is to provide a sub-folder (like “/blog”) – just append it to the end of the root domain. Don’t let the simplicity of this combo fool you – if you know a site’s basic architecture, you can use it to drill down into the index quickly and spot crawl problems.

3. site:sub.example.com

You can also drill down into specific sub-domains. Just use the full sub-domain in the query. I generally start with #1 to sweep up all sub-domains, but #3 can be very useful for situations like tracking down a development or staging sub-domain that may have been accidentally crawled.

4. site:example.com inurl:www

The "inurl:" operator searches for specific text in the indexed URLs. You can pair “site:” with “inurl:” to find the sub-domain in the full URL. Why would you use this instead of #3? On the one hand, "inurl:" will look for the text anywhere in the URL, including the folder and page/file names. For tracking sub-domains this may not be desirable. However, "inurl:" is much more flexible than putting the sub-domain directly into the main query. You'll see why in examples #5 and #6.

5. site:example.com -inurl:www

Adding [-] to most operators tells Google to search for anything but that particular text. In this case, by separating out "inurl:www", you can change it to "-inurl:www" and find any indexed URLs that are not on the "www" sub-domain. If "www" is your canonical sub-domain, this can be very useful for finding non-canonical URLs that Google may have crawled.

6. site:example.com -inurl:www -inurl:dev -inurl:shop

I'm not going to list every possible combination of Google operators, but keep in mind that you can chain most operators. Let's say you suspect there are some stray sub-domains, but you aren't sure what they are. You are, however, aware of "www.", "dev." and "shop.". You can chain multiple "-inurl:" operators to remove all of these known sub-domains from the query, leaving you with a list of any stragglers.

7. site:example.com inurl:https

You can't put a protocol directly into "site:" (e.g. "https:", "ftp:", etc.). Fortunately, you can put "https" into an "inurl:" operator, allowing you to see any secure pages that Google has indexed. As with all "inurl:" queries, this will find "https" anywhere in the URL, but it's relatively rare to see it somewhere other than the protocol.

8. site:example.com inurl:param

URL parameters can be a Panda's dream. If you're worried about something like search sorts, filters, or pagination, and your site uses URL parameters to create those pages, then you can use "inurl:" plus the parameter name to track them down. Again, keep in mind that Google will look for that name anywhere in the URL, which can occasionally cause headaches.

Pro Tip: Try out the example above, and you'll notice that "inurl:ref" returns any URL with "ref" in it, not just traditional URL parameters. Be careful when searching for a parameter that is also a common word.

9. site:example.com -inurl:param

Maybe you want to know how many search pages are being indexed without sorts or how many product pages Google is tracking with no size or color selection – just add [-] to your "inurl:" statement to exclude that parameter. Keep in mind that you can combine "inurl:" with "-inurl:", specifically including some parameters and excluding others. For complex, e-commerce sites, these two combos alone can have dozens of uses.

10. site:example.com text goes here

Of course, you can always combine the "site:" operator with a plain-old text query. This will search the contents of the entire page within the given site. Like standard queries, this is essentially a logical [AND], but it's a bit of a loose [AND] – Google will try to match all terms, but those terms may be separated on the page, or you may get back results that only include some of the terms. You'll see that the example below matches the phrase "free Kindle books" but also phrases like "free books on Kindle".

11. site:example.com “text goes here”

If you want to search for an exact-match phrase, put it in quotes. This simple combination can be extremely useful for tracking down duplicate and near-duplicate copy on your site. If you're worried about one of your product descriptions being repeated across dozens of pages, for example, pull out a few unique terms and put them in quotes.

12. site:example.com/folder “text goes here”

This is just a reminder that you can combine text (with or without quotes) with almost any of the combinations previously discussed. Narrow your query to just your blog or your store pages, for example, to really target your search for duplicates.

13. site:example.com this OR that

If you specifically want a logical [OR], Google does support use of "or" in queries. In this case, you'd get back any pages indexed on the domain that contained either "this" or "that" (or both, as with any logical [OR]). This can be very useful if you've forgotten exactly which term you used or are searching for a family of keywords.

Edit: Hat Tip to TracyMu in the comments – this is one case where capitalization matters. Either use "OR" in all-caps or the pipe "|" symbol. If you use lower-case "or", Google could interpret it as part of a phrase.

14. site:example.com “top * ways”

The asterisk [*] can be used as a wildcard in Google queries to replace unknown text. Let's say you want to find all of the "Top X" posts on your blog. You could use "site:" to target your blog folder and then "Top *" to query only those posts.

Pro Tip: The wildcard [*] operator will match one or multiple words. So, "top * books" can match "Top 40 Books" or "Top Career Management Books". Try the sample query above for more examples.

15. site:example.com “top 7..10 ways”

If you have a specific range of numbers in mind, you can use "X..Y" to return anything in the range from X to Y. While the example above is probably a bit silly, you can use ranges across any kind of on-page data, from product IDs to prices.

16. site:example.com ~word

The tilde [~] operator tells Google to find words related to the word in question. Let's say you wanted to find all of the posts on your blog related to the concept of consulting – just add "~consulting" to the query, and you'll get the wider set of terms that Google thinks are relevant.

17. site:example.com ~word -word

By using [-] to exclude the specific word, you can tell Google to find any pages related to the concept that don't specifically target that term. This can be useful when you're trying to assess your keyword targeting or create new content based on keyword research.

18. site:example.com intitle:”text goes here”

The "intitle:" operator only matches text that appears in the <TITLE></TITLE> tag. One of the first spot-checks I do on any technical SEO audit is to use this tactic with the home-page title (or a unique phrase from it). It can be incredibly useful for quickly finding major duplicate content problems.

19. site:example.com intitle:”text * here”

You can use almost any of the variations mentioned in (12)-(17) with "intitle:" – I won't list them all, but don't be afraid to get creative. Here's an example that uses the wildcard search in #14, but targets it specifically to page titles.

Pro Tip: Remember to use quotes around the phrase after "intitle:", or Google will view the query as a one-word title search plus straight text. For example, "intitle:text goes here" will look for "text" in the title plus "goes" and "here" anywhere on the page.

20. intitle:”text goes here”

This one's not really a "site:" combo, but it's so useful that I had to include it. Are you suspicious that other sites may be copying your content? Just put any unique phrase in quotes after "intitle:" and you can find copies across the entire web. This is the fastest and cheapest way I've found to find people who have stolen your content. It's also a good way to make sure your article titles are unique.

21. “text goes here” -site:example.com

If you want to get a bit more sophisticated, you can use "-site:" and exclude mentions of copy on any domain (including your own). This can be used with straight text or with "intitle:" (like in #20). Including your own site can be useful, just to get a sense of where your ranking ability stacks up, but subtracting out your site allows you to see only the copies.

22. site:example.com intext:”text goes here”

The "intext:" operator looks for keywords in the body of the document, but doesn't search the <TITLE> tag. The text could appear in the title, but Google won't look for it there. Oddly, "intext:" will match keywords in the URL (seems like a glitch to me, but I don't make the rules).

23. site:example.com ”text goes here” -intitle:"text goes here"

You might think that #22 and #23 are the same, but there's a subtle difference. If you use "intext:", Google will ignore the <TITLE> tag, but it won't specifically remove anything with "text goes here" in the title. If you specifically want to remove any title mentions from your results, then use "-intitle:".

24. site:example.com filetype:pdf

One of the drawbacks of "inurl:" is that it will match any string in the URL. So, for example, searching on "inurl:pdf", could return a page called "/guide-to-creating-a-great-pdf". By using "filetype:", you can specify that Google only search on the file extension. Google can detect some filetypes (like PDFs) even without a ".pdf" extension, but others (like "html") seem to require a file extension in the indexed document.

25. site:.edu “text goes here”

Finally, you can target just the Top-Level Domain (TLD), by leaving out the root domain. This is more useful for link-building and competitive research than on-page SEO, but it's definitely worth mentioning. One of our community members, Himanshu, has an excellent post on his own blog about using advanced query operators for link-building.

Why No Allintitle: & Allinurl:?

Experienced SEOs may be wondering why I left out the operators "allintitle:" and "allinurl:" – the short answer is that I've found them increasingly unreliable over the past couple of years. Using "intitle:" or "inurl:" with your keywords in quotes is generally more predictable and just as effective, in my opinion.


Putting It All to Work

I want to give you a quick case study to show that these combos aren't just parlor tricks. I once worked with a fairly large site that we thought was hit by Panda. It was an e-commerce site that allowed members to spin off their own stores (think Etsy, but in a much different industry). I discovered something very interesting just by using "site:" combos (all URLs are fictional, to protect the client):

(1) site:example.com = 11M

First, I found that the site had a very large number (11 million) of indexed pages, especially relative to its overall authority. So, I quickly looked at the site architecture and found a number of sub-folders. One of them was the "/stores" sub-folder, which contained all of the member-created stores:

(2) site:example.com/stores = 8.4M

Over 8 million pages in Google's index were coming just from those customer stores, many of which were empty. I was clearly on the right track. Finally, simply by browsing a few of those stores, I noticed that every member-created store had its own internal search filters, all of which used the "?filter" parameter in the URL. So, I narrowed it down a bit more:

(3) site:example.com/stores inurl:filter = 6.7M

Over 60% of the indexed pages for this site were coming from search filters on user-generated content. Obviously, this was just the beginning of my work, but I found a critical issue on a very large site in less than 30 minutes, just by using a few simple query operator combos. It didn't take an 8-hour desktop crawl or millions of rows of Excel data – I just had to use some logic and ask the right questions.


How Accurate Is Site:?

Historically, some SEOs have complained that the numbers you get from "site:" can vary wildly across time and data centers. Let's cut to the chase: they're absolutely right. You shouldn't take any single number you get back as absolute truth. I ran an experiment recently to put this to the test. Every 10 minutes for 24 hours, I automatically queried the following:

  1. site:seomoz.org
  2. site:seomoz.org/blog
  3. site:seomoz.org/blog intitle:spam

Even using a fixed IP address (single data center, presumably), the results varied quite a bit, especially for the broad queries. The range for each of the "site:" combos across 24 hours (144 measurements) was as follows:

  1. 67,700 – 114,000
  2. 8,590 – 8,620
  3. 40 – 40

Across two sets of IPs (unique C-blocks), the range was even larger (see the "/blog" data):

  1. 67,700 – 114,000
  2. 4,580 – 8,620
  3. 40 – 40

Does that mean that "site:" is useless? No, not at all. You just have to be careful. Sometimes, you don't even need the exact count – you're just interested in finding examples of URLs that match the pattern in question. Even if you need a count, the key is to drill down. The narrowest range in the experiment was completely consistent across 24 hours and both data centers. The more you drill down, the better off you are.
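To put those ranges in perspective, here is the relative spread of each query from the single-data-center run, worked out in a few lines of Python:

  # 24-hour ranges from the single data center
  spread_broad  = (114_000 - 67_700) / 67_700  # site:seomoz.org -> about 0.68, a ~68% swing
  spread_blog   = (8_620 - 8_590) / 8_590      # site:seomoz.org/blog -> about 0.003, a ~0.3% swing
  spread_narrow = (40 - 40) / 40               # the intitle:spam combo -> 0.0, no variation at all

  print(spread_broad, spread_blog, spread_narrow)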

You can also use relative numbers. In my example above, it didn't really matter if the 11M total indexed page count was accurate. What mattered was that I was able to isolate a large section of the index based on one common piece of site architecture. Presumably, the margin of error for each of those measurements was similar – I was only interested in the relative percentages at each step. When in doubt, take more than one measurement.

Keep in mind that this problem isn't unique to the "site:" operator – all search result counts on Google are estimates, especially the larger numbers. Matt Cutts discussed this in a recent video, along with how you can sometimes use the page 2 count to reduce the margin of error.


The True Test of An SEO

If you run enough "site:" combos often enough, even by hand, you may eventually be greeted with this:

Google Captcha

If you managed to trigger a CAPTCHA without using automation, then congratulations, my friend! You're a real SEO now. Enjoy your new tools, and try not to hurt anyone.



SEOmoz Daily SEO Blog

How to Free Your E-Commerce Site from Google’s Panda


On Feb. 25, 2011, Google released Panda to wreak havoc on the web. While it may have been designed to take out content farms, it also took out scores of quality e-commerce sites. What do content farms and e-commerce sites have in common? Lots of pages, many with zero or very few links. And on e-commerce sites with hundreds or thousands of products, the product pages may have very little content, making them appear duplicate, low quality, or shallow to the Panda, and thus a target for massive devaluation.

My e-commerce site was hit by Panda, causing a 60% drop in traffic overnight. But I was able to escape after many months of testing content and design changes. In this post, I’ll explain how we beat the Panda, and what you can do to get your site out if you’ve been hit.

The key to freeing your e-commerce site from Panda lies at the bottom of a post Google provided as guidance to Pandalized sites:

One other specific piece of guidance we’ve offered is that low-quality content on some parts of a website can impact the whole site’s rankings, and thus removing low quality pages, merging or improving the content of individual shallow pages into more useful pages, or moving low quality pages to a different domain could eventually help the rankings of your higher-quality content.

Panda doesn’t like what it thinks are “low quality” pages, and that includes “shallow pages”. Many larger e-commerce sites, and likely all of those that were hit by Panda, have a high number of product pages with either duplicate bits of descriptions or short descriptions, leading to the shallow pages label. In order to escape from the Panda devaluation, you’ll need to do something about that. Here are a few possible solutions:

Adding Content To Product Pages

If your site has a relatively small number of products, or if each product is unique enough to support entirely different descriptions and information, you may be able to thicken up the pages with unique, useful information. Product reviews can also serve the same purpose, but if your site is already hit by Panda you may not have the customers to leave enough reviews to make a difference. Additionally, some product types are such that customers are unlikely to leave reviews.

If you can add unique and useful information to each of your product pages, you should do so both to satisfy the Panda and your customers. It’s a win-win.

Using Variations To Decrease Product Pages

Some e-commerce sites have large numbers of products with slight variations. For example, if you’re selling t-shirts you may have one design in 5 different sizes and 10 different colors. If you’ve got 20 designs, you’ve got 1,000 unique products. However, it would be impossible to write 1,000 unique descriptions. At best, you’ll be able to write one for each design, or a total of 20. If your e-commerce site is set up so that each of the product variations has a single page, Panda isn’t going to like that. You’ve either got near 1,000 pages that look like duplicates, or you’ve got near 1,000 pages that look VERY shallow.

Many shopping carts allow for products to have variations, such that in the above situation you can have 20 product pages where a user can select size and color variations for each design. Switching to such a structure will probably cause the Panda to leave you alone and make shopping easier for your customers.

Removing Poor Performing Products

If your products aren’t sufficiently unique to add substantial content to each one, and they also don’t lend themselves to consolidation through selectable variations, you might consider deleting any that haven’t sold well historically. Panda doesn’t like too many pages. So if you’ve got pages that have never produced income, it’s time to remove them from your site.

Getting Rid of All Product Pages

This is a bold step, but the one we were forced to take in order to recover. A great many of our products are very similar; they’re variations of each other. But due to the limitations of our shopping cart, combined with shipping issues (each variation had different shipping costs that couldn’t be programmed into the variations), it was the only viable choice we were left with.

In this option, you redesign your site so that products displayed on category pages are no longer clickable, removing links to all product pages. The information that was displayed on product pages gets moved to your category pages. Not only does this eliminate your product pages, which make up the vast majority of your site, but it also adds content to your category pages. Rather than having an “add to cart” or “buy now” button on the product page, it’s integrated into the category page right next to the product.

Making this move reduced our page count by nearly 90%. Our category pages became thicker, and we no longer had any shallow pages. A side benefit of this method is that customers have to make fewer clicks to purchase a product. And if your customers tend to purchase multiple products with each order, they avoid having to go from category page to product page, back to the category page, and into another product page. They can simply purchase a number of products with single clicks.

Noindexing Product Pages

If you do get rid of all links to your product pages but your cart is still generating them, you’ll want to add a “noindex, follow” tag to each of them. This can also be a solution for e-commerce sites where all traffic enters on category level pages rather than product pages. If you know your customers are searching for phrases that you target on your category pages, and not specifically searching for the products you sell, you can simply noindex all of your product pages with no loss in traffic.
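For reference, the tag in question goes in the <head> of each product page; a minimal sketch:

  <head>
    <meta name="robots" content="noindex, follow">
    <!-- the page stays crawlable and passes link equity, but is kept out of the index -->
  </head>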

If all of your products are in a specific folder, I’d recommend also disallowing that folder from Googlebot in your robots.txt file, and filing a removal request in Google Webmaster Tools, in order to make sure the pages are taken out of the index.
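Assuming, purely for illustration, that your cart keeps every product under a /products/ folder, the robots.txt entry would look something like the lines below; the removal request in Webmaster Tools then clears out what has already been indexed:

  User-agent: Googlebot
  Disallow: /products/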

Other Considerations: Pagination & Search Results Pages

In addition to issues with individual product pages, your e-commerce site may have duplicate content issues or a very large number of similar pages in the index due to your on-site search and sorting features. Googlebot may fill in your search form and index your search results pages, potentially leading to thousands of similar pages in the index. Make sure your search results pages have a meta robots “noindex, follow” tag or a rel=”canonical” tag to take care of this. Similarly, if your product pages have a variety of sorting options (price, best selling, etc.), you should make sure the rel=”canonical” tag points to the default page as the canonical version. Otherwise, each product page may exist in Google’s index in each variation.
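As a sketch of both fixes, with hypothetical URLs: sorted or filtered variants point their canonical tag at the default page, while on-site search results carry the noindex.

  <!-- On /widgets?sort=price or /widgets?color=blue -->
  <link rel="canonical" href="http://www.example.com/widgets">

  <!-- On /search?q=blue+widgets -->
  <meta name="robots" content="noindex, follow">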


Maxmoritz, a long-time member of our SEO community, has been working in SEO full time since 2005. He runs a variety of sites, including Hungry Piranha, where he blogs regularly.


SEO Book.com

Insulating Ourselves From Google’s Whims


Ranking well for our chosen keywords involves putting in a lot of effort up front, with no guarantee of ranking, or reward.

Even if we do attain rankings, and even if we do get rewarded, there is no guarantee this situation will last. And this state of flux, for many SEOs, is only likely to get worse, as Google advises that updates will be “jarring and jolting for a while.”

Even more reason to make every visitor count.

If we can extract higher value from each visitor, by converting visitors into customers and short-term customers into long-term customers, then our businesses are less vulnerable to Google’s whims. We don’t need to be as focused on acquiring new visitors.

There is great value to be had in optimizing the entire marketing chain.

Hunting For Customers Vs Keeping Customers

It comes down to cost.

According to a Harvard study a few years back, it can cost five times as much to acquire a new customer as it does to keep a current customer happy. Of course, your mileage may vary; whether it really costs five times as much, or three, or seven, depends on your cost structure.

However, this concept is an important one for search marketers: it’s reasonable to assume that the cost of acquiring customers via keyword targeting is rising, as Google makes that process more expensive than it has been in the past. This trend is set to continue.

If the cost of customer acquisition is rising, it can make sense to look at optimizing the offer, the conversion rates and optimizing the value of existing customers.

Underlying Fundamentals

If you have something a lot of people desperately need, and there isn’t much competition, it typically doesn’t cost much to land those customers. They come to you. If you have something genuinely scarce, or even artificially scarce, people will line up.

The problem is that most businesses don’t enjoy such demand. They must compete with other businesses offering similar products and services. So, if there is a scarcity issue, it’s a scarcity of customers, not service and product providers.

However, by focusing on a specific niche, businesses can eliminate a lot of competition, and thereby reduce the marketing cost. For example, a furniture manufacturer could conceivably make furniture for a wide variety of customers, from commercial offices, to industry, to the home.

But if they narrowed their focus to, say, private jet fit-outs, they eliminate a lot of their competition. They’d also have to determine if that niche is lucrative, of course, but as you can see, it’s a way of eliminating a lot of competition simply by adding focus and specialization.

By specializing, they are more likely to enjoy higher quality leads – i.e. leads that may result in a sale – than if they targeted broadly, as it is difficult to be all things to all people. The cost of marketing to a broad target market can be higher, as can the level of competition in the search results pages, and the quality of leads can be lower.

Conversion Optimization

Once we’re focused on our niche, and we’ve got targeted visitors coming in, how can we ensure fewer visitors are wasted?

Those who do a lot of PPC will be familiar with conversion optimization, and we’ll dive deep into this fascinating area over the coming weeks, but it’s a good concept for those new to SEO, and internet marketing in general, to keep front of mind.

You’ve gone to a lot of trouble to get people to your site, so make sure they don’t click back once they arrive!

Here’s a great case study by a company called Conversion Rate Experts. It outlines how to structure pages to improve conversion rates. Whilst the findings are the result of testing and adaptation, and are specific to each business, there are a few key lessons here:

The first is the length of the page. In this case, a long page improved conversion rates by 30%. Of course, it’s not just a numbers game; the longer page allowed more room to address objections and answer visitor questions.

As Conversion Rate Experts point out:

“The media would have us believe that people no longer have any capacity to concentrate. In reality, you cannot have a page that’s too long—only one that’s too boring. In the case of Crazy Egg’s home page, visitors wanted their many questions answered and that’s what we delivered. (If you’d like more people to scroll down your long pages, see the guide we wrote on the topic.)”

It’s best to experiment to see what works best in your own situation but, generally speaking, it pays to offer the visitor as much timely information as possible, as opposed to short copy, when the buying motivation is analytical and need-oriented. Short copy can work better if the customer is impulsive.

As we see in the Crazy Egg case study, by anticipating and addressing specific objections, and moving the customer closer to the point of sale, the webpage is doing the job of the salesperson. This is an area where SEO and PPC, linked with conversion rate optimization, can add a ton of value.

The second interesting point was that they optimized the long-term value of the customer to the company by making a time-sensitive offer.

The one-time offer test illustrates another important principle of conversion optimization: Don’t let the fear of a short-term loss stand in the way of a long-term gain.

The offer they made turned a short-term customer into a long-term customer. If we have a lot of long term customers on our books, it can take some of the pressure off the need to constantly acquire new customers.

Optimize Everything

We engage in SEO because there are many similar sites.

The benefit of SEO is that we can occupy premium real estate. If we appear high on the search result pages, we are more likely than our competitors to command the customer’s attention. But we stand to gain a lot more stability if we are not wholly reliant on occupying the top spots, and are therefore less vulnerable to Google’s whims.


SEO Book.com