SEO Blog

How To Blog Successfully About Anything

Posted by TannerC

In order to create a successful blog, you have to be passionately curious about the topic you're covering.

This notion was the central point of my Mozinar on "Blogging Like You Mean It" a few weeks ago, when I shared my personal story of blogging success. Here's a concise recap of the story, for those who missed it: I was once tasked with creating a blog on a topic I was completely uninterested in: television. At the time, there wasn't even a TV in my home.

In order to run the blog, I knew I had to find a way to approach the topic that would be interesting enough to make blogging feel less like work and more like fun. (That's the real trick to successful blogging, by the way: writing about things you are completely and naturally passionate about.) For this project, I was able to come up with a question that fully piqued my interest in the topic: "How has television impacted the history of our culture, and how will it continue to do so in the future?"

With that one question, I was able to get interested in the topic, and eventually the blog started ranking in the top search results for some extremely competitive terms related to TV. Within a matter of weeks, we were writing articles that captivated people from around the globe and were even featured prominently on sites like The Guardian, AdWeek, New York Magazine, BuzzFeed, About.com, and Design Work Life. Today, the blog has been taken over by a remarkable, dedicated team at CableTV.com.

After I shared this short story in my Mozinar, the number one question people had was: "Can you give an example of exactly how to take an uninteresting topic and make it interesting?"

For this post, I'm going to do one better. We're going to quickly go over three examples of how to turn a not-so-exciting topic (whether it's for your day job or anything else) into something interesting, using websites submitted by readers on the official SEOmoz Facebook page. Rather than turn this into a basic list of top-fives or elaborate examples for each of these three websites, you're going to get the actual insights every blogger should be following, regardless of context.

It all starts with questions

First off, any uninteresting topic can be made interesting by asking questions.

Questions work remarkably well for two reasons. First, they give us clear purpose and direction in our efforts. When we're asking questions and actively pursuing the answers, our work suddenly becomes an opportunity to learn and grow, not just to get links or fill up pages on the web.

Second, there is bound to be an audience for the content created around those questions: people who are looking for the same answers we are.

These two facts alone make blogging not only easier and more rewarding for us as writers, but also give us the chance to produce real, meaningful content that builds an audience that can rely on us.

In an article for The Boston Globe titled "Are we asking the right questions?", Leon Neyfakh evaluates the work of Dan Rothstein, co-founder of the Right Question Institute in Cambridge. Neyfakh asserts that, "Wielded with purpose and care, a question can become a sophisticated and potent tool to expand minds, inspire new ideas, and give us surprising power at moments when we might not believe we have any."

With the power of questions in mind, we can start evaluating nearly any topic and turning out ideas for successful blog posts. Reader Simon Abramson suggested that we first look at the official blog of Wild Earth, which, as far as I can gather, is a company that takes people into nature to help them build self-reliance, confidence, and ecological resilience. This is certainly a topic I know very little about and am currently not very interested in, personally.

The first step for any blogger (when covering any topic) should be to simply ask a lot of questions. If you're not sure where to start, focus on the 5 Ws: who, what, where, when, and why (and, occasionally, how).

Who started the whole "nature can help you as a person" industry? What are the psychological effects of being in nature for any period of time? Where are the best places to go if you want to experience nature at its best? When are the best times in your life to go? Why should a busy office worker consider such an option? How does nature build confidence, from a biological perspective?

Once you've asked as many questions as you can, try to find one or two that really spark your interest. Personally, I'd love to look at the psychological effects of nature. With that one question alone, a waterfall of other questions comes to mind, and every one of those questions is an opportunity for a blog post.

As a blogger, you'll want to focus on a primary question, then let that question fuel other questions that will become your blog content. The more questions you ask around a central question, the more ideas you'll have and the easier your job will become.

Use available resources for natural research

So, questions become the central focus of our efforts and allow us to pursue things that interest not only us, but also our ideal audience. Now what? Next, do what comes naturally when you have a question: pursue the answers. If you don't have any questions off the top of your head, seek out other people's questions the same way you would pursue answers.

Research is where about 60% of the work of blogging happens. Plan on spending the majority of your blogging schedule doing research, particularly reading. Keep in mind that your gift to readers is doing the research so they don't have to. Everything you write from this point on should be a concise, easy-to-consume version of whatever you spend all of that time researching.

Where should you start researching answers (and additional questions you may not be asking yet)? I like to use sites like Quora, Topsy, Google Alerts, and Google Blog Search (set up as an RSS feed).

If our blog were about, say, a local housecleaning service – something like Marvelous Maids, a housecleaning company that serves St. Charles County in Missouri, as submitted by Moz community member Kathy Stamm Gage – and our primary question to turn that topic into something interesting was along the lines of, "What's the science behind common cleaning chemicals?", then our first task would be to explore the "housecleaning" category on Quora, which – believe it or not – is an actual thing.

Immediately after visiting the Quora page, we should have a few ideas for additional questions we can then turn into blog posts. Sometimes you'll have a topic that simply doesn't have much information or activity on Quora. In that event, it's best to get clever (instead of exploring the "housecleaning" page of Quora, what about "germs"?) or move on to a resource that will certainly have more information (like a library).

In this case, digging around the "housecleaning" section of Quora yields some really interesting results that already spark some blog post ideas. Questions such as, "What are some housecleaning hacks?" teach us that instant orange drink mix will clean the inside of a dishwasher just as easily as expensive cleaners, due to the citric acid. That can definitely be turned into a blog post.

Moving on from Quora, we should begin exploring other resources. Right now, especially if your blog is fairly small or just starting out, quantity is your best asset. Focus on having a vast quantity of questions ready to go and be researched, and attempt to gather a large pool of resources you can use when exploring those questions.

As another example, if we stick to the topic of housecleaning, we can take to Topsy and search for "housecleaning chemicals" to get a pretty good list of additional questions and topics worth researching, including "How to clean your home using herbs rather than scary chemicals" and "A list of scary chemical cleaners to avoid." Curious about what either of those might entail? If you are, your readers will be too.

The best questions, of course, come from you. By being naturally curious, you can easily open doors to topics that do interest you about your original question or theme. For me, the questions come easily: "What chemicals do we really need to be worried about in common cleaning supplies?" or, "How sustainable are home cleaning supplies, really?" or, "Are the things we hear about household cleaning chemicals fact or fiction?" or even, "If you were to add up all of the various chemicals used in all of the supplies you use to clean your home, what would that list entail exactly?" All questions I personally wouldn't mind researching in order to create compelling (and, most importantly, helpful!) content.

These questions all provide instant blog posts that are pretty interesting, easy to research, and related to a not-so-interesting industry, all discovered because we asked some simple questions and checked out a few websites. Voila, blogging success is within our grasp!

A vision of what we've covered

We've touched on a few concepts that seem really basic, and yet we continue to see blogs that don't follow any of this pattern. The results speak for themselves, though: by pursuing things that are naturally interesting to us and finding answers for them, we're uncovering what a lot of people out there want to know as well.

The problem our readers have is that they don't have the time or ability to research the way we can.

That's also one of your greatest gifts to readers: putting in the time to learn about something so they don't have to. When you then reconstruct what you discover into an easy-to-digest blog post, or graphic, or video, or tweet, you're establishing yourself as someone on the way to expert status and providing true, can't-be-faked value.

Even if your topic isn't naturally interesting to you (like nature programs or housecleaning), there are ways to make it interesting through natural discovery.

Our last example of how this all works comes from Douglas Hodgson, who asked me to evaluate the business of eye care for Frontier Eye Care. Eye care is naturally an interesting topic for me, though, so I think this one will be a little easier than the last two. It's important and interesting to me because I was born with poor eyesight and because eyesight plays such a critical role in what I (and, I'm sure, millions and millions of others) do every single day.

So, how do you take a topic like eye care and make it worth researching? What value could we possibly provide to people who are interested in the eye care industry?

It's not so hard to come up with solutions when we make the topic really interesting by asking questions.

How has technology affected eyesight and what does the future of technology hold for it? Is there a certain science to picking out the perfect eyewear?

Post ideas flourish with just these few questions. What about a graphic illustrating the perfect angles and measurements to make (at home?) before picking a pair of sunglasses? We could even evaluate the history of eyeglasses used by famous figures. Maybe a post on how glasses or rigid gas permeable lenses are created and why 3D home printing may change that (will it though?), or what about an article explaining the countless factors that impact how our eyes develop as we grow (genetics, facial structure, encounters with bright lights, and so on)?

The ideas can flow; some will certainly be winning topics, while others will only be interesting to a handful of people. What we need to do as bloggers is remember that our goal is to focus on one primary approach or question, then find related questions (either ours or other people's), and put in the work to do the research and come up with solutions.

This is really basic stuff, I hope, but it's quite easy to forget or overlook. There are no worthwhile tricks to successful blogging outside of hard work, in my opinion. This approach not only allows you to learn a lot about your topic on your own, but it also establishes you as a clear, reliable resource in your industry. For any business or industry, that reputation can mean serious success.

If you have additional questions or insights into this type of blogging strategy, I'd love to hear your thoughts either in the comments or on Twitter.

SEOmoz Daily SEO Blog

3 Ways to Align Your Blog Content with Your Target Audience

3 Ways to Align Your Blog Content with Your Target Audience was originally published on BruceClay.com, home of expert search engine optimization tips.

It’s the question that’s on many people’s minds when they have a blog – who is our audience? What matters to them? If you have a niche blog – one that’s already focused on a particular subject matter – you already know your audience is interested in that, but how do you:

  • Continuously hold their attention?
  • Give them what they want?
  • Draw in new readers?
  • Align content with business offerings?

Understanding your audience is at times a guessing game. Your theories about who you’re talking to are made stronger by incremental data gathered over time. But you have to start from somewhere, right?

So before you start tracking your popular content in analytics, you have to think about what’s even relevant to your audience. What to write.

So let’s have a starting point. And that starting point is your business. What are the things your business offers its community? What are the things your brand is known for? How can you align your content to the people who need what you have, in a way that offers value to them?

Today, that’s what we’re going to talk about – a path for creating content for your blog.

1. Segment your audience by your products/services.

You have services and/or products. Who buys them? Start thinking about the differences in your audience by the service/product type (or groups of services/products). When you think about the differences, the persona sometimes becomes very clear.

Ask when they use your products and services, and why. What are they trying to achieve? If you have the resources to do a branding exercise, this can be super helpful. But if you don't, that's OK, too. Brainstorming on your own or with your team can give you a great starting point.

Let’s use BCI as an example. Our audience type typically varies based on our products and services. We have a category of those who engage in services with us, and then another category of those who take SEO training, use the SEOToolSet and buy our books.

And then we have another audience, our industry. And this category is important to us, too. We also have an audience that will likely never buy from us, but they consume our content, like the blog and newsletter.

These audiences have different interests in the content they want and a whole different set of problems from one another. Some of them may need 101-level content, some may need more advanced content. Some may want tactical how-to info and some may need strategic plans. And some may just want to connect with us on a human level.

Go through this exercise with your business. Write up persona profiles based on what you know about your audience, and add to them as time goes on. The more data you collect in analytics or by talking to your audience in comments or in social media (where the audience often overlaps), the more defined your persona profiles will be.

2. Know when your audience will cross over to another segment.

There will usually be overlap with your audience. And it’s important to recognize this overlap. Expanding on the BCI example we spoke about in the previous section, we know that sometimes people who buy the book will eventually sign up for training.

When you’re thinking about the behavior of your audience and what they need, think about the stages they go through during the span of their engagement with you. What first might be a book purchase could lead to a training class could lead to services.

It might be helpful to quickly sketch a diagram of the type of customer (segmented by product/service) and what path they are likely to go on during their relationship with you.

For example:

  • Reads blog or newsletter > Buys book, attends training, signs up for tools
  • Bought book > Attends training, reads blog, signs up for newsletter
  • Attends conference training > Signs up for extended training
  • Takes training course > Buys services
  • Signs up for tools > Buys book, attends training

This type of information can be particularly relevant if you are doing email marketing, but you can align your blog content with the journey of the customer as well. The types of information they want at different stages of their engagement with your brand vary.

Which brings me to the next point …

3. Know what they are searching for.

Keyword research and audience go hand-in-hand. The information you uncover about who your audience is and what they want fuels your keyword research.

It’s important to know what your audience is searching for because you want to attract new people to your blog with the content you create surrounding the products or services you offer.

Once you have a good list of keywords, segmented by product/service/audience, you want to begin thinking about what sort of content is appropriate for that audience.

This is not only important for attracting new readers to your blog at the moment they are looking for that information, but also because you want to connect with your existing audience and give them the type of information they need.

This is where the personas you’ve already written up can come in handy. And you can also bulk them up in this stage, too. What do these people need at this point in their journey? What are they expecting from your brand? How can you help?

For example, you can make the inference that someone who buys your book (let's use our book as an example) is a do-it-yourselfer, a small business owner, or a budding SEO.

When you understand what challenges your audience faces at what stage, you can begin assigning topics to keywords and building that content into your editorial calendar to offer content to this type of persona.

And don’t forget about the different ways people learn. You can further tailor your content by taking into consideration the many ways people like to consume content.

Did you find this post useful? Do you have comments or ideas? Do tell below!

Bruce Clay Blog

Rank Modifying Spammers

My good friend Bill at SEOByTheSea has unearthed a Google patent that will likely raise some eyebrows, while confirming the suspicions of others.

The patent is called Ranking Documents. When webmasters alter a page, or links to a page, the system may not respond immediately to those changes. Rather, the system may change rankings in unexpected ways.

A system determines a first rank associated with a document and determines a second rank associated with the document, where the second rank is different from the first rank. The system also changes, during a transition period that occurs during a transition from the first rank to the second rank, a transition rank associated with the document based on a rank transition function that varies the transition rank over time without any change in ranking factors associated with the document.

Further:

During the transition from the old rank to the target rank, the transition rank might cause:

  • a time-based delay response
  • a negative response
  • a random response, and/or
  • an unexpected response

So, Google may shift the rankings of your site, in what appears to be a random manner, before Google settles on a target rank.

Let’s say that you’re building links to a site, and the site moves up in the rankings. You would assume that the link building has had a positive effect. Not so if the patent code is active, as your site may have already been flagged.

Google then toys with you for a while before sending your site plummeting to the target rank. This makes it harder to determine cause and effect.
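
To make the idea concrete, here is a toy Python sketch of how a rank transition function could behave. Everything in it (the delay, the settling window, the noise) is invented for illustration; the patent doesn't publish actual parameters:

    import random

    def transition_rank(old_rank, target_rank, day, delay=14, settle=60):
        """Toy model of a rank transition function; all numbers are hypothetical."""
        if day < delay:
            return old_rank  # time-based delay response: nothing happens at first
        if day >= settle:
            return target_rank  # eventually settle on the target rank
        # During the transition window, drift toward the target while adding
        # random jitter, which can move the rank the "wrong" way for a while
        # (the negative/random/unexpected responses described above).
        progress = (day - delay) / (settle - delay)
        drift = old_rank + (target_rank - old_rank) * progress
        return max(1, round(drift + random.randint(-5, 5)))

    # A flagged site being moved from rank 8 to a target rank of 30 would see
    # weeks of noisy movement before the drop becomes obvious.
    for day in range(0, 71, 7):
        print(day, transition_rank(8, 30, day))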

Just because a patent exists doesn't mean Google is using it, of course. This may just be another weapon in the war-of-FUD, but it sounds plausible and it's something to keep in mind, especially if you're seeing this type of movement.

The Search Engine As Black Box

In ancient times (1990s), SEO thrived because search engines were stupid black boxes. If you added some keywords here, added a few links there, the black box would respond in a somewhat predictable, prescribed, fashion. Your rankings would rise if you guessed what the black box liked to “see”, and you plummeted if you did too much of what the black box liked to see!

Ah, the good old days.

These days, the black box isn’t quite so stupid. It’s certainly a lot more cryptic. What hasn’t changed, however, is the battle line drawn between webmasters and search engines as they compete for search visitor attention.

If there are any webmasters still under the illusion that Google is the SEO's friend, that must be a very small club, indeed. Google used to maintain a – somewhat unconvincing – line that if you just followed their ambiguous guidelines (read: behaved yourself) then they would reward you. It was you and Google on the good side, and the evil spammers on the other.

Of late, Google appear to have gotten bored of maintaining any pretense, and the battle lines have been informally redrawn. If you’re a webmaster doing anything at all that might be considered an effort to improve rank, then you’re a “spammer”. Google would no doubt argue this has always been the case, even if you had to read between the lines to grasp it. And they’d be right.

Unconvinced?

Look at the language on the patent:

The systems and methods may also observe spammers’ reactions to rank changes caused by the rank transition function to identify documents that are actively being manipulated. This assists in the identification of rank-modifying spammers.

“Manipulated”? “Rank modifying spammers”? So, a spammer is someone who attempts to modify their rank?

I’ve yet to meet a webmaster who didn’t wish to modify their rank.

Google As A Competitor

Google’s business model relies on people clicking ads. In their initial IPO filing, Google identified rank manipulation as a business risk.

We are susceptible to index spammers who could harm the integrity of our web search results. There is an ongoing and increasing effort by “index spammers” to develop ways to manipulate our web search results

It’s a business risk partly because the result sets need to be relevant for people to return to Google. The largely unspoken point is Google wants webmasters to pay to run advertising, not get it for “free”, or hand their search advertising budget to an SEO shop.

Why would Google make life easy for competitors?

The counter argument has been that webmasters provide free content, which the search engines need in order to attract visitors in the first place. However, now that relevant content is plentiful, that argument has been weakened. Essentially, if you don't want to be in Google, then block Google. They won't lose any sleep over it.

What has happened, however, is that the incentive to produce quality content, with search engines in mind, has been significantly reduced. If content can be scraped, ripped off, demoted, and merely used as a means to distract the search engine user enough to maybe click a few search engine ads, then where is the money going to come from to produce quality content? Google may be able to find relevant content, but "relevant" (on-topic) and "quality" (worth consuming) are seldom the same thing.

One content model that works in such an environment is content that is cheap to produce. Cheap content can be quality content, but like all things in life, quality tends to come with a higher price tag. Another model that works is loss-leader content, but then the really good stuff is still hidden from view, and it's still hard to do this well unless you've established considerable credibility – which is also expensive to do.

This is the same argument the newspaper publishers have been making. The advertising doesn't pay enough to cover the cost of production and make a profit – so naturally the winner in this game cuts production cost until the numbers do add up. What tends to be sacrificed in this process is quality.

NSFW Corp, a new startup by ex-TechCrunch writer and Guardian columnist Paul Carr, has taken the next step. They have put everything behind a paywall. There is no free content. No loss-leaders. All you see is a login screen.

Is this the future for web publishing? If so, the most valuable content will not be in Google. And if more and more valuable content lies beyond Google’s reach, then will fewer people bother going to Google in the first place?

The Happy Middle

Google argue that they focus on the user. They run experiments to determine search quality, quality as determined by users.

Here’s how it works. Our engineers come up with some insight or technique and implement a change to the search ranking algorithm. They hope this will improve search results, but at this point it’s just a hypothesis. So how do we know if it’s a good change? First we have a panel of real users spread around the world try out the change, comparing it side by side against our unchanged algorithm. This is a blind test — they don’t know which is which. They rate the results, and from that we get a rough sense of whether the change is better than the original. If it isn’t, we go back to the drawing board. But if it looks good, we might next take it into our usability lab — a physical room where we can invite people in to try it out in person and give us more detailed feedback. Or we might run it live for a small percentage of actual Google users, and see whether the change is improving things for them. If all those experiments have positive results, we eventually roll out the change for everyone.

Customer focus is, of course, admirable, but you’ve got to wonder about a metric that doesn’t involve the needs of publishers. If publishing on the web is not financially worthwhile then, over time, the SERPs will surely degrade in terms of quality as a whole, and users will likely go elsewhere.

There is evidence this is already happening. Brett at Webmasterworld pointed out that there is a growing trend amongst consumers to skip Google altogether and just head for Amazon, and other sites, directly. Amazon queries are up 73 percent in the last year.

There may well be a lot of very clever people at Google, but they do not appear to be clever enough to come up with a model that encourages webmasters to compete with each other in terms of information quality.

If Google doesn’t want the highest quality information increasingly locked up behind paywalls, then it needs to think of a way to nurture and incentivise the production of quality content, not just relevant content. Tell publishers exactly what content Google wants to see rank well and tell them how to achieve it. There should be enough money left on the table for publishers – i.e., less competition from ads – so that everyone can win.

I’m not going to hold my breath for this publisher nirvana, however. I suspect Google’s current model just needs content to be “good enough.”

SEO Book.com

Google: “As We Say, NOT As We Do”

Due to heavy lobbying, the FTC’s investigation into Google‘s business practices has ended with few marks or bruises on Google. If the EU reaches similar results, you can count on Google growing more anti-competitive in their practices:

Google is flat-out lying. They’ve modified their code to break Google Maps on Windows Phones. It worked before, but with the ‘redirect,’ it no longer works.

We are only a couple days into the new year, but there have already been numerous absurdities highlighted, in addition to the FTC decision & Google blocking Windows Phones.

When is Cloaking, Cloaking?

Don’t ask Larry Page:

Mr. Page, the CEO, about a year ago pushed the idea of requiring Google users to sign on to their Google+ accounts simply to view reviews of businesses, the people say. Google executives persuaded him not to pursue that strategy, fearing it would irritate Google search users, the people say.

Links to Google+ also appear in Google search-engine results involving people and brands that have set up a Google+ account.

Other websites can’t hardcode their own listings into the search results. But anyone who widely attempted showing things to Googlebot while cloaking them from users would stand a good chance of being penalized for their spam. They would risk both a manual intervention & being hit by Panda based on poor engagement metrics.

Recall that a big portion of the complaint about Google’s business practices was their scrape-n-displace modus operandi. As part of the FTC agreement, companies are able to opt out of being scraped into some of Google’s vertical offerings, but that still doesn’t prevent their content from making its way into the knowledge graph.

Now that Google is no longer free to scrape-n-displace competitors, apparently the parallel Google version of that type of content that should be “free and open to all to improve user experience” (when owned by a 3rd party) is a premium feature locked behind a registration wall (when owned by Google). There is a teaser for the cloaked information in the SERPs, & you are officially invited to sign into Google & join Google+ if you would like to view more.

Information wants to be free.

Unless it is Google’s.

Then users want to be tracked and monetized.

Trademark Violations & Copyright Spam

A few years back Google gave themselves a pat on the back for ending relationships with “approximately 50,000 AdWords accounts for attempting to advertise counterfeit goods.”

How the problem grew to that scale before being addressed went unasked.

Last year Google announced a relevancy signal based on DMCA complaints (while exempting YouTube) & even nuked an AdSense publisher for linking to a torrent of his own ebook. Google sees a stray link, makes a presumption. If they are wrong and you have access to media channels then the issue might get fixed. But if you lack the ability to get coverage, you’re toast.

Years ago a study highlighted how Google’s AdSense & DoubleClick were the monetization engine for stolen content. Recently some USC researchers came to the same conclusion by looking at Google’s list of domains that saw the most DMCA requests against them. Upon hearing of the recent study, Google’s shady public relations team stated:

“To the extent [the study] suggests that Google ads are a major source of funds for major pirate sites, we believe it is mistaken,” a Google spokesperson said. “Over the past several years, we’ve taken a leadership role in this fight. The complexity of online advertising has led some to conclude, incorrectly, that the mere presence of any Google code on a site means financial support from Google.”

So Google intentionally makes their infrastructure available to people they believe are engaged in criminal conduct (based on their own 50,000,000+ “valid” DMCA findings), and yet Google claims to have zero responsibility for those actions because Google may, in some cases, not get a direct taste of the revenues (only benefiting indirectly through increasing the operating costs of running a publishing business that is not partnered with Google).

A smaller company engaged in a similar operation might end up getting charged for the conduct of their partners. However, when Google’s ad code is in the page you are wrong to assume any relationship.

The above linked LA Times article also had the following quote in it:

“When our ads were running unbeknownst to us on these pirate sites, we had a serious problem with that,” said Gareth Hornberger, senior manager of global digital marketing for Levi’s. “We reached out to our global ad agency of record, OMD, and immediately had them remove them…. We made a point, moving forward, that we really need to take steps to avoid having these problems again.”

Through Google’s reality-warping efforts the ad network, the ad agency, the publisher, and the advertiser are all entirely unaccountable for their own efforts & revenue streams. And it is not like Google or the large ad agencies lack the resources to deal with these issues, as there is some serious cash in these types of deals: “WPP, Google’s largest customer, increased its spending on Google by 25% in 2012, to about $2 billion.”

These multi-billion-dollar budgets are insufficient to police the associated activities. Whenever anything is mentioned in the media, mention system complexity & other forms of plausible deniability. When that falls short, outsource the blame onto a contractor, service provider, or rogue partner. Contrasting that behavior, the common peasant webmaster must proactively monitor the rest of the web to ensure he stays in the good graces of his Lord Google.

DMCA Spam

You have to police your user generated content, or you risk your site being scored as spam. With that in mind, many big companies are now filing false DMCA takedown requests. Sites that receive DMCA complaints need to address them or risk being penalized. Businesses that send out bogus DMCA requests face no repercussions (until they are eventually hit with a class action lawsuit).

Remember how a while back Google mentioned their sophisticated duplication detection technology in YouTube?

There are over a million full movies on YouTube, according to YouTube!

The other thing that is outrageous is that if someone takes a video that is already on YouTube & re-uploads it again, Google will sometimes outrank the original video with the spam shag-n-republish.

In the below search result you can see that our video (the one with the Excel spreadsheet open) is listed in the SERPs 3 times.

The version we uploaded has over a quarter million views, but ranks below the spam syndication version with under 100 views.

There are only 3 ways to describe how the above can happen:

  • a negative ranking factor against our account
  • horrible relevancy algorithms
  • idiocy

I realize I could DMCA them, but why should I have to bear that additional cost when Google allegedly automatically solved this problem years ago?

Link Spam

Unlike sacrosanct ad code, if someone points spam links at your site, you are responsible for cleaning them up. The absurdity of this contrast is only further highlighted by the post Google did about cleaning up spam links, where one of the examples they highlighted publicly as link spam was not the person’s own spam efforts, but rather a competitor’s sabotage efforts that worked so well that they were even publicly cited as being outrageous link spam.

It has been less than 3 months since Google launched their disavow tool, but since its launch some webmasters are already engaging in pre-negative SEO. That post had an interesting comment on it:

Well Mr Cutts, you have created a monster in Google now, I’m afraid. Your video here http://www.youtube.com/watch?v=HWJUU-g5U_I says that the new disavow tool makes negative SEO a mere nuisance.
Yet in your previous video about the disavow tool you say it can take months for links to be disavowed as Google waits to crawl them?
In the meantime, the time lag makes it a little more than a “nuisance”, don’t you think?

Where Does This Leave Us?

As Google keeps adding more advanced filters to their search engines & folding more usage data into their relevancy algorithms, they are essentially gutting small online businesses. As Google guts them, it was important to offer a counter message of inclusion. A WSJ article mentioned that Google’s “get your business online” initiative was more effective at manipulating governmental officials than their other lobbying efforts. And that opinion was sourced from Google’s lobbyists:

Some Washington lobbyists, including those who have done work for Google, said that the Get Your Business Online effort has perhaps had more impact on federal lawmakers than any lobbying done on Capitol Hill.

Each of the additional junk, time-wasting tasks (e.g., monitoring backlinks and proactively filtering them, managing inventory & cashflow while waiting for penalties tied to competitive sabotage to clear, filing DMCAs against Google properties when Google claims to have fixed the issue years ago, merging Google Places listings into Google+, etc.) that Google foists onto webmasters who run small operations guarantees that a greater share of them will eventually get tripped up.

Not only will the algorithms be out of their reach, but so will consulting.

That algorithmic approach will also only feed into further “market for lemons” aspects as consultants skip the low margin, small budget, heavy lifting jobs and focus exclusively on servicing the companies which Google is biasing their “relevancy” algorithms to promote in order to taste a larger share of their ad budgets.

While chatting with a friend earlier today he had this to say:

Business is arbitrage. Any exchange not based in fraud is legitimate regardless of volume or medium. The mediums choose to delegitimize smaller players as a way to consolidate power.

Sadly most journalists are willfully ignorant of the above biases & literally nobody is comparing the above sorts of behaviors against each other. Most people inside the SEO industry also avoid the topic, because it is easier (& more profitable) to work with the elephants & attribute their success to your own efforts than it is to highlight the holes in the official propaganda.

I mean, just look at all the great work David Naylor did for a smaller client here & Google still gave him the ole “screw you” in spite of doing just about everything possible within his control.

The linkbuilding tactics used by the SEO company on datalabel.co.uk were low quality, but the links were completely removed before a Reconsideration Request was filed. The MD’s commenting and directory submissions were done in good faith as ways to spread the word about his business. Despite a lengthy explanation to Google, a well-documented clean-up process, and eventually disavowing every link to the site, the domain has never recovered and still violates Google’s guidelines.

If you’ve removed or disavowed every link, and even rebuilt the site itself, where do you go from there?

SEO Book.com

Learn About Robots.txt with Interactive Examples

Posted by willcritchlow

One of the things that excites me most about the development of the web is the growth in learning resources. When I went to college in 1998, it was exciting enough to be able to search journals, get access to thousands of dollars-worth of textbooks, and download open source software. These days, technologies like Khan Academy, iTunesU, Treehouse and Codecademy take that to another level.

I've been particularly excited by the possibilities for interactive learning we see coming out of places like Codecademy. It's obviously most suited to learning things that look like programming languages – where computers are naturally good at interpreting the "answer" – which got me thinking about what bits of online marketing look like that.

The kinds of things that computers are designed to interpret in our marketing world are:

  • Search queries – particularly those that look more like programming constructs than natural language queries such as [site:distilled.net -inurl:www]
  • The on-site part of setting up analytics – setting custom variables and events, adding virtual pageviews, modifying e-commerce tracking, and the like
  • Robots.txt syntax and rules
  • HTML constructs like links, meta page information, alt attributes, etc.
  • Skills like Excel formulae that many of us find a critical part of our day-to-day job

I've been gradually building out Codecademy-style interactive learning environments for all of these things for DistilledU, our online training platform, but most of them are only available to paying members. I thought it would make a nice start to 2013 to pull one of these modules out from behind the paywall and give it away to the SEOmoz community. I picked the robots.txt one because our in-app feedback is showing that it's one of the ones from which people learned the most.

Also, despite years of experience, I discovered some things I didn't know as I wrote this module (particularly about precedence of different rules and the interaction of wildcards with explicit rules). I'm hoping that it'll be useful to many of you as well – beginners and experts alike.

Interactive guide to Robots.txt

Robots.txt is a plain-text file found in the root of a domain (e.g. www.example.com/robots.txt). It is a widely-acknowledged standard and allows webmasters to control all kinds of automated consumption of their site, not just by search engines.

In addition to reading about the protocol, robots.txt is one of the more accessible areas of SEO since you can access any site's robots.txt. Once you have completed this module, you will find value in making sure you understand the robots.txt files of some large sites (for example Google and Amazon).

For each of the following sections, modify the text in the textareas and see them go green when you get the right answer.

Basic Exclusion

The most common use-case for robots.txt is to block robots from accessing specific pages. The simplest version applies the rule to all robots with a line saying User-agent: *. Subsequent lines contain specific exclusions that work cumulatively, so the code below blocks robots from accessing /secret.html.

Add another rule to block access to /secret2.html in addition to /secret.html.
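
For reference, a finished file that satisfies the exercise would look something like this:

    User-agent: *
    Disallow: /secret.html
    Disallow: /secret2.html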

Exclude Directories

If you end an exclusion directive with a trailing slash ("/") such as Disallow: /private/ then everything within the directory is blocked.

Modify the exclusion rule below to block the folder called secret instead of the page secret.html.
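
The answer is a one-line change, swapping the page for the folder (note the trailing slash):

    User-agent: *
    Disallow: /secret/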

Allow Specific Paths

In addition to disallowing specific paths, the robots.txt syntax allows for allowing specific paths. Note that allowing robot access is the default state, so if there are no rules in a file, all paths are allowed.

The primary use for the Allow: directive is to override more general Disallow: directives. The precedence rule states that "the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule. The order of precedence for rules with wildcards is undefined."

We will demonstrate this by modifying the exclusion of the /secret/ folder below with an Allow: rule allowing /secret/not-secret.html. Since this rule is longer, it will take precedence.
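
The finished file would look something like this; the longer (more specific) Allow: path takes precedence over the shorter Disallow: rule:

    User-agent: *
    Disallow: /secret/
    Allow: /secret/not-secret.html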

Restrict to Specific User Agents

All the directives we have worked with have applied equally to all robots. This is specified by the User-agent: * that begins our commands. By replacing the *, however, we can design rules that only apply to specific named robots.

Replace the * with googlebot in the example below to create a rule that applies only to Google's robot.
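
Assuming the starting snippet was the /secret/ exclusion from the previous examples, the answer would read:

    User-agent: googlebot
    Disallow: /secret/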

Add Multiple Blocks

It is possible to have multiple blocks of commands targeting different sets of robots. The robots.txt example below will allow googlebot to access all files except those in the /secret/ directory and will block all other robots from the whole site. Note that because there is a set of directives aimed explicitly at googlebot, googlebot will entirely ignore the directives aimed at all robots. This means you can't build up your exclusions from a base of common exclusions. If you want to target named robots, each block must specify all its own rules.

Add a second block of directives targeting all robots (User-agent: *) that blocks the whole site (Disallow: /). This will create a robots.txt file that blocks the whole site from all robots except googlebot which can crawl any page except those in the /secret/ folder.
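
Put together, the finished file would look something like this (googlebot ignores the second block entirely):

    User-agent: googlebot
    Disallow: /secret/

    User-agent: *
    Disallow: /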

Use More Specific User Agents

There are occasions when you wish to control the behavior of specific crawlers such as Google's Images crawler differently from the main googlebot. In order to enable this in robots.txt, these crawlers will choose to listen to the most specific user-agent string that applies to them. So, for example, if there is a block of instructions for googlebot and one for googlebot-images then the images crawler will obey the latter set of directives. If there is no specific set of instructions for googlebot-images (or any of the other specialist googlebots) they will obey the regular googlebot directives.

Note that a crawler will only ever obey one set of directives – there is no concept of cumulatively applying directives across groups.

Given the following robots.txt, googlebot-images will obey the googlebot directives (in other words, it will not crawl the /secret/ folder). Modify this so that the instructions for googlebot (and googlebot-news, etc.) remain the same, but googlebot-images has a specific set of directives meaning that it will not crawl the /secret/ folder or the /copyright/ folder:
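
One finished file that satisfies this (note that googlebot-images has to repeat the /secret/ rule, because it now ignores the plain googlebot block):

    User-agent: googlebot
    Disallow: /secret/

    User-agent: googlebot-images
    Disallow: /secret/
    Disallow: /copyright/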

Basic Wildcards

Trailing wildcards (designated with *) are ignored so Disallow: /private* is the same as Disallow: /private. Wildcards are useful however for matching multiple kinds of pages at once. The star character (*) matches 0 or more instances of any valid character (including /, ?, etc.).

For example, Disallow: news*.html blocks:

  • news.html
  • news1.html
  • news1234.html
  • newsy.html
  • news1234.html?id=1

But does not block:

  • newshtml (note the lack of a ".")
  • News.html (matching is case-sensitive)
  • /directory/news.html

Modify the following pattern to block only pages ending .html in the blog directory instead of the whole blog directory:
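
A pattern along these lines fits the exercise (the Match Whole Filenames section below shows how to anchor the match with $ if you need it to be strict):

    User-agent: *
    Disallow: /blog/*.html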

Block Certain Parameters

One common use-case of wildcards is to block certain parameters. For example, one way of handling faceted navigation is to block combinations of 4 or more facets. One way to do this is to have your system add a parameter to all combinations of 4+ facets such as ?crawl=no. This would mean for example that the URL for 3 facets might be /facet1/facet2/facet3/ but that when a fourth is added, this becomes /facet1/facet2/facet3/facet4/?crawl=no.

The robots rule that blocks this should look for *crawl=no (not *?crawl=no because a query string of ?sort=asc&crawl=no would be valid).

Add a Disallow: rule to the robots.txt below to prevent any pages that contain crawl=no being crawled.
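
One rule that does this, using a leading wildcard so crawl=no can appear anywhere in the URL:

    User-agent: *
    Disallow: /*crawl=no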

Match Whole Filenames

As we saw with folder exclusions (where a pattern like /private/ would match paths of files contained within that folder such as /private/privatefile.html), by default the patterns we specify in robots.txt are happy to match only a portion of the filename and allow anything to come afterwards even without explicit wildcards.

There are times when we want to be able to enforce a pattern matching an entire filename (with or without wildcards). For example, the following robots.txt looks like it prevents jpg files from being crawled but in fact would also prevent a file named explanation-of-.jpg.html from being crawled because that also matches the pattern.

If you want a pattern to match to the end of the filename then we should end it with a $ sign which signifies "line end". For example, modifying an exclusion from Disallow: /private.html to Disallow: /private.html$ would stop the pattern matching /private.html?sort=asc and hence allow that page to be crawled.

Modify the pattern below to exclude actual .jpg files (i.e. those that end with .jpg).
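
A rule like the following matches only paths that actually end in .jpg:

    User-agent: *
    Disallow: /*.jpg$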

Add an XML Sitemap

The last line in many robots.txt files is a directive specifying the location of the site's XML sitemap. There are many good reasons for including a sitemap for your site and also for listing it in your robots.txt file. You can read more about XML sitemaps here.

You specify your sitemap's location using a directive of the form Sitemap: <path>.

Add a sitemap directive to the following robots.txt for a sitemap called my-sitemap.xml that can be found at /my-sitemap.xml.
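
Assuming the www.example.com domain from the top of this guide as the host, the directive would be:

    Sitemap: http://www.example.com/my-sitemap.xml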

Add a Video Sitemap

In fact, you can add multiple XML sitemaps (each on their own line) using this syntax. Go ahead and modify the robots.txt below to also include a video sitemap called my-video-sitemap.xml that lives at /my-video-sitemap.xml.
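
Again assuming www.example.com as the host, the finished pair of directives would read:

    Sitemap: http://www.example.com/my-sitemap.xml
    Sitemap: http://www.example.com/my-video-sitemap.xml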

What to do if you are stuck on any of these tests

Firstly, there is every chance that I've made a mistake with my JavaScript tests that causes them to fail to grade some correct solutions the right way. Sorry if that's the case – I'll try to fix them up if you let me know.

Whether you think you've got the answer right (but the box hasn't gone green) or you are stuck and haven't got a clue how to proceed, please just:

  1. Check the comments to see if anyone else has had the same issue; if not:
  2. Leave a comment saying which test you are trying to complete and what your best guess answer is

This will let me help you out as quickly as possible.

Obligatory disclaimers

Please don't use any of the robots.txt snippets above on your own site – they are illustrative only (and some would be a very bad idea). The idea of this post is to teach the general principles about how robots.txt files are interpreted rather than to explain the best ways of using them. For more of the latter, I recommend the following posts:

I hope that you've found something useful in these exercises whether you're a beginner or a pro. I look forward to hearing your feedback in the comments.

SEOmoz Daily SEO Blog

The Partnership Trap – Whiteboard Friday

Posted by AndrewDumont

Partnership is such a ubiquitous term. Each circumstance comes with a broad set of definitions. Depending on the context and company, a partnership can take on nearly every imaginable form. Due to its variety, the topic is something that isn't touched on too frequently.

At Moz, we rigorously evaluate each potential partnership before engaging. The reason we do this is to avoid something I like to call "The Partnership Trap." It's common for early-stage companies to jump on every partnership opportunity that comes their way, as a partnership is often shiny and full of opportunity. Understandably, it's difficult to say no. This seems harmless in theory, but each partnership that you pursue requires the most precious of resources — time and focus — which few companies can freely spare.

In this week's Whiteboard Friday, I delve into the different forms a partnership can take, what to look for in prospective partnerships, and offer some tips on picking the ones that you should pursue.

Video Transcription

"Howdy, SEOmoz fans, and welcome to another edition of Whiteboard Friday. Today I'm going to be talking about a little something I like to call the "partnership trap." The reason why I call it a trap is because a lot of companies look at partnerships and they're like, "Oh, that sounds amazing." But then they get there and they realize that they partnered with too many people, or they partnered with the wrong type of company. So I'm going to be talking a little bit about what a partnership is and how to avoid the trap that some companies fall into.
 
So digging right into it, first let's talk about types of partnership that exist and some examples of that. So there's something I like to call an integration of products or services. A good example of this is what AmEx and Foursquare did. So for example with that partnership, Foursquare partnered with AmEx to provide basically the platform to allow small businesses to offer discounts to people that checked in via Foursquare. So a good example of two companies kind of teaming up together to work on something.
 
An extension of brand or product. A good example of this is Starbucks and ice cream. So Starbucks provided their brand to an ice cream manufacturer and in exchange did a rev share type of deal with the ice cream provider. So that's a good example of something like that.
 
And then an endorsement or exchange. So a good example of this is Best Buy with their warranty. So they aren't actually the ones that provide the warranty or create the warranty. It's actually a third party company that does that, and Best Buy offers that to all of their customers when they come into the store.
 
So those are some good examples of partnerships. What are the benefits of partnerships? Why should we partner? There are a few good reasons. The first is shared networks or shared user bases. Speaking generally, it's kind of difficult to put that into the case of a lot of different types of companies. But, typically, the idea is that by partnering with another company, you get to share their network. You get to share their user base, which is beneficial to both sides.
 
Second point is industry or brand validation. Typically, a lot of startups or smaller sized companies like to partner with a larger company to get the validation of a larger brand. That's another good benefit of partnerships.
 
And then the third thing would be an outsourcing of a non-core competence. So you can think of this as, let's say for example for us, if we do not see managing our API as a core competence of our business, then we would look to a third party provider to manage that API business. A good example of that is Twitter outsourcing their API data to third parties, like Gnip and DataSift.
 
So let's talk through kind of a partner checklist. These are things that I like to make sure exist in a partnership before actually going through with it. The first thing and probably the most important thing is that the partnership needs to be mutually beneficial. What I've found is that when you partner with a company and only one side is getting something out of it, it typically does not end up being a good partnership, because it's a one-sided relationship. You're giving all this, and you don't get anything in return.
 
One of the key things, even if you're that side that is getting all of the benefit, make sure that there's something in return that keeps the partner there and keeps the partnership strong. Make sure that a partnership is mutually beneficial before you engage.
 
Second part, this is probably one of the most important ones when it comes to startups and small businesses is making sure that the partnership is in line with the roadmap and the vision of the company. What a lot of small businesses and startups do is they get a big company, Microsoft or IBM or one of these big players in the space reaching out to them, or them reaching out to them. The problem is they end up going down this path that isn't in line with their vision or isn't in line with their roadmap, and they end up spending all of this time and resources and energy towards something that actually isn't that beneficial to their business in the long term.
 
So make sure to pick your partnerships with that in mind. Think about your roadmap, think about where you're going, and don't partner with people if it doesn't fit in with that.
 
Next thing, output exceeds input. So when you partner with someone, there's a lot that needs to be done. It takes engineering resources. It takes time from a lot of different people within your company. The problem with that is if the output does not exceed the input, then it's a bad investment.
 
There's financial modeling that can help and kind of determine what that output should be. But what I typically like to do is keep the ratio of three to one for output and input. The reason for that is what I've found is that a lot of the partnerships that I engage in, what I expect the output to be actually isn't what the output actually is. I try to get the best guess, but typically what you find is that the output is much less than what you expected or hoped for. So I typically like to give myself a little bit of wiggle room and keep that three to one ratio. So that's a good rule of thumb for when you look at the output.
 
Finally and probably most important, especially for us here at Moz is that cultures are aligned. There's nothing worse than teaming up with a partner that doesn't share the same values as you, doesn't share the same kind of beliefs that your company has. At our company, with our TAGFEE beliefs, if we are not teaming up with a company that shares those same values and isn't aligned with those type of things, we're going to run into problems.
 
It's definitely the number one thing that I take into account when we figure out whether we should partner with somebody, and there have been many times where we've decided not to partner with somebody strictly because their cultures were not aligned with ours. So that's a key thing to take a look at when you think about who to partner with.
 
So finally some tips. Whenever you can, try to mitigate risk when it comes to partnerships. Whenever possible, I try to pilot first. What that means is that you basically do a smaller size sample with the company that you're looking to partner with. Actually Square and Starbucks have a great example of this.
 
So Square and Starbucks teamed up to have Square process payments within Starbucks stores, but they didn't start in all of the stores, only a select number. I believe it was 7,000 Starbucks stores that would use Square to process their payments. The reason they did that was to mitigate the risk of rolling out to the entire organization and having it not work out.
 
So whenever possible try to pilot a program first before going full scale, just to make sure it works out and make sure that it is the type of return that you were expecting and hoping for. So try to pilot whenever possible.
 
Second tip: don't overweight legal. Sarah's probably not going to be too happy with this, but I've found that a lot of deals die in legal, and a lot of times they shouldn't. A lot of people put greater importance on legal than actually needs to be placed on it, and end up killing something where legal doesn't really come into play or isn't a huge part of anything down the line. So make sure that you're secure from a legal standpoint, but don't overstate the value and the impact of legal. Know that it is a worst-case-scenario piece of an agreement, and try to be flexible in legal to avoid issues that prevent things from getting done.
 
Finally, think long term. Most importantly, when you go into a partnership, you've got to think about it as a relationship. You've got to think, "I am going to be married to this company. I'm going to be dating this company." It really is a relationship: you're spending a lot of time with them, you're going to be working with them very in-depth, and your success relies very much on their success. So you have to go into it with a long-term mindset.
 
Don't do one-off partnerships just for the sake of doing a partnership. Make sure there's purpose to what you're doing, that you're thinking long term, and that you're picking your partners with that in mind.
 
Cool. Yeah, so that's pretty much it. If you have any questions, let me know, and thanks for tuning in to this edition. Thanks.

Video transcription by Speechpad.com



SEOmoz Daily SEO Blog

Getting Site Architecture Right


There are many ways to organize pages on a site. Unfortunately, some common techniques of organizing information can also harm your SEO strategy.

Sites organized by a hierarchy determined without reference to SEO might not be ideal, because the site architecture is unlikely to emphasize links to the information a searcher finds most relevant. An example would be burying high-value keyword pages deep within a site’s structure, as opposed to near the top, simply because those pages don’t fit easily within a “home”, “about us”, “contact” hierarchy.

In this article, we’ll look at ways to align your site architecture with search visitor demand.

Start By Building A Lexicon

Optimal site architecture for SEO is architecture based around the language visitors use. Begin with keyword research.

Before running a keyword mining tool, make a list of the top ten competitor sites that are currently ranking well in your niche and evaluate them in terms of language. What phrases are common? What questions are posed? What answers are given, and how are the answers phrased? What phrases/topics are given the most weighting? What phrases/topics are given the least weighting?

You’ll start to notice patterns, but for more detailed analysis, dump the phrases and concepts into a spreadsheet, which will help you determine frequency.

Once you’ve discovered key concepts, phrases and themes, run them through a keyword research tool to find synonyms and the related concepts your competitors may have missed.

One useful, free tool that can group keyword concepts is the Google AdWords Editor. Use the grouper function – described in “How To Organize Keywords” – with the “generate common terms” option to automatically create keyword groupings.

Another is the Google Contextual Targeting Tool.

Look at your own site logs for past search activity. Trawl through related news sites, Facebook groups, industry publications and forums. Build up a lexicon of phrases that your target visitors use.

Then use visitor language as the basis of your site hierarchy.

Site Structure Based On Visitor Language

Group the main concepts and keywords into thematic units.

For example, a site about fruit might be broken down into key thematic units such as “apple”, “pear”, “orange”, “banana” and so on.

Link each thematic unit down to sub-themes, e.g. for “oranges”, the next level could include links to pages such as “health benefits of oranges”, “recipes using oranges”, etc., depending on the specific terms you’re targeting. In this way, you integrate keyword terms with your site architecture.
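
As an illustration – a sketch only, with a hypothetical domain and paths – that theme-to-sub-theme linking might translate into a URL hierarchy like:

example.com/oranges/
example.com/oranges/health-benefits-of-oranges/
example.com/oranges/orange-recipes/

Each theme page links down to its sub-theme pages, and each sub-theme page links back up and across to related themes.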

Here’s an example in the wild:

The product listing by category navigation down the left-hand side is likely based on keywords. If we click on, say, the “Medical Liability Insurance” link, we see a group of keyword-loaded navigation links that relate specifically to that category.

Evidence Based Navigation

A site might be about “cape cod real estate”. If I run this term through a keyword research tool, in this case Google’s keyword tool, a few conceptual patterns present themselves: people search mainly by either geographic location (e.g. Edgartown, Provincetown, Chatham) or accommodation type (e.g. rentals, commercial, waterfront).

Makes sense, of course.

But notice what isn’t there?

For one thing, real estate searches by price. Yet, some real estate sites give away valuable navigation linkage to a price-based navigation hierarchy.

This is not to say a search function ordered by house value isn’t important, but ordering site information by house value isn’t necessarily a good basis for SEO-friendly site architecture. This functionality could be integrated into a search tool instead.

A good idea, in terms of aligning site architecture with SEO imperatives, would be to organise such a site by geographic location and/or accommodation type, as this matches the interests of search visitors. The site is made more relevant to search visitors than would otherwise be the case.

Integrate Site Navigation Everywhere

Site navigation typically involves concepts such as “home”, “about”, “contact”, “products” i.e. a few high-level tabs or buttons that separate information by function.

There’s nothing wrong with this approach, but the navigation concept for SEO purposes can be significantly widened by playing to the web’s core strengths. Tim Berners-Lee placed links at the heart of the web as the means to navigate from one related document to another. Links are still the web’s most common navigational tool.

“Navigational” links should appear throughout your copy. If people are reading your copy, and the topic is not quite what they want, they will either click back, or – if you’ve been paying close attention to previous visitor behaviour – will click on a link within your copy to another area of your site.

The body text on every page of your site is an opportunity to integrate specific, keyword-loaded navigation. As a bonus, this may encourage click-through rather than click-back, pass link juice to sub-pages, and ensure no page on your site is orphaned.

Using Site Architecture To Defeat Panda & Penguin

These two animals have a world of connotations, many of them unpleasant.

Panda was an update partly focused on user experience. Google is likely using interaction metrics, and if Google isn’t seeing what it deems to be positive visitor interaction, then your pages, or your site, will likely take a hit.

What metrics is Google likely to be looking at? Bounce-backs, for one. This is why relevance is critical. The more you know about your customers, and the more relevant link options you can give them to click deeper into your site rather than back to the search results, the more likely you are to avoid being Panda-ized.

If you’ve got pages in your hierarchy that users don’t consider to be particularly relevant, either beef them up or remove them.

Update Penguin was largely driven by anchor text. If you use similar anchor text keywords pointing to one page, Penguin is likely to cause you grief. This can happen even if you’re mixing up keywords, e.g. “cape cod houses”, “cape cod real estate”, “cape cod accommodation”. That level of keyword diversity may have been acceptable in the past, but it’s not now.

Make links specific, and point them at specific, unique pages. Get rid of duplicate, or near-duplicate, pages. Each page should be unique, not just in terms of keywords used, but in terms of concept.

In a post-Panda/Penguin world, webmasters must have razor-sharp focus on what information searchers find most relevant. Being close, but not quite what the visitor wanted, is an invitation for Google to sink you.

Build relevance into your information architecture.


SEO Book.com

Excel Statistics for SEO and Data Analysis


Posted by Virgil

Everybody has probably already realized that there is almost no data we cannot get. We can get data about our own websites using free tools, and we also spend tons of money on paid tools to get even more. Analyzing the competition is just as easy: competitive intelligence tools are everywhere, and we often use Compete or Hitwise. Open Site Explorer is great for getting more data about our own and our competitors' backlink profiles. No matter what information we are trying to get, we can, by spending a fortune or nothing at all.

My favorite part is that almost every tool has one feature in common: the "Export" button. This is the most powerful feature of all these tools, because by exporting the data into Excel we can sort it, filter it and model it any way we want. Most of us use Excel on a regular basis and are familiar with the basic functions, but Excel can do far more than that. In the following article I will present the most common statistical techniques, and the best part is that we don't have to memorize complicated statistical equations – it's all built into Excel!

Statistics is all about collecting, analyzing and interpreting data. It comes in very handy when decision making faces uncertainty. By using statistics, we can overcome these situations and generate actionable analysis.

Statistics is divided into two major branches, descriptive and inferential.

Descriptive statistics are used when you know all the values in the dataset. For example, you survey 1,000 people asking if they like oranges, with two choices (Yes and No). You collect the results and find that 900 answered Yes and 100 answered No. The proportions: 90% Yes, 10% No. Pretty simple, right?

But what happens when we cannot observe all the data?

When you know only part of your data, you have to use inferential statistics. Inferential statistics is used when you know only a sample (a small part) of your data and you make inferences about the entire population.

Let's say you want to calculate the email open rate for the last 24 months, but you have data only for the last six. Assume that out of 1,000 emails, 200 people opened the email, leaving 800 emails unopened. That equates to a 20% open rate and 80% who did not open. This is true for the last six months, but it might not be true for 24 months. Inferential statistics helps us understand how close we are to the entire population and how confident we can be in this assumption.

The open rate for the sample may be 20%, but it may vary a little, so let's allow ±3%; in this case the range is from 17% to 23%. This sounds pretty good, but how confident are we in these numbers? Put another way: what percentage of random samples taken from the entire population (data set) will fall in the range of 17%–23%?

In statistics, a 95% confidence level is considered reliable. This means 95% of the samples we take from the entire population will produce an open rate of 17–23%; the other 5% will fall either above 23% or below 17%. In other words, we are 95% sure that the open rate is 20% ± 3%.
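
As a quick sketch – the formula below is my addition, not from the original post, with the 0.2 open rate and 1,000 sample size taken from the example above – the normal-approximation margin of error for a proportion can be computed directly in Excel:

=1.96*SQRT(0.2*(1-0.2)/1000)

This returns roughly 0.0248, or about ±2.5 points at 95% confidence, which suggests the ±3% above is a rounded figure.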

The term data stands for any values that describe an object or an event, such as visitors, survey responses or emails.

The term data set has two components: the observation unit, which is for example visitors, and the variables, which can represent the demographic characteristics of your visitors such as age, salary or education level. Population refers to every member of your group – in web analytics, all the visitors. Let's assume 10,000 visitors.

A sample is only a part of your population, based on a date range, visitors who converted, etc. In statistics, the most valuable sample is a random sample.

The data distribution is given by the frequency with which the values in the data set occur. By plotting the frequencies on a chart, with the range of the values on the horizontal axis and the frequencies on the vertical axis, we obtain the distribution curve. The most commonly used distribution is the normal distribution or the bell-shaped curve.

An easy way to understand this is by considering the number of visitors a website has. For example, visits may average 2,000/day, but some days there are more, such as 3,000, and some days fewer, such as 1,000.

Here, probability theory comes in handy.

Probability stands for the likelihood of an event happening such as having 3,000 visitors/day and is expressed in percentages.

The most common example of probability, which probably everybody knows, is the coin flip. A coin has two faces, heads and tails; what is the probability of getting heads when flipping a coin? There are two equally likely outcomes, so 100%/2 = 50%.

Enough with theories and let's get a little bit more practical.

Excel is an amazing tool that can help us with statistics, it's not the best but we all know how to use it so let's dive right into it.

First, install the Analysis ToolPak.

Open Excel, go to File -> Options -> Add-ins; at the bottom you will find the Manage drop-down.

Hit Go -> select Analysis ToolPak -> and click OK.

Now under the Data tab you will find Data Analysis.

The Data Analysis tool can give you super fancy statistical information, but first let's start with something easier.

Mean, Median, and Mode

Mean is the statistical term for average; for example, the mean (or average) of 4, 5 and 6 is 5. How do we calculate the mean in Excel? =AVERAGE(number1, number2, etc.)

Mean=AVERAGE(AC16:AC21)

By calculating the mean we know how much we sold on average. This information is valuable when there are no extreme values (or outliers). Why? It looks like we sold on average $3,000 worth of products, but actually we were lucky that somebody spent more on September 6; we did pretty poorly during the previous six days, with an average of only $618. Excluding the extreme values from the mean can reflect a more relevant performance rate.

The median is the observation situated in the middle of the data set. For example, the median of 224, 298, 304 is 298. To calculate the median for a large set of data we can use the following formula: =MEDIAN(224,298,304)

When is the median useful? The median is useful when you have a skewed distribution; for example, you sell candies for $3 up to $15 a bag, but you also have some very expensive candies at $100 a bag that nobody really purchases on a regular basis. At the end of the month you have to make a report, and you will see that you sold mostly cheap candies and only a couple at $100. In this case the median is the more meaningful figure.

The easiest way to determine when to use the median vs. the mean is by creating a histogram. If your histogram is skewed with an extreme, then you know that the best way to go is by calculating the median.

The mode is the most common value, for example the mode for: 4,6,7,7,7,7,9,10 is 7

In Excel you can calculate the mode by using the =MODE(4,6,7,7,7,7,9,10) formula.

Although this looks nice, keep in mind that Excel returns the lowest mode; in other words, if you calculate the mode for the data set 2,2,2,4,5,6,7,7,7,8,9, there are two modes, 2 and 7, but Excel will show you only the smallest value: 2.
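
If you're on Excel 2010 or newer, there is also MODE.MULT, which can return every mode at once. As a sketch, assuming the data set above sits in A1:A11: select two vertical cells, enter =MODE.MULT(A1:A11) and confirm with Ctrl+Shift+Enter; it returns both 2 and 7.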

When can we use the mode function? Calculating the mode is beneficial only for whole numbers such as 1, 2 and 3. It is not useful for fractional numbers such as 1.744, 2.443 or 3.323, as the chance of having duplicated numbers, and therefore a mode, is very small.

A great example of calculating the mode, or the most frequent value, would be survey results.

Histograms

Let's say your blog recently received hundreds of guest posts; some of them are very good, but some are just not that good. Maybe you want to see how many of your blog posts received 10 backlinks, 20, 30 and so on, or maybe you are interested in social shares such as tweets or likes – or simply visits.

Here we will categorize them into groups by using a visual representation called a histogram. In this example I will use visits per article. The way I set up my Google Analytics account is as follows: I have a profile that tracks only my blog, nothing else. If you don't have such a profile set up yet, you can create a segment on the fly.

How do you do this? Pretty simple:

Now go to export->CSV

Open the Excel spreadsheet and delete all the columns except Landing Page and Visits. Now create the ranges (also called bins) that you want the data categorized into. Let's say we want to see how many articles generated 100 visits, 300, 500 and so on.

Go to Data -> Data Analysis -> Histogram -> OK

  • Input range will be the visits column
  • Bin Range will be the groups
  • Output Range, click on the cell where you want your histogram to show up
  • Check Chart Output
  • Click OK

Now you have a nice histogram that shows the number of articles categorized by visits. To make the histogram easier to understand, click on any cell in the Bin and Frequency table and sort the frequency from low to high.

Analyzing the data is now even easier. Go back and sort out all the articles with 100 visits or fewer (Visits drop-down -> Number Filters -> Between… 0-100 -> OK) in the last month, and update or promote them.
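
As an alternative sketch – my addition, with assumed ranges – Excel's FREQUENCY array function produces the same counts without the Analysis ToolPak. With visits in B2:B200 and bin values in D2:D10, select a vertical output range one cell taller than the bins, enter =FREQUENCY(B2:B200,D2:D10) and confirm with Ctrl+Shift+Enter.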

Visits by source

How valuable is this report for you?

It's pretty good, but not amazing. We can see ups and downs, but… how much did YouTube contribute to total visits in February? You can drill down, but that is extra work, and it is very uncomfortable when the question arrives on a phone call with a client. To get the most out of your graphs, create valuable, self-descriptive reports.

The report above is so much easier to understand. It takes more time to create it but it's more actionable.

What we can see is that in May, Facebook contributed a bigger share of the total than usual. How come? Probably the May marketing campaign was more effective than in other months, resulting in a lot of traffic. Go back and do it again: if it was a working solution, repeat it.

If you suspect that May is bigger than the rest of the months just by chance, then you should run a Chi-Square test to make sure the increase in visits is not random and that the effectiveness of your campaign is statistically supported.

The Actual column is the number of visits; the Expected column is the mean (average) of the Actual column. The formula for the Chi-Square test is =1-CHITEST(N10:N16,O10:O16), where N10:N16 are the values from Actual and O10:O16 the values from Expected.

The result of 100% is the confidence level you can have in concluding that the work invested in each month's campaign impacts the number of visitors coming from Facebook.
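
As a sketch of the layout behind that formula (the Actual range matches the formula above; the Expected cells are my assumption): each cell in the Expected column can simply reference the mean of the Actual column, e.g. O10 contains =AVERAGE($N$10:$N$16) and is copied down through O16, after which =1-CHITEST(N10:N16,O10:O16) returns the confidence level.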

When creating metrics, make them as easy as possible to understand, and relevant to the business model. Everybody should understand your reports.

The video below explains another example of the Chi-Square function pretty well:
http://www.youtube.com/watch?v=UPawNLQOv-8

Moving average and linear regression for forecasting

We often see graphs like the one above. It could be sales or visits – it doesn't really matter – constantly going up and down. There is a lot of noise in the data that we probably want to eliminate to gain a better understanding.

The solution: the moving average! This technique is sometimes used by traders for forecasting – stock prices boom one day and hit the floor the next.

Let's see how we can use their basic techniques to make it work for us.

Step 1:
Export to Excel the number of visits/sales for a long time period, such as one or two years.

Step 2:
Go to Data -> Data Analysis -> Moving Average -> OK

Input range will be the column with the number of visits

Interval will be the number of days over which the average is calculated. Here you should create one moving average with a higher number, such as 30, and another one with a smaller number, such as 7.

Output range will be the column right next to the visits column.

Repeat the steps for the interval of 7 days

Personal preference: I didn't check the Chart Output and Standard Error boxes on purpose; I will create a graph later on.
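
If you prefer plain formulas to the ToolPak, the same numbers come from =AVERAGE – a sketch, assuming dates in column A and visits in column B starting at row 2: enter =AVERAGE(B2:B31) in C31 for the 30-day moving average and =AVERAGE(B2:B8) in D8 for the 7-day one, then copy each formula down its column.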

Your data now probably looks similar to this:

Now if you select all the columns and create a line chart it will look like this:

This representation has less noise; it is easier to read and it shows some trends. The green line cleans things up a little, but it still reacts to almost every major event. The red line, instead, is more stable and shows the real trend.

At the end of the line chart you can see that it says Forecast. That is forecasted data based on previous trends.

In Excel there are two ways of creating a linear regression. The first uses the formula =FORECAST(x, known_y's, known_x's), where "x" stands for the date you want to forecast, "known_y's" is the visits column and "known_x's" is the date column. This technique is not that complicated, but there is an easier way, shown after the sketch below.
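
A filled-in sketch (the ranges and the date are assumptions): with dates in A2:A366 and visits in B2:B366, =FORECAST(DATE(2013,1,15),B2:B366,A2:A366) returns the predicted number of visits for January 15, 2013.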

The easier way: by selecting the entire visits column and dragging down the fill handle, Excel will automatically forecast values for the following dates.

Note: Make sure to select the entire data set in order to generate an accurate forecast.

There is a theory about comparing a 7-day moving average with a 30-day one. As said above, the 7-day line reacts to almost every major change, while the 30-day one requires more time to change direction. As a rule of thumb, when the 7-day moving average crosses the 30-day moving average, you can expect a major change that will last longer than a day or two. As you can see above, around April 6th the 7-day moving average crosses the 30-day one and the number of visits goes down; around June 6th the lines cross again and the trend turns upward. This technique is useful when you are losing traffic and are not yet sure whether it is a real trend or just a daily fluctuation.

Trendline

The same results can be achieved by using the trendline feature of Excel: right-click on the wiggling line -> select Add Trendline.

Now you can select the regression type, and you can use the Forecast feature as well. Trendlines are probably the most useful way to find out whether your traffic/sales are trending upward, downward, or are simply flat.

Without the linear function we cannot confidently tell whether we are doing better or not. By adding a linear trendline we can see that the slope is positive, and the trendline equation explains how our trend is moving.
y=0.5516x-9541.2

X represents the number of days. The coefficient of x, 0.5516, is a positive number, which means the trendline is going upward. In other words, with every day that passes we gain roughly half a visitor as a trend.

R^2 represents the level of accuracy of the model. Our R^2 is 0.26, which implies that our model explains 26% of the variation. Simply put: the trend accounts for about a quarter of the day-to-day movement in visitors; the rest is noise.
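
The same three numbers can also be pulled straight into worksheet cells, which makes them easier to reuse in reports. A sketch, assuming dates in A2:A366 and visits in B2:B366: =SLOPE(B2:B366,A2:A366) returns the 0.5516 coefficient, =INTERCEPT(B2:B366,A2:A366) the -9541.2 constant, and =RSQ(B2:B366,A2:A366) the 0.26 R^2 value.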

Seasonal Forecasting

Christmas is coming soon and forecasting the winter season can be beneficial especially when your expectations are high.

If you didn't get hit by Panda or Penguin and your sales/visitors are following a seasonal trend, then you can forecast a pattern for sales or visitors.

Seasonal forecasting is a technique that enables us to estimate future values of a data set that follows a recurring variation. Seasonal datasets are everywhere: an ice cream store will be most profitable during the summer season, while a gift store can reach maximum sales during the winter holidays.

Forecasting the near future can be very beneficial, especially when we are planning to invest money in marketing for those seasons.

The following example is a basic model but this can be expanded to a more complex one to fit your business model.

Download the Excel forecasting example

I will break the process into steps to make it easier to follow. The best way to implement it for your business is by downloading the Excel spreadsheet and following along:

  • Export your data – the more data you have, the better the forecast you can make – and place the dates in column A and sales in column B.
  • Calculate the index for each month and add the data in column C

To calculate the index, scroll down to the bottom right of the spreadsheet and you will find a table called Index. The index for Jan-2009 is calculated by dividing the sales from Jan-2009 by the average sales of the entire year 2009.

Repeat calculating the index for every month of every year.

In cells S38 to S51 we calculated the average index for every month.

Because our seasonality repeats every 12 months, we copied the index means into column C over and over again, matching up every month. As you can see, January 2009 has the same index value as January 2010 and 2011.
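
As a formula sketch (the cell addresses are assumptions based on the layout described): with Jan-2009 sales in B10 and all of 2009 in B10:B21, the Jan-2009 index in C10 could be =B10/AVERAGE($B$10:$B$21); the same pattern repeats for each year.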

  • In column D calculate the Adjusted data by dividing the monthly sales by the index =B10/C10
  • Select the values from column A, B and D and create a line chart
  • Select the adjusted line (in my case the Red line) and add a linear trendline, check the "Display Equation on Chart" box

  • Calculate the backcasted non-seasonal data by multiplying the monthly sales by the coefficient from the trendline equation and adding the constant from the equation (column E)

After creating the trendline and displaying the equation on the chart, we take the coefficient to be the number that multiplies x, and the constant to be the standalone number, which usually has a negative sign.

We place the coefficient into cell E2 and the Constant into cell F2

  • Calculate the Backcasted Seasonal data by multiplying the index (column C) with the previously calculated data (column E)
  • Calculate MPE (mean percentage error) by dividing sales by the backcasted seasonal data and subtracting 1 (=B10/F10-1)
  • Calculate MAPE (mean absolute percentage error) by squaring the MPE column (=G10^2)

In my case, cells F50 and F51 represent the forecasted data for Nov-2012 and Dec-2012, and cell H52 represents the error margin.

Using this technique we can say that in December 2012 we are going to make $22,022 ± 3.11%. Now go to your boss and show him how you can predict the future.

Standard deviation

Standard deviation tells us how much the values deviate from the mean; in other words, we can interpret it as a confidence level. For example, if you look at monthly sales, your daily sales will differ every day. You can use the standard deviation to calculate how much daily sales deviate from the average.

There are two Standard Deviation formulas in Excel that you can use.
=stdev – when you have sample data. Avinash Kaushik explains in more detail how sampling works: http://www.kaushik.net/avinash/web-analytics-data-sampling-411/

or

=stdevp – when you have the entire population, in other words when you are analyzing every visitor. My personal preference is =stdev, simply because there are cases when the JS tracking code is not executed.

Let's see how we can apply standard deviation in our daily life.

You probably see this wiggling graph in Analytics every day, but it is not very intuitive. By using standard deviation in Excel you can more easily visualize and understand what is happening with your data. As you can see above, average daily visits were 501 with a standard deviation of 53, and, most importantly, you can see where you exceeded the normal range, so you can go back and check which of your marketing efforts caused that spike.

For the Excel document use the following link http://blog.instantcognition.com/wp-content/uploads/2007/01/controllimits_final.xls
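
A minimal sketch of such control bands – my addition, with an assumed range: with daily visits in B2:B32, =AVERAGE(B2:B32) gives the center line, while =AVERAGE(B2:B32)+2*STDEV(B2:B32) and =AVERAGE(B2:B32)-2*STDEV(B2:B32) give the upper and lower limits. Days outside the band are the spikes worth investigating.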

Correlation

Correlation is the tendency of one variable to change in relation to another variable. A common example in web analytics is the number of visitors and the number of sales: the more qualified visitors you have, the more sales you make. Dr. Pete has a nice infographic explaining correlation vs. causation: http://www.seomoz.org/blog/correlation-vs-causation-mathographic

In Excel we use the following formula to determine the correlation:
=correl(x,y)

As you can see above we have a correlation between Visits and Sales of 0.1. What does this mean?

  • between 0 and 0.3 is considered weak
  • between 0.3 and 0.7 is normal
  • above 0.7 is strong

The conclusion in our case is that daily visits don’t drive daily sales, which suggests that the visitors you are attracting are not qualified to convert. You also have to apply your business sense when making a decision, but a correlation as low as 0.1 should not be overlooked.

If you want to correlate three or more datasets you can use the correlation function from the Data Analysis tool.

Data->Data Analysis->Correlation

Your result will look similar to this one:

What we can see here is that none of the elements correlate with each other:

  • Sales and visitors= correlation of 0.1
  • Sales and Social Shares = correlation of 0.23

Descriptive Statistics for quick analysis

Now you have a pretty good understanding of the mean, standard deviation, etc., but calculating each statistical element can take a long time. The Data Analysis tool provides a quick summary of the most common elements.

  • Go to Data->Data Analysis-> Descriptive Statistics
  • Input Range – select the data you want to analyze
  • Output Range – select the cell where you want your table to be displayed
  • Check Summary Statistics

The result is pretty nice:

You already know most of the elements, but what is new here is Kurtosis and Skewness.

Kurtosis describes how peaked the distribution is compared to a normal curve: the higher the kurtosis, the sharper the peak and the heavier the tails. In our case the kurtosis is a very low number, which means the values are spread out fairly evenly.

Skewness explains if your data is negatively or positively skewed from a normal distribution. Now let me show you more visually what I mean:

Skewness: -0.28 (the distribution leans towards the higher values, 2500 and 3000)
Kurtosis: -0.47 (the curve is slightly flatter than a normal distribution, with only a small deviation from the center)
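
If you only need these two numbers, Excel also exposes them as standalone functions. A sketch, assuming the data sits in B2:B32: =SKEW(B2:B32) and =KURT(B2:B32) return the same skewness and kurtosis values as the Descriptive Statistics summary.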

These are some of the techniques you can use when analyzing data. The biggest challenge with statistics and Excel is applying these techniques in various situations, not being limited to visits or sales. A great example of multiple statistical approaches implemented together was realized by Tom Anthony in his post about his Link Profile Tool.

The examples above are just a small fraction of what can be done with statistics and Excel. If you use other techniques that help you make faster and better decisions, I would love to hear about them in the comments.



SEOmoz Daily SEO Blog

Marketing Resolutions for 2013 – Hold Us Accountable (and Help Us Out!)


Marketing Resolutions for 2013 – Hold Us Accountable (and Help Us Out!) was originally published on BruceClay.com, home of expert search engine optimization tips.

Happy New Year! I’ve been told one way to hold yourself accountable to your goals is to post them publicly – but what’s even better is when you have a support system to help you get there. So Virginia and I were chatting this morning and we’re ready to share with you some professional marketing resolutions and goals for 2013 — and maybe you can help us out.

Hanging Out on a Trolley in San Francisco

Virginia Nussey and Jessica Lee vow to take more tourism-poster-ready shots this year when travelling to conferences together — like this one at SES SF 2012.

Aside from doing more traveling together to our favorite conferences (see left), we both have hopes of growing our skill sets, refining processes and continuing to develop professionally.

We hope you’ll weigh in with your goals, too, and then let’s do our best to collaborate! Who knows, maybe we can help one another accomplish our professional resolutions for 2013. What do you say?

Virginia’s Resolutions for 2013

I tend to shy away from resolutions because the new year seems like an arbitrary time to start doing something you obviously feel you should be doing anyway.

Yet at the same time, a new year resolution has something going for it: a built-in metric for measuring success. And as we know, what gets measured gets done.

When you make a resolution for the new year, you have an easy date marker to check on concrete progress made at intervals throughout the year and get a sense of overall effectiveness one year out.

In addition to the time constraint, all goals are more likely to be achieved when they are measurable and concrete. These were my chief qualifications as I set my professional resolutions for 2013. Have you found success with new year resolutions in the past? What characteristics do you feel a resolution must have in order to be achievable and not just a passing wish?

These are my professional resolutions. I’d love to hear yours along with your plans for how to make sure they get done.

1. Establish 3 ongoing guest blogging or cross-blogging partnerships.

Handwritten Letter

We ramped up guest blogging on the BCI blog in December, and through the course of arranging posts from guest authors and trading posts from others, I was reminded of how important guest spots are for every blog’s promotion and audience reach.

And so I’m setting a concrete goal for guest blogging in 2013. I want at least 3 blog partners to trade guest posts with every month. I’ve already got some guest partnerships brewing, but if you want in on the guest blog action at the Bruce Clay, Inc. blog, let me know.

It goes without saying that guests to the BCI blog must be experienced, expert-level, highly reputable Internet marketing practitioners.

2. Write down processes and become more efficient with client work flow.

Typewriter

Jess and I wear many hats in our dear content and marketing department. With all the sombrero swapping we do, defined processes are a helpful thing to have on hand during transitions between projects and handing off to other BCI team members. Written processes are also a critical educational component, internally and with clients.

So over the year ahead I want to polish processes for:

  • Keyword research in the SEO content marketing vein.
  • Content writing, including the subject research and discovery process and on-page keyword optimization.
  • Performance measurement of content starting from baseline SEOToolSet ranking reports and on to Google Analytics conversion path analysis.

With those processes in place, Jess and I will be in a happy place for efficiency and expansion opportunities.

3. Invest in a personal pet project where I can apply my professional skill set.

Businessmen Meditating

There’s a special kind of fulfillment that can be attained when your professional and personal interests collide. Coming up with the right idea to pursue is going to require some thought, but I know it’ll be worth it.

Understanding Internet marketing is a special skill in the world we live in today; not using it to further our personal projects and goals is doing ourselves a disservice.

Happiness in life comes from striking a balance in the various areas of our life. Hard working professionals are in danger of overlooking personal passions. I want to be sure to carve out more time every week for the activities and issues that interest me beyond Internet marketing. I want to dedicate my professional skills of promotion on the Internet to the things I care about.

Stay tuned for more details on this front.

Jessica’s Resolutions for 2013

I’m a huge fan of professional development and growth. I’m always trying to better my skill set, hurdle over personal barriers and think of new ways to look at how my discipline is growing and changing. So this year is going to be all about challenging myself to experiment with new things, testing new ideas and learning as much as I can.

1. Get better at analytics.

Highlighted Word on Paper

This has been on my list for way longer than I’d like to admit. I can find my way around Google Analytics OK, but it’s extremely important to what I do every day that I’m able to quickly access the data and set up what I need in analytics to make more informed decisions about strategy. It’s especially interesting to me to be able to tie content to revenue.

I’ve been told that the Google Analytics certification is a great way to sit through the learning necessary to customize and get the most out of their analytics. So my first step is to get through the basic GA certification, start using it like a power player, and then explore more advanced learning in the area of analytics.

There’s so much to be learned, and I truly believe that we cannot get the most out of our efforts until we learn how to best use the data that’s available to us. I plan to explore mining data on just about everything I do, so that means lots of learning and doing with the right reporting.

2. Be more efficient.

Architect Plans

Every month that passes here at BCI is only getting busier and busier. With competing deadlines, projects and clients, and everyone working their heiny off, my goal is to find ways for us to work smarter.

Our content department is an interesting beast. We’re not *just* in the business of writing. We deal a lot with many other facets of digital marketing because they all interconnect. But on top of all this, we do have to write. So not only are we dealing with the more strategic side of content, but we toggle back to creating content as well.

I need more advanced tools and processes this year to make sure the content team has continued success, including exploring new tactics and making sure none of us burn out. So efficiency is the name of the game here.

I’m especially interested in exploring different ways of project management to make things “easier.” That includes tools for large-scale project management like scheduling people and editorial calendars. I also want to further explore lean and agile approaches to tackling projects, like sprints.

3. Start being more active in the community.

Community Badge

I love the fact that we have a platform at BCI to share ideas and concepts – namely the BCI blog and search marketing conferences. That said, every year I vow to do more outreach and collaborate more, but it rarely happens at the rate I’d like it to. And it’s so frustrating!

Know how many group conversations on LinkedIn I see every day that I could contribute to but never do? Know how many queries I get from HARO but never answer? Know how many opportunities there are for me to collaborate with all the creative and talented people in the industry and I just don’t reach out on a regular basis?

So this is my goal: Start sharing more knowledge where I can; start being a part of the conversation more; start collaborating more; start listening more. It all sparks a creative cycle. Every time I spend a bit of my day being a part of the community, I get inspired.

I’m excited to be joining Search Engine Watch this year as a contributing guest author, and I hope to contribute more to the Content Marketing Institute in 2013, too. Looking for other opps, too — so please reach out if you have them!

Virginia is also doing more educational outreach, starting with guest authorship at the Online Marketing Institute. And we hope to forge many more relationships where we can be a more active part of the community we love. I hope to see more webinars, guest posting, speaking and/or teaching and other forms of educational content in our future.

4. Explore the relationship between content and user experience.

Tablet Reading

I think content as it relates to user experience is going to be very important in the future. So I’m interested in researching how user experience and content work together. I want to get to the bottom of how people like to consume content and why.

Some of the questions I need to explore are:

  • What makes a better learning experience for people and why?
  • What gets people excited about content and not overwhelmed by it?
  • What makes people react to one type of content over another?

I’ve been fascinated by user experience for some time. Google is highly focused on the user experience, and I think it’s only a matter of time before we’ll stop the more granular SEO tactics and focus more time on the user experience.
I’m really intrigued by how content becomes a part of this. And I want to start experimenting with how far I can push the limits of what I know is good for SEO in favor of user experience, to see how content fares in those environments.

5. More experimenting and testing.

Audio Recorder and Notepad

This year, I want to take more risks with the content I produce. That means exploring new ways of writing (starting with a creative writing course I’ll be taking this month) and new ways of creating and distributing information.

I consider myself a pretty diverse writer, but I tend to fall back on a lot of “how to” in my blog pieces. To be honest, I’ve been somewhat hesitant to break away from that mold because people do seem to find that content very useful. But I want to start exploring other ways of writing that are still useful and relevant and resonate with readers.

So that’s where I’ll need you, our reader, to weigh in on what types of content you like and want to see more of.

In 2013, I’m looking to find answers to:

  • What works better: shorter or longer posts? Do people really have the time or attention span to read lengthy pieces anymore?
  • Is curated content a good solution for us for certain content objectives?
  • Should we have more guest authors and user generated content? What’s the best way to approach this?
  • Do people like shorter posts? More in-depth posts?
  • Do readers react more strongly to posts driven by:
  1. Storylines and experiences?
  2. How-to strategy and tactics?
  3. Posts that touch on forward-thinking ideas and concepts?
  4. Fun, silly posts?
  5. Posts that don’t require a lot of time or thinking?
  6. Posts that have more video or imagery in place of text?

Another thing I’ve been meaning to do for a long time now is start my personal blog. I have soooo many ideas swirling around in my head that need to be unleashed on my own platform so I can have another testing ground.

And speaking of testing, I want to test more. That means getting savvier with the many tools and data available so I can run A/B and multivariate tests.

Well there you have it. Now I suppose we’re stuck on making these things happen – eh, Virginia? If you’re reading this and you have important professional goals for 2013, vow to commit to them right now by saying them out loud! We’d love to be supportive in any way we can.

Bruce Clay Blog

A Question of Creative Content Ethics


A Question of Creative Content Ethics was originally published on BruceClay.com, home of expert search engine optimization tips.

I was going to do another one of those roundup blog posts today. And I cringed at the thought of it, as the past few weeks have been nothing but roundups of 2012 and predictions for 2013. But I was going to do it anyway. And do you know why? Because it’s an easy win.

Paper in a Wastebasket

That was, until I read a brilliant piece this morning by Jonathon Colman on how we can do better with our content. But more on that later.

Yesterday, I shared my 2013 marketing resolutions with you. One of them involved experimenting with new ways of writing and content production that tried to hit the sweet spot in terms of what an audience wants and what we can give them.

Serendipitously, as I was writing this post, senior SEO analyst Bradley Leese walked in and asked a question that made me think: “How do you create content people want to read?”

Wow. Great question.

My first thought was: as a writer, it’s intuitive. You think either this piece is great or it totally sucks. But that’s just your opinion. So what ends up happening, when you’re in the business of marketing content, is that you base the content’s worth on how many people share it and talk about it.

Then I thought about Mashable. Look at how big their captive audience is. They can say “poop on a stick” and it will be shared thousands of times. (No offense to Mashable, as I know they take a journalistic approach to timely and relevant topics). But their brand is their catalyst. So which came first – the content or the brand?

Quality vs. Efficiency – Which Wins in Content?

Ethically, I believe the content producer, not the brand, is responsible for great content. But then there’s the question of schedules, calendars, projects and resources. Who wins then – quality or efficiency?

It’s a vicious cycle, trading originality for visibility and time savings. Sure, people may be interested in the same old how-to information or a roundup of others’ ideas, but how much of a difference does it really make?

Did you know it takes me about 5 hours on average to put together one of my in-depth blog posts? The problem is time. I don’t always have the luxury of spending several hours each time I write a post to research, write, edit, optimize, upload, format, and promote when I have a million other competing deadlines.

But we want to create content we can be proud of and that adds value. And I’m sure many of you out there feel that way, too. However, the output must be within the confines of our limitations – our schedules, timelines and other deliverables.

I was recently inspired by Seth Godin’s blog. His posts are succinct and to the point (like 200-words succinct), but they also make you think. This week, he had a post about “the drip.” Don’t try to be brilliant all the time, he said; just be brilliant enough to be remembered and make an impact over time.

Here’s an excerpt from that post:

“When you commit to writing regularly, the stakes for each thing you write go down. I spent an hour rereading Gary Larson’s magical collection, and the amazing truth is that not every cartoon he did was brilliant. But enough of them were that he left his mark.”

Even though I have serious internal battles about writing shorter posts (mine usually average 1,500 words), I also vowed that 2013 would be the year of experimentation. And after spending some time on Seth Godin’s blog, I was convinced this was a new path I was going to experiment with – more frequent posts, with less pressure to try to be brilliant every time a post is created. Yes, that sounds better.

That is, until I read this post by Jonathon Colman that stopped me dead in my tracks.

It stopped me from posting another roundup (which is ironic, because I found it in a roundup). And it brought me right back to the existential crisis I was having about quality versus quantity.

Content marketing is the culprit. We are in the business of vying for visibility and attention through content. Figuring out how to make that content compelling amongst all the noise is the million-dollar question.

And the creative process is not an easy one. I don’t know what giving birth feels like, but I know there’s both pain and joy. And this is what I go through every time I sit down to create content.

See, content creators are not robots. If people want content churned out without any creative process behind it, then perhaps a team like this is more appropriate:

So what’s a content professional to do? I think the balance lies in:

  • What you have time for: pushing your skill set and creativity to make the time you have for content creation as meaningful as possible, so you still share useful, valuable and effective ideas – even if it’s not 1,500 words.
  • What your readers (are willing to) have time for, and how to create content that resonates with them in the brief time you have with that person.

What do you think? And how can you help?

Bruce Clay Blog
