If you love to play at a casino, you will love this new online version. Cleopatra Casino provides all of the excitement you will find in a traditional casino, but with the added convenience of being available wherever you happen to be at the moment. This is great news for players everywhere. You will appreciate the innovation that has come to this platform, complete with over 2,000 slots and a host of table games on offer in the live casino. The cool Ancient Egyptian theme will leave you with a good feeling as well. In short, there really is no reason not to give this new online casino a serious try.
Registered in Curacao
This is the first reason to play at this online casino. Cleopatra Casino has gained the full backing of the government of Curacao. This means that this island nation is overseeing all of the action taking place at Cleopatra Casino, and they hold its developers to high standards. You can feel confident that your money is being taken care of and that the games are being run in a dignified manner. At the end of the day, this is still a game of chance, but at least you will know that you have a fighting chance. That is the best you can ask for, and you will hopefully leave a winner!
How About Those Slots?
If you love to play a few Cleopatra slots/pokies from time to time, you will love this new entrant into the online casino market. Cleopatra Casino offers more than 2,000 different slots, and new ones are constantly being added. You will find a host of games that follow a distinctly Egyptian theme, while many others are more traditional in nature. Each game is innovative and has its own set of easy-to-follow guidelines for winning. You will enjoy the action, and you might just walk away with a bit of money in the process.
You will want to also take advantage of the weekly slots challenge offered at Cleopatra Casino – to the tune of $2,500. That is every week. It is free to join the challenge, and you will then earn points based on your winnings in the slots room accumulated throughout the week. There is a virtual leaderboard that is constantly updated so that you will know exactly where you stand. If you do not win the jackpot, there are still loads of other cash prizes up for grabs each week as well.
Don’t Forget To Check Out the Live Casino
When you go online to Cleopatra Casino, you will also find a live casino that offers you a variety of table games to take part in. The casino itself is easy to get to from the homepage, and it is available to you around the clock. There is no waiting for a spot at your favorite table, and you don’t even have to worry about where to park the car! Just hop online and join the game of your choice, be it blackjack, roulette, or any other games on offer.
The Bonus Offer is Special as Well
When you first join Cleopatra Casino, you will be offered a 100 percent bonus on your initial deposit up to 4,000 Euros. That should give you all the incentive that you need to give this online casino a serious try. You will have a lot of chips to use on the slots or in the live casino. You can try out games you might never have seen before, and you are virtually guaranteed to have a great time.
Many people forget plagiarism is an illegal act. Not only is plagiarism a crime, but it also prevents the individual from producing his or her unique work. Whether you are a teacher requesting original essays from your students, a businessman seeking out unique proposals from your employees, or a website developer in search of one of a kind material, plagiarism is a disease you want to avoid. It causes problems for both the plagiarizer and the person in search of original content. Therefore, to avoid running into any complications set forth by plagiarism and those who commit it, there are websites available that check content for originality. The top five plagiarism checking websites provide guaranteed services that will ensure the content you receive is one hundred percent unique.
What Constitutes Plagiarism?
What is plagiarism exactly and how does it negatively impact anyone? In the most basic of terms, plagiarism can be defined as using the words of another and claiming them as your own. In other words, if you use someone else’s words and say you said it originally, you are plagiarizing. It gets more complicated, though. You can plagiarize without even being aware that you’re committing the crime. For example, if you write an essay or an article for a professor or a business and then proceed to reuse your work for a different assignment, that is plagiarism. Recycling essays and articles is plagiarism unless you cite your original essay or article as a resource. Even paraphrasing improperly can constitute a form of plagiarism, meaning that even if you masterfully reconstruct the essay of another, it can still be considered plagiarism – and therefore still be an illegal act.
You might be wondering, if plagiarism only involves the use of someone else’s words, why is it illegal? Using the words of another and passing it off as your own is considered literary theft. Using your own words over and over is also considered a form of plagiarism because it cancels out the article or essay’s originality. In terms of education, plagiarism may result in suspension or expulsion. In terms of the internet and business world, plagiarism may result in copyright infringement – meaning hours and hours of paperwork and hundreds of thousands of dollars spent on lawyers and court fees. The easiest way to avoid plagiarism is to simply not engage in it, and if you are running a company or website, in particular, the best way to check for plagiarized work is to use a plagiarism checker website.
How Do Plagiarism Checker Websites Work?
Plagiarism checker websites cater to educators as well as people outside of academia seeking to prevent plagiarized content passing through as original. Students can use these sites to ensure they have not unintentionally plagiarized, as this common mistake occurs more than one might think. Teachers can use these tools to ensure their students are not trying to pass off the work of another as their own. Website owners can avoid embarrassment by ensuring their site content is completely original, and publishers can ensure their freelance writers are providing accurate, unique content and not passing off the work of another as their own as well.
So how does it work? A plagiarism checker website scans your uploaded work and then compares your content against its own database. For example, if a student were to upload a paper that he or she wrote independently, the plagiarism checker website would compare the uploaded paper to the hundreds of thousands of pages of content found within its databases. If the scan could not match the paper with anything in the database, the plagiarism report would return clean.
The accuracy of plagiarism checker websites, then, depends on the size of the uploaded document and the nature of their database. For example, if you are writing a scholarly article, you want to ensure your plagiarism checker website has a database that includes academic research websites. Likewise, if your website hosts content involving something such as automobiles, you would need to ensure that the website’s database held articles and information that would be able to detect whether or not another website contained the same information, word for word, about automobiles. With that being said, it is important to ensure the plagiarism checker website you use will be able to detect the type of information you are searching for. Plagiarism checker websites tend to explicitly state the types of information and content their engines can search for; therefore, you will be able to know whether or not your plagiarism checker website will work for you.
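To make the matching step concrete, here is a minimal, purely illustrative sketch of the core idea: break a submission and the reference documents into overlapping word sequences and measure how much of the submission appears elsewhere. This is a toy example, not how Turnitin, iThenticate, or any of the services below actually implement their engines, and the sample texts are invented.

```python
# Toy illustration of plagiarism detection by word n-gram overlap.
# Real services compare against indexes of billions of pages and use
# matching that is robust to paraphrasing; this only shows the concept.

def ngrams(text, n=5):
    """Return the set of n-word sequences in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, reference_docs, n=5):
    """Fraction of the submission's n-grams that also appear in the references."""
    sub_grams = ngrams(submission, n)
    if not sub_grams:
        return 0.0
    ref_grams = set()
    for doc in reference_docs:
        ref_grams |= ngrams(doc, n)
    return len(sub_grams & ref_grams) / len(sub_grams)

if __name__ == "__main__":
    database = ["the quick brown fox jumps over the lazy dog near the river bank"]
    paper = "my essay notes that the quick brown fox jumps over the lazy dog every day"
    print(f"{overlap_score(paper, database):.0%} of the paper's 5-grams match the database")
```

A clean report simply means the score stays near zero against everything in the service's index, which is why the breadth of that index matters so much.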
Turnitin
Turnitin is one of the top resources for plagiarism detection. It is particularly useful for teachers and educators, as school systems can buy a license to use the program throughout the school, saving the individual educators money. How it works is that educators upload their students’ work, and the work is then scanned through the Turnitin database. Turnitin’s database is one of the most extensive available online, with access to over forty-five billion websites, over three hundred and thirty-seven million academic essays, and over one hundred and thirty million scholarly articles and publications.
In addition to being able to match the students’ work to preexisting works on the internet, Turnitin also provides percentages explaining how much of the work is the students’ and how much is the work of another. Turnitin accounts for the certain degrees of plagiarism that are inevitable. For example, the phrase “for example” will appear in countless numbers of sources on the internet, but that does not mean the student plagiarized that phrase. Furthermore, Turnitin will “score” the students’ papers and break down the score in accordance with the amount of original work in relation to possible plagiarized sections. Finally, Turnitin will provide educators with the sources the students should have referenced and with a pre-formatted source card. From there, educators, publishers, and web content managers can use the information provided by Turnitin to determine the actions that need to be taken in order to deal with the current instance of plagiarism and in order to prevent further plagiarism from occurring in the future.
The cost of using Turnitin depends on your reasons for using it, and free quotes are available online in less than three business days. It is available in yearly subscriptions.
iThenticate
iThenticate is a plagiarism checker that is geared toward professional and collegiate level academic writing. iThenticate is trusted by one out of every three scholarly journals, and it is also used by collegiate level educators across the nation. iThenticate has access to over thirty-eight million scholarly articles, books, medical journals, and conferences; over ninety-two million published works such as magazines and encyclopedias; and it also has access to over forty-five billion current and archived websites.
iThenticate caters mostly to publishing firms that want to avoid publishing stolen or copied content. While their services may be utilized by practically anyone, their services are more applicable to working professionals. Their prices, like Turnitin’s, depend on the extent of services you are seeking, but like Turnitin, free quotes are available online. While they offer an option to purchase a license for a year, you can also buy individual credits to use their services – this is recommended for those who are only interested in checking for plagiarism in less than eleven papers at any one time.
Dupli Checker
While Dupli Checker does not offer the extensive services that Turnitin and iThenticate offer, it does offer its services for free. Used most frequently by teachers who suspect their students of committing plagiarism and by students who want to avoid accidentally committing plagiarism, Dupli Checker is a very simple system that provides accurate results. Dupli Checker is available through an online format, and educators and students need only upload their documents or copy and paste them into the box available on the website. From there, Dupli Checker will compare the text with its database, which is large in its own right but not as extensive as the two previous options listed. Considering its price, however, it is not a bad option for teachers and students – although publishers may not be as apt to use it. Dupli Checker is compatible with all operating systems, both Windows and Apple products, as it is hosted through an online interface. It is geared more toward elementary and secondary school educators and students, as it does not have access to the same extensive database that other plagiarism checker websites have access to. Nevertheless, for students and educators on modest budgets, this option provides excellent, accurate results.
Plagiarisma.Net
Plagiarisma is a free tool that is available as a Google App and a Windows App and is also accessible through the website itself. Plagiarisma is available at no cost to users and tailors each plagiarism search to the content of the text. For example, if you submit an article that is scholarly, it will run your article through the scholarly database. In other words, Plagiarisma offers multiple databases, each one having a specific specialization. In addition to offering plagiarism resources, Plagiarisma also offers grammar tools for students and teachers, checking to ensure not only that the information presented is original but also that it is grammatically correct. As a special bonus, Plagiarisma is available in multiple language formats and can access databases in languages other than English – a facet that is particularly helpful to those with content that involves extensive research.
While Plagiarisma.net provides services for free and offers an extensive, multi-faceted database, it is intended for educators and students more so than publishers and web content managers. Although publishers and web content managers are more than welcome to utilize its services, one must remember that Plagiarisma offers a very basic service that, while accurate, is not as extensive as a service such as Turnitin or iThenticate.
Viper
Viper is unique in that it offers both free and paid services. Viper scans your work and content into a database that compares it with over a billion different scholarly and other sources available on the internet. In addition to checking for evidence of plagiarism, it also provides educators, publishers, and web content managers with a report explaining the similarities between the uploaded content and the content found within the database. The only difference between the paid and free services occurs in the long term. Because Viper is a downloadable service and not a search box you copy and paste material into through an online interface, Viper preserves your work for up to six months. What this means is that if you are using the paid service format, Viper will notify you if a copy of your uploaded material has surfaced on the internet. In other words, it will notify you if someone has plagiarized your work or your students’ work, after the fact. This service is excellent for those in publishing and those managing website content, as copies of their content surfacing on the internet can cause a problem with regard to copyright infringement.
The only downside of Viper is its compatibility issues. Viper is only suited for Windows PCs. In other words, if you are using an Apple product, you are out of luck. Because this software must be downloaded and it is not compatible with Macs, it will only offer results to those who use Windows operating systems.
Each of the aforementioned plagiarism checker websites is authentic and provides accurate results. Certain service providers will be more suited to specific professions, but you can rest assured that regardless of the service you select to check for plagiarism, you will be provided with precise, truthful results. Although paying for a plagiarism service is undesirable, as free is always more convenient, depending on your reasons for needing a plagiarism checking service provider, the nominal fees may be worth it; remember, the cost of copyright infringement is a lot higher than the cost of using a plagiarism checker.
At the end of the day, plagiarism creates an embarrassing situation for all involved. The person presenting the plagiarized work will always bear the brunt of the shame. Nevertheless, if you are in charge of running a website, and it is brought to your attention that the content on your site – content that you were under the impression was original – is an exact replica of another website’s content, you’re going to be in for a rude awakening. Therefore, avoid any future embarrassment and disappointment by ensuring the content you are presenting is original, and if you are an educator, be sure the content your students are providing you with is actually their own work. Use plagiarism checker websites and rest assured that you are not being deceived. Preventing plagiarism has never been easier.
After a successful three-day run, SMX Advanced 2013 has come to an end and, after saying goodbye to the Bell Harbor International Conference Center and spending the night in Sea–Tac Airport, our Bruce Clay live-bloggers have safely returned home.
In case you missed it, today’s post offers a quick-read recap of nine SMX Advanced live-blogging sessions that cover topics from PPC best practices and link acquisition, to content curation, authorship, and Schema.
Technical SEO is still relevant. When optimizing, remember that user intent comes first, and site speed matters.
Anchor text has gone down in value since last year.
The presence of Google +1 JavaScript code can cause a page to get indexed, and Google+ social sharing can make an impact on Google indexing and ranking within 4 or 5 days.
Social shares on Facebook also impact Google indexing and ranking, but it can take 7 to 8 days.
Authorship versus AuthorRank — what’s the difference?
Google Authorship versus AuthorRank: Google Authorship refers to the system whereby authors can connect themselves to pieces of content by linking content back to their Google+ profiles. AuthorRank, on the other hand, is a more nebulous concept that supposedly uses information gathered through the Authorship program and other signals to attribute a level of authority to a writer’s body of work, which is then used as a ranking factor in the SERPs.
13 percent of SERP top pages have Google Authorship markup but, overall, there has been low adoption of Authorship. Authorship is still in its infancy and understanding of how Authorship signals affect SERP position is something that is still being researched and worked on.
Google+ profiles, pages and communities have PageRank, and their PageRank corresponds not only to the level of Google+ engagement they receive, but also to their backlink profile from sites external to Google+. Using Authorship on authoritative sites builds the search rank power of your profile.
Videos, PDFs, Word documents and online PowerPoint presentations can all transfer Authorship but Google Books cannot.
Click-throughs and on-site activity are more important than ever. If a user clicks through to your site and spends 2.5 minutes looking at your article and then clicks the back button, Google will take into consideration their time on site and deliver the searcher more articles from you in future search results.
There are hundreds of Schema objects that can be assigned including reviews, ratings, recipes, and person. Note: Authorship and “person” are not the same. Person is for show hosts and assigns information about the person; Google Authorship uses a rel=author tag and links to the author’s G+ profile.
Only 0.27 percent of domains are using Schema, but those sites that do use Schema markup have seen 47 percent higher rankings on average.
When it comes to content curation, the wrong questions to ask are: What’s the bare minimum? How many words do I need to get by? How many keywords do I need to put in the copy?
Use quality images, which you can find through Flickr’s Creative Commons, Shutterstock or via your own lens. Remember, content with images is more likely to be shared on social media.
Types of curated content that tend to get traction: Improved versions of original posts, argumentative or controversial articles, lists, how-to articles and guides, images/memes, timelines, comparisons, offbeat or extreme posts, and videos.
Curating content can save time. An original blog post may take upwards of four hours, while a curated post can be compiled in two hours or less.
The idea of typing keywords into a search box is soon going to be outdated. Natural interaction with devices is going to be the new model and technology is catching up.
The keyboard and mouse are on their way out, to be replaced with conversational interfaces for apps and devices. Proactive applications that can infer what you want or are planning to do based on your interactions with technology and search engines are coming.
Younger people are drawn to the accessible spoken-word functionality of Bing. Research is being done to figure out if users want Bing to be like a friend, a butler, or a snarky concierge.
With comScore figures placing Bing at 17+ percent market share, June 2013 has been Bing’s best month ever.
Create good, high-quality content and make the content available for others to embed in full on their websites. For example, people like infographics they can feature on their sites.
Don’t stress about whether or not SEO is becoming inbound marketing. Do your basic SEO.
Make sure to keep your internal link structure updated. Internal link building still works well, and manual link outreach still drives revenue.
In light of Penguin, monitor your inbound anchor text. Understand how link spam analysis works and make sure anchor text stays brand-focused (as opposed to keyword-phrase focused) 50 percent of the time or more.
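As a rough way to keep an eye on that ratio, here is a small sketch (my own illustration, not a tool from the session; the brand terms and anchor list are invented) that tallies what share of inbound anchors are brand terms:

```python
# Sketch: estimate what share of inbound link anchors are brand-focused.
# The brand terms and the anchor list below are invented for illustration.

BRAND_TERMS = {"acme widgets", "acme", "acmewidgets.com"}  # hypothetical brand

def brand_anchor_share(anchors):
    """Fraction of anchors that exactly match a known brand term."""
    if not anchors:
        return 0.0
    branded = sum(1 for a in anchors if a.strip().lower() in BRAND_TERMS)
    return branded / len(anchors)

anchors = ["Acme Widgets", "buy cheap widgets", "acme", "click here", "acme widgets"]
share = brand_anchor_share(anchors)
print(f"Brand-focused anchors: {share:.0%}")
if share < 0.5:
    print("Warning: less than half of inbound anchors are brand-focused")
```

In practice you would pull the anchor list from your link analysis tool of choice and allow for close brand variants rather than exact matches.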
In this session SEO veterans Bruce Clay, Rae Hoffman, Greg Boser and Alex Bennert answer questions including: How do you deal with syndicated content? If your audience isn’t on Google+, should you spend time on it? When do you know when your job as an SEO is done? What’s the biggest mistake marketers are making? And our personal favorite, if you could make Matt Cutts do anything, what would it be?
Matt Cutts ascends the stage to drop some knowledge. Referring to the paid side of search and keyword data, Matt says that he’s only one guy and can’t push the whole organization. He mentions that he’s really pushing for security and recommends the book Little Brother and says it will inform your thinking on the topic.
The SEOs, Matt Cutts, and Duane Forrester, each share one item they would consider a major conference takeaway. Bruce: fix weak links. Alex: make use of data in Google Webmaster Tools and Bing Webmaster Tools. Greg: concentrate on mobile and consider responsive design that supports native apps. Rae: Avoid being generic at all costs. Matt: make something compelling and optimize the user experience. Duane: think social.
Last week, I reviewed “Who Owns The Future?” by Jaron Lanier. It’s a book about the impact of technology on the middle class.
I think the reality Lanier describes in that book is self-evident – that the middle class is being gouged out by large data aggregators – but it’s hard, having read it and accepted his thesis, not to feel the future of the web might be a little bleak. Lanier’s solution of distributing value back into the chain via reverse linking is elegant, but is probably unlikely to happen, and even if it does, unlikely to happen in a time frame that is of benefit to people in business now.
So, let’s take a look at what can be done.
There are two options open to someone who has recognized the problem. Either figure out how to jump ahead of it, or stay still and get flattened by it.
Getting Ahead Of The SEO Pack
If your business model relies on the whims of a large data aggregator – and I hope you realize it really, really shouldn’t if at all humanly possible – then you need to get a few steps ahead of it, or out of its path.
There’s a lot of good advice in Matt Cutts’ latest video:
It could be argued that video has a subtext of the taste of things to come, but even at face value, Cutts’ advice is sound. You should make something compelling, provide utility, and provide a good user experience. Make design a fundamental piece of your approach. In so doing, you’ll keep people coming back to your site. Too much focus on isolated SEO tactics, such as link building, may lead to a loss of focus on the bigger picture.
In the emerging environment, the big picture partly means “avoid getting crushed by a siren server”, although that’s my characterization, and unlikely to be Cutts’! Remember, creating quality, relevant content didn’t prevent people from being stomped by Panda and Penguin. All the link building you’re doing today won’t help you when a search engine makes a significant change to the way they count and value links.
Fast forward and we’re all spending our days flying these things (computers). But are we doing any heavy lifting? Are we getting the job done, saving the day, enabling the team? Or are we just “flying around” like one of those toy indoor helicopters, putzing around the room dodging lamps and co-workers’ monitors until we run out of battery power and drop to the floor? And we call it work.”…More than ever, we have ways to keep “busy” with SEO. The old standbys “keyword research” and “competitive analysis” and “SERP analysis” can keep us busy day after day. With TRILLIONS of links in place on the world wide web, we could link analyze for weeks if left alone to our cockpits. And I suppose every one of you SEOs out there could rationalize and justify the effort and expense (and many of you agency types do just that… for a living). The helicopter is now cheap, fast, and mobile. The fuel is cheap as well, but it turns out there are two kinds of fuel for SEO helicopters. The kind the machine needs to fly (basic software and electricity), and the kind we need to actually do any work with it (SEO data sets, SEO tools, and accurate and effective information). The latter fuel is not cheap at all. And it’s been getting more and more expensive. Knowing how to fly one of these things is not worth much any more. Knowing how to get the work done is.
A lot of SEO work falls into this category.
There is a lot of busy-ness. A lot of people do things that appear to make a difference. Some people spend entire days appearing to make a difference. Then they appear to make a difference again tomorrow.
But the question should always be asked “are they achieving anything in business terms?”
It doesn’t matter if we call it SEO, inbound marketing, social media marketing, or whatever the new name for it is next week, it is the business results that count. Is this activity growing a business and positioning it well for the future?
If it’s an activity that isn’t getting results, then it’s a waste of time. In fact, it’s worse than a waste of time. It presents an opportunity cost. Those people could have been doing something productive. They could have helped solve real problems. They could have been building something that endures. All the link building, content creation, keyword research and tweets with the sole intention of manipulating a search engine to produce higher rankings isn’t going to mean much when the search engines shift their algorithms significantly.
And that day is coming.
Pivot
To avoid getting crushed by a search engine, you could take one of two paths.
You could spread the risk. Reverse-engineer the shifting algorithms, with multiple sites, and hope to stay ahead of them that way. Become the gang of moles – actually, a “labour” of moles, in proppa Enlush – they can’t whack. Or, at least, a labour of moles they can’t whack all at the same time! This is a war of attrition approach and it is best suited to aggressive, pure-play search marketing where the domains are disposable.
However, if you are building a web presence that must endure, and aggressive tactics don’t suit your model, then SEO, or inbound, or whatever it is called next week, should only ever be one tactic within a much wider business strategy. To rely on SEO means being vulnerable to the whims of a search engine, a provider over which you have no control. When a marketing tactic gets diminished, or no longer works, it pays to have a model that allows you to shrug it off as an inconvenience, not a major disaster.
The key is to foster durable and valuable relationships, as opposed to providing information that can be commodified.
There are a number of ways to achieve this, but one good way is to offer something unique, as opposed to being one provider among many very similar providers. Beyond very basic SEO, the value proposition of SEO is to rank higher than similar competitors, and thereby gain more visibility. This value proposition is highly dependent on a supplier over which we have no control. Another way of looking at it is to reduce the competition to none by focusing on specialization.
Specialize, Not Generalize
Specialization involves working in a singular, narrowly defined niche. It is sustainable because it involves maintaining a superior, unique position relative to competitors.
Specialization is a great strategy for the web, because the web has made markets global. Doing something highly niche can be done at scale by extending the market globally, a strategy that can be difficult to achieve at a local market level. Previously, generalists could prosper by virtue of geographic limits. Department stores, for example. These days, those department stores need to belong to massive chains, and enjoy significant economies of scale, in order to prosper.
Specialization is also defensive. The more specialized you are, the less likely the large data aggregators will be interested in screwing you. Niche markets are too small for them to bother with. If your niche is defined too widely, like travel, or education, or photography, for example, you may face threats from large aggregators, but this can be countered, in part, by design, which we’ll look at over the coming week.
If you don’t have a high degree of specialization, and your business relies solely on beating similar business by doing more/better SEO, then you’re vulnerable to the upstream traffic provider – the search engine. By solving a niche problem in a unique way, you change the supply/demand equation. The number of competing suppliers becomes “one” or “a few”. If you build up sufficient demand for your unique service, then the search engines must show you, else they look deficient.
Of course, it’s difficult to find a unique niche. If it’s profitable, then you can be sure you’ll soon have competition. However, consider that many big companies started out as niche offerings. Dell, for example. They were unique because they sold cheap PCs, built from components and made to order. Dell started in a campus dormitory room.
What’s the alternative? Entering a crowded market of me-too offerings? A lot of SEO falls into this category and it can be a flawed approach in terms of strategy if the underlying business isn’t positioned correctly. When the search engines have shifted their algorithms in the past, many of these businesses have gone up in smoke as a direct result because the only thing differentiating them was their SERP position.
By taking a step back, focusing on relationships and specific, unique value propositions, business can avoid this problem.
Advantages Of Specialization
Specialization makes it easier to know and deeply understand a customer’s needs. The data you collect by doing so would be data a large data aggregator would have difficulty obtaining, as it is nuanced and specific. It’s less likely to be part of an easily identified big-data pattern, so the information is less likely to be commodified. This also helps foster a durable relationship.
Once you start finely segmenting markets, especially new and rising markets, you’ll gain unique insights and acquire unique data. You gain a high degree of focus. Check out “Business Lessons From Pumpkin Hackers”. You may be capable of doing a lot of different things, and many opportunities will come up that fall slightly outside your specialization, but there are considerable benefits in ignoring them and focusing on growing the one, significant opportunity.
Respin
Are you having trouble competing against other consultants? Consider respinning so you serve a specific niche. To specialize, an SEO might build a site all about dentistry and then offer leads and advertising to dentists, dental suppliers, dental schools, and so on. Such a site would build up a lot of unique and actionable data about the traffic in this niche. They might then use this platform as a springboard to offering SEO services to pre-qualified dentists in different regions; given dentistry is a location-dependent activity, it is easy for the SEO to get around potential conflicts of interest. By specializing in this way, the SEO will likely understand their customer better than the generalist. By understanding the customer better, and gaining a track record with a specific type of customer, the SEO gains an advantage when competing with other firms for dentists’ SEO work. If you were a dentist wanting SEO services, whose pitch stands out? The generalist SEO agency, or the SEO who specializes in web marketing for dentists?
Similarly, you could be a generalist web developer, or you could be the guy who specializes in payment gateways for mobile. Instead of being a web designer, how about being someone who specializes in themes for Oxwall? And so on. Think about ways you can re-spin a general thing you do into a specific thing for which there is demand, but little supply.
One way of getting a feel for areas to specialize in is to use AdWords as a research tool. For example, “oxwall themes” has almost no AdWords competition and around 1,300 searches per month. Let’s say 10% of that figure are willing to pay for themes. That’s 130 potential customers. Let’s say a specialist designer converts 10% of those; that’s 13 projects per month. Let’s say those numbers are only half right. That’s still 6-7 projects per month.
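Laid out as a quick back-of-the-envelope script (the search volume and percentages are the assumptions from the paragraph above, not measured data):

```python
# Back-of-the-envelope niche sizing using the assumptions stated above.
monthly_searches = 1300   # rough monthly searches for "oxwall themes"
willing_to_pay = 0.10     # assumed share willing to pay for themes
conversion_rate = 0.10    # assumed share a specialist designer converts
haircut = 0.5             # assume the numbers are only half right

potential_customers = monthly_searches * willing_to_pay      # 130
projects_per_month = potential_customers * conversion_rate   # 13
conservative = projects_per_month * haircut                  # 6.5

print(f"Potential customers per month: {potential_customers:.0f}")
print(f"Projects per month: {projects_per_month:.0f}")
print(f"Conservative estimate: roughly {conservative:.1f} projects per month")
```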
Having decided to specialize in a clearly defined, narrow market segment, and having good product or service knowledge and clear focus, you are much more likely to be able to spot the emerging pain points of your customers. Having this information will help you stand out from the crowd. Next to your pitches, your website copy, and your problem identification and solutions, more generalist competitors will sound like they don’t know what they are talking about. This is the unique selling proposition (USP), of course. It’s based on the notion of quality. Reputation then spreads. It’s difficult for a siren server to insert itself into word of mouth gained from a good reputation.
Differentiation is the aim of all businesses, no matter what the size. So, if one of your problems is being too reliant on search results, take a step back and determine if your offer is specialized enough. If you’re offering the same as your competitors, then you’re highly vulnerable to algorithm shifts. It’s hard to “own” generalist keyword terms, and it’s a weak strategic position if your entire business success relies on doing so.
Specialization lowers the cost of doing business. An obvious example can be seen in PPC/SEO. If you target a general term, it can be expensive to maintain position. In some cases, it’s simply impossible unless you’re already a major player. If you specialize, your marketing focus can be narrower, which means your marketing cost is lower. You also gain supply-side advantages, as you don’t need to source a wide range of goods, or hire as many people with different skillsets, as the generalist must do.
Once you’re delivering clear and unique value, you can justify higher prices. It’s difficult for buyers to make direct comparisons, because, if you have a high degree of specialization, there should be few available to them. If you are delivering that much more value, you deserve to be paid for it. The less direct competition you have, the less price sensitive your offering. If you offer the same price as other offerings, and your only advantage is SERP positioning, then that’s a vulnerable business positioning strategy.
If you properly execute a specialization strategy, you tend to become more lean and agile. You may be able to compete with larger competitors as you can react quicker than they can. Chances are, your processes are more streamlined as they are geared towards doing one specific thing. The small, specialized business is unlikely to have the chain of command and management structure that can slow decision making down in organizations that have a broader focus.
Specialized businesses tend to be more productive than their generalist counterparts as their detailed knowledge of a narrow range of processes and markets mean they can produce more with less. The more bases you cover, the more organisational aspects come into play, and the slower the process becomes.
In Summary
There are benefits in being a generalist, of course. However, if you’re a small operator and find yourself highly vulnerable to the whims of search engines, then it can pay to take a step back, tighten your focus, and try to dominate more specialist niches. The more general you go, the more competition you tend to encounter. The more competition you encounter in the SERPs, the harder you have to fight, and the more vulnerable you are to big data aggregators. The highly specialized are far more likely to fly under the radar, and are less vulnerable to big-brand bias in major verticals. The key to not being overly dependent on search engines is to develop enduring relationships, and specialization based on a strong, unique value proposition is one way of doing so.
Next article, we’ll look at differentiation by UX design and user experience.
Jon Henshaw put the hammer down on inbound marketing, highlighting how the purveyors of “the message” often do the opposite of what they preach. So much of the marketing I see around that phrase is either of the “clueless newb” variety, or paid push marketing of some stripe.
@seobook why don’t you follow more of your followers?
One of the clueless newb examples smacked me in the face last week on Twitter, where some “HubSpot certified partner” (according to his Twitter profile) complained to me about me not following enough of our followers, then sent a follow-up spam asking if I saw his article about SEO.
The SEO article was worse than useless. It suggested that you shouldn’t be “obvious” & that you should “naturally attract links.” Yet the article itself was a thin guest post containing the anchor text search engine optimization deep linking to his own site. The same guy has a “book” titled Findability: Why Search Engine Optimization is Dying.
Why not promote the word findability with the deep link if he wants to claim that SEO is dying? Who writes about how something is dying, yet still targets it instead of the alleged solution they have in hand?
If a person wants to claim that anchor text is effective, or that push marketing is key to success, it is hard to refute those assertions. But if you are pushy & aggressive with anchor text, then the message of “being natural” and “just let things flow” is at best inauthentic, which is why sites like Shitbound.org exist. 😉
Some of the people who wanted to lose the SEO label suggested their reasoning was that the acronym SEO was stigmatized. And yet, only a day after rebranding, these same folks that claim they will hold SEO near and dear forever are already outing SEOs.
Sad but fact: Rand Fishkin outs another site that just happens to be competing with Distilled twitter.com/randfish/statu…
The people who want to promote the view that “traditional” SEO is black hat and/or ineffective have no problems with dumping on & spamming real people. It takes an alleged “black hat” to display any concern with how actual human beings are treated.
Then he told me he wasn’t seeing any results from following all the high-flown rhetoric of the “inbound marketing, content marketing” tool vendor. “Last month, I was around 520 visitors. This month, we’re at 587.” Want to get to 1,000? Work and wait and believe for another year or two. Want to get to 10,000? Forget it. … You could grow old waiting for the inbound marketing fairy tale to come true.
Of course I commented on the above post & asked Andrew if he could put “inbound marketer” in the post title, since that’s who was apparently selling hammed up SEO solutions.
In response to Henshaw’s post (& some critical comments) calling inbound marketing incomplete marketing, Dharmesh Shah wrote:
“When we talk about marketing, we position classical outbound techniques as generally being less effective (and more expensive) over time. Not that they’re completely useless — just that they don’t work as well as they once did, and that this trend would continue.”
Hugh MacLeod is brilliant with words. He doesn’t lose things in translation. His job is distilling messages to their core. And what did his commissions for HubSpot state?
thankfully consigning traditional marketing to the dustbin of history since 2006
traditional marketing is easy. all you have to do is pretend it works
the good news is, your customers are just as sick of traditional marketing as you are
hey, remember when traditional marketing used to work? neither do we
traditional marketing doesn’t work. it never did
Claiming that “traditional marketing” doesn’t work – and never did – would indeed be claiming that classical marketing techniques are ineffective / useless.
If something “doesn’t work” it is thus “useless.”
You never hear a person say “my hammer works great, it’s useless!”
As always, watch what people do rather than what they say.
When prescription and behavior are not aligned, it is the behavior that is worth emulating.
That’s equally true for a keyword-rich deep link in a post telling you to let SEO happen naturally, and for people who relabel things while telling you not to do what they are doing.
If “traditional marketing” doesn’t work AND they are preaching against it, why do they keep doing it?
As a local business owner, it’s important for your business to be listed in Google’s search results. But how do you fix your business listing if the information is incorrect?
In this week’s edition of Local Whiteboard Friday, David Mihm sheds some light on the complicated process that Google uses to create its business listings.
For reference, here’s a still of David’s whiteboard diagram.
Video Transcription
“Hey everybody. Welcome to another edition of Whiteboard Friday and in particular a local edition of Whiteboard Friday. I’m David Mihm, the Director of Local Search Strategy for SEOMoz, and I’m here to answer one of the most common questions that we get asked which is: “Hey, how come my business information is showing up incorrectly at Google?”
So they type in the name of their business, and there’s either a phone number wrong or their address is wrong or sometimes the marker for where their business is, is in the wrong place. So I want to try to answer how Google generates its business listings.
So the first step that a lot of business owners take, which is a great step to take, is they go directly to Google. Google offers a dashboard for businesses, through Google Places as well as Google+; there are kind of two ways into it right now. A business owner goes and he enters his business name, his address, his phone number, some categories, maybe the hours that he operates his business, and he tells that directly to Google. Of course the expectation is, “Oh well, I’m the business owner. I’m telling Google this information. That’s how it should show up when Google spits out a search result.” But in reality that’s not actually how Google assembles a business listing. So I’m going to erase these lines, and I’ll try to walk you guys through how this process actually happens.
So for many of you, if you’re business owners, you go to one of these places, the Google Places dashboard or the Google+ Local dashboard, and you tell Google about your business, and you find that before you even get there, Google knows about your business. It can guess at what your address and phone number are, for example.
So you might wonder where Google is finding that information. Actually, in the United States there are three companies that aggregate business data for United States businesses. Again, this is the United States only, but in this country those guys are Infogroup, Neustar and Acxiom. So Google buys or leases information from at least one of these companies and pulls it into its index. But it doesn’t go right into Google’s index. It actually goes into a massive server cluster that takes it into consideration as one data source.
So not only is the business owner one of these data sources, but you would have one data provider, maybe Infogroup is another data source. Neustar might be another data source and so and so forth. So imagine this graphic going quite far to the right, even off of the whiteboard just with some of these data aggregation services.
That all gets assembled at a server cluster, somewhere in Mountain View let’s just say, that compiles kind of all of this information. These however, aren’t even the only places that Google gets data. These guys, these data sources actually also, in addition to sending information to Google, they send data out to a whole bunch of other sites across the web. So Yelp, for example, gets information from one of these sources. Yellowpages.com gets information from one of these sources. Many of you guys have seen my local search ecosystem infographic that kind of details a little bit more about how this process works.
Then Google goes out, and it crawls these sites across the web and again throws that information into this server cluster. So again, imagine this table here going off basically to infinity, kind of off this page.
Additionally, in addition to these data aggregators, in addition to websites, Google looks at government information. So if you’re regional, like your county has a registry of businesses that are registered in that county, or maybe your secretary of state, Google is probably going to crawl that information. In some cases the government publishes this in PDF format or something like that, and that gets pulled into this cluster again as one of these data points in this huge spreadsheet.
Another place that Google might get information, believe it or not, is Google Street View. Bill Slawski of SEO by the Sea recently gave a keynote at Local University in Baltimore, and there’s information in Google’s patents that suggests that Street View cameras on the cars that drive around gathering imagery are taking photos of storefronts with business name signage, with the address numbers right there on the storefront, and that information gets pulled into this, what we call the cluster of information.
So there are all these different sources pulling in, and you as the business owner, you are only one of these data sources. So even though you tell Google, “Hey, yes this is my address, this is my phone number, this is where I’m located,” if Google is seeing bad information at any of these other places, from these data aggregators, from websites, from government entities, Google pulls data in from everywhere. So if every other source out there, or a lot of other sources out there that Google trusts, especially major data aggregators or government entities, have your information wrong, that could lead to misinformation in the search results.
But there’s one final step actually before Google will publish the information, the authoritative information from this cluster. Google actually has human reviewers that are looking at this information. They are calling businesses to verify things like categories, the buildings that certain businesses are located in, and these reviewers will again call a real business offline. So if you get a call and it says Mountain View is calling you, it might actually be Google. So pay special attention if your business receives those kinds of calls. They might be trying to validate information that they’re finding from across the web.
The other thing to keep in mind is that Google accepts data from other reviewers, from other human reviewers via a website that it operates called Google Map Maker. So if you’re having trouble with your information from one of these sources, you might check Google.com/mapmaker. It’s like a Wikipedia for locations. Anybody in the world can go in there and update data. So it’s really, really important if you’re a business owner and you’re having trouble with Google publishing bad information about your business, you can’t just go into the Google Places dashboard or the Google+ dashboard and fix this information. You really need to go to all of these different sources. So these major data aggregators, they’re different in every country. So if you’re from somewhere else in the world besides the United States, you need to do some research on who these guys are. You need to update your information at Internet yellow pages sites. You definitely need to update your information with government authorities, and you probably want to check your information at least on this Google Map Maker site, because all of these feed into this central data cluster that then feeds into a Google search result for your business.
So I hope that explains a little bit about this very complicated process that Google has to assemble business listings. If you want more information in the text part of the page on which this Whiteboard is published, I’ll reference one of my colleagues at Local University, Mike Blumenthal. Mike has a great sort of text based layout of what I just explained visually, and Mike is actually the inspiration for this idea of the data cluster at Google Local.
Upset clients are an inevitability. To protect yourself against angry rants, you need to be vigilant when it comes to online reputation management. Learn how to make sure negative comments aren’t associated with your brand in the SERP.
1. Rank for your business name.
(Own page 1 of the SERP — and page 2 and 3, if you can).
When users search for your business on Google, you want them to find your business website, your Twitter account, your Facebook page, your Google Places for Business entry, articles on your business, your LinkedIn profile, blog posts from your business, praise from customers — literally, anything that relates to your business except negative comments and bad press (neither of which, in a best case scenario, even exist).
2. Target negative keyword phrases.
Take a moment to think about negative remarks an irate customer (or crafty competitor) might take to the Internet. Let’s say you’re Superman, and Lex Luthor has been trolling the Web and writing “Superman sucks,” and “I hate Superman” — maybe he’s even gone so far as to say “Superman is a super fraud.”
Well, the man of steel just can’t tolerate such slander. But, like any smart superhero, he knows that if he can manage to bury Lex’s accusations beyond page two (or even three) of the search results, chances are slim that anyone will ever read Lex’s unsubstantiated claims. How can Superman do that? By creating content that targets those very phrases: “Superman sucks,” “I hate Superman,” and “Superman is a fraud.”
For example, Superman can ask Lois Lane to whip up some optimized blog posts entitled “10 Reasons Superman Doesn’t Suck,” “I Hate Love Superman” and “Why Superman isn’t a Fraud.” (And, of course, as a savvy writer and knowledgeable content creator, Lois will create quality blog posts that are at least 400 words, offer useful information and are optimized for their respective keyword phrases).
What might your business’ enemies try to slander you with? Beat them to the punch by ranking for any negative keyword phrases that could be associated with your business.
3. Snatch up negative domains
Similarly, Superman would want to own any domains that tarnish his good name, such as www.SupermanSucks.com. SEO Man (a.k.a. Bruce Clay) implements this practice. Type in www.BruceClaySucks.com and you will be redirected to www.BruceClay.com. That’s because Bruce owns www.BruceClaySucks.com, which means someone who wishes to do him digital harm can’t own it. Neither Bruce nor Superman would have it any other way.
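If you set up that kind of defensive domain, a quick script can confirm the redirect is actually in place. This is just a sketch using the third-party requests library, and it assumes the redirect is served as a standard HTTP 301/302 response:

```python
# Verify that a defensive domain redirects to the main site.
# Requires the third-party "requests" package (pip install requests).
import requests

def check_redirect(defensive_domain, expected_target):
    """Fetch the domain without following redirects and inspect the response."""
    resp = requests.get(f"http://{defensive_domain}", allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    ok = resp.status_code in (301, 302) and expected_target in location.lower()
    status = "OK" if ok else "check your redirect configuration"
    print(f"{defensive_domain}: HTTP {resp.status_code} -> {location or 'no Location header'} ({status})")
    return ok

check_redirect("www.bruceclaysucks.com", "bruceclay.com")
```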
Today, we’re happy to announce a partnership between Followerwonk and Buffer to help you optimize your tweeting. We’re really excited to be teaming up with such a great product and company, and the combination of our apps really does advance the cause of our customers.
Before I dig into specifics of that relationship, I want to lay some groundwork to explain why we formed this partnership.
It all comes down to this little pearl:
Tweets are delicate
Tweets have a half-life of a mere 18 minutes. Poof! and their utility to reach new customers, drive traffic, and extend your reach is pretty much gone.
But it gets worse for our little tweets.
First, most of us have a heck of a job consistently coming up with good content to tweet day after day. It’s kinda like going to the gym: we start out strong, but most of us quickly fade to where we spend the entire time in the sauna.
Second, even if we do come up with lots of good content, we risk undermining our own performance.
I want to talk with you about ways to squeeze the most out of the content we do come up with. How can we maximally schedule our tweets to perform?
Cultivate your current audience
Given the gossamer-like nature of tweets, a simple first step is to schedule most of your tweets when your followers are most active.
This is where our relationship with Buffer helps.
First, go to Followerwonk and complete an Analysis of your followers. Once you’re finished, we’ll present a chart of their most active hours. (Mouse over each hour to view your local time.)
By itself, this insight is extraordinarily useful. There is significant variation from one person to the next in terms of when their followers are most active, and we can now take advantage of this data with the Buffer button integrated into Followerwonk.
When you click this new button, it will create a schedule on Buffer with as many times as you specify (you can specify from 1 to 99 times, currently). And when you do, you’ll find a new schedule on Buffer like this:
(Of course, if you don’t yet have a free account on Buffer, definitely grab one.)
We use a weighted, random distribution to divvy up the times you specify. This means that we don’t ignore off-hours; we just assign times to them less frequently. Fine-tune the schedule we create for you in Buffer.
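To make that concrete, here is a rough sketch of what a weighted, random allocation of posting times could look like. This is my own illustration, not Followerwonk's actual implementation, and the hourly activity numbers are invented:

```python
import random
from collections import Counter

# Hypothetical follower-activity weights for each hour of the day (0-23);
# higher numbers mean more of your followers are active in that hour.
hourly_activity = [2, 1, 1, 1, 1, 2, 4, 8, 12, 15, 18, 20,
                   22, 21, 19, 17, 16, 14, 12, 10, 8, 6, 4, 3]

def build_schedule(slots, weights, rng=random):
    """Pick posting hours at random, weighted toward active hours but
    never completely ignoring the off-hours."""
    hours = range(24)
    return sorted(rng.choices(hours, weights=weights, k=slots))

schedule = build_schedule(10, hourly_activity)
print("Posting hours (24h):", schedule)
print("Slots per hour:", Counter(schedule))
```

Because the draw is weighted rather than winner-takes-all, quiet hours still get the occasional slot, which matches the idea of not ignoring off-hours entirely.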
Also, make sure you install the Buffer Chrome extension. Once you do, you’ll be presented with a Buffer alternative when you tweet on Twitter.
With this, you can take a moment before each tweet and consider: is this one of the top hours for my audience? If not, hit the Buffer button and rest assured the tweet will be queued up to go out at a more optimal time.
Aim for your future audience
Your tweets should not be solely determined by your current followers. A couple of reasons why:
You may have a lot of spam followers, or people who aren’t important to your business.
Since they are already followers, you’ve already “converted” them. Part of your goal on Twitter should be to find new potential converts.
There’s potentially a lot of “low hanging fruit” in other hours that you might not typically reach.
This is where the Followerwonk integration with Buffer really excites me! You can now create schedules based not just on the most active hours of your followers, but of any other person’s followers.
I’ll explain how to do this in a moment. But first, I want to address what is central to this approach: whether or not tweets can reach out beyond your current followers.
Yes, tweets can extend out of your network
I analyzed 4,757 active Twitter accounts pulled from a sub-set of Followerwonk users. (This isn’t a random sample, but it’s far easier for us to analyze these users, as we track all of their data internally.)
Here’s the breakdown of their activity:
Note the last one.
A huge number of users are retweeting content from those they don’t follow. That’s really revealing, as many folks assume that engagement is limited to your existing social network. But it’s not. And it’s this sort of boundary-breaking activity that’s golden.
I then took those approximately 5,000 users and crawled 610,779 of their tweets and retweets. Of those items, here’s the breakdown:
I love retweets. (In fact, we based an entire influence metric around them.) And here we see that they’re an important component of most users’ activities! They’re almost as important as @mentions, in fact.
Let’s zoom in further and look just at retweets.
Here, we see that roughly 27% of retweeted tweets are of users the retweeter doesn’t follow. That’s big! And it means that there’s serious penetration of content into new networks.
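If you want to reproduce this kind of breakdown on your own data, the calculation itself is simple once you know, for each retweet, whether the retweeter follows the original author. The records below are made-up placeholders rather than data from this study; assembling the real follow relationships is the hard part and is left out here.

```python
# Hypothetical records: (retweeter, original_author, retweeter_follows_author)
retweets = [
    ("alice", "bob",   True),
    ("alice", "carol", False),
    ("dave",  "bob",   True),
    ("erin",  "frank", False),
    ("erin",  "bob",   True),
]

# Share of retweets where the retweeter does NOT follow the original author.
outside_network = sum(1 for _, _, follows in retweets if not follows)
share = outside_network / len(retweets)
print(f"{share:.0%} of retweets are of accounts the retweeter doesn't follow")
# In the sample described above, this figure came out at roughly 27%.
```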
Without doing an actual survey, it’s impossible to say exactly how users get their content retweeted by non-followers. But I have a few ideas.
Notice that retweets of non-followers have a larger number of @mentions. This is useful! It suggests that, in part, this breakout strategy is due to @mentioning others (and the recipient retweeting them).
If we consider retweets as a proxy for readership, we see that tweets can and do frequently extend beyond one’s current followers. This ability for a tweet to transcend a social network is likely due to any number of factors, including the “discovery” tab on Twitter, retweets of third parties, search, @engagement or #hashtag components of the tweet, and so on.
I draw out this point to highlight that you shouldn’t feel restricted to just tweet when your followers are online. Certainly, that’s an important consideration for any basic Twitter strategy, but keep in mind that your tweeting during certain hours has assembled an audience whose activity probably closely matches your current schedule. Time to break that mold?
The benefits of off-hour targeting
At any given time on Twitter, here’s what your potential audience probably looks like:
Here’s the thing: your current followers are likely a large percent of those “attentive and eager” readers who you’ve captured during your active hours. And while you certainly need to continue to cultivate that crowd, it’s perhaps less easy for you to find new prospects. You’ve already gotten a lot of the “low hanging fruit.”
But just think: there are active and eager fans in other hours, and they’ve probably never seen a single tweet from you. With the right content, you can probably do a terrific job at reaching them.
Let’s discuss how.
First, find a competitor or affinity brand on the other side of the world. Do an analysis and notice the active hours for her followers. Quite different than yours, eh?
Look at the full report to better understand the characteristics of their followers. The word clouds might reveal biographic details that could be of use. Look, too, at her most influential followers. These folks can help you spread your message.
Now think about content. Since few of your current followers are online during this user’s followers’ most active hours, you can devise a content strategy designed exclusively for this audience of far-flung prospects:
Tweet in a different language
Include responsible and constructive @mentions as inroads into this audience
Use our comparison reports to find relationship overlaps between you and this competitor; DM or otherwise engage with shared followers who are in the off-hour timezone
Target content to the audience (for example, if you’re an SEO targeting France, you might focus on Francophone search engines)
Once you’ve tailored the content, queue them up in Buffer with a schedule based on the analysis of this far-off competitor.
Smart scheduling = lots of opportunities
Unlike Google+ and other social networks, you can’t separately target people on Twitter. Theoretically, all your tweets reach all your followers.
But, of course, they really don’t.
As I’ve highlighted above, you can use schedules as a basic means to separately target different swaths of your current and future audience. For most people, off-hour targeting affords a unique means to reach new territory.
But it won’t always apply, of course: if you’re a local business, targeting folks in France probably won’t work well! However, we think there are a lot of other creative ways you can use analyses of competitors, suppliers, and local experts to fine-tune a scheduling and content strategy.
We’re extremely excited about our new relationship with Buffer. There are lots of ingenious ways to combine activity analyses like these with targeted scheduling.
We look forward to extending this partnership with great new things in the future!
(And so I don’t have to go all off-hours, why not follow me on Twitter right now?)
Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!
Google Analytics is every Internet marketer’s best friend. The tools are always changing and updating, making the Google Analytics blog an important one to keep up with so you know the features, capabilities, and data available to you. A couple of new tools got us really excited because of the intel they provide about how visitors are using our sites. Learn about the new Customer Journey tool and the Real-Time widgets now available through your Google Analytics account.
The Customer Journey to Online Purchase
Google is offering more detailed information to make marketers’ lives easier. On April 25, Google introduced a new benchmarking tool for marketers: The Customer Journey to Online Purchase.
The tool is Google’s response to:
The increasing complexity of the customer journey and
The increasing need of marketers to make sense of the contribution of each marketing channel to the final purchase, so that they can improve their strategy accordingly.
Before committing to buy online, a customer may engage with a specific brand through many different media channels over several days (or even months, in some cases).
Depending on the sector, different marketing channels come into play at different times and contribute to the final purchase decision.
The tool has been built on data gathered from over 36,000 Google Analytics clients that authorised sharing, including millions of purchases across 11 industries in 7 countries (Australia is not included at the moment).
The tool illustrates two things:
How different marketing channels (such as display, search, email, and your own website) help move users towards purchases. For example, some marketing channels play an “assist” role during the earlier stages of the marketing funnel, whereas others play a “last interaction” role just before a sale.
How long it takes for customers to make a purchase online (from the first time they interact with your marketing to the moment they actually buy something), and how the length of this journey affects average order values. The length of the customer journey, in both number of days and number of interactions, varies widely depending on the type of purchase. Some decisions require substantial research, while others are made very quickly. Typically, more complex purchases lead to longer paths and larger purchase values.
Implications of the Customer Journey to Online Purchase Tool
Online retailers need to understand their own customer journeys in the context of how similar journeys unfold across this broader data set.
By understanding the different stages of the customer journey, businesses can evaluate the success or otherwise of online campaigns and the role each one plays in the conversion.
Using this information will help to design campaigns that deliver the right message at the right moment in a customer’s journey to purchase.
Google Analytics New Feature: Real Time Widgets
This is better than TV!
On April 16, Google announced four real-time widgets that can be added to any new or existing Google Analytics Dashboard, marking the first time real-time data has been possible in a dashboard widget.
The widgets make it possible to perform many types of real-time analysis, and they can be combined and customised with different filters to segment and compare data side by side.
To set up a Real-Time widget, webmasters simply need to click the +Add Widget menu option from the Google Analytics dashboard.
Once a widget has been added, they can select Counter, Timeline, Geomap or Table from the Real-time section.
Counters show the number of active visitors on the site, much like the main “Right Now” counter on the Real-Time overview report. The major difference is that you can choose which dimension, if any, is shown under the counter. The Real-Time overview report breaks it out by New vs. Returning visitors, while the widget lets you pick any of the 11 dimensions available across all Real-Time widgets: Campaign, City, Country/Territory, Keyword, Medium, Page, Page Title, Referral Path, Source, Traffic Type, or Visitor Type. (A small API sketch after this list shows how to pull the same kind of data programmatically.)
Timelines show scrolling pageviews over either the last 30 minutes or the last 60 seconds.
Geomaps show visits on a map. You can choose to display by country or by city, and drill down from a world map to a national one.
Tables show active visitors, with up to three of the dimensions listed above.
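The widgets themselves are configured entirely in the dashboard UI, but the same real-time numbers are also exposed through the Google Analytics Real Time Reporting API, which is handy if you want the data outside a dashboard. The sketch below is a minimal illustration under a few assumptions: the service-account key file and the ga:12345678 view ID are placeholders, and your property needs appropriate API access for it to run.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholder credentials: a service-account key with read access to the view.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analytics", "v3", credentials=creds)

# Active visitors right now, broken out by country -- roughly what a Counter
# or Table widget with the Country/Territory dimension displays.
response = analytics.data().realtime().get(
    ids="ga:12345678",          # placeholder view (profile) ID
    metrics="rt:activeUsers",
    dimensions="rt:country",
).execute()

for country, active_users in response.get("rows", []):
    print(country, active_users)
```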
Another widget that has been recently added (May 5) to the real time reports is “Goal Conversions”.
Google Analytics real-time conversion widget
Implications of the Google Analytics Real-Time Widget
This helps businesses to instantly see how traffic is moving around their website.
It gives insight at a very granular level, enabling specific decisions to be made.
Real time widgets provide a drill-down that immediately displays the data that matters.
It has been an amazing month at Moz, and for the first time, we have served up three indexes in one month – holy fresh data, Batman! This index is also our second index release from our Virginia data center.
As we’ve become so regular with our index releases, it seems that a blog post announcing each release may not be the best way to inform our customers about releases as they happen. This will be the last index update announced on the blog, but index release readers, have no fear! There’s a new, more digestible way to get all of the information about the index updates you know and love.
Going forward, we invite you to subscribe to the Mozscape RSS feed found in the lower right-hand corner of the http://moz.com/products/api/updates page so that you will know when we have released a new index. Simply click on the RSS feed icon…
…and you’ll be directed to our Feedburner Mozscape updates page. This page makes it easy to enter your information to subscribe to our index updates:
Just enter your email address, choose your reader, and let the Mozscape index data flow in!
As our index releases continue to become more frequent, you can expect the number of updates you receive through the RSS feed to grow. We hope you enjoy this new way to consume fresh update info as much as we do.
Here are the metrics for this index:
85,870,573,626 (86 billion) URLs
5,524,096,501 (5.5 billion) Subdomains
155,443,706 (155 million) Root Domains
902,845,046,889 (903 billion) Links
Followed vs. Nofollowed
2.17 % of all links found were nofollowed
57.32 % of nofollowed links are internal
42.68 % are external
Rel Canonical – 14.44 % of all pages now employ a rel=canonical tag
The average page has 79 links on it
67.836 internal links on average
11.13 external links on average
And the correlations with Google’s US search results (a rough sketch of how such a correlation can be computed follows the list):
Page Authority – 0.36
Domain Authority – 0.19
MozRank – 0.24
Linking Root Domains – 0.30
Total Links – 0.25
External Links – 0.29
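For readers wondering what a figure like the 0.36 for Page Authority above actually measures, here’s a rough sketch of one way such a number can be computed. Moz’s exact methodology isn’t spelled out in this post, so treat this as an assumption: a Spearman rank correlation between a metric and the ranking positions of results for a single query, using made-up values.

```python
from scipy.stats import spearmanr

# Hypothetical SERP: positions 1-10 and the Page Authority of the page
# ranking at each position (made-up numbers, not real data).
positions = list(range(1, 11))
page_authority = [72, 65, 68, 60, 55, 58, 40, 45, 38, 30]

# A useful metric should be higher for better (lower-numbered) positions,
# so its raw correlation with position is negative; reporting the absolute
# value gives a figure comparable to the list above. In practice this would
# be averaged over many queries rather than computed for just one.
rho, p_value = spearmanr(positions, page_authority)
print(f"Spearman correlation: {abs(rho):.2f} (p = {p_value:.3f})")
```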
This histogram shows the crawl date and freshness of results in this index:
This index spans 38 days, with crawling starting on April 1st and finishing on May 8th.
We always love to hear your thoughts! And remember, if you’re ever curious about when Mozscape’s next index release is planned, check out http://moz.com/products/api/updates.
Thanks for reading, and have a Mozzy weekend!