The stock market had a flash crash today after someone hacked the AP's Twitter account & posted a fake announcement about explosions at the White House. Recently Twitter's search functionality has grown so inundated with spam that I don't even look at brand-related searches much anymore. While you can block individual users, blocking doesn't keep them from showing up in search results, so various affiliate bots spam just about any semi-branded search.
Of course, for as spammy as the service is now, it was worse during the explosive growth period, when Twitter had fewer than 10 employees fighting spam:
Twitter says its “spammy” tweet rate of 1.5% in 2010 was down from 11% in 2009.
If you want to show growth by any means necessary, engagement by a spam bot is still engagement & still lifts the valuation of the company.
Many of the social sites make no effort to police spam & only combat it after users flag it. Consider Eric Schmidt's interview with Julian Assange, where Schmidt stated:
- “We [YouTube] can’t review every submission, so basically the crowd marks it if it is a problem post publication.”
- On Wikileaks vs. YouTube: “You have a different model, right. You require human editors.”
We would post editorial content more often, but we are sort of debating opening up a social platform so that we can focus on the user without having to bear any editorial costs until after the fact. Profit margins are apparently better that way.
As Google drives smaller sites out of the index & ranks junk content based on no factor other than it being on a trusted site, they create the incentive for spammers to ride on the social platforms.
All aboard. And try not to step on any toes!
When I do some product-related searches (e.g. brand name & shoe model), almost the whole result set for the first 5 or 10 pages is garbage:
- Blogspot.com subdomains
- Appspot.com subdomains
- YouTube accounts
- Google+ accounts
- WordPress.com subdomains
- Facebook Notes & pages
- subdomains off of various other free hosts
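If you wanted to audit a result set for this pattern, the check is mechanical. Here is a minimal sketch; the `is_free_host` helper and the domain list are my own illustration (drawn from the hosts listed above), not anything Google publishes:

```python
from urllib.parse import urlparse

# Free-host domains that dominate the spammy result sets described above.
# Illustrative list only, based on the observations in this post.
FREE_HOSTS = (
    "blogspot.com",
    "appspot.com",
    "youtube.com",
    "plus.google.com",
    "wordpress.com",
    "facebook.com",
)

def is_free_host(url: str) -> bool:
    """Return True if the URL sits on (or on a subdomain of) a free host."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in FREE_HOSTS)
```

Run that over the top 50 or 100 results for a branded query and the share of free-host URLs gives a rough measure of how much of the SERP is riding on trusted platforms.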
It comes as no surprise that Eric Schmidt fundamentally believes that “disinformation becomes so easy to generate because of, because complexity overwhelms knowledge, that it is in the people’s interest, if you will over the next decade, to build disinformation generating systems, this is true for corporations, for marketing, for governments and so on.”
Of course he made no mention of Google’s role in the above problem. When they are not issuing threats & penalties to smaller independent webmasters, Google is just a passive, omniscient observer.
All these business models share a core pattern: build up a solid stream of usage data, then trick users or look the other way when things get out of hand. Consider Google’s Lane Shackleton’s tips on YouTube:
- “Search is a way for a user to explicitly call out the content that they want. If a friend told me about an Audi ad, then I might go seek that out through search. It’s a strong signal of intent, and it’s a strong signal that someone found out about that content in some way.”
- “you blur the lines between advertising and content. That’s really what we’ve been advocating our advertisers to do.”
- “you’re making thoughtful content for a purpose. So if you want something to get shared a lot, you may skew towards doing something like a prank”
Harlem Shake & Idiocracy: the innovative way forward to improve humanity.
Life is a prank.
This “spam is fine, so long as it is user-generated” stuff has gotten so out of hand that Google is now implementing granular page-level penalties. When those granular penalties hit major sites, Google suggests that those sites may receive clear advice on what to fix simply by contacting Google:
Hubert said that if people file a reconsideration request, they should “get a clear answer” about what’s wrong. There’s a bit of a Catch-22 there. How can you file a reconsideration request showing you’ve removed the bad stuff, if the only way you can get a clear answer about the bad stuff to remove is to file a reconsideration request?
The answer is that technically, you can request reconsideration without removing anything. The form doesn’t actually require you to remove bad stuff. That’s just the general advice you’ll often hear Google say, when it comes to making such a request. That’s also good advice if you do know what’s wrong.
But if you’re confused and need more advice, you can file the form asking for specifics about what needs to be removed. Then have patience.
In the past I noted that there is no difference between a formal white list & overly-aggressive penalties coupled with loose exemptions for select parties.
The moral of the story: if you are going to spam, make it look like a user of your site did it. That way you:
- are above judgement
- receive only a limited granular penalty
- get explicit & direct feedback on what to fix