
Who Dropped My Rank?!?! Perhaps it’s proprietary information…

Rank-and-file jokes aside, the filing of various scores with a system designed to rank said scores begins, regardless of profit or purpose, at the same start line: let the hierarchy building begin!  “You want a list of my favorites?  Great!  Just tell me what categories I am to work with and I will score and rate the content in my mind according to personal preference” used to be an intimate inquiry process, released only under certain circumstances…but not anymore!  Scores of repositories now exist for the sole purpose of convincing someone to leave behind a click of an opinion based on virtually any criteria the crafter desires.  Who needs to pay a professional hundreds of dollars to pull together a personal profile of someone when aggregation techniques can provide hundreds of personal profiles with but a click or two…and for less than $10.00?

I have never been fond of monitoring my own list of favorites (I have no need to stay conscious of what my 21st favorite song is at all times), but I have noticed that, unappealing as vegetables were in my youth, their appeal has shifted enough that I no longer treat my peas as distasteful medicine-filled gumballs needing to be rolled off my plate one by one and swallowed whole, and I like cheese over my broccoli, thank you.  This means that had I been consistently monitoring my list of favorite foods, I would be able to chart the course my preferences have traveled over the years…especially if I had the data in a searchable and flexible format.

Sticking with the food theme for a moment, wouldn’t the chart be extra-special if my buying habits were also part of it?  Just picture it for a moment…one line representing what I believed to be a “favorite” of mine and another line representing my actual buying habits relative to my self-generated, self-tracked list of favorites.  Oh, what glorious…what’s that?  Why didn’t I purchase more shrimp when it has held such a high ranking all these years?  And what about my chocolate consumption?  Too much purchased in relation to the favorites ranking I’ve given it?
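To make the blend concrete, here is a minimal sketch of what pulling those two self-tracked sources into one view might look like.  Every food, year, and number below is invented for illustration; the point is only that once the favorites list and the purchase log live in one searchable structure, the gaps between stated preference and actual behavior fall out of a simple comparison.

```python
# A hypothetical, self-tracked record: stated favorite rank (1 = most loved)
# and actual purchases per year. All values are made up for illustration.
favorites = {
    ("shrimp", 2005): 1, ("shrimp", 2008): 1,
    ("chocolate", 2005): 7, ("chocolate", 2008): 6,
    ("broccoli", 2005): 20, ("broccoli", 2008): 12,
}
purchases = {
    ("shrimp", 2005): 3, ("shrimp", 2008): 2,
    ("chocolate", 2005): 40, ("chocolate", 2008): 55,
    ("broccoli", 2005): 10, ("broccoli", 2008): 25,
}

# Blend the two sources into one "chart" (here, a printed table):
# a high favorite rank with few purchases, or a low rank with many,
# is exactly the kind of gap the two-line chart would expose.
for (food, year) in sorted(favorites):
    rank = favorites[(food, year)]
    bought = purchases.get((food, year), 0)
    print(f"{year}  {food:<10}  favorite rank: {rank:>2}   purchases: {bought:>3}")
```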

There is a layer of mockery in this “out of sight, out of mind” scenario (keeping track of calories can be a very real, immediate life-and-death set of boundaries to face), but the mockery lives in my blending two separate data sources into a theoretical one-page chart.  In this example, I was the source of the favorites list; nowadays, purchasing habits are bought and sold at such a maddening pace that the cascading results are sometimes impossible to get hold of in scope alone, let alone spread.

So when it comes to the highly prized value of ranking in a search engine, assigning clear and decisive fault and reason for a drop in ranking is the stuff legal debates and debacles are made of.  It’s not impossible, but some level of coordination must occur somewhere within the ranking chains, and more often than not such coordination must come from an internal position somewhere…even if it’s from a sitting-up position on a couch with a laptop perched on a lap.  With multiple trails and paths leading to better ranking, it ultimately comes down to the competition in any given string derivative and the subsequently recognized activity associated with that pursuit, but being recognized in a public manner by one of the major search engines does not necessarily mean they are revealing their full awareness of all sites that are active as part of their results.

In essence, being invisible in a search engine’s results roster is not proof that the engine has no recognition whatsoever that a site exists; that gap is part of the hide-and-seek process behind how any particular search engine technology assembles the results rosters it produces at any given moment.

For example, a robots.txt file will not stop a PageRank assignment from taking place if Google stumbles across the address, just as the NoIndex meta tag instruction still permits PageRank assignment.  This means Google can hold a line item designated for a specific address, with its own set of proprietary instructions set to record only the PageRank of a page – pursuant to publisher instructions – and contemplating what might go into any PageRank formula is a larger headache than contemplating what goes into a FICO score assigned by a credit reporting agency.
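For the curious, the only PageRank arithmetic anyone outside Google can reason about is the originally published textbook version; whatever runs today is proprietary and certainly more involved.  A minimal sketch of that original formulation, over an invented four-page link graph, looks something like this:

```python
# Minimal sketch of the originally published PageRank iteration.
# The link graph and damping factor are illustrative; Google's live
# formula, signals, and weights are proprietary and unknown.
links = {
    "a.html": ["b.html", "c.html"],
    "b.html": ["c.html"],
    "c.html": ["a.html"],
    "d.html": ["c.html"],   # d links out, but nothing links to d
}
damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # power iteration until the scores settle
    new_rank = {}
    for page in pages:
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Notice that nothing in that loop requires the engine to have fetched a page’s content: an address that appears only on the right-hand side of other pages’ link lists still accumulates a score, which is precisely the robots.txt/NoIndex loophole described above.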

To provide what appears to be at least theoretical “legal protection,” the top search engines have been taking the art of sculpting search results to the mattresses and pillows with their meta tag modifications (adaptations?), complete with suggested ways to side-step existing content in directories such as DMOZ to ensure “relevant” results are returned (it was suggested in one article that very few people ever visit DMOZ to modify their profile, so why not use the tags to ensure current relevance outweighs the DMOZ data?).  Yet another yes/no instructive divider, with the haves virtually assured positive weight over the have-nots by the major providers and the designers supporting such a popularity position.  In essence, “do as we say and you’ll have what you’re looking for…if you produce the content, of course.”  Wheeeee!
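For readers who have never seen these directives in the wild: the directory side-step alluded to above was handled with meta robots values such as NOODP (telling an engine not to substitute the DMOZ/Open Directory description for the one on the page).  The page head below is a hypothetical example, and the tiny parser simply pulls the directives out so you can see what an engine would be reading:

```python
from html.parser import HTMLParser

# Hypothetical page head carrying the kinds of directives discussed above.
# NOODP asks engines not to swap in the DMOZ directory description;
# exactly which values each engine honored varied over time.
PAGE_HEAD = """
<head>
  <title>Example Page</title>
  <meta name="robots" content="index, follow, noodp">
  <meta name="description" content="The description the publisher prefers.">
</head>
"""

class RobotsMetaReader(HTMLParser):
    """Collect the content of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.extend(
                part.strip() for part in attrs.get("content", "").split(","))

reader = RobotsMetaReader()
reader.feed(PAGE_HEAD)
print(reader.directives)   # ['index', 'follow', 'noodp']
```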

Ultimately, scores of people have developed reams of scoring systems to analyze and evaluate a page, from character count and byte size all the way to the esoteric evaluations even I have applied to a page every now and again.  How these types of instructions can be applied to a site depends on how the tags are capable of showing up in the first place (i.e., do-it-yourself sites with built-in meta tagging versus do-it-yourself HTML and FTP upload), and if you have control over this collection of instructions (some DIY sites will not allow meta tag modifications), you will certainly have a better ability to climb the charts, courtesy of the combined influence of the third-party coagulation and aggregation systems currently in place (the ones responsible for assigning signs and signals) and the inclusion of such scoring every time a results roster is produced.
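What one of those home-grown scoring systems might look like, reduced to the two measurements named above (character count and byte size) plus one invented heuristic, is sketched below.  None of the weights reflect any engine’s actual criteria; the point is only how easy it is to manufacture a score out of whatever you can measure.

```python
# A toy page-scoring sketch: character count, byte size, and one
# made-up heuristic. Real engines' criteria are unknown; this only
# shows how anyone can bolt together a "score" from measurable bits.
def score_page(html: str) -> dict:
    byte_size = len(html.encode("utf-8"))
    char_count = len(html)
    title_present = "<title>" in html.lower()

    # Entirely arbitrary weighting, just to collapse it into one number.
    score = char_count / 1000 + (5 if title_present else 0) - byte_size / 10000
    return {
        "char_count": char_count,
        "byte_size": byte_size,
        "title_present": title_present,
        "score": round(score, 2),
    }

sample = ("<html><head><title>Hello</title></head><body>"
          + "word " * 200 + "</body></html>")
print(score_page(sample))
```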

The bottom line is that the criteria the search engines “like” and don’t “like” work much like any other industry that relies on what’s “in” and what’s “out,” such as the fashion industry.  Building a wardrobe based on time-honored traditions (black goes with anything, and shoes that don’t trigger pain signals every time they are slipped on) is the mantra of those who do not wish to keep up with the supernova pace of public opinion.

So if your site is built flexibly enough to roll with the pokes and patches being produced, making global changes may not be that big of a deal.  But for those playing the “…just one more click!  I have to have that one more click!  Come to meeeeeeeeee!  Come on…just gimme another click, baby!  One more!  Oh yes!  My own personal ATM machine!” game, the million-plus nests can lose a lot of juice when a major search engine’s scoring system is modified, a well-placed link directory can juice up a string derivative sector, and good luck keeping up!

Whether it is the search engine making the modification or another system of scoring (even the hosting company of a site can play a role in its ranking), the algorithm used to produce search engine results does not always promise a conclusion one way or another as to who modified what to cause someone’s ranking to go down, or whether the list of scoring providers is proprietary or instead a reflection of some sort of Kangaroo Count scoring system (the courts saying “innocent” while the networks host postings stating “guilty”…what do The Statistics say?) acting as the speaker system through the white noise.

So if you don’t understand how any of that SEO stuff works, and you’re not looking to go all hard-core with how your materials rate and rank in the search engines (such as myself), don’t entirely sweat conversations about the penalties the major search engines will assign if XYZ isn’t followed to a T.  Social networks are typically able to pick up a lot of the slack caused by such penalties (including those for typos and even link farms), such as by providing a user some form of additional keyword tagging above and beyond the meta tagging system, and even Technorati still plays a role in refining attraction to a site above and beyond any data already assigned to it.

Since there is no guarantee you’ll ever be able to isolate the true source of your ranking suddenly dropping (“darn Google did it to me again” vs. any other reason), whether or not it’s worth trying to figure out the cause is entirely up to you and your objectives as a publisher.  If you aren’t concerned about the ranking race in the search engines, you’re not going to need to track the day-to-day details of your publication, much as with my opening suggestion of maintaining a list of favorite foods over my lifetime.  I do look at some of the statistics attached to my blogs and I certainly glance at search engine results from time to time, but I’m more attracted to crafting core content than to scoring points for being extra-crafty with my own string derivative structures that have no visible purpose other than to attract a click every now and again…

especially when it’s just for the purpose of artificially adjusting existing ratings through the manufacturing of structures designed to contradict boundaries just enough to create both large and small pay-per-click-driven networks.

One final (and unfinished) side note.  One feature of the pay-per-impression financial structure is the idea that each display of an ad is financially rewarded, which always seemed roughly parallel to existing rental agreements for billboards, newspapers, etc.  The newspaper promises X number of readers in a certain demographic, and the advertiser pays the newspaper Y dollars for Z amount of space.  Because pay-per affiliate schematics rely on a source for the advertising dollars, a pay-per-click structure means an affiliate is forced to provide free impression advertising, chosen and then delivered by the advertisement provider.  With numerous flaws in the design, it is nearly impossible for an advertisement provider to prove one way or another whether the click-through they have on record was caused by a human or by an automated program…was the click-through part of a list of sites scheduled to be clicked by a human at any given moment?  Payment on impression at least presents a valuation system more in line with the billboard structure: the affiliate receives a reward for every single advertisement shine-through and the advertising provider makes their bucks, even though the advertiser is still left hanging on a chain as to what impressions were actually left behind when the advertisement was placed on an affiliate’s site.
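As a rough back-of-the-envelope comparison of the two structures, the sketch below uses invented rates and an invented percentage of disputed clicks.  The only point it makes is that under pay-per-click the affiliate’s impressions earn nothing while the clicks that survive the provider’s judgment are the whole paycheck, whereas pay-per-impression pays on a count both sides can at least see.

```python
# Back-of-the-envelope comparison of the two payout structures.
# Every number here is invented for illustration.
impressions = 100_000          # ads actually displayed on the affiliate's pages
click_rate = 0.01              # 1% of impressions get clicked
suspect_share = 0.30           # clicks the provider later calls non-human

cpm_rate = 2.00                # pay-per-impression: dollars per 1,000 displays
cpc_rate = 0.25                # pay-per-click: dollars per accepted click

clicks = impressions * click_rate
accepted_clicks = clicks * (1 - suspect_share)

ppi_payout = impressions / 1000 * cpm_rate        # paid on the display count
ppc_payout = accepted_clicks * cpc_rate           # paid only on surviving clicks

print(f"pay-per-impression payout: ${ppi_payout:,.2f}")
print(f"pay-per-click payout:      ${ppc_payout:,.2f}")
print(f"impressions served for free under pay-per-click: {impressions:,}")
```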

Pay-per-click has always seemed to present a precarious and highly volatile monetization process significantly in favor of the advertisement providers, who get paid regardless of how a click showed up on the record…and heavy financial favoritism toward any address holds similar economic power as various individuals continue to fall from grace…but I am a mere tourist when it comes to this level of financial schematics.

Click here, anyone?  I’ll show you my list of all-time favorites…oh, nevermind.
