... and why I'm dying to finally get into the Google SERPs
Have you also experienced that getting indexed by Google is getting tougher and tougher, even though the Google crawler visits your site every day, not to say it's apparently almost impossible in the short term?! Between us, in the corridors of Google they're talking about the notorious 'Google Sandbox' theory. According to this theory, a new website is first 'sandboxed' and doesn't get a ranking as long as its keywords are at all competitive. The Google Sandbox is in fact a filter, put in place in March 2004, that prevents new websites from having immediate success in the Google search engine result pages. This filter "is only intended to reduce search engine spam". The Sandbox filter is not permanent, which means you can only wait, wait and wait until Google liberates you from it. In the meantime, don't sit back: write original and well-optimized content; write, publish and share articles; get links placed on other websites; and so on.
An example:
I started with wallies.info this year on April 1st and submitted the URL to Google, Yahoo and MSN Search on the same day. Two months later, when I search for 'http://www.wallies.info' and 'wallies.info', Google returns 1 result for each query, Yahoo! 65 results for each, and MSN Search 313 and 266 results respectively. A remarkable difference, isn't it?! Anyway, Google clearly has a huge backlog in indexing (new) pages. Two or three times a week I do receive a Google Alert for these two searches, but the pages still don't turn up in the Google search engine results pages (SERP) at all.
With the introduction of Google Sitemaps (https://www.google.com/webmasters/sitemaps/), a beta website update reporting service launched on Friday the 3rd of June 2005, I hope the wait in the Sandbox waiting room will get shorter. With a Sitemap, crawlers can discover recently changed pages more easily and immediately get a list of all existing pages. As Google Sitemaps is released under a Creative Commons license, other search engines can make use of it as well. Important to know: Google Sitemaps will not influence the calculation of your PageRank.
Sitemaps uses its own XML-based format, called the 'Sitemap Protocol'. For each URL, additional information such as the last modification date can be included.
There are several methods to create your XML Sitemap:
1. The Sitemap Generator (https://www.google.com/webmasters/sitemaps/docs/en/sitemap-generator.html), a simple script that can be configured to create Sitemaps automatically and submit them to Google;
2. Make your own Sitemap script;
3. With the Open Archives Initiative (OAI) protocol for metadata harvesting (http://www.openarchives.org/OAI/openarchivesprotocol.html);
4. With RSS 2.0 and Atom 0.3 syndication feeds;
5. A simple list of URLs, one per line.
In the current RSS era, the fourth method is obviously the most logical and easiest one. Roughly speaking, you only need to make a new XML template. For a working Sitemap example of the wallies.info blog, go to http://www.wallies.info/blog/gsm.php.
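If you'd rather roll your own Sitemap script (method 2 in the list above), the idea is the same: collect your page URLs plus their last-modified dates and write them out in the Sitemap XML format. Below is a minimal Python sketch; the page list and the output filename are invented for illustration, and the namespace is the Sitemap 0.84 one shown in the example further down.

# Minimal sketch of a home-made Sitemap script.
# The 'pages' list and the output filename are hypothetical examples.
from xml.sax.saxutils import escape

pages = [
    # (loc, lastmod in ISO 8601, changefreq, priority)
    ("http://www.example.com/blog/", "2005-06-07T05:34:36+02:00", "daily", "1.0"),
    ("http://www.example.com/blog/item/130/index.html", "2005-06-05T10:59:22+02:00", None, "1.0"),
]

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">\n')
    for loc, lastmod, changefreq, priority in pages:
        f.write("  <url>\n")
        f.write(f"    <loc>{escape(loc)}</loc>\n")
        if lastmod:
            f.write(f"    <lastmod>{lastmod}</lastmod>\n")
        if changefreq:
            f.write(f"    <changefreq>{changefreq}</changefreq>\n")
        if priority:
            f.write(f"    <priority>{priority}</priority>\n")
        f.write("  </url>\n")
    f.write("</urlset>\n")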
This XML Sitemap has to be submitted on the Google Sitemaps page (https://www.google.com/webmasters/sitemaps/). When your listed pages have been updated or your Sitemap has changed, you have to resubmit your Sitemap link for re-crawling. After I submitted the wallies.info Sitemap, it took approximately 3 to 4 hours before Google downloaded the file.
Please note that Sitemaps does not in any way influence the calculation of your PageRank, that Google doesn't add every submitted Sitemap URL to the Google index, and that Google doesn't guarantee anything about when or whether your Sitemap pages will appear in the Google SERP.
Of course, it's easier to set up an automated job to submit this XML file.
You can do this with an automated HTTP request, like this example (your Sitemap URL has to be URL encoded; that's everything behind /ping?sitemap=):
www.google.com/webmasters/sitemaps/ping?sitemap=http%3A%2F%2Fwww.yoursite.com%2Fsitemap.xml
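A minimal sketch of such an automated ping in Python; the Sitemap location below is the same placeholder as in the example URL above:

# Sketch: ping Google Sitemaps with a URL-encoded Sitemap location.
from urllib.parse import quote
from urllib.request import urlopen

sitemap_url = "http://www.yoursite.com/sitemap.xml"  # your own Sitemap location
ping_url = ("http://www.google.com/webmasters/sitemaps/ping?sitemap="
            + quote(sitemap_url, safe=""))  # everything behind /ping?sitemap= must be URL encoded

with urlopen(ping_url) as response:
    print(response.status)  # 200 means the ping was received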
What is the Sitemap Protocol?
The Sitemap Protocol informs the Google search engine which pages on your website are available for crawling. A Sitemap consists of a list of URLs and may also contain additional information about those URLs, such as when they were last modified and how frequently they change.
An example of the XML Sitemap format:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <loc>http://www.wallies.info/blog/</loc>
    <lastmod>2005-06-07T05:34:36+02:00</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>http://www.wallies.info/blog/item/130/index.html</loc>
    <lastmod>2005-06-05T10:59:22+02:00</lastmod>
    <priority>1.0</priority>
  </url>
  ...
</urlset>
The XML Sitemap Format uses the following XML tags:
- urlset: this tag encapsulates all other tags of this list;
- url: this tag encapsulates the changefreq, lastmod, loc and priority tags for a single entry;
- changefreq (optional): how frequently the content at the URL is likely to change. Valid values are 'always', 'hourly', 'daily', 'weekly', 'monthly', 'yearly' and 'never';
- lastmod (optional): the time the content at the URL was last modified. The timestamp has to be in ISO 8601 format;
- loc (required): the URL of a page on your site (fewer than 2,048 characters);
- priority (optional): the priority of the page relative to other pages on the same site, a number between 0.0 and 1.0 (default 0.5). This priority is only used to choose between URLs on your own site; it is not compared to the priority of pages on other sites.
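The lastmod value is easy to derive from a file's modification time; here is a small Python sketch (the file path is just an example):

# Sketch: turn a file's modification time into an ISO 8601 lastmod value.
import os
from datetime import datetime, timezone

mtime = os.path.getmtime("blog/item/130/index.html")  # hypothetical file path
lastmod = datetime.fromtimestamp(mtime, tz=timezone.utc).replace(microsecond=0).isoformat()
print(lastmod)  # e.g. 2005-06-05T08:59:22+00:00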
An urlset may contain up to 50,000 URLs and the file must not be larger than 10 MB when uncompressed. Multiple Sitemaps can be gathered in a Sitemap index file, which may list a maximum of 1,000 Sitemaps for the same site.
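If your site outgrows those limits, you split the URLs over several Sitemap files and reference them from a Sitemap index file. A rough Python sketch of writing such an index, assuming the same 0.84 namespace as in the example above; the Sitemap URLs and dates are made up:

# Sketch: write a Sitemap index file that references several Sitemaps.
# The Sitemap URLs and lastmod values below are hypothetical.
sitemaps = [
    ("http://www.example.com/sitemap1.xml", "2005-06-07T05:34:36+02:00"),
    ("http://www.example.com/sitemap2.xml", "2005-06-07T05:34:36+02:00"),
]

with open("sitemap_index.xml", "w", encoding="utf-8") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<sitemapindex xmlns="http://www.google.com/schemas/sitemap/0.84">\n')
    for loc, lastmod in sitemaps:
        f.write("  <sitemap>\n")
        f.write(f"    <loc>{loc}</loc>\n")
        f.write(f"    <lastmod>{lastmod}</lastmod>\n")
        f.write("  </sitemap>\n")
    f.write("</sitemapindex>\n")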
The Google Sitemaps URL:
https://www.google.com/webmasters/sitemaps/
For feedback on this Sitemaps article, please feel free to visit http://www.wallies.info/blog/item/132/index.html
Walter V. is a self-employed internet entrepreneur and founder-webmaster of several websites, including wallies.info: A snappy blog about snappy blue things: blog | wiki | forum | links - http://wallies.info
mblo.gs: a snappy moblog community - http://mblo.gs