For those wondering about the basics of Google, here is some good information from the Google Webmaster Help section.
When you sit down at your computer and do a Google search, you're almost instantly presented with a list of results from all over the web. How does Google find web pages matching your query, and determine the order of search results?
In the simplest terms, you could think of searching the web as
looking in a very large book with an impressive index telling you
exactly where everything is located. When you perform a Google search,
our programs check our index to determine the most relevant search
results to be returned ("served") to you.
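To make the "big book with an index" analogy concrete, here is a toy sketch in Python: a made-up mapping from words to the pages that contain them, queried by intersecting the page lists for each word. The words and page paths are invented for illustration; Google's real index is vastly larger and more sophisticated.

```python
# A toy version of the "book index" analogy: a made-up mapping from
# each word to the pages that contain it.
toy_index = {
    "crawling": ["/how-search-works", "/googlebot-faq"],
    "sitemap": ["/sitemap-guide", "/googlebot-faq"],
}

def lookup(query):
    """Return the pages that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(toy_index.get(words[0], []))
    for word in words[1:]:
        results &= set(toy_index.get(word, []))
    return results

print(lookup("crawling sitemap"))  # {'/googlebot-faq'}
```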
The three key processes in delivering search results to you are:
Crawling: Does Google know about your site? Can we find it?
Indexing: Can Google index your site?
Serving: Does the site have good and useful content that is relevant to the user's search?
Crawling
Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
We use a huge set of computers to fetch (or "crawl") billions of
pages on the web. The program that does the fetching is called
Googlebot (also known as a robot, bot, or spider). Googlebot uses an
algorithmic process: computer programs determine which sites to crawl,
how often, and how many pages to fetch from each site.
Google's crawl process begins with a list of web page URLs,
generated from previous crawl processes, and augmented with Sitemap
data provided by webmasters. As Googlebot visits each of these websites,
it detects links on each page and adds them to its list of pages to
crawl. New sites, changes to existing sites, and dead links are noted
and used to update the Google index.
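As an illustration of that loop, here is a heavily simplified crawler sketch in Python using only the standard library: a queue of seed URLs, a link extractor, and a frontier that grows as new links are discovered. This is a sketch of the general technique, not Googlebot itself; real crawlers add robots.txt handling, politeness delays, scheduling, and large-scale deduplication far beyond this.

```python
# A highly simplified crawl frontier: seed URLs go into a queue, each
# fetched page is scanned for links, and new URLs join the queue.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, max_pages=10):
    frontier = deque(seed_urls)   # seeds: previous crawls + Sitemap data
    seen = set(seed_urls)
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue              # dead or unfetchable link: skip it
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
    return seen
```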
Google doesn't accept payment to crawl a site more frequently, and
we keep the search side of our business separate from our
revenue-generating AdWords service.
Indexing
Googlebot processes each of the pages it crawls in order to compile
a massive index of all the words it sees and their location on each
page. In addition, we process information included in key content tags
and attributes, such as Title tags and ALT attributes. Googlebot can
process many, but not all, content types. For example, we cannot
process the content of some rich media files or dynamic pages.
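The data structure being described is essentially an inverted index: for every word, a list of the pages (and positions within each page) where it occurs. A toy version in Python, with made-up pages, might look like this:

```python
# A toy inverted index: for each word, record which page it appears
# on and at which word position. Pages below are invented examples.
from collections import defaultdict

def build_index(pages):
    """pages: dict mapping URL -> page text."""
    index = defaultdict(list)           # word -> [(url, position), ...]
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index

pages = {
    "/a": "googlebot crawls the web",
    "/b": "the web is indexed by googlebot",
}
index = build_index(pages)
print(index["googlebot"])               # [('/a', 0), ('/b', 5)]
```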
Serving results
When a user enters a query, our machines search the index for
matching pages and return the results we believe are the most relevant
to the user. Relevancy is determined by over 200 factors, one of which
is the PageRank
for a given page. PageRank is the measure of the importance of a page
based on the incoming links from other pages. In simple terms, each
link to a page on your site from another site adds to your site's
PageRank. Not all links are equal: Google works hard to improve the
user experience by identifying spam links and other practices that
negatively impact search results. The best types of links are those
that are given based on the quality of your content.
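For the curious, the classic PageRank computation from the original Brin and Page paper can be sketched as a power iteration over the link graph. Keep in mind that production ranking treats PageRank as just one of the 200-plus signals mentioned above, and the link graph below is invented purely for illustration.

```python
# A sketch of the classic PageRank power iteration: each page's score
# is a baseline share plus damped contributions from pages linking to it.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to
    (assumes every link target is itself a key in links)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # real implementations redistribute dangling mass
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
for page, score in sorted(pagerank(graph).items()):
    print(page, round(score, 3))
```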
In order for your site to rank well in search results pages, it's
important to make sure that Google can crawl and index your site
correctly. Our Webmaster Guidelines outline some best practices that can help you avoid common pitfalls and improve your site's ranking.
Google's Related Searches, Spelling Suggestions, and Google Suggest features are designed to help users save time by displaying related terms, common misspellings, and popular queries. Like our google.com
search results, the keywords used by these features are automatically
generated by our web crawlers and search algorithms. We display these
suggestions only when we think they might save the user time. If a site
ranks well for a keyword, it's because we've algorithmically determined
that its content is more relevant to the user's query.
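As a rough illustration of how a suggestion feature can work, here is a toy prefix-matching sketch over an invented query log; Google's actual Suggest feature draws on much richer signals than raw prefix counts.

```python
# A toy query-suggestion sketch: rank logged queries that start with
# the user's prefix by popularity. The query log is made up.
from collections import Counter

query_log = Counter({
    "google search": 120,
    "google sitemap": 80,
    "googlebot": 45,
})

def suggest(prefix, limit=3):
    """Return the most popular logged queries starting with the prefix."""
    matches = [(q, n) for q, n in query_log.items() if q.startswith(prefix)]
    matches.sort(key=lambda item: item[1], reverse=True)
    return [q for q, _ in matches[:limit]]

print(suggest("google s"))  # ['google search', 'google sitemap']
```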