Search engines allow people to perform a query and find the information they want. Without search engines, it would be difficult to find the websites that contain the information you are looking for. A search engine works in three important stages: crawling, indexing, and retrieval. Crawling is the stage where the search engine discovers new content on a site. Indexing is the stage where the search engine analyzes the webpages and stores them in a huge database. Retrieval is the stage where the search engine fetches a list of relevant websites when the user performs a query.
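The indexing and retrieval stages can be sketched with an inverted index, the classic data structure behind keyword search. This is a minimal illustration, not any particular engine's implementation, and the three sample documents are made up:

```python
# Tiny made-up document collection; a real engine indexes billions of pages.
docs = {
    1: "crawling discovers new content on a site",
    2: "indexing stores analyzed webpages in a database",
    3: "retrieval fetches relevant websites for a query",
}

# Indexing: build an inverted index mapping each word -> ids of docs containing it.
inverted = {}
for doc_id, text in docs.items():
    for word in set(text.lower().split()):
        inverted.setdefault(word, set()).add(doc_id)

# Retrieval: fetch the documents that contain every term in the query.
def retrieve(query):
    results = None
    for term in query.lower().split():
        matches = inverted.get(term, set())
        results = matches if results is None else results & matches
    return sorted(results or [])

print(retrieve("relevant websites"))  # [3]
```

The inverted index is built once at indexing time, so each query is answered by a few fast lookups instead of scanning every stored page.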
Search engines rely on the links in webpages to get all kinds of websites indexed into the search results. Links make it possible for a search engine robot to reach billions of webpages on the internet. Every time the search engine crawls a site, it makes a copy of each page, adds it to the index, and schedules the links found on that page to be crawled in turn. Newly crawled and indexed pages are stored in a large database. The web crawler keeps repeating this process until it builds a gigantic index of webpages in the database. It is a never-ending process, because the search engine keeps coming back to a site whenever there is activity, so that recently added posts get indexed too.
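The crawl loop described above (copy a page, index it, schedule its links) is essentially a breadth-first traversal of the link graph. Here is a minimal sketch using an in-memory "web" instead of real network requests; the URLs and page texts are invented for illustration:

```python
from collections import deque

# A tiny in-memory "web": URL -> (page text, outgoing links). All made up.
PAGES = {
    "/home":       ("Welcome to the site", ["/about", "/blog"]),
    "/about":      ("About us",            ["/home"]),
    "/blog":       ("Latest posts",        ["/blog/post1", "/home"]),
    "/blog/post1": ("A new post",          ["/blog"]),
}

def crawl(start):
    """Breadth-first crawl: copy each page into the index,
    then schedule the links on that page to be crawled in turn."""
    index = {}                  # url -> stored copy of the page text
    frontier = deque([start])   # links scheduled for crawling
    seen = {start}
    while frontier:
        url = frontier.popleft()
        text, links = PAGES[url]
        index[url] = text       # the "copy" added to the index
        for link in links:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return index

index = crawl("/home")
print(sorted(index))  # all four pages reached by following links
```

Starting from a single page, the crawler discovers every page that is reachable through links, which is why a page with no inbound links can go unindexed.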
Many major search engines maintain data centers around the world so that users everywhere can access the webpages they want instantly. Each data center is equipped with thousands of servers holding billions of records so that they can be retrieved later. It is important that webpages can be accessed instantaneously, as a delay of even one or two seconds can cause dissatisfaction among users.
Search engines work just like answer machines. No matter what you search for, they can always come up with the most relevant sites to give you the answer you are looking for. Websites are ranked based on two important factors: relevance and popularity. Relevance is more than just finding a webpage that contains keywords matching the user's search term. In the early days, when people first started to use the internet in their homes, search engines only looked at the number of times the matching keyword appeared on a webpage when ranking it in the search results. Because of this, many webmasters stuffed the content and meta tags of their webpages with keywords. Google claimed that it was crawling a few hundred pages every second even in the early days of the internet.
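A tiny sketch makes it clear why that early keyword-count ranking invited keyword stuffing. The two pages and the query below are invented for illustration:

```python
# Hypothetical pages to rank; the text is made up for illustration.
pages = {
    "honest-page":  "wwii timeline with key dates and events",
    "stuffed-page": "timeline timeline timeline timeline buy cheap watches",
}

def keyword_count_score(text, query):
    """Early-style relevance: score = total number of times
    each query term appears in the page text."""
    words = text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

query = "wwii timeline"
ranked = sorted(pages, key=lambda p: keyword_count_score(pages[p], query),
                reverse=True)
print(ranked)  # the keyword-stuffed page outranks the genuinely relevant one
```

Because the score only counts matches, repeating a keyword four times beats a page that actually answers the query, which is exactly the loophole webmasters exploited.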
As more and more people used the internet and more websites were set up, search engine robots were improved and became much smarter at ranking webpages. Nowadays, search engines use complicated algorithms with hundreds of variables when ranking websites in the search results. To make it easy for the search engine to crawl your site, you must not bury a link under many categories. The user should not have to click through too many categories to reach the destination link. If a link is hidden many clicks away, the search engine may give up on crawling it.
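The "clicks away" idea can be measured as click depth: the shortest number of link clicks from the homepage to a given page, computed with a breadth-first search over the site's link graph. The site structure below is a made-up example:

```python
from collections import deque

# Hypothetical site structure: page -> pages it links to.
SITE = {
    "home":        ["products", "blog"],
    "products":    ["category"],
    "category":    ["subcategory"],
    "subcategory": ["deep-page"],
    "blog":        [],
    "deep-page":   [],
}

def click_depth(start):
    """Shortest number of clicks from the start page to every
    reachable page, via breadth-first search."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in SITE.get(page, []):
            if link not in depth:
                depth[link] = depth[page] + 1
                queue.append(link)
    return depth

depths = click_depth("home")
print(depths["deep-page"])  # 4 clicks from home: a crawler may deprioritize it
```

Flattening the category structure, or linking important pages directly from the homepage, reduces their click depth and makes them easier for both users and crawlers to reach.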
When performing a search on the internet, you must include keywords. For example, if you are searching for the timeline of Hitler's rise to power, you might enter the keywords "timeline Hitler power" in the search box. Search results vary across different search engines. Google is the most used search engine in the world, and it is the default homepage for many internet users. Bing has even been accused of copying some search results from Google. Every search engine has its own ranking algorithm, which is a closely guarded secret. Google recommends that webmasters create pages for users rather than for the search engine. According to Google, you should create a clear hierarchy and text links, and the pages should contain useful information. URLs should be human friendly and include descriptive keywords. Bing, likewise, recommends maintaining keyword-rich URLs.
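A common way to produce the human-friendly, keyword-rich URLs described above is to "slugify" the page title. This is a simple sketch of the idea, not any particular CMS's implementation:

```python
import re

def slugify(title):
    """Turn a page title into a human-friendly URL slug:
    lowercase, punctuation dropped, words joined by hyphens."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug).strip("-")
    return slug

print(slugify("Hitler's Rise to Power: A Timeline"))
# hitler-s-rise-to-power-a-timeline
```

The resulting slug keeps the descriptive keywords visible in the URL, which helps both users and search engines understand what the page is about before clicking.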