
My 2 Cents Google search engine programming

Discussion in 'PHP Programming' started by onauc, Dec 14, 2004.

Thread Status:
Not open for further replies.
  1. onauc

    White Belt

    Joined:
    Dec 6, 2004
    Messages:
    1
    Ratings:
    +0 / -0
    Howdy,

    I want to know how Google, WebCrawler, etc. search engines really work, as I am learning PHP programming and want to write a search engine.
    I have read around 10 websites, found on Google, about "how search engines work", and not a single one of them makes it clear whether it is the spider, the index, or the search software that does the ranking according to the ranking algorithm.
    All they ever say is that a search engine has 3 pieces of software:
    a) the spider
    b) the index
    c) the search system (search-box, template, etc.)
    The spiders crawl the web collecting webpages and forward them to the index, and then the search software searches the index for the sought keywords/phrases.
    Also, some say that the spiders copy the whole website into the index. So, in other words, there are 2 copies of a website: one residing on the website owner's webserver and the other residing in the index of the search engine.
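The spider → index → search pipeline described above can be sketched in a few lines. This is a toy illustration (in Python rather than PHP, with a made-up two-page "web" and invented function names), not how any real engine is built:

```python
import re

# A toy version of the three parts: spider, index, search system.
# The tiny "web" and all names are made up for illustration.

FAKE_WEB = {
    "http://a.example": "php search engine tutorial",
    "http://b.example": "learn php programming",
}

def spider(urls):
    """Visit each page and hand its content to the indexer."""
    index = {}
    for url in urls:
        html = FAKE_WEB[url]          # a real spider would fetch over HTTP
        for word in re.findall(r"\w+", html.lower()):
            index.setdefault(word, set()).add(url)
    return index                      # inverted index: word -> set of pages

def search(index, query):
    """Look each query word up in the index; return pages matching all words."""
    results = None
    for word in query.lower().split():
        pages = index.get(word, set())
        results = pages if results is None else results & pages
    return sorted(results or [])

index = spider(FAKE_WEB)
print(search(index, "php programming"))  # ['http://b.example']
```

Note that a practical index is an inverted index (word → pages), not a flat text file of page copies — that is what makes lookup fast.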
    So now, from all this, I can only assume 3 possibilities for how a search engine works:

    1.
    The spider does not do the ranking according to any algorithm.
    All it does is visit a website, grab all its HTML code (copy the website) and then dump the HTML code into the index.
    The index is nothing but a big text file (.txt, .html) on the search engine's webserver that keeps a full copy (the HTML code) of each website.
    The search system, when searching and finding links (in the index), assigns the ranking according to the search engine's ranking algorithm.
    This means neither the spider nor the index is responsible for the ranking, because these 2 parts of the search engine are not taught the ranking algorithm.

    OR

    2.
    The spider does the ranking according to the search engine's ranking algorithm.
    It visits a website, grabs all its HTML code (copies the website) and then finally dumps the HTML code into the index. When it dumps the copies of websites, it ranks them according to the search engine's algorithm.
    The index is nothing but a big text file (.txt, .html) on the search engine's webserver that keeps a full copy (the HTML code) of each website.
    The search system, when searching and finding links (in the index), does not assign the ranking according to the search engine's ranking algorithm, because that has already been done by the spider when dumping the data into the index.
    This means the spider is responsible for the ranking; neither the index nor the search system is responsible for it, because these 2 parts of the search engine are not taught the ranking algorithm.

    OR

    3.
    The spider does not do the ranking according to any algorithm.
    All it does is visit a website, grab all its HTML code (copy the website) and then dump the HTML code into the index.
    The index is not only a big text file (.txt, .html) on the search engine's webserver that keeps a full copy (the HTML code) of each website, but also the system that does the ranking.
    When it receives data from the spider, it ranks the links in its database according to the search engine's ranking algorithm.
    The search system, when searching and finding links (in the index), does not assign the ranking according to the search engine's ranking algorithm.
    Frankly, all it does is output a copy of certain parts of the index onto a searcher's screen.
    This means neither the spider nor the search system is responsible for the ranking, because these 2 parts of the search engine are not taught the ranking algorithm.
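For what it's worth, real engines mix assumptions 1 and 3: the index stores some precomputed, query-independent scores (link-based ones like PageRank are computed at index time), and the search system combines them with query-dependent relevance at lookup time. A hedged sketch, where the documents, the "static_score" values and the combining formula are all invented:

```python
# Toy illustration of mixing index-time and query-time ranking.
# "static_score" stands in for a precomputed, query-independent score
# (like PageRank); the term counts give query-dependent relevance.
# All numbers here are made up.

INDEX = {
    "http://a.example": {"terms": {"php": 3, "search": 1}, "static_score": 0.2},
    "http://b.example": {"terms": {"php": 1, "search": 4}, "static_score": 0.9},
}

def rank(query):
    scored = []
    for url, doc in INDEX.items():
        # query-dependent part: how often the query terms occur
        relevance = sum(doc["terms"].get(t, 0) for t in query.split())
        # combine with the query-independent score computed at index time
        scored.append((relevance * doc["static_score"], url))
    return [url for score, url in sorted(scored, reverse=True)]

print(rank("search"))  # page b wins: higher term count and static score
```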


    So, which of the 3 assumptions above is correct?


    OK, I am not thinking of competing with Google, but you should understand that I want to run a search engine. It should have a spider, an index and a search facility, and I should be able to teach it ranking algorithms.
    The web scripts out there do not allow the admin to teach his search engine (the one running on these ready-made web scripts) his own ranking algorithm.
    The web-script developing company built the ranking algorithms, and we admins cannot change them.
    The major search engines can change their ranking algorithms from time to time when they find out that webmasters have guessed their ranking algorithms and are abusing them to get their non-relevant websites ranked high under every keyword under the sky.
    E.g.:
    I run a search engine. I use a ready-made web script. My search engine one day gets popular. Now, you decide to get traffic to your website from it.
    You check which ready-made web script I am using, you buy that script, experiment on it and find out the ranking algorithm.
    Now, you falsely optimise your website so it ranks high under every keyword on my search engine, even keywords that are not really related to your website. Sooner or later, people dump my search engine. My venture comes to a dead end.
    Now, to avoid all this, I must be able to change my ranking algorithm when I find out that webmasters have discovered it and are abusing it.
    It is typical that these ready-made search engine web scripts do not let the admin change the ranking algorithm or create his own algorithms.
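The swappable ranking the poster is asking for is straightforward to design in: treat the ranking algorithm as a function the admin can replace, instead of hard-coding it into the search system. A minimal sketch (document data and function names are invented for illustration):

```python
# Sketch of an admin-swappable ranking algorithm: the search system
# takes the scoring function as a parameter instead of hard-coding it.

DOCS = {
    "http://a.example": {"php": 5, "seo": 0},
    "http://b.example": {"php": 1, "seo": 2},
}

def rank_by_term_count(doc, query_terms):
    """Score by total occurrences of the query terms."""
    return sum(doc.get(t, 0) for t in query_terms)

def rank_by_distinct_terms(doc, query_terms):
    """Score by how many distinct query terms the page contains."""
    return sum(1 for t in query_terms if doc.get(t, 0) > 0)

def search(query, scorer):
    terms = query.split()
    return sorted(DOCS, key=lambda url: scorer(DOCS[url], terms), reverse=True)

# Swap the algorithm without touching the rest of the engine:
print(search("php seo", rank_by_term_count))      # a first (5 vs 3)
print(search("php seo", rank_by_distinct_terms))  # b first (2 vs 1)
```

Changing the algorithm then means swapping in a new scoring function, which is exactly what a ready-made script with a hard-coded formula prevents.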

    Also, what is a peer-to-peer search engine?
     
  2. Webmasterserve

    Staff Member Administrator

    Joined:
    Aug 23, 2004
    Messages:
    201,431
    Ratings:
    +77 / -0
    You have an excellent point about the limitations of off-the-shelf scripts, but writing a search engine is no task for an individual. I think you will need quite a few associates to get the project off the ground.
     
  3. Darksat

    Yellow Belt

    Joined:
    Dec 14, 2004
    Messages:
    19
    Ratings:
    +0 / -0
    Most new search engines, such as killerinfo.com, just pull results from other engines.

    As far as I am aware, though, the spider grabs the info and puts it in the index, and then the data in the index is tabulated into a database which is used to return results.
     
  4. Webmasterserve

    Staff Member Administrator

    Joined:
    Aug 23, 2004
    Messages:
    201,431
    Ratings:
    +77 / -0
    I don't think the spider at killer.info adds the data to its own database; I think it just grabs the data from the 3rd-party search engine and serves it to the searcher. That way the proper engines do all the hard work of spidering etc. It's a bit like the meta search engine I will be integrating into www.haabaa.com when I have some time.
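A meta search engine of the kind described boils down to querying several backends and merging their result lists. A sketch with stub fetchers standing in for real backend API calls (the round-robin interleaving with de-duplication is just one possible merge scheme):

```python
# Toy meta search: query several backends, merge their result lists.
# The two "backends" are stubs; a real one would call another engine's API.

def engine_a(query):
    return ["http://a1.example", "http://shared.example"]

def engine_b(query):
    return ["http://shared.example", "http://b2.example"]

def meta_search(query, backends):
    seen, merged = set(), []
    lists = [backend(query) for backend in backends]
    # interleave results round-robin, dropping duplicates across engines
    for i in range(max(len(l) for l in lists)):
        for l in lists:
            if i < len(l) and l[i] not in seen:
                seen.add(l[i])
                merged.append(l[i])
    return merged

print(meta_search("php", [engine_a, engine_b]))
# ['http://a1.example', 'http://shared.example', 'http://b2.example']
```

No spider or index of your own is needed, which is why this approach is so much cheaper than running a proper engine.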
     
  5. alexandru

    Yellow Belt

    Joined:
    Aug 24, 2004
    Messages:
    67
    Ratings:
    +0 / -0
    Not to mention the hardware needed for a search engine... Google uses a farm of 4,000 clustered PCs... running Debian Linux :)
    So if you want only 1/100 of their computational power, you still need 40 PCs...
     