Robots.txt is a text file written for search engine crawlers. When a search engine crawler comes to your site, it first reads your robots.txt file to see which parts of the site it may crawl. In simple words, the robots.txt file tells search crawlers which parts of your site they are allowed to crawl.
A robots.txt file lets you allow or disallow spiders or crawlers from crawling all the pages of a website or just a particular page. It is a simple text file that must be placed in the root directory of the website.
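As a sketch, a minimal robots.txt placed at the site root might look like this (the paths here are hypothetical examples, not part of any real site):

```
User-agent: *
Disallow: /admin/
Allow: /
```

The `User-agent: *` line means the rules apply to every crawler; `Disallow: /admin/` asks crawlers to skip that folder, while `Allow: /` leaves the rest of the site open.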
Robots.txt, also known as the Robots Exclusion Protocol (REP), is a text file containing directives that tell search engines how to crawl your site. You have to upload robots.txt to the top-level directory, for example abc.com/robots.txt, for search engines to take it into consideration.
I was reading your comments and your definitions of robots.txt, and all of them are correct. Reading how you all define it also gave me a better idea of how to use it properly.
A robots.txt file is a text file at the root of your site that indicates which parts of your site you do not want accessed by search engine web crawlers. The file uses the Robots Exclusion Standard, a small set of commands that can be used to control access to your site by section and by specific kinds of web crawlers.
Search engines send out small programs called spiders or robots to scan your site and bring information back so that your pages can be indexed in the search results and found by web users. If there are files and directories you do not want indexed by search engines, you can use the robots.txt file to define where the robots should not go.
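To see how a crawler interprets these rules, you can sketch it with Python's standard-library `urllib.robotparser`; the robots.txt content and URLs below are made-up examples for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content, parsed directly from a string
# instead of being fetched from a live site.
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# A well-behaved crawler checks each URL against the rules
# before fetching it.
blocked = parser.can_fetch("*", "https://example.com/private/data.html")
open_page = parser.can_fetch("*", "https://example.com/index.html")
print(blocked)    # the /private/ folder is disallowed
print(open_page)  # everything else is allowed
```

This is the same check a polite spider performs: disallowed paths return `False`, so the page is skipped rather than indexed.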
Robots.txt is a text file used to instruct crawlers whether or not a web page should be crawled. If you do not want a specific page or folder in your directory to be crawled, you tell the crawler so through this file.