If there are files and directories you do not want indexed by search engines, you can use the "robots.txt" file to define where the robots should not go.
This is a very simple plain-text file placed in the root folder of your website, so it is reachable at www.yourwebsite.com/robots.txt.
The simplest version allows crawlers to access every link on the site:
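An empty Disallow rule means nothing is blocked, so all crawlers may index everything:

    # Allow all crawlers to access the entire site
    User-agent: *
    Disallow: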
Use Google Search Console – With this tester tool you can analyze the latest cached version of a page, and use the Fetch and Render tool to compare how the page renders for the Googlebot user agent versus a regular browser user agent. Things to note: GSC only works for Google user agents, and only single URLs can be tested.
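If you want a quick look at how your server responds to different user agents outside GSC, a small script like the following can help. This is only a sketch (www.yourwebsite.com is a placeholder), and unlike GSC's render tool it shows raw HTTP responses, not a rendered page:

    import urllib.request

    # Fetch the same page as Googlebot and as a generic browser,
    # then compare status codes and response sizes.
    for ua in ("Googlebot/2.1 (+http://www.google.com/bot.html)",
               "Mozilla/5.0"):
        req = urllib.request.Request(
            "https://www.yourwebsite.com/",
            headers={"User-Agent": ua},
        )
        with urllib.request.urlopen(req) as resp:
            print(ua, resp.status, len(resp.read()))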
Robots.txt is a file used to prevent particular pages from being indexed by search engines. Use a Disallow rule for any page that handles sensitive data. The difference between an e-commerce site and a normal site is the kind of page you protect: on an e-commerce site, the checkout page carries money-transaction data and should not be indexed, while on a normal website it is the admin login page. You can target these rules at a specific crawler by its user-agent name, or at all crawlers at once. To check that the file is active, visit your-site-url/robots.txt in a browser.
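For example, a file blocking both kinds of page for all crawlers could look like this (the /checkout/ and /admin/ paths are illustrative; substitute your site's actual URLs):

    # Keep sensitive pages out of search engine indexes
    User-agent: *
    Disallow: /checkout/
    Disallow: /admin/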
Creating a robots.txt file is an easy process. Follow these simple steps: open Notepad, Microsoft Word or any text editor and save the file as 'robots', all lowercase, making sure to choose .txt as the file type extension (in Word, choose 'Plain Text').
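Once the file is uploaded, one way to sanity-check your rules is Python's built-in urllib.robotparser. This is a minimal sketch, assuming the placeholder domain and illustrative paths used above:

    from urllib.robotparser import RobotFileParser

    # Download and parse the live robots.txt
    rp = RobotFileParser()
    rp.set_url("https://www.yourwebsite.com/robots.txt")
    rp.read()

    # Check whether an arbitrary crawler may fetch a blocked page
    print(rp.can_fetch("*", "https://www.yourwebsite.com/checkout/"))  # expect False
    print(rp.can_fetch("*", "https://www.yourwebsite.com/"))           # expect True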