A robots.txt file is a text file that tells search engine crawlers which pages or files on your site they can or can't request. Its primary use is to manage crawler traffic to your site and, depending on the file type, to keep certain files from being crawled.
This helps avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google or any other search engine. To keep a page out of Google Search, use a noindex directive instead.
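For reference, a noindex directive is typically added as a meta tag in the page's HTML head, or sent as an HTTP response header. These are standard, generic snippets rather than settings specific to your system:

    <meta name="robots" content="noindex">

or, as an HTTP header:

    X-Robots-Tag: noindex

Note that a crawler must be able to fetch the page to see the noindex directive, so the page should not also be blocked in robots.txt.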
Your system comes with a pre-configured robots.txt file along with some recommended rules.
However, you can change them to suit your requirements using the syntax examples given below.
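As a general point of reference, a typical robots.txt file looks like the following. The user agent and paths shown here are illustrative only, not the defaults shipped with your system:

    User-agent: *                  # rules below apply to all crawlers
    Disallow: /admin/              # do not crawl anything under /admin/
    Allow: /admin/public/          # except this subdirectory
    Sitemap: https://www.example.com/sitemap.xml

Each User-agent line starts a group of rules for the named crawler (* matches all crawlers), and Disallow/Allow lines list URL path prefixes that group may or may not request.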