A robots.txt file is located in the root directory of a website and is the first thing a search engine spider or bot will look for when it visits your site, so it’s vital that you have one there, even in its simplest form.

These files implement the Robots Exclusion Protocol, which helps control how a spider moves through your site, allowing you to block certain sections from being crawled or indexed, or even to block the whole site.
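For example, a minimal robots.txt that keeps all crawlers out of one section of a site looks like this (the directory name is only a placeholder):

    User-agent: *
    Disallow: /private-section/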
Test Robots.txt Files

A robots.txt file that blocks everything is useful if you do not want your site to be crawled before it has been fully tested and checked.
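Blocking the whole site in this way takes only two lines, which tell every crawler that nothing may be fetched:

    User-agent: *
    Disallow: /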

Using robots.txt

Before you build your robots.txt file, you should know the risks of relying on this URL-blocking method alone. At times, you might want to consider other mechanisms, such as password protection or a noindex directive, to ensure your URLs are not findable on the web.

It is important to check three things before you make your robots.txt file available to search engines.

1. Ensure that private information is kept safe; robots.txt only asks well-behaved crawlers to stay out, so it should not be relied on to protect sensitive content.

2. Use the right syntax for each crawler, since different crawlers can interpret directives differently (see the example after this list).

3. Remember that crawlers can still discover your URLs through references on other sites, even when those URLs are blocked in robots.txt.
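To illustrate point 2, each crawler can be addressed in its own User-agent group; the bot name and paths below are only illustrative:

    User-agent: Googlebot
    Disallow: /nogooglebot/

    User-agent: *
    Disallow: /tmp/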

Google has recently updated its robots.txt testing tool in Webmaster Tools to make it easier to create and maintain robots.txt files.

It is now possible to view the current robots.txt file and to test new URLs to check whether they are blocked from crawling. The new tool also highlights the specific directive that blocks a given URL, which makes troubleshooting far less complicated than it used to be.
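If you want to run a similar check yourself, outside of Webmaster Tools, a short script can parse a robots.txt file and report whether given URLs are blocked for a particular crawler. The sketch below uses Python's standard urllib.robotparser module; the domain and paths are placeholders, and it only mirrors how a well-behaved crawler would read the rules rather than reproducing Google's tool:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the live robots.txt file (placeholder domain).
    parser = RobotFileParser()
    parser.set_url("https://www.example.com/robots.txt")
    parser.read()

    # Check whether specific URLs may be crawled by a given user agent.
    for url in ("https://www.example.com/",
                "https://www.example.com/private-section/page.html"):
        allowed = parser.can_fetch("Googlebot", url)
        print(url, "-> allowed" if allowed else "-> blocked")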

The new tool also lets you make changes to the file and test them; once you are happy with the result, you simply upload the new version of the file to your server for the changes to take effect and become visible to search engine crawlers.

The best part of the new robots.txt tester tool is that you can now review older versions of your robots.txt file and check whether any of them caused issues in the past.