You can view what kind of traffic is hitting your website by viewing the Logs section of your cPanel:
https://antara.websitewelcome.com:2083
There are a couple of different options here. You can use a robots.txt file to block a search engine crawl, or you can change a search engine's crawl rate. I recommend learning more about them from the information I've provided below.
----== How to use robots.txt ==----
What is the purpose of the robots file?
When a search engine crawls (visits) your website, the first thing it looks for is your robots.txt file. This file tells search engines what they should and should not index (save and make available as search results to the public). It also may indicate the location of your XML sitemap. The search engine then sends its "bot" or "robot" or "spider" to crawl your site as directed in the robots.txt file (or not send it, if you said they could not).
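For example, a minimal robots.txt that does nothing but advertise the sitemap location could look like the sketch below (the domain is a placeholder, as in the examples later in this article):

```text
# Point crawlers at the XML sitemap (placeholder domain)
Sitemap: http://www.yoursitesdomain.com/sitemap.xml
```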
Google's bot is called Googlebot, and Microsoft Bing's bot is called Bingbot. Many other search engines, like Excite, Lycos, Alexa and Ask Jeeves also have their own bots. Most bots are from search engines, although sometimes other sites send out bots for various reasons. For example, some sites may ask you to put code on your website to verify you own that website, and then they send a bot to see if you put the code on your site.
Read Google's official stance on the robots.txt file.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
---Where does robots.txt go?---
The robots.txt file belongs in your document root folder (for your primary domain, that is the public_html folder).
You can simply create a blank file and name it robots.txt. This will reduce site errors (bots request the file whether it exists or not) and allow all search engines to crawl and index anything they want.
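An explicitly permissive robots.txt has the same effect as a blank file; an empty Disallow value means nothing is off-limits:

```text
# Allow every robot to crawl the entire site
User-agent: *
Disallow:
```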
Blocking Robots and Search Engines from Crawling
If you want to stop bots from visiting your site and stop search engines from indexing your pages, use this code:
#Code to not allow any search engines!
User-agent: *
Disallow: /
You can also prevent robots from crawling parts of your site, while allowing them to crawl other sections. The following example asks search engines and robots not to crawl the cgi-bin, tmp, and junk folders on your website, or anything inside them.
# Blocks robots from specific folders / directories
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
In the above example,
http://www.yoursitesdomain.com/junk/index.html would be one of the URLs blocked, but
http://www.yoursitesdomain.com/index.html and
http://www.yoursitesdomain.com/someotherfolder/ would be crawlable.
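You can verify which URLs a given robots.txt blocks without waiting for a crawler to visit. The sketch below uses Python's standard-library robots.txt parser against the example rules above; the domain is the same placeholder used throughout this article.

```python
# Check the example rules with Python's standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

base = "http://www.yoursitesdomain.com"
print(parser.can_fetch("*", base + "/junk/index.html"))   # False: /junk/ is disallowed
print(parser.can_fetch("*", base + "/index.html"))        # True: not covered by any rule
print(parser.can_fetch("*", base + "/someotherfolder/"))  # True: not covered by any rule
```

This is the same matching logic well-behaved crawlers apply, so it is a quick way to sanity-check a robots.txt before uploading it.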
Keep in mind that robots.txt works like a "No Trespassing" sign. It tells robots whether you want them to crawl your site or not. It does not actually block access. Honorable and legitimate bots will honor your directive on whether they can visit or not. Rogue bots may simply ignore robots.txt.
View more robots.txt codes here.
http://www.robotstxt.org/robotstxt.html
Read about changing Google's crawl rate.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
----== Telling Google How Often To Crawl Your Website ==----
Google uses sophisticated algorithms for determining how often to crawl your site. Their goal is to crawl as many pages as possible from your site on each visit without overwhelming your server's bandwidth.
If you are experiencing bandwidth-related server load issues (i.e., too many requests too quickly), you may want to reduce how fast Google and other search engines crawl your site. Too many requests in a very short period can cause your site to load slowly and can even cause load issues on the server. This is especially true of very busy websites, or websites that are poorly or inefficiently coded.
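For search engines other than Google, one hedge is the non-standard Crawl-delay directive in robots.txt, which some crawlers (Bing and Yandex, for example) honor; Google ignores it, so Google's rate has to be set in Webmaster Tools instead. A sketch:

```text
# Ask bots that honor Crawl-delay to wait 10 seconds between requests
# (non-standard; not all crawlers support it, and Google ignores it)
User-agent: *
Crawl-delay: 10
```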
If you are not experiencing any bandwidth or bandwidth-related load issues on your server, it is recommended that you allow Google to determine the optimal crawl rate for your website.
Changing Google's Crawl Rate
Google allows you to adjust the crawl rate (how fast Googlebot makes requests while crawling your website) for an entire domain or subdomain. You cannot specify different crawl rates for sections of your site (e.g. specific folders or subdirectories).
For example, you can specify a custom crawl rate for www.yoursitesdomain.com and subdomain.yoursitesdomain.com, but you cannot specify a custom crawl rate for www.yoursitesdomain.com/subfolder.
Changing the crawl rate only changes the speed of Googlebot's requests during the crawl process. It does not have any effect on how often Google crawls your site, nor how deeply they crawl your URL structure.
To change Google's crawl rate:
1) Log in to Google Webmaster Tools:
http://www.google.com/webmasters/tools/
2) Add your site to Google Webmaster Tools, if you have not done so already.
3) On the Webmaster Tools Home page, click on the site you want.
4) Under Site Configuration, click Settings.
5) In the Crawl Rate section, select the option you want.
The new crawl rate will be valid for only 90 days.