Settings in Webmaster Tools like Geographic target and Query parameters to ignore are great, but that means other search engines won't have access to this data. Why not propose a new robots.txt directive for these settings? Andy, NY … [Read more...]
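For illustration only, a hypothetical sketch of what such settings might look like if they were expressed as robots.txt directives. These directive names are invented for this example; no search engine recognizes them, and no such proposal has been adopted:

    # Hypothetical directives -- NOT part of the real robots.txt standard
    User-agent: *
    # Invented name: declare the site's geographic target
    Geo-target: US
    # Invented name: query parameters crawlers should ignore
    Ignore-params: sessionid, ref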
Can I use robots.txt to optimize Googlebot’s crawl?
Can I use robots.txt to optimize Googlebot's crawl? For example, can I disallow all but one section of a site (for a week) to ensure it is crawled, and then revert to a 'normal' robots.txt? Blind Five Year Old, SF, CA … [Read more...]
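For reference, a minimal sketch of a robots.txt that blocks everything except one section; /blog/ is a placeholder path. Allow is an extension honored by Googlebot (the longest matching rule wins), so the Allow line takes precedence over the site-wide Disallow:

    User-agent: *
    # Permit crawling of one section (placeholder path)
    Allow: /blog/
    # Block everything else
    Disallow: /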
Should I block duplicate pages using robots.txt?
Halfdeck from Davis, CA asks: "If Google crawls 1000 pages/day, Googlebot crawling many dupe content pages may slow down indexing of a large site. In that scenario, do you recommend blocking dupes using robots.txt or is using META ROBOTS … [Read more...]
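A sketch of the two options the question compares, with a placeholder path. A robots.txt rule stops Googlebot from fetching the duplicates at all, while a meta robots tag lets the page be crawled (still spending crawl budget) but asks that it be kept out of the index:

    # Option 1: robots.txt -- duplicates are never crawled
    User-agent: Googlebot
    Disallow: /print/

    <!-- Option 2: meta robots tag on each duplicate page; the page
         is crawled but not indexed, and its links are still followed -->
    <meta name="robots" content="noindex, follow">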
Uncrawled URLs in search results
Matt Cutts explains why a page that is disallowed in robots.txt may still appear in Google's search results. … [Read more...]
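The distinction worth illustrating here: a Disallow rule only blocks crawling, not indexing, so Google can still list the bare URL if other sites link to it. To keep a page out of results entirely, it must remain crawlable so Googlebot can see a noindex signal. A minimal sketch, with a placeholder path:

    # robots.txt: Googlebot never fetches the page, but the URL itself
    # can still appear in results if other pages link to it
    User-agent: *
    Disallow: /private/

    # Alternative: leave the page crawlable and serve a noindex
    # signal instead, e.g. as an HTTP response header:
    X-Robots-Tag: noindex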
Will a link to a disallowed page transfer PageRank?
Steen from Copenhagen asks: "If a page is disallowed in the robots.txt, will a link to this page transfer/leak link juice?" Recorded on April 23, 2009. … [Read more...]