Understanding the difference between the robots.txt file and the Robots Meta Tag is critical for search engine optimization and security, and it can have a profound impact on the privacy of your website and customers as well. The first thing to know is what robots.txt files and Robots Tags are. Robots.txt is a file you place in your website’s top-level directory, the same folder in which a static homepage would go. Inside robots.txt, you can instruct search engines not to crawl content by disallowing file names or directories. There are two parts to a robots.txt directive: the user-agent and one or more disallow instructions. The user-agent specifies one or all Web crawlers or spiders. When we think of Web crawlers we tend to think of Google and Bing; however, a spider can come from anywhere, not just search engines, and there are many of them crawling the Internet. Here is a simple robots.txt file telling all Web crawlers that it is okay to spider every page:

User-agent: *
Disallow:

… [Read more...] about Have You Considered Privacy Issues When Using Robots.txt & The Robots Meta Tag?
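To see how a crawler actually interprets user-agent and disallow rules, here is a minimal sketch using Python's standard-library `urllib.robotparser`. The rules and URLs are hypothetical examples, not taken from any real site:

```python
from urllib import robotparser

# A hypothetical robots.txt: all crawlers are barred from one private directory.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A well-behaved spider checks each URL before fetching it.
print(rp.can_fetch("*", "https://www.example.com/index.html"))    # True
print(rp.can_fetch("*", "https://www.example.com/private/data"))  # False
```

Note that, as the article stresses, this is purely advisory: only crawlers that choose to honor the protocol will respect the `Disallow` line, which is exactly why robots.txt is not a privacy or security mechanism.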
In the battle between search engines and some mainstream news publishers, ACAP has been lurking for several years. ACAP — the Automated Content Access Protocol — has constantly been positioned by some news executives as a cornerstone to reestablishing the control they feel has been lost over their content. However, the reality is that publishers have more control even without ACAP than is commonly believed by some. In addition, ACAP currently provides no “DRM” or licensing mechanisms over news content. But the system does offer some ideas well worth considering. Below, a look at how it measures up against the current systems for controlling search engines. ACAP started development in 2006 and formally launched a year later with version 1.0 (see ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?). This year, in October, ACAP 1.1 was released and has been installed by over 1,250 publishers worldwide, says the organization, which is backed by the European … [Read more...] about ACAP Versus Robots.txt For Controlling Search Engines
After a year of discussions, ACAP — Automated Content Access Protocol — was released today as a sort of robots.txt 2.0 system for telling search engines what they can or can’t include in their listings. However, none of the major search engines support ACAP, and its future remains firmly one of "watch and see." Below, more about the how and why of ACAP. Let’s start with some history. ACAP got going in September 2006, backed by major European newspaper and publishing groups that in particular felt Google was using content without proper permissions and wanted a more flexible means to provide this than allowed by the long-standing robots.txt and meta robots standards. These two standards are documented at robotstxt.org, and ACAP has often referred to them as the "Robots Exclusion Protocol," or REP, though within the SEO world they’re generally known by their actual names. Robots.txt was born in 1994 as a way to block content on a server-wide basis; meta robots … [Read more...] about ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?
The Robots.txt Summit at Search Engine Strategies New York 2007 was the latest in a series of special sessions intended to open a dialog between search engine representatives and web site publishers. Past summits featured discussion of comment spam on blogs, indexing issues and redirects. The subject of this latest summit was the humble but terribly important robots.txt file. Danny Sullivan moderated, with panelists Keith Hogan, Director of Program Management, Search Technology, Ask.com; Sean Suchter, Director of Yahoo Search Technology, Yahoo Search; Dan Crow, Product Manager, Google; and Eytan Seidman, Senior Program Manager Lead, Live Search. The session was not about how to use the robots.txt file; rather, as Danny Sullivan explained, “We’re assuming you know how to use it and are frustrated with it. This is about how you want to see it evolve.” For a potentially dry and technical subject, the panel turned out to be quite … [Read more...] about Up Close & Personal With Robots.txt
Google, MSN, Yahoo and Ask have announced uniform support for sitemap submission across all search engines. Simply by adding the following line to your robots.txt file, the engines will know where your sitemap is located on the server and pick it up on their routine crawls:

Sitemap: http://www.example.com/sitemap.xml

Replace the URL above with the URL of your own sitemap index file. None of the announcements was specific, but judging from the example provided by Yahoo, this does appear to support sitemaps exported as an XML file, not just an HTML file. … [Read more...] about Google, Yahoo, MSN add SiteMap Auto-Discovery
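As an illustration of the auto-discovery side, Python's standard-library `urllib.robotparser` (Python 3.8+) exposes any `Sitemap:` lines a crawler finds in robots.txt. The sitemap URL below is the article's own placeholder example:

```python
from urllib import robotparser

# robots.txt with the sitemap auto-discovery line described in the article.
rules = """\
User-agent: *
Disallow:

Sitemap: http://www.example.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A crawler supporting auto-discovery reads the sitemap location(s) here.
print(rp.site_maps())  # ['http://www.example.com/sitemap.xml']
```

This is why the announcement mattered: instead of submitting a sitemap to each engine separately, one line in robots.txt lets any compliant crawler discover it on its routine pass.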