After a year of discussions, ACAP — Automated Content Access Protocol — was released today as a sort of robots.txt 2.0 system for telling search engines what they can or can’t include in their listings. However, none of the major search engines support ACAP, and its future remains firmly one of "watch and see." Below, more about the how and why of ACAP. Let’s start with some history. ACAP got going in September 2006, backed by major European newspaper and publishing groups that felt in particular that Google was using content without proper permission, and that wanted a more flexible means of granting it than the long-standing robots.txt and meta robots standards allow. Both standards are documented at robotstxt.org, and ACAP has often referred to them as the "Robots Exclusion Protocol," or REP, though within the SEO world they’re generally known by their actual names. Robots.txt was born in 1994 as a way to block content on a server-wide basis; meta robots … [Read more...] about ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?
Understanding the difference between the robots.txt file and the robots meta tag is critical for search engine optimization and security. It can also have a profound impact on the privacy of your website and customers. The first thing to know is what robots.txt files and robots meta tags are. Robots.txt Robots.txt is a file you place in your website’s top-level directory, the same folder a static homepage would go in. Inside robots.txt, you can instruct search engines not to crawl content by disallowing file names or directories. A robots.txt directive has two parts: the user-agent and one or more disallow instructions. The user-agent specifies one or all web crawlers, or spiders. When we think of web crawlers we tend to think of Google and Bing; however, a spider can come from anywhere, not just search engines, and many of them are crawling the Internet. Here is a simple robots.txt file telling all web crawlers that it is okay to spider every page: `User-agent: *` `Disallow:` … [Read more...] about Have You Considered Privacy Issues When Using Robots.txt & The Robots Meta Tag?
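The user-agent/disallow behavior described above can be checked with Python's standard `urllib.robotparser` module; a minimal sketch, with hypothetical example.com URLs:

```python
# Check robots.txt rules with Python's standard robots.txt parser.
from urllib.robotparser import RobotFileParser

# One user-agent group ("*" = all crawlers) with one disallow rule.
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A path under /private/ is blocked for every crawler...
print(parser.can_fetch("*", "https://example.com/private/report.html"))  # False
# ...while everything else remains crawlable.
print(parser.can_fetch("*", "https://example.com/index.html"))  # True
```

Note that an empty `Disallow:` line, as in the article's example, blocks nothing, so all pages stay crawlable.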
In the battle between search engines and some mainstream news publishers, ACAP has been lurking for several years. ACAP — the Automated Content Access Protocol — has constantly been positioned by some news executives as a cornerstone to reestablishing the control they feel has been lost over their content. The reality, however, is that publishers already have more control than is commonly believed, even without ACAP. In addition, ACAP currently provides no “DRM” or licensing mechanisms over news content. But the system does offer some ideas well worth considering. Below, a look at how it measures up against the current systems for controlling search engines. ACAP started development in 2006 and formally launched a year later with version 1.0 (see ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?). This year, in October, ACAP 1.1 was released and has been installed by over 1,250 publishers worldwide, says the organization, which is backed by the European … [Read more...] about ACAP Versus Robots.txt For Controlling Search Engines
The Robots Exclusion Protocol (REP) is not exactly a complicated protocol and its uses are fairly limited, and thus it’s usually given short shrift by SEOs. Yet there’s a lot more to it than you might think. Robots.txt has been with us for over 14 years, but how many of us knew that in addition to the disallow directive there’s a noindex directive that Googlebot obeys? That noindexed pages don’t end up in the index but disallowed pages do, and the latter can show up in the search results (albeit with less information since the spiders can’t see the page content)? That disallowed pages still accumulate PageRank? That robots.txt can accept a limited form of pattern matching? That, because of that last feature, you can selectively disallow not just directories but also particular filetypes (well, file extensions to be more exact)? That a robots.txt disallowed page can’t be accessed by the spiders, so they can’t read and obey a meta robots tag … [Read more...] about A Deeper Look At Robots.txt
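The limited pattern matching mentioned above (as Google documents it, `*` matches any run of characters and a trailing `$` anchors the pattern to the end of the URL path) can be sketched in a few lines of Python; the function name and the sample patterns are illustrative, not part of any standard library:

```python
# Hedged sketch of robots.txt wildcard matching as the major engines
# extend it: '*' matches any characters, '$' anchors the end, and a
# plain pattern is a prefix match.
import re

def disallow_matches(pattern: str, path: str) -> bool:
    """Return True if a Disallow pattern blocks the given URL path."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn '*' back into '.*'.
    regex = "^" + re.escape(pattern).replace(r"\*", ".*")
    if anchored:
        regex += "$"
    # re.match anchors only at the start, giving prefix semantics.
    return re.match(regex, path) is not None

# 'Disallow: /*.pdf$' blocks any URL path ending in .pdf...
print(disallow_matches("/*.pdf$", "/reports/q3.pdf"))       # True
# ...but not one that merely contains .pdf mid-path.
print(disallow_matches("/*.pdf$", "/reports/q3.pdf.html"))  # False
```

This is how a single extension-based rule can disallow a whole filetype, per the excerpt, without listing individual directories.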
Today, Google, Yahoo!, and Microsoft have come together to post details of how each of them supports robots.txt and the robots meta tag. While their posts use terms like “collaboration” and “working together,” they haven’t joined together to implement a new standard (as they did with sitemaps.org). Rather, they are simply making a joint stand in messaging that robots.txt is the standard way of blocking search engine robot access to web sites. They have identified a core set of robots.txt and robots meta tag directives that all three engines support: Google and Yahoo! already supported and documented each of the core directives, and Microsoft supported most of them before this announcement. In their posts, they also list the directives they support that may not be supported by the other engines. For robots.txt, they all support: Disallow, Allow, use of wildcards, and Sitemap location. For robots meta tags, they all support: noindex, nofollow, noarchive, nosnippet … [Read more...] about Yahoo!, Google, Microsoft Clarify Robots.txt Support
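The meta tag directives in that shared core go in a page’s `<head>`; a minimal fragment combining all four (any subset can be used on its own):

```html
<!-- Keep this page out of the index, don't follow its links,
     and suppress cached copies and snippets in results. -->
<meta name="robots" content="noindex, nofollow, noarchive, nosnippet">
```

Unlike robots.txt, which is one file for the whole site, these directives are set page by page, and the crawler must be allowed to fetch the page to see them.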