In the battle between search engines and some mainstream news publishers, ACAP has been lurking for several years. ACAP (the Automated Content Access Protocol) has consistently been positioned by some news executives as a cornerstone for reestablishing the control they feel they have lost over their content. The reality, however, is that publishers already have more control without ACAP than is commonly believed. In addition, ACAP currently provides no “DRM” or licensing mechanisms for news content. Still, the system does offer some ideas well worth considering. Below is a look at how it measures up against the current systems for controlling search engines.

ACAP started development in 2006 and formally launched a year later with version 1.0 (see ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?). This year, in October, ACAP 1.1 was released; it has been installed by over 1,250 publishers worldwide, says the organization, which is backed by the European Publishers Council, the World Association of Newspapers and the International Publishers Association.

If that sounds pretty impressive, hang on. I’ll provide a reality check in a moment. But first, let’s pump ACAP up a bit more. Remember back in July, when the Hamburg Declaration was…
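Since the comparison here is ACAP versus the existing robots.txt system, a minimal robots.txt sketch helps illustrate the control publishers already have without ACAP. The crawler name and paths below are illustrative assumptions, not taken from the article; ACAP's own directives extend this same file format, so consult the ACAP 1.1 specification for its exact field names.

```
# Illustrative robots.txt only; paths and crawler names are hypothetical.

# Keep all crawlers out of a paid-archive section while leaving the rest crawlable
User-agent: *
Disallow: /archive/

# Block one specific crawler from the entire site
User-agent: Googlebot
Disallow: /
```

This is the baseline ACAP is measured against: a publisher can already exclude crawlers site-wide or per section, but robots.txt expresses only crawl permissions, not the usage or licensing terms ACAP's backers want.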