XML Sitemaps
The standard XML Sitemap file can use any name you choose and need not be stored at the site root (although that is a reasonable place to put it). The file itself must be a UTF-8 encoded text file, which means URLs that include certain special characters must use entity escaping, so that the URLs in the Sitemap can be parsed correctly by search engines. Sitemaps can be saved uncompressed and served as .xml files, or compressed in gzip format and served as .gz files. … [Read more...] about A Primer On How To Get The Most Out Of Sitemaps
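A minimal sketch of both points above, using Python's standard library (the URLs and filenames are illustrative): `xml.sax.saxutils.escape` handles the entity escaping of special characters such as `&`, and the same UTF-8 document can be written out either as a plain `.xml` file or as a gzip-compressed `.gz` file.

```python
import gzip
from xml.sax.saxutils import escape

# Hypothetical URL list; the '&' in the second URL must become &amp; in XML.
urls = ["https://example.com/", "https://example.com/search?q=a&b=c"]

entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

# Uncompressed variant, explicitly UTF-8 encoded.
with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)

# gzip-compressed variant with the .gz extension.
with gzip.open("sitemap.xml.gz", "wt", encoding="utf-8") as f:
    f.write(sitemap)
```

Either file can be referenced from robots.txt or submitted to a search engine's sitemap endpoint; the content is identical, only the transport encoding differs.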
Will the adoption of the Microdata format by the major search engines harm the development of the competing formats, and is the Web a little less rich because of it? Will a resource like schema.org make it easier for site owners to adopt and use a standard that they might not have otherwise used? Will search become better because site owners are making it easier for search engines to index content? Will this metadata approach benefit people who are more technically proficient and might not have any trouble implementing it, at the cost of indexing content that might be more relevant and meaningful but which doesn’t use microdata? … [Read more...] about Author Markup, Schema.org and Patents, Oh My!
So if your site is already indexed, and updates are already getting picked up regularly, you really don't need an XML sitemap. And I bet that if this company's customers knew how cheap and easy it is to generate their own sitemap, and understood the limitations of what a sitemap can do for them, at least some of them wouldn't have signed up for its services. … [Read more...] about XML Sitemaps Are Not All That
In 2014, Google published its Biperpedia paper, which describes how Google might create ontologies from query streams (sessions about specific topics) by finding candidate terms and then extracting data about them from the Web. At one time, search engines would do focused crawls of the Web starting at sources such as DMOZ, so that the index they were constructing covered pages from a wide range of categories. By using query stream information instead, they are in effect crowdsourcing the choice of resources to build ontologies about. The paper reports that Biperpedia enabled them to build ontologies larger than what they had developed through Freebase, which may be part of why Freebase was replaced by Wikidata. … [Read more...] about Schema, Structured Data, and Scattered Databases such as the World Wide Web
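A toy sketch of the query-stream idea, not the paper's actual pipeline: queries of the form "X of Y" suggest X as a candidate attribute, and counting how often each candidate recurs across the stream surfaces terms worth extracting data about. The query list here is invented for illustration.

```python
import re
from collections import Counter

# Hypothetical query stream; in practice these would come from real sessions.
queries = [
    "population of france",
    "capital of france",
    "population of brazil",
    "gdp of brazil",
    "capital of japan",
]

# "A of E" pattern: treat A as a candidate attribute asked about entity E.
pattern = re.compile(r"^(\w+) of (\w+)$")
attributes = Counter()
for q in queries:
    m = pattern.match(q)
    if m:
        attributes[m.group(1)] += 1

# Frequently repeated candidates would then be verified against Web text.
print(attributes.most_common())  # population and capital each appear twice
```

The real system ranks and filters candidates far more carefully, but the shape of the idea is the same: the queries people issue tell you which attributes an ontology should cover.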
Google ignores the priority attribute in XML sitemaps but does pay attention to lastmod, according to John Mueller. Google determines the priority of your pages itself, probably from popularity and authority. Lastmod, however, indicates when a URL last changed, which is genuinely useful to Google. … [Read more...] about The Best XML Sitemap Tools, Tips, and Tricks
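Put into practice, that advice means emitting a lastmod date (W3C Datetime format; YYYY-MM-DD is sufficient) for each URL and simply omitting priority. A small sketch with made-up pages and dates:

```python
from datetime import date

# Hypothetical page records: URL plus the date its content last changed.
pages = [
    ("https://example.com/", date(2024, 1, 15)),
    ("https://example.com/blog/", date(2024, 3, 2)),
]

entries = []
for loc, changed in pages:
    # lastmod in W3C Datetime format; priority is omitted since Google ignores it.
    entries.append(
        f"  <url>\n    <loc>{loc}</loc>\n"
        f"    <lastmod>{changed.isoformat()}</lastmod>\n  </url>"
    )
print("\n".join(entries))
```

The one caveat implied above: lastmod only stays interesting to Google if it is accurate, so it should reflect real content changes rather than the time the sitemap was regenerated.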