== Google caching issue ==
: ''Another solution that just occurred to me.  Brian, if you now have your robots.txt set up to exclude the [[AboutUsBot]], we could simply delete the pages for your domains, then in a couple of weeks, once Google has seen the deletions, we could have our bot re-create the [[NoBot]] version of the pages.  The only trick would be if they were re-created and again flagged as adult before Google caught up.  This possibility of re-flagging by the community exists with my first proposal above as well, but might be less likely with this second one, since the pages wouldn't actually exist.  Thoughts?'' [[User:TedErnst|TedErnst]] | <small>[[User talk:TedErnst|talk]]</small> 08:11, 5 November 2007 (PST)
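For reference, a minimal sketch of the kind of robots.txt entry Ted is referring to, assuming the AboutUs crawler identifies itself with the User-agent token <code>AboutUsBot</code> (that exact token is an assumption and would need to be checked against the bot's actual user-agent string):

<pre>
# Block only the AboutUs crawler from the whole site.
User-agent: AboutUsBot
Disallow: /

# All other crawlers (Googlebot etc.) keep normal access.
User-agent: *
Disallow:
</pre>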
 
Hello Ted,

Thank you very much for your response. I think the second option, deleting the pages, letting the crawlers notice their absence, and then recreating the [[NoBot]] versions of the pages, is the best solution. The only difficulty is knowing exactly when the crawlers have been and gone: it might take only a few weeks for crawlers like Googlebot to notice that the pages are missing, but it could also take months. If, as a non-dev, you could arrange this, that would be great. Once the devs get involved, I would suggest that they allow well-known crawlers such as Googlebot unfettered access to any page that existed before these changes are put in place, but continue to block them from any new adult-only pages created thereafter (if that is the general intention).
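If the devs do go this route, one common per-page way to keep well-known crawlers out of new adult-only pages, while leaving all existing pages crawlable, is a robots meta tag in the page's HTML head rather than a robots.txt rule. A minimal sketch of what such pages could emit, not necessarily how [[NoBot]] is actually implemented on AboutUs:

<pre>
<!-- Output only on pages flagged adult-only after the change takes effect;
     compliant crawlers will neither index the page nor follow its links. -->
<meta name="robots" content="noindex, nofollow" />
</pre>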
  
 
== spam filtering ==
 

Also see [[BugSquashing]] and [[FireFighting]].
