Category:ErrorLog
Revision as of 23:25, 8 November 2007
Also see BugSquashing and FireFighting
This page is for reporting errors encountered while using AboutUs. Please include as much information as you can, including the page name you were trying to access, what you were trying to do, and the date and time.
- CaseSpace: Fix Template Forwarding
- CaseSpace: Fix WhatLinksHere
- CaseSpace: Fix Page Move
- CaseSpace: Fix International Character Handling
- EMail: Tweak to Allow Duplicate Confirmation
Contents
- 1 search ads formatting
- 2 UberPatrol Rollback Bug
- 3 Column for Ads by Google is too narrow
- 4 Incorrect whois information
- 5 search and redirects
- 6 Favicon on image server is old
- 7 Editing in Opera
- 8 AddThis social bookmarking -- Facebook posts lose page thumbnail images
- 9 Rollback
- 10 www not working
- 11 CaseSpace: SimInistries.org
- 12 CaseSpace: Keith's user image
- 13 CaseSpace: Arguments don't work across casespace redirect
- 14 AdultContentPolicy splash page
- 15 Unable to add Math.cz , Nysun.com or StoraEnso.com
- 16 Cross Site Scripting Vulnerability!
- 17 Subdomains
- 18 Toolbar addButton function
- 19 Flagging crash
- 20 AdultSplash shouldn't include the leading /
- 21 Weird domain box cruft
- 22 recaptcha problem (moved from ConcernsPage)
- 23 bug in the domain area
- 24 Firefox/2.0.0.8 on the Mac doesn't seem to work with AboutUs
- 25 Google caching issue
- 26 spam filtering
- 27 Registrar broken link
search ads formatting
The 3rd and 4th ads are smooshed together a bit. TedErnst | talk
UberPatrol Rollback Bug
One-step rollback doesn't work. [10:01pm] UmarSheikh: I think that the trouble with UberPatrol is that we need to remove the -1 from (mysize - index_into - 1) to make UberPatrol work
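For illustration only, here is a minimal PHP sketch of the kind of off-by-one being suggested; the variable names and the assumption that index_into counts back from the newest revision (starting at 1) are hypothetical, not taken from the actual UberPatrol code.

 <?php
 // Hypothetical sketch of the suspected off-by-one; not AboutUs's actual code.
 // Assume $revisions is ordered oldest-first and $indexInto counts back from
 // the newest revision, starting at 1 (1 = newest).
 $revisions = array( 'rev A (oldest)', 'rev B', 'rev C (newest)' );
 $indexInto = 1;
 $mySize    = count( $revisions );

 $buggy = $revisions[ $mySize - $indexInto - 1 ];  // picks 'rev B', one revision too old
 $fixed = $revisions[ $mySize - $indexInto ];      // picks 'rev C (newest)', as intended
 echo "buggy: $buggy\nfixed: $fixed\n";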
Column for Ads by Google is too narrow
In the column of "Ads by Google", the width is sometimes not enough for the URL to fit, with the result that the URL appears to be truncated. This happens with both the old skin and the new one. --Eohippus 08:31, 5 September 2007 (PDT)
Incorrect whois information
I just got AboutUsBot to create Etfo.org, and the address shown is incorrect. Either AboutUsBot is using an out-of-date Whois database or something else has gone wrong. The correct address can be found at http://whois.domaintools.com/etfo.org or at Etfo.on.ca. --Eohippus 13:05, 13 September 2007 (PDT)
The Whois information we have is a bit out of date, and we have a WhoisRefresh project to address it. Jason Parmer 12:00, 26 September 2007 (PDT)
search and redirects
Search seems to be different now, with just google and no mediawiki search? Anyway, the point is redirects don't seem to work from search. For example, I searched for "Wiki Way" and instead of going to TheWikiWay, I went to WikiWay which is a redirect to TheWikiWay. Can this be fixed? TedErnst | talk 02:00, 25 September 2007 (PDT)
- The mediawiki search is showing up for me (in Firefox on a Mac), but way at the bottom of my screen, and only after a bunch of whitespace. This must be a bug in the AdSense for Search implementation. -- TakKendrick
Favicon on image server is old
* for example ~~ MarkDilley
Fixed -Stephen Judkins 11:53, 24 October 2007 (PDT)
Editing in Opera
Editing pages using Opera has, as of last Thursday or Friday, given the following problems:
- Show Preview doesn't work
- Clicking on Save Page doesn't seem to do anything (one doesn't leave the edit box), but refreshing the page shows that it has been updated.
- FatimaRaja 14:44, 1 October 2007 (PDT)
AddThis social bookmarking -- Facebook posts lose page thumbnail images
I was excited to see the AddThis widget and could think of a bunch of uses for posting AboutUs pages to Facebook. But there's a problem: if you use the domain tools thumbnail as the image for the Facebook posted item, it turns into a tiny thumbnail of a domain tools error message. Here's the sequence: [1] [2] [3]
I've tried it with a few AboutUs pages, getting the same result. --Brian Kerr
Hmm, it seems to be working fine for me. Can you still reproduce this? -Stephen Judkins 12:30, 31 October 2007 (PDT)
Rollback
note left at UberPatrol ~~ MarkDilley
www not working
I thought that at one time we had the search bar ignore the "www." when looking for a site or creating one: for example, a search for www.ODC.COM should get ODC.COM. MarkDilley
Should be resolved. -75.148.54.233 15:27, 24 October 2007 (PDT)
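A minimal sketch of the normalization being described, assuming the search input has a leading "www." stripped before the page lookup; this is a hypothetical illustration, not the actual AboutUs search code.

 <?php
 // Hypothetical sketch: normalize a searched domain by stripping a leading "www.".
 function normalizeDomainQuery( $query ) {
     $query = trim( $query );
     // Case-insensitively drop a single leading "www." label.
     return preg_replace( '/^www\./i', '', $query );
 }

 echo normalizeDomainQuery( 'www.ODC.COM' ) . "\n";  // prints "ODC.COM"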
CaseSpace: SimInistries.org
SimInistries.org should be SIMinistries.org. When I try to move this to the correct name, I get a database error. TakKendrick | Talk 23:49, 15 October 2007 (PDT)
Fixed the bug. Thanks for finding it! -Stephen Judkins 11:18, 19 October 2007 (PDT)
CaseSpace: Keith's user image
The site dies with the CaseSpace DB error when Keith tries to upload his user portrait.
CaseSpace: Arguments don't work across casespace redirect
This means that when you go to the diff link on something that has a different name, you can't get to the diff to be able to patrol it.
I think this has been resolved. Keep an eye on it if you experience any further problems. -75.148.54.233 17:59, 19 October 2007 (PDT)
AdultContentPolicy splash page
Because the AdultContentPolicy splash doesn't pass along arguments, if pages are opened in tabs for patrolling before you have visited the splash screen, the tabs you have already opened are worthless.
Unable to add Math.cz , Nysun.com or StoraEnso.com
I am unable to automatically add the sites Math.cz, Nysun.com, or StoraEnso.com. Have these sites been intentionally blocked? --Eohippus 16:19, 19 October 2007 (PDT)
- And AboutUsBot also won't create Pdx.com, which is in your back yard. --Eohippus 15:25, 8 November 2007 (PST)
Cross Site Scripting Vulnerability!
http://www.xssed.com/mirror/19588/
Subdomains
I noticed that the link at the top of an article to the website itself does not handle subdomains correctly. For example, instead of going to http://midnr.startspot.nl/ the link goes to http://www.midnr.startspot.nl/. Some servers seem to "detect" the www and then redirect to the domain instead of the subdomain. midnr.startspot.nl is an actual example of this.
I think this has been addressed in most cases. Unfortunately, it's not going to work 100% of the time given the data we have. -Stephen Judkins 13:10, 24 October 2007 (PDT)
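To illustrate why this cannot be fully reliable, here is a hypothetical heuristic (a sketch only, not AboutUs's actual logic): prepend "www." only when the hostname looks like a bare two-label domain. Multi-label public suffixes such as .co.uk already defeat it, which is why it won't work 100% of the time.

 <?php
 // Hypothetical heuristic, not the actual AboutUs implementation.
 function siteLink( $host ) {
     $labels = explode( '.', $host );
     // Only bare two-label hosts get the "www." prefix; anything that already
     // looks like a subdomain is linked as-is.
     if ( count( $labels ) === 2 ) {
         return 'http://www.' . $host . '/';
     }
     return 'http://' . $host . '/';
 }

 echo siteLink( 'odc.com' ) . "\n";             // http://www.odc.com/
 echo siteLink( 'midnr.startspot.nl' ) . "\n";  // http://midnr.startspot.nl/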
Toolbar addButton function
It seems that someone, or something, got rid of the addButton function necessary to generate the edit page toolbar. Firefox's debugger says addButton is not defined.
Should be fixed -Stephen Judkins 17:20, 23 October 2007 (PDT)
Flagging crash
I flagged a site and on Special:MarkAsAdult got this error:
 exception 'Exception' with message 'Web service did not respond.' in /opt/aboutus/wiki/extensions/AboutUsWebServices/AboutUsWebServices.php:26
 Stack trace:
 #0 /opt/aboutus/wiki/extensions/AboutUsAdultContent/AdultContent.php(64): auDispatch('add_adult_tag', Array)
 #1 /opt/aboutus/wiki/includes/SpecialPage.php(628): auMarkAsAdult(NULL, Object(SpecialPage))
 #2 /opt/aboutus/wiki/includes/SpecialPage.php(434): SpecialPage->execute(NULL)
 #3 /opt/aboutus/wiki/includes/Wiki.php(196): SpecialPage::executePath(Object(Title))
 #4 /opt/aboutus/wiki/includes/Wiki.php(45): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
 #5 /opt/aboutus/wiki/index.php(89): MediaWiki->initialize(Object(Title), Object(OutputPage), Object(User), Object(WebRequest))
 #6 /opt/aboutus/wiki/rewrite.php(340): include('/opt/aboutus/wi...')
 #7 {main}
-- TedErnst | talk 14:21, 24 October 2007 (PDT)
I went back to the page, reloaded, saw that it wasn't flagged, tried again, and got the same error. TedErnst | talk 14:21, 24 October 2007 (PDT)
I couldn't reproduce the problem (flagging seems to work fine for me), but I restarted Compost anyway. Let me know if the problem reappears. -Stephen Judkins 15:58, 24 October 2007 (PDT)
AdultSplash shouldn't include the leading /
The page name includes the leading slash when you hit the AdultSplash page. Perhaps AdultSplash should use Title::getPrefixedText() rather than Title::getLocalURL() (or whatever it is using)?
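For reference, a minimal sketch of the difference between the two MediaWiki Title methods mentioned above; it assumes code running inside MediaWiki, and the page name is a made-up example.

 <?php
 // Illustrative only; assumes a MediaWiki environment with short URLs configured.
 $title = Title::newFromText( 'Example.com' );

 $url  = $title->getLocalURL();      // something like "/Example.com" -- keeps the leading slash
 $text = $title->getPrefixedText();  // "Example.com" -- plain page name, no slash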
Weird domain box cruft
Since the NewSkin we have had this issue; look in the domain box below the diff: [4] ~~ MarkDilley
Should be resolved. -Stephen Judkins 12:05, 30 October 2007 (PDT)
recaptcha problem (moved from ConcernsPage)
I would be logged in if I could "join". Unfortunately, the required image in the box was not visible. If you have system requirements, it would save frustration if such requirements were stated. --- I tried using both Firefox and Safari, but the images did not show up in either browser. —The preceding unsigned comment was added by 66.235.26.191 (talk • contribs) .
bug in the domain area
The registrar badge is not fully working; example: SeoParking.com - see the "view whois record" link. ~~ MarkDilley
Firefox/2.0.0.8 on the Mac doesn't seem to work with AboutUs
http://www.aboutus.org and almost every other AboutUs page displays only a null page from Firefox/2.0.0.8 or Firefox/2.0.0.9 on my Macintosh G5 running OS X Version 10.4.10
Settings: Mozilla/5.0 (Macintosh; U; PPC Mac OS X Mach-O; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9
http://www.aboutus.org/cbc.ca redirects correctly to http://www.aboutus.org/CBC.ca, but that is another null page.
http://www.aboutus.org/robots.txt displays correctly, though.
-Iapetus 11:46, 2 November 2007 (PDT)
- Until yesterday, I was also using 2.0.0.8 and OS X 10.4.10. Everything seemed fine. Ugh. Today I'm using 2.0.0.9 and 10.5.0 and all also seems well. Iapetus, any other sites giving problems? I wonder how we track this down. TedErnst | talk 20:21, 2 November 2007 (PDT)
Google caching issue
- Moved from ConcernsPage. TedErnst | talk 07:49, 3 November 2007 (PDT)
Some of our sites are adult. As a result, old whois details that you published have been cached by various search engines such as google. Since you introduced the policy of blocking access to the pages if they are about adult content, google et al now cache the old whois details, because your site returns a 302 temporary redirect to a warning and login page. There is thus no way for the information to be cleared from google's cache as they will always have the old version stored since you deny them access to the new. Please fix this (and not just for google, but for any major search engine). —The preceding unsigned comment was added by Bcase (talk • contribs) .
- Two possible solutions. If you do not want adult-content domain pages to show up on Google, add all the adult-content domain pages to your robots.txt file, add robot-exclusion code to all the adult-content domain pages, or send a fake 401.
If you want the current versions of adult-content domain pages to be indexed, then treat Googlebot and other robots as if they were already logged in and had clicked the "view adult content" button. --Iapetus 10:47, 4 November 2007 (PST)
Iapetus - who is your comment directed at?
I, as owner of the adult sites, have no way of changing aboutus.org's http response codes to google's crawlers. It is up to the admin of aboutus.org to allow google to crawl and index the latest versions of the page, given that they happily allowed them to crawl the old versions of the page in the first place, causing google to cache details which we really didn't want exposed in this way.
There are only two solutions. Either:
a) allow known crawlers to index all aboutus.org pages, whether they are about adult sites or not (thus allowing the crawler's cache to be updated to remove any old information), or
b) delete the pages, which would cause the original cached page to be dropped from the index.
The current behaviour of returning a 302 temporary redirect basically tells the crawler that the pages have not changed since they last saw them (which means they indefinitely cache the old data), and to ignore the "login form", since it is a "temporary" redirect (HTTP 302). Returning a 401 provides no guarantee that the old page would be dropped from a crawler's cache. Only a 404 would be close to guaranteeing that.
I would appreciate it if someone from aboutus.org would respond to this. We have over 400 domains affected by this issue, and it is causing significant stress for a number of people in our team. Please, please please do something about it. —The preceding unsigned comment was added by Bcase (talk • contribs) .
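To make the status-code point above concrete, here is a hypothetical PHP sketch (not a description of how AboutUs actually serves these pages): a 302 redirect leaves the previously crawled copy in place, while a 404 or 410 tells crawlers to drop it.

 <?php
 // Hypothetical sketch of the behaviours discussed above; not AboutUs code.
 // $pageWasDeleted and $isAdult are assumed flags, for illustration only.
 $pageWasDeleted = false;
 $isAdult        = true;

 if ( $pageWasDeleted ) {
     // A 410 (or 404) tells crawlers the page is gone, so cached copies get dropped.
     header( 'HTTP/1.1 410 Gone' );
     exit;
 }
 if ( $isAdult ) {
     // What the report describes: a temporary redirect to the splash page.
     // Crawlers keep the previously cached version of the original URL.
     header( 'Location: /AdultSplash', true, 302 );
     exit;
 }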
- I work for AboutUs. I'm on the community side and not the dev side. I wonder if removing the adult flag for a couple of weeks would allow google to catch up? And then put the flag back? We can also work with dev for a longer-term solution, but does this work as a temp solution, something we can work on with the tools we (non-dev) have? TedErnst | talk 08:00, 5 November 2007 (PST)
- Another solution that just occurred to me. Brian, if you now have your robots.txt set up to exclude the AboutUsBot, we could simply delete the pages for your domains, then in a couple of weeks once Google has seen the deletions, we could have our bot re-create the NoBot version of the pages. The only trick would be if they were re-created and again flagged as adult before google caught up. This possibility for re-flagging by the community exists with my first proposal above as well, but might be less likely with this second one, since the pages wouldn't actually exist. Thoughts? TedErnst | talk 08:11, 5 November 2007 (PST)
Hello Ted,
Thank you very much for your response. I think the second option, of deleting the pages, allowing crawlers to notice their absence, then recreating the NoBot versions of the pages, is the best solution. The only difficulty is knowing quite when the crawlers have been and gone - it's possible it might only take a few weeks for crawlers like Googlebot to notice the absence of the pages, but then again it might take months. If, as a non-dev, you could arrange this, that would be great. Once the devs get involved, I would suggest that they allow well-known crawlers such as Googlebot unfettered access to any page which existed before these changes are put in place, but continue to block them from any new adult-only pages created thereafter (if that is the general intention). —The preceding unsigned comment was added by Bcase (talk • contribs) .
- Brian, I see you and Kristina already are working on this. She sent a list. She's already done A-F and the rest of us will help with the rest. Should be done in the next 48 hours. TedErnst | talk 15:57, 5 November 2007 (PST)
- See also: Solve Adult Content 302 Issue TedErnst | talk 11:22, 7 November 2007 (PST)
spam filtering
I cannot RemoveThumbnail on DierenSex-AnimalSex.nl. ~~ MarkDilley
Registrar broken link
At Plas2fuel.com, see the "view whois record" link. ~~ MarkDilley