Good day,
I wonder where I can submit a MediaWiki bug. This is the scenario (sadly, MediaWiki doesn't have its own PHP or code log):
Problem: no matter what password I set, I cannot log in to the wiki. I always get:
Login error: You have not specified a valid user name
Note that I am respecting case sensitivity, e.g. WikiSysop rather than wikisysop.
Mediawiki 1.14.0
wiki:wiki(root)/#/startserv
Sun Java System Web Server 7.0U4 B12/02/2008 05:38
info: FCGI1000: Sun Java System Web Server 7.0U4 FastCGI NSAPI Plugin B12/02/2008 05:38
info: CORE5076: Using [Java HotSpot(TM) 64-Bit Server VM, Version 1.5.0_18] from [Sun Microsystems Inc.]
info: HTTP3072: http-listener-1: http://wiki:80 ready to accept requests
info: CORE3274: successful server startup
wiki:wiki(root)/#uname -a
SunOS wiki 5.10 Generic_138888-08 sun4u sparc SUNW,Sun-Fire-V240
wiki:wiki(root)/#
mysql> update user set user_password=md5(concat(user_id,'-',md5('temp'))) where user_name = "WikiSysop";
Query OK, 1 row affected (0.06 sec)
Rows matched: 1 Changed: 1 Warnings: 0
mysql>
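(For reference, the same hash formula as a small Python sketch, just to double-check what the UPDATE wrote. This assumes the row still uses MediaWiki's legacy unprefixed md5(user_id-md5(password)) format; newer rows use the salted ':B:' format instead. Note also that this particular error message suggests the login form is rejecting the user name itself, before the password is ever checked.)

import hashlib

def legacy_hash(user_id, password):
    # md5(user_id . '-' . md5(password)) -- the legacy format assumed above
    inner = hashlib.md5(password.encode('utf-8')).hexdigest()
    return hashlib.md5(('%d-%s' % (user_id, inner)).encode('utf-8')).hexdigest()

# Compare against: SELECT user_id, user_password FROM user WHERE user_name = 'WikiSysop';
print(legacy_hash(1, 'temp'))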
And it still does not work; I always get the same error. What is the process for bug submission?
The previous page displays fine; everything works fine except the login process.
Thanks in advance,
fabio.
[cc'd back to wikitech-l]
2009/6/8 Tim Starling <tstarling(a)wikimedia.org>:
> It's been discussed since OggHandler was invented in 2007, and I've
> always been in favour of it. But the code hasn't materialised, despite
> a Google Summer of Code project that came and went and was meant to
> implement a transcoding queue. The transcoding queue project was meant
> to allow transformations in quality and size, but it would also allow
> format changes without much trouble.
Ahhh, that's fantastic, so it is just a Simple Matter of Programming :-D
(I'm tempted to bodge something together myself, despite my low
opinion of my own coding abilities ;-) )
Start simple. "Upload your phone and camera video files! We'll
transcode them into Theora and store them." Pick suitable (tweakable)
defaults. Get it doing that one job. Then we can think about
size/quality transformations later. Sound like a vague plan?
Bottlenecks: 1. CPU to transcode with. 2. Disk space for queued video.
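Something vaguely like this for that one job. A rough sketch only, assuming ffmpeg2theora is installed; the flags and defaults here are illustrative, not an existing MediaWiki API:

import subprocess

def transcode_to_theora(src, dst, video_quality=5, audio_quality=1):
    # Shell out to ffmpeg2theora, which writes Theora video + Vorbis audio into an Ogg container.
    subprocess.check_call([
        'ffmpeg2theora', src,
        '-o', dst,
        '--videoquality', str(video_quality),
        '--audioquality', str(audio_quality),
    ])

# e.g. pull the next item off the queue and run:
transcode_to_theora('upload.3gp', 'upload.ogv')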
- d.
It would be a simple matter of programming to have something that
allows upload of encumbered video and audio formats and re-encodes them
as Ogg Theora or Ogg Vorbis. It would greatly add to how much stuff we
get, as it would save the user the trouble of re-encoding, installing
Firefogg, or whatever.
So why don't we do this? Has it been officially assessed as a legal
risk * (and I mean more than people saying it might be on a mailing
list **), has no-one really bothered, or what?
* until the Supreme Court uses In re Bilski to drive software
patents into the ocean; fingers crossed.
** though I fully expect people will now do so anyway
- d.
Hello,
I see I've created quite a stir, but so far nothing really useful
has popped up. :-(
But I did see this one from Neil:
> Yes, modifying the http://stats.grok.se/ systems looks like the way to go.
For me it doesn't really seem to be, since it uses an extremely
dumbed-down version of the input, which only contains page views
and [unreliable] byte counters. Most probably it would require large
rewrites, and a magical new data source.
> What do people actually want to see from the traffic data? Do they want
> referrers, anonymized user trails, or what?
Are you old enough to remember stats.wikipedia.org? As far as I
remember, it originally ran webalizer, then something else, then
nothing. If you check any webalizer stats you'll see what's in them. We
are using, or were using until our nice fellow editors broke it,
awstats, which provides basically the same thing with more caching.
The most used and useful stats are page views (daily and hourly
breakdowns are pretty useful too), referrers, visitor domain and
provider stats, OS and browser stats, screen resolution stats, bot
activity stats, and visitor duration and depth, among probably others.
At a brief glance I could replicate the grok.se stats easily, since
they seem to be built from http://dammit.lt/wikistats/, but that data
is completely useless for anything beyond page hit counts.
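(For instance, a rough Python sketch against those dumps, assuming the usual whitespace-separated "project page_title count bytes" line format, shows about all they contain: hit counts and nothing more.)

import sys
from collections import defaultdict

def aggregate(lines, project='hu'):
    # Sum per-page view counts for one project from a pagecounts dump on stdin.
    totals = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) == 4 and parts[0] == project:
            totals[parts[1]] += int(parts[2])
    return totals

if __name__ == '__main__':
    top = sorted(aggregate(sys.stdin).items(), key=lambda kv: -kv[1])[:20]
    for title, hits in top:
        print(hits, title)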
Is there a possibility to write code which processes the raw Squid data?
Who do I have to bribe? :-/
--
byte-byte,
grin
Hello, my name is Gerardo Cabero, from Argentina.
I have a GSoC 2009 project, with Michael Dale as my mentor.
I am now in the process of understanding the code, as well as investigating.
Regards, Gerardo Cabero
Keeping well-meaning admins from putting Google web bugs in the
JavaScript is a game of whack-a-mole.
Are any technical workarounds feasible? If we can't block the
loading of external sites entirely (I understand hu:wp uses a web bug
that isn't Google's), could we perhaps at least list the sites
somewhere centrally viewable?
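One rough idea for the listing part: a sketch that pulls each wiki's site-wide MediaWiki:Common.js via action=raw and prints the external hosts it references. The wiki list, path, and regex here are illustrative only:

import re
import urllib.request

# Illustrative wiki list; in practice this would iterate over all Wikimedia wikis.
WIKIS = ['https://hu.wikipedia.org', 'https://en.wikipedia.org']

def external_hosts(wiki):
    url = wiki + '/w/index.php?title=MediaWiki:Common.js&action=raw'
    js = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
    hosts = set(re.findall(r'https?://([\w.-]+)', js))
    # Treat Wikimedia-operated domains as internal; anything else is worth listing centrally.
    internal = ('wikipedia.org', 'wikimedia.org')
    return sorted(h for h in hosts if not h.endswith(internal))

for wiki in WIKIS:
    print(wiki, external_hosts(wiki))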
- d.
It would break the repo, yes.
Unless the hotlinking blocker allowed the MediaWiki
user agent. Easy to bypass, though.
-Chad
On Jun 7, 2009 11:32 AM, "David Gerard" <dgerard(a)gmail.com> wrote:
2009/6/7 Gerard Meijssen <gerard.meijssen(a)gmail.com>:
> Is this discussion about policy relevant to this mailing list ?
Somewhat:
If we officially don't like hotlinking, is it reasonable to disable
hotlinking from Wikimedia sites? If so, can it be done without
breaking remote file repo use of Commons?
- d.
Hello,
I need a little help understanding the deployment policy used on
Wikipedia, in order to have a better picture of the relation between
the different types of requests in Bugzilla and the code added to
ro.wp following those requests.
I read at http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/RELEASE-NOTES?view=m…
that "MediaWiki is now using a "continuous integration" development
model with quarterly snapshot releases. The latest development code is
always kept "ready to run", and in fact runs our own sites on
Wikipedia.".
Indeed, when googling for the blogs of some Wikimedia engineers, you
can see that at certain times the latest code from trunk is pushed
onto the production servers. On the other hand, when an extension is
activated, the latest stable version is used.
Is this the way it really happens? If so, why are there two
different policies? Which of the two do you consider best?
Thanks,
Strainu