Erik wrote:
>Bomis does other stuff which you are not aware of, Mav :-)
>
Good point - the only time I've ever tried to go to Bomis.com was at work, but
I was blocked by my employer's censorware. So there may very well be many
other Bomis-owned websites I'm not aware of.
--mav
Since EL content is under the GFDL just like the Spanish-language wiki,
is there maybe a chance that we could do a mass import of all the
articles they have that we do not? I mean, there is no reason not to,
and it could only help the Spanish-language wiki.
Or is there a reason they might get mad and the talks of rejoining
will die? They seem to have been pretty fruitless up until now anyway....
Lightning
> The whole backup process is automated, but I start the script manually
> because we're short on disk space and I prefer to keep an eye on it.
I'm confident Jimbo would consider adding disk space a minor investment,
provided we don't use anything fancy (RAID, SCSI) (Jimbo, am I right?).
I guess bandwidth costs are the main expense, and several orders of
magnitude larger.
I know that 18 days without a backup is a rare event, but even losing 6
days of edits, uploads and discussions would give me the shivers. Are
they stored on a different disk, by the way? If a backup is a matter of
minutes per database, then the benefits of a scheduled daily outage far
exceed the costs; it will even save some bandwidth ;)
Backups are an inconvenience until you need them.
Erik Zachte
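A scheduled daily dump of the kind discussed above could be driven by a short script along these lines. This is a minimal sketch, not the actual backup script: the database names are hypothetical, and the use of mysqldump with `--result-file` is my assumption about how the per-database dumps would be produced.

```python
import datetime
import subprocess

DATABASES = ["enwiki", "dewiki"]  # hypothetical database names

def dump_command(db, day=None):
    """Build a mysqldump argv that writes one dated .sql file per database."""
    day = day or datetime.date.today().isoformat()
    return ["mysqldump", db, f"--result-file={db}-{day}.sql"]

if __name__ == "__main__":
    # One dump per database; each is "a matter of minutes", so a
    # scheduled daily outage stays short.
    for db in DATABASES:
        subprocess.run(dump_command(db), check=True)
```

Run from cron once a day, this keeps the worst-case loss to a single day of edits rather than 18.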
Toby Bartels saith on WikiEN-l:
> Try setting your options for the mailing list
> so that the digest comes with MIME enabled.
> This is off by default in case of broken readers,
> but there's no reason that anybody should leave it off
> if they have a decent mail reader -- like Mozilla.
> Then if an individual message specifies its charset,
> your mail reader will know and won't have to guess.
Brion Vibber also saith (in part) on Wikitech-l:
> ...Change your options to use MIME digests instead of plain text digests...
I /do/ have MIME enabled. I'm not using Mozilla Mail
(the so-called "Thunderbird") but Mozilla Navigator
("Firebird") because I use web-based Yahoo! Mail. In
Y!Mail, the entire digest comes as one HTML file, and
messages with different charsets in one digest confuse
both Y!Mail (as to what charset it should write out)
and Mozilla (as to what charset it should autodetect).
There is no option in Y!Mail not to show e-mails within
other e-mails automatically, and POP access for Y!Mail
costs (too much) money (so I can't use Mozilla Mail).
The main problem is not so much with UTF-8 bytes,
but with ISO Latin-1/ANSI bytes that are invalid in UTF-8
and hence display as white question marks inside
black diamonds. It<?>s hard to read messages with these
symbols. (The 'smart' quote is one of these
invalid characters.)
Isn't there some simple software routine to convert all
incoming messages - or at least the Latin-1/ANSI ones - to
UTF-8?
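Such a routine is indeed simple. A minimal sketch (in Python as an illustration; the mailing-list software itself would do the equivalent elsewhere): try UTF-8 first, and reinterpret messages whose bytes are invalid UTF-8 as Windows-1252, a superset of Latin-1 that includes the 'smart' quotes.

```python
def to_utf8(raw: bytes) -> str:
    """Decode message bytes, preferring UTF-8.

    Bytes that are invalid UTF-8 (e.g. a Windows-1252 'smart' quote,
    byte 0x92) are reinterpreted as Windows-1252, a superset of Latin-1.
    """
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("windows-1252", errors="replace")
```

Applied to each message before it is folded into the digest, this would make the whole digest valid UTF-8.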
There was talk a while back about moving to a wiki-like
or bulletin board system instead of using mailing lists;
that would solve this charset problem, and would be a
generally good idea. Are there any plans for that?
-[[User:Geoffrey|Geoffrey Thomas]]
John Robinson wrote:
>I thought it ought to be mentioned that the lack of search capability
>is not just annoying to editors, it makes reading of the encyclopedia for
>study virtually impossible, especially for someone new to the site.
>Would it be possible to put the Google search box in as had been done
>in the past? I realize that's lame in comparison but it's better than
>nothing.
Yes, please!
It's no good as a long term solution, but it's something.
And it does work, kind of.
-- Toby
1. I've committed a new language file to CVS. What next? The Ro: pedia
guys are getting restless and I don't know what to tell them.
2. What is the cycle for files going from CVS -> test server -> live site?
Is any part of the above automatic? If so, when does it happen?
If not, who currently has the ability to do that? When do they do it?
I know the tech experts are swamped in database problems, but *please*
could you take a moment to answer these?
The info for wannabe hackers on meta peters out at getting CVS access.
I promise to write this up on meta once I understand it!
Hi, folks!
I implemented a whitelist mode for editing articles: if $wgWhitelistEdit
is true, users can only edit pages if they are logged in. The change took
about 25 lines of code, and the default setting is "false" - meaning
everybody on this planet can edit articles, as usual.
Is it OK to commit this feature to CVS, or should I rather keep it in my
own set of local diffs?
Bye!
Matthias
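The check Matthias describes is simple enough to sketch. This is an illustration in Python, not the actual PHP; the function and parameter names are invented for the example.

```python
def can_edit(whitelist_edit: bool, logged_in: bool) -> bool:
    """Mirror of the described check: with the whitelist mode off
    (the default), anyone may edit; with it on, only logged-in users."""
    if not whitelist_edit:
        return True  # everybody on this planet can edit, as usual
    return logged_in
```

Because the flag defaults to false, existing wikis see no behavior change unless an administrator opts in.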
Foreword: As I begin making inter-langa links
on the Ar: wikipedia, I am (curiously) reminded of the
langalinks issue -- as it bears directly on my
current workload. :) How are they coming along -- is it Mr.
Magnus who is working on these?
More of my fancy ideas about langalinks:
1. Allow for a self-language link in a page -- which is
ignored.
This would allow for somewhat easier mass copying of
langalinks (no typing a last link -- hard work :U ).
If a page is moved to a new title -- making the old
one a redirect -- the move function could send a note
to recurse over all the langalinks connected to it,
reading each file and updating the specific langalink
that's associated with that file. A recursor over the
original links schedules a bot change (with no hurry).
Considering the possibility that all this editing
might generate a lot of bot activity and cause
mix-ups, it may then be wise to separate langalinks
from their articles -- an edit window could have a field
at the bottom for metadata only (langalinks for
starters), and this metadata would actually be a separate
field, thereby eliminating any unnecessary edit
conflicts. (Does the DB work that way at all?)
-SV
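Idea 1 above -- ignoring a page's self-language link so lists of langalinks can be copied wholesale -- can be sketched as a filter over the parsed links. This is a Python illustration; the regex and the two-letter-code assumption are mine, not MediaWiki's actual link parser.

```python
import re

# [[xx:Title]]-style interlanguage links; two-letter codes only in this sketch
LANGLINK = re.compile(r"\[\[([a-z]{2}):([^\]]+)\]\]")

def extract_langlinks(wikitext: str, self_lang: str):
    """Return (lang, title) pairs, silently dropping the self-language link."""
    return [(lang, title)
            for lang, title in LANGLINK.findall(wikitext)
            if lang != self_lang]
```

With the self-link ignored rather than rejected, the same block of langalinks can be pasted into every language edition unchanged.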
I've made some quick hacks to fall back to displaying cached pages where
available when the database can't be connected to.
Change is in CVS and on the test wiki; you can simulate the database
being down here:
http://test.wikipedia.org/w/breakwiki.php
I've also pulled some of the caching stuff out of Article and into a
separate CacheManager class; some more refactoring is likely warranted.
The new code also dispenses with the duplicate uncompressed cache
files when using gzip mode; a browser that doesn't support gzip-encoded
pages will have the page decompressed and fed to it on the fly. This will
significantly reduce space requirements for the cache at little CPU cost
(most browsers do support gzip, and decoding gzip is relatively
inexpensive).
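The single-copy scheme can be illustrated like so. This is a sketch in Python, not the actual PHP CacheManager code, and the function name is invented.

```python
import gzip

def serve_cached(gzipped: bytes, client_accepts_gzip: bool) -> bytes:
    """Keep only the gzipped cache file; inflate on the fly for the
    minority of browsers that don't send Accept-Encoding: gzip."""
    if client_accepts_gzip:
        return gzipped  # served as-is with Content-Encoding: gzip
    return gzip.decompress(gzipped)  # small CPU cost, big disk saving
```

Storing one compressed copy instead of two copies roughly halves the cache's footprint, at the price of an occasional decompression.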
Please test in various browsers!
-- brion vibber (brion @ pobox.com)