On Fri, Nov 14, 2003 at 07:00:31PM +0100, Axel Boldt wrote:
Brion Vibber wrote:
>Please do *not* use programs such as Webstripper. Especially on dynamic
>sites like Wikipedia, they create a huge amount of load on the servers by
[...]
While I of course agree with Brion's statement, I think we should
also try to understand the other side. When I was still on a dial-up line
that charged by the minute, I also several times tried to download
interesting sites all at once, so that I could read them leisurely
off-line. That seems to be quite natural behavior. You can't really
learn stuff when the clock is ticking.
Don't we have a nice compressed static HTML tree by now that we could
offer people under the "Download Wikipedia" heading on the main page?
I see no problem with generating the whole Wikipedia site as a static
snapshot (mirror). It could probably be done with minimal modification
of the scripts. Static content can be retrieved by bots without causing
any trouble.
Consider it as a special mirror.
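Just to illustrate the idea, here is a rough sketch of the snapshot step
(the article data, file names, and output directory are all made up; the
real dump script would of course pull pages from the wiki database):

```python
import gzip
import pathlib

def write_static_snapshot(articles, outdir):
    """Write each article as a gzip-compressed static HTML file.

    `articles` maps title -> HTML body; it is a stand-in for whatever
    the real dump script would read from the database.
    """
    out = pathlib.Path(outdir)
    out.mkdir(parents=True, exist_ok=True)
    for title, body in articles.items():
        # Replace characters that are unsafe in file names.
        fname = title.replace("/", "_") + ".html.gz"
        page = (f"<html><head><title>{title}</title></head>"
                f"<body>{body}</body></html>")
        # Pre-compressed files can be served as-is, so bot traffic
        # costs the server no rendering work at all.
        with gzip.open(out / fname, "wt", encoding="utf-8") as f:
            f.write(page)

articles = {"Main Page": "<p>Welcome</p>"}
write_static_snapshot(articles, "snapshot")
```

A tarball of such a tree could then sit behind the proposed "Download
Wikipedia" link and be mirrored or slurped freely.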
I'm not volunteering though ;-)
g.