On Thu, 2003-09-11 at 17:44, Hr. Daniel Mikkelsen wrote:
> Let's say you have 10 Wikipedia servers. All of them dispense articles
> for reading directly. When an article is about to be edited, the title
> is hashed, and the corresponding server is contacted.
> That way, each article "belongs" to one of X servers, so a lot of
> consistency problems disappear. That server can in turn notify the
> others about changes in its "own" articles.
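The title-hash routing sketched above could look something like this. This is a hypothetical illustration, not anyone's actual code; the server names and the hash-mod scheme are assumptions:

```python
import hashlib

def owner_server(title, servers):
    """Pick the server that "owns" an article by hashing its title.

    Every node can compute this independently, so all writers for a
    given title agree on which server to contact -- no coordination
    needed for routing itself.
    """
    digest = hashlib.md5(title.encode("utf-8")).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Hypothetical pool of 10 servers, as in the example above.
servers = ["wiki%d" % i for i in range(10)]
print(owner_server("Copenhagen", servers))
```

Note the usual caveat with plain modulo hashing: changing the number of servers reassigns almost every title, which is one reason consistent hashing is often preferred for schemes like this.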
We've got about 3-4 edits per minute. Distributing writes sounds like a
lot more trouble than it's worth, possibly leading to all kinds of
consistency troubles on shared resources (link tables...).
Tacking on replicated database server(s) to handle read-only requests
(hundreds per minute) would be simpler and less fragile (slave dies,
just take it out of rotation and -no- pages become inaccessible; master
dies, just declare one of the slaves the new master).
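The single-master setup described above can be sketched as a tiny routing pool. This is a hedged illustration of the idea (names and methods are invented for the example, not MediaWiki's actual load balancer):

```python
import random

class ReplicaPool:
    """Route reads to slaves and writes to one master.

    Failure handling matches the text: a dead slave just leaves the
    rotation; a dead master is replaced by promoting a slave.
    """

    def __init__(self, master, slaves):
        self.master = master
        self.slaves = list(slaves)

    def pick_reader(self):
        # Read-only queries go to any live slave; if none remain,
        # fall back to the master.
        return random.choice(self.slaves) if self.slaves else self.master

    def pick_writer(self):
        # All edits go to the single master.
        return self.master

    def slave_died(self, name):
        # Take the dead slave out of rotation; no pages become unreadable.
        self.slaves.remove(name)

    def master_died(self):
        # Declare one of the slaves the new master.
        self.master = self.slaves.pop(0)
```

The appeal is exactly what the text says: failures only shrink capacity or trigger a one-step promotion, rather than creating conflicting writes to reconcile.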
Before we bother about anything like that, we just need a decently fast
machine for the web server!
-- brion vibber (brion @ pobox.com)