On Sat, 03 Jan 2004 11:37:04 -0800, Tim Thorpe wrote:
The obstacle is the DB server: keeping an offsite dump of the DB server in
sync 24/7 would effectively double the bandwidth used for each
transaction. I work for an ISP; N+1 takes care of most DC-internal issues,
and the DC being used is Verio, which is a VERY large hosting company, so I
don't see them going down any time soon. Some advanced routers have a
dial-up redundancy capability: a router can phone an off-site router over a
separate land line to inform it that there is a network issue on one end
and that all traffic needs to be re-routed to the secondary stack.
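The failover logic behind such a dial-backup setup boils down to a small state machine: miss enough consecutive heartbeats on the primary link and you declare it dead and cut over to the secondary path. Here is a minimal sketch of that decision logic (the function name, the threshold of 3 missed heartbeats, and the boolean-per-heartbeat representation are all illustrative assumptions, not anything from a real router's firmware):

```python
def failover_decision(heartbeats, max_missed=3):
    """Decide which path to use given a sequence of heartbeat results.

    heartbeats: iterable of booleans, True = heartbeat answered on the
    primary link, False = heartbeat missed. After `max_missed`
    consecutive misses, traffic is re-routed to the secondary stack.
    (Illustrative sketch only -- real routers, e.g. dial-backup setups,
    also handle fallback to primary, dampening, etc.)
    """
    missed = 0
    for ok in heartbeats:
        if ok:
            missed = 0  # any successful heartbeat resets the counter
        else:
            missed += 1
            if missed >= max_missed:
                return "secondary"  # primary declared dead: dial out
    return "primary"


# A single dropped packet does not trigger failover; a sustained
# outage does:
print(failover_decision([True, False, True, False]))        # primary
print(failover_decision([True, False, False, False]))       # secondary
```

The counter reset on every successful heartbeat is what keeps a flaky-but-alive link from flapping over to the expensive dial-up path.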
You can never eliminate all single points of failure, but I also see that
some of us are losing our heads when it comes to solutions. Some of the
suggested solutions would cost not tens of thousands but hundreds of
thousands to implement.
Remember the golden rule of network engineering: KEEP IT SIMPLE, STUPID!
;).
I agree.
The idea of a WikiTella thing is fascinating but seems to be very hard to
implement now.
Factors that would help WikiTella:
* hardware performance grows really fast
* bandwidth gets really cheap
If somebody managed to get a prototype of this working under load and
with low bandwidth requirements, I would be all for this solution, but I
have some doubts that this will happen soon.
A 'cheap computer' that would work in such a setup would most probably
need to be quite a few times quicker than the current cheap ones. And
Wikipedia's demand might grow quicker than cheap computers'
performance. Mirrors could hardly be simple (old) boxes provided by a
university or ISP. They would have to be brand-new machines bought by
Wikipedia, or sponsored. Or a setup similar to the current one, with
multiple machines and the associated administration work (though this
might buy more horsepower for the money).
Gabriel Wicke