On Tue, 13 Jan 2004 00:07:50 +0000, Nick Hill wrote:
> Gabriel Wicke wrote:
> The delays in the wikiserver system are caused by waiting for I/O, the
> time taken for mechanical devices to seek to a particular block of data.
> If the data is being served from a Squid cache rather than from a cache
> on the wiki server, how will this reduce the overall I/O blocking problem?
If we have an old machine, Squid will serve anything cached straight from
memory (small objects) or disk (images) without ever contacting the
database. That's a speedup of at least 50x over the current disk cache
with its DB lookup etc. The bigger the RAM for the Squid the better, of
course, but 500 MB will already hold a lot of compressed HTML.
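
As a rough back-of-the-envelope check (both size figures below are
assumptions for illustration, not measurements of our pages):

    # How many gzipped pages might fit in 500 MB of Squid's memory cache?
    # Both size figures are assumed, not measured.
    cache_mem = 500 * 1024 * 1024          # bytes available for hot objects
    avg_page_html = 30 * 1024              # assumed average rendered page
    gzip_ratio = 0.25                      # assumed 4:1 compression
    per_page = int(avg_page_html * gzip_ratio)
    print(cache_mem // per_page, "pages")  # on the order of 68,000 pages
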
> The most commonly used pages are going to be in the memory of the
> database server, so these are not costly to serve. The costly pages are
> those which need disk seeks. The more I/O seek operations a page
> requires, the more costly it is to serve.
Yup. So let's avoid them.
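
To put rough numbers on the seek cost (the latencies below are generic
figures assumed for illustration, not measurements of our hardware):

    # Toy cost model: a RAM hit versus a miss that needs mechanical seeks.
    ram_hit = 0.0001   # ~0.1 ms to serve gzipped HTML from Squid's memory
    seek = 0.008       # ~8 ms per disk seek (assumed)
    for seeks in (1, 5, 10):
        miss = seeks * seek
        print("%2d seeks: %3.0f ms, %4.0fx a RAM hit"
              % (seeks, miss * 1000, miss / ram_hit))
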
> The proxy server will need to make a database lookup (for the URL).
Nope. Only if a page is *not* in the cache or marked as not cacheable.
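
The decision path is roughly this (a minimal sketch: a dict stands in
for Squid's cache, and render_from_database() is a hypothetical
stand-in for the wiki backend, not anything Squid actually calls):

    cache = {}
    UNCACHEABLE = {"/wiki/Special:Recentchanges"}  # pages marked no-cache

    def render_from_database(url):
        # Placeholder for the expensive path: DB lookup, parse, render.
        return "<html>" + url + "</html>"

    def handle_request(url):
        if url in cache and url not in UNCACHEABLE:
            return cache[url]               # hit: no database contact at all
        body = render_from_database(url)    # miss or uncacheable: DB round trip
        if url not in UNCACHEABLE:
            cache[url] = body               # subsequent requests become hits
        return body

    handle_request("/wiki/Main_Page")       # first request: miss, fills cache
    handle_request("/wiki/Main_Page")       # second request: pure memory hit
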
> If performance is the criterion, I suggest a proxy isn't a good idea.
Well, please read up on the docs. Or benchmark
http://www.aulinx.de/, a commodity server (Celeron 2 GHz) running Squid.
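
Even a crude timing loop shows the difference between the first fetch
(a possible miss) and the later ones (hits); a sketch, assuming the box
above is reachable from where you run it:

    import time
    import urllib.request

    url = "http://www.aulinx.de/"
    for i in range(5):
        t0 = time.time()
        urllib.request.urlopen(url).read()
        print("fetch %d: %.1f ms" % (i, (time.time() - t0) * 1000))
    # Expect the first fetch to be slower than the rest if it was a miss.
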
--
Gabriel Wicke