Nick Reinking wrote:
I didn't say having more entries will reduce performance. A select on
one of the columns probably doesn't take a lot of time. Just having a
lot of records in old isn't making MySQL slow. But, it is trying to
keep each individual record in memory. Just look at the bz2 files
available for download. The cur table is 73MB, and old is 740MB. The
old table makes the database 10x as big. Ouch! Since it is that big,
the processes grow and grow, eventually causing the machine to swap like
mad.
The database has to handle that. And I think a database is used precisely
because a lot of data is involved. Possibly you could set up the database
with appropriate memory usage options, but I think that's already done.
Even with the new machine, I think it would be helpful to perhaps split
old into two databases. The database we have now would contain
everything we have now, plus a key to articles on the other database.
That way, looking at comments and dates of old articles (i.e., Page
History) would still be fast, and we could reduce the memory footprint
of the primary server. If you wanted to retrieve an older article, it
would take that key, connect to the other database, and pull it down.
That other database could perhaps have a lower priority and a much
lower memory buffer. Or we could store it on the filesystem, whatever.
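A minimal sketch of that split, using SQLite in place of MySQL just to
keep it self-contained; all table and column names here are hypothetical,
not the actual wiki schema:

```python
import sqlite3

# Hypothetical sketch of the proposed split: the primary database keeps
# only the metadata Page History needs, plus a key; a separate archive
# database holds the bulky revision text.
primary = sqlite3.connect(":memory:")
archive = sqlite3.connect(":memory:")

primary.execute("""CREATE TABLE old_meta (
    rev_id INTEGER PRIMARY KEY,  -- key into the archive database
    title TEXT, comment TEXT, rev_date TEXT)""")
archive.execute("""CREATE TABLE old_text (
    rev_id INTEGER PRIMARY KEY, body TEXT)""")

def save_revision(rev_id, title, comment, rev_date, body):
    # Metadata stays on the primary server; the text goes to the archive.
    primary.execute("INSERT INTO old_meta VALUES (?, ?, ?, ?)",
                    (rev_id, title, comment, rev_date))
    archive.execute("INSERT INTO old_text VALUES (?, ?)", (rev_id, body))

def load_revision(rev_id):
    # Page History touches only the primary; fetching the full article
    # text costs one extra hop to the (lower-priority) archive.
    meta = primary.execute(
        "SELECT title, comment, rev_date FROM old_meta WHERE rev_id = ?",
        (rev_id,)).fetchone()
    row = archive.execute(
        "SELECT body FROM old_text WHERE rev_id = ?", (rev_id,)).fetchone()
    return meta, (row[0] if row else None)

save_revision(1, "Anthill", "initial", "2002-01-01", "Ants are insects.")
meta, body = load_revision(1)
```

The same idea works with the archive on a second MySQL server with a
small buffer, or as flat files keyed by rev_id.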
In the evening (1:00 AM GMT), I see:
MemTotal: 2059088 kB
MemFree: 93096 kB
MemShared: 40 kB
Buffers: 14892 kB
SwapTotal: 2047992 kB
SwapFree: 1289140 kB
That's pretty heavy swap usage (~37%), and heavy memory usage.
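For reference, the swap usage in the quoted figures works out like this
(a quick check; the numbers are taken directly from the output above):

```python
# Swap usage from the /proc/meminfo values quoted above (all in kB).
swap_total = 2047992
swap_free = 1289140
swap_used = swap_total - swap_free          # 758852 kB
percent = 100 * swap_used / swap_total      # roughly 37%
print(f"{swap_used} kB used ({percent:.0f}%)")
```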
I hope you know that those values by themselves tell you very little. You
have to know the ratio of page faults to page hits. Linux often keeps a
lot of nearly-never-used data in swap space.
Smurf
--
Schlumpf(a)AdamAnt.mud.de
------------------------- Anthill inside! ---------------------------