> Also, I don't see why the size of the old table has anything to do
> with the amount of memory used by MySQL. There's almost no difference
> in performance between a database with 10,000 entries and one with
> 100,000 entries. A more significant win is likely to be reducing the
> size of the /individual records/, perhaps by putting the full text in
> the filesystem and just having pointers and stats in the database.
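
For concreteness, that pointers-plus-stats idea would look something
like this (a rough Python sketch, with sqlite3 standing in for MySQL;
the table and column names are made up for illustration):

    import os
    import sqlite3

    TEXT_DIR = "article_text"  # hypothetical directory for full text

    db = sqlite3.connect(":memory:")  # stand-in for the MySQL server
    db.execute("""CREATE TABLE old_meta (
        id INTEGER PRIMARY KEY,
        title TEXT, comment TEXT, timestamp TEXT,
        text_path TEXT)""")

    def save_revision(rev_id, title, comment, timestamp, text):
        # Full text goes to the filesystem...
        os.makedirs(TEXT_DIR, exist_ok=True)
        path = os.path.join(TEXT_DIR, "%d.txt" % rev_id)
        with open(path, "w") as f:
            f.write(text)
        # ...and the database row stays small: stats plus a pointer.
        db.execute("INSERT INTO old_meta VALUES (?, ?, ?, ?, ?)",
                   (rev_id, title, comment, timestamp, path))

    def load_revision_text(rev_id):
        (path,) = db.execute(
            "SELECT text_path FROM old_meta WHERE id = ?",
            (rev_id,)).fetchone()
        with open(path) as f:
            return f.read()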
I didn't say having more entries will reduce performance. A select on
one of the columns probably doesn't take a lot of time, and just
having a lot of records in old isn't what makes MySQL slow. But MySQL
does try to keep each individual record in memory. Just look at the
bz2 files available for download: the cur table is 73MB, and old is
740MB. The old table makes the database 10x as big. Ouch! Since it is
that big, the processes grow and grow, eventually causing the machine
to swap like mad.
Even with the new machine, I think it would be helpful to split old
into two databases. The database we have now would contain everything
it has now, plus a key to article text stored in the other database.
That way, looking at the comments and dates of old articles (i.e.,
Page History) would still be fast, and we could reduce the memory
footprint of the primary server. To retrieve an older article, the
server would take that key, connect to the other database, and pull
the text down. That other database could run at a lower priority and
with a much smaller memory buffer. Or we could store the text on the
filesystem, whatever.
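
A rough sketch of that split (Python again, with two sqlite3
connections standing in for the two MySQL servers; all table and
column names are hypothetical):

    import sqlite3

    # Primary database: small rows only -- metadata plus an archive key.
    primary = sqlite3.connect(":memory:")
    primary.execute("""CREATE TABLE old (
        id INTEGER PRIMARY KEY,
        title TEXT, comment TEXT, timestamp TEXT,
        archive_key INTEGER)""")

    # Secondary database: just the bulky article text, keyed from primary.
    archive = sqlite3.connect(":memory:")
    archive.execute(
        "CREATE TABLE old_text (key INTEGER PRIMARY KEY, text TEXT)")

    def page_history(title):
        # Fast path: touches only the small primary database.
        return primary.execute(
            "SELECT timestamp, comment FROM old WHERE title = ? "
            "ORDER BY timestamp DESC", (title,)).fetchall()

    def old_revision_text(rev_id):
        # Slow path: follow the key over to the archive database.
        (key,) = primary.execute(
            "SELECT archive_key FROM old WHERE id = ?",
            (rev_id,)).fetchone()
        (text,) = archive.execute(
            "SELECT text FROM old_text WHERE key = ?",
            (key,)).fetchone()
        return text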
In the evening (1:00 AM GMT), I see:
MemTotal: 2059088 kB
MemFree: 93096 kB
MemShared: 40 kB
Buffers: 14892 kB
SwapTotal: 2047992 kB
SwapFree: 1289140 kB
That's pretty heavy swap usage (roughly 37%), and heavy memory usage.
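
For what it's worth, the swap figure falls straight out of the
numbers above:

    # Swap in use, from the SwapTotal/SwapFree lines above.
    swap_total_kb = 2047992
    swap_free_kb = 1289140
    used_kb = swap_total_kb - swap_free_kb  # 758852 kB swapped out
    print("%.0f%% of swap in use" % (100.0 * used_kb / swap_total_kb))
    # prints: 37% of swap in use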
--
Nick Reinking -- eschewing obfuscation since 1981 -- Minneapolis, MN