As I understand it, both saving diffs and compressing do essentially the same
thing: they reduce redundancy to cut down the space needed to store information.
The method is different, though:
Storing diffs works from step to step between versions, which makes it a very
fast and easy way to store each version, and it gives rather good "compression"
by leaving out unchanged pieces. It is, however, rather costly to retrieve a
certain version, because you have to go through each and every diff (unless you
employ a rather clever scheme that trades some of the "compression" ratio for
speed).
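Here's a rough sketch of what I mean, in Python, using difflib from the
standard library. The version texts and the helper names (make_diff,
apply_diff, retrieve) are just made up for illustration; the point is that
writing a new version only costs one small diff, while reading back version n
means replaying every diff before it.

import difflib

def make_diff(old, new):
    # Record only the changed regions: (start, end in the old text, replacement).
    sm = difflib.SequenceMatcher(None, old, new)
    return [(i1, i2, new[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes() if tag != "equal"]

def apply_diff(old, diff):
    # Rebuild the newer version from the older one plus the recorded changes.
    out, pos = [], 0
    for i1, i2, repl in diff:
        out.append(old[pos:i1])   # copy the unchanged stretch
        out.append(repl)          # insert the replacement
        pos = i2
    out.append(old[pos:])
    return "".join(out)

versions = [
    "The quick brown fox.",
    "The quick brown fox jumps.",
    "The quick brown fox jumps over the lazy dog.",
]

base = versions[0]
diffs = [make_diff(a, b) for a, b in zip(versions, versions[1:])]

def retrieve(n):
    # Cheap to store, costly to read: every diff up to version n gets replayed.
    text = base
    for d in diffs[:n]:
        text = apply_diff(text, d)
    return text

assert retrieve(2) == versions[-1]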
Compressing is another matter. It cuts out redundancy as well, but does so by
storing repeated parts only once. Unlike storing diffs, this works on the whole
data at once: the algorithm can analyze the whole range of data to determine
the best way to compress it. This means it is rather complex to add new
information, but it gives a very good compression ratio and makes it fairly
easy to retrieve any given piece of data.
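As a sketch, plain zlib from the Python standard library shows that trade-off
(the article text here is invented): retrieval is a single decompress() call,
but adding anything means rebuilding the whole compressed blob.

import zlib

article = ("== Example ==\nSome wiki text that repeats itself a lot. " * 40).encode("utf-8")

blob = zlib.compress(article, 9)       # analyze and compress everything at once
print(len(article), "->", len(blob))   # good ratio thanks to the repetition

assert zlib.decompress(blob) == article   # retrieval is a single call

# The expensive part is adding data: the whole blob has to be rebuilt.
article += b"\nA new paragraph appended later."
blob = zlib.compress(article, 9)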
If compression is applied In A Clever Way, meaning compressing ALL versions of
an article at once rather than each version by itself, it should give equal or
even better compression, combined with a quick way to retrieve any version.
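A quick, hand-wavy comparison (again with zlib and invented data) of
compressing each revision on its own versus compressing all of them in one
blob: since consecutive revisions share most of their text, the combined blob
should come out noticeably smaller, at least while the shared text fits inside
zlib's 32 KB window.

import zlib

base = "".join(f"Paragraph {i}: some article text about topic {i}.\n" for i in range(60))
revisions = [base + f"Edit number {n} adds this line.\n" for n in range(10)]

separately = sum(len(zlib.compress(r.encode("utf-8"), 9)) for r in revisions)
together = len(zlib.compress("\n".join(revisions).encode("utf-8"), 9))

print("each revision compressed on its own:", separately, "bytes")
print("all revisions compressed as one blob:", together, "bytes")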
I recognize that all of this depends very much on the particular algorithms
used, but in theory I think this is how it works.
Just my 2c.
PeterW