"Erik Zachte" wrote:
> I proposed doing the largest dumps in incremental steps (say one job
> per letter of the alphabet, concatenated at the end), so that a rerun
> after an error would be less costly, but Brion says there are no disk
> resources for that.
Why not? 26 files, each holding 1/26 of the database, would take up the
same space as one full dump.
It may not be exactly the same if the pieces are stored compressed
(the full dumps are produced twice, once as bz2 and once as 7z?), but
bz2 at least allows multiple bz2 streams to be stored in a single file
(in fact a bz2 file is made of independent blocks), though its files
are much larger than the 7z ones.
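
For illustration, here is a minimal Python sketch of that bz2 property
(filenames and chunk contents are hypothetical): chunks compressed
independently can be concatenated byte-for-byte, and standard bz2
readers decompress the result as one multi-stream file.

    import bz2

    # Hypothetical per-chunk dumps, e.g. one per first letter of the title.
    chunks = ["dump-a.xml.bz2", "dump-b.xml.bz2"]

    # Compress two sample chunks independently.
    for name, text in zip(chunks, ["<page>A...</page>\n", "<page>B...</page>\n"]):
        with open(name, "wb") as f:
            f.write(bz2.compress(text.encode()))

    # Concatenate the compressed files byte-for-byte; no recompression needed.
    with open("dump-full.xml.bz2", "wb") as out:
        for name in chunks:
            with open(name, "rb") as f:
                out.write(f.read())

    # bz2 readers handle multi-stream files transparently (Python 3.3+).
    with bz2.open("dump-full.xml.bz2", "rt") as f:
        print(f.read())  # both chunks, in order

So a failed job would only force recompressing one chunk, and the final
concatenation step costs no extra compression work.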