[PLUG] Moving/copying old home to new machine
Russell Senior
russell at personaltelco.net
Tue Sep 12 20:47:46 UTC 2017
>>>>> "Michael" == Michael <michael at jamhome.us> writes:
Michael> Great description Paul
Michael> On 2017-09-12 08:46, Paul Heinlein wrote:
>> On Tue, 12 Sep 2017, Michael wrote:
>>
>>> Paul, Russell,
>>>
>>> Both of you have used relative terms in describing sizes.
>>>
>>> What is small-ish? What is VERY LARGE?
>>>
>>> Please describe in terms file count, aggregate size of data, or
>>> other metrics.
>>
>> The big caveat is that said metrics are somewhat hardware-dependent.
>> The greater the amount of system RAM, the quicker various checksum
>> and inode calculations can be made. With fast SSDs, all sorts of
>> operations go faster. Ditto with low-latency I/O or network
>> connections.
>>
>> When I mentioned a "small-ish directory tree," I was thinking of no
>> more than a couple GB in aggregate size, no complex hard-linking,
>> and files numbering in the dozens or hundreds (not tens of
>> thousands). For hardware, I had in mind a typical home PC, not an
>> extreme gamer unit or a well-appointed enterprise server.
>>
>>> If nothing else it will provide some humor in the future. "They
>>> considered that VERY LARGE? hahahaha" "small-ISH? it's less than a
>>> disk block, that's tiny."
>>
>> Sad but true...
I just want to say here that -ish is my favorite suffix. I use it a
lot. It lets me signal "approximate" while giving a general sense,
without having to do a lot of work to nail something down where that
last increment of precision ends up not having a lot of practical value.
And it's shorter than the similarly wonderful "or something" suffix. Or something.
To elaborate a bit more on the running-out-of-memory problem: I
encountered it a number of years ago while trying to move a backup tree.
That is, I do disk-to-disk backups using rsync --link-dest, which
results in large mazes of many, many hardlinks. In practice this works
delightfully. However, once when I wanted to move this backup tree of
parallel snapshots to a new, bigger disk, I found that I was running out
of memory on the machine because tracking all the hardlinks was
overwhelming it. I ended up constructing a script that would do one
snapshot at a time, to reduce the overall number of hardlinks it needed
to keep track of at once. How bad this gets depends on the number of
hardlinks involved and the amount of memory on the machine doing the
rsync operation. These days, with super cheap RAM, it might be less of
an issue than before, but it was painful enough once that I learned to
be aware of the issue.
Or something.
--
Russell Senior, President
russell at personaltelco.net