From: Christian Jaeger
Subject: Re: My experience with using cp to copy a lot of files (432 millions, 39 TB)
Date: Sun, 14 Sep 2014 18:50:16 +0100
(sorry, I didn't reply to all at first)
> could be improved by using a prefix compression method
Yes, a simple method (I made that suggestion in this[1] post) would
be to store the filename plus a pointer to the parent path: instead
of concatenating the parent path and the filename during readdir,
create a struct that holds both (see the sketch below).
[1] https://news.ycombinator.com/item?id=8306299
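
To make the idea concrete, here is a minimal C sketch of what I mean
(hypothetical illustration, not code from cp or megacopy): each entry
stores only its own basename plus a pointer to the shared parent
entry, and the full path string is rebuilt only when actually needed,
so common prefixes are stored once instead of once per file.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry {
        struct entry *parent;   /* NULL for the root entry */
        char name[];            /* basename only, not the full path */
    };

    static struct entry *entry_new(struct entry *parent, const char *name)
    {
        struct entry *e = malloc(sizeof *e + strlen(name) + 1);
        if (!e) abort();
        e->parent = parent;
        strcpy(e->name, name);  /* would come from readdir in practice */
        return e;
    }

    /* Reconstruct the full path on demand, e.g. just before open(). */
    static void entry_path(const struct entry *e, char *buf, size_t len)
    {
        if (e->parent) {
            entry_path(e->parent, buf, len);
            strncat(buf, "/", len - strlen(buf) - 1);
        }
        strncat(buf, e->name, len - strlen(buf) - 1);
    }

    int main(void)
    {
        struct entry *root = entry_new(NULL, "data");
        struct entry *dir  = entry_new(root, "photos");
        struct entry *file = entry_new(dir,  "img001.jpg");

        char path[4096] = "";
        entry_path(file, path, sizeof path);
        printf("%s\n", path);   /* prints: data/photos/img001.jpg */
        return 0;
    }

With millions of files per directory subtree, this saves roughly the
length of the parent path per file, at the cost of one pointer and a
path reconstruction when the name is needed.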
BTW, megacopy may be usable now, as long as you don't need sub-second
timestamp precision (Perl does not support that out of the box, but
it could be fixed by accepting some dependencies). There's room for
improvement, though: currently it is quite wasteful with disk space
for the table. I also need to do more performance testing.
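
For reference, here is a hedged C sketch of what sub-second timestamp
preservation looks like at the syscall level, i.e. what a copy tool
has to do that plain Perl stat/utime cannot: read the nanosecond
fields from stat() and apply them with utimensat(). The function name
copy_timestamps is made up for illustration.

    #define _GNU_SOURCE
    #include <fcntl.h>      /* AT_FDCWD */
    #include <stdio.h>
    #include <sys/stat.h>

    static int copy_timestamps(const char *src, const char *dst)
    {
        struct stat st;
        if (stat(src, &st) != 0) { perror("stat"); return -1; }

        /* st_atim and st_mtim carry nanosecond resolution on Linux. */
        struct timespec times[2] = { st.st_atim, st.st_mtim };
        if (utimensat(AT_FDCWD, dst, times, 0) != 0) {
            perror("utimensat");
            return -1;
        }
        return 0;
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
            return 1;
        }
        return copy_timestamps(argv[1], argv[2]) ? 1 : 0;
    }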
(I include a script to generate test trees, but it currently creates
unrealistically long paths; I should change that to get more
realistic testing.)