duplicity-talk

From: Peter Schuller
Subject: Re: [Duplicity-talk] Proposal for backgrounded concurrent I/O with new retry support
Date: Fri, 30 Nov 2007 20:47:42 +0100
User-agent: KMail/1.9.7

> As long as the order is maintained, then there should be no problem,

I was not going to care about ordering when putting chunks, but rather place 
appropriate barriers at critical points (e.g., so that we don't claim a 
backup has succeeded prematurely due to out-of-order I/O).

The idea would be to not care about concurrency on certain meta operations and 
such, and only optimize the writing of the "bulk" data.
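
To make that concrete, here is a rough sketch (pure illustration; the class 
and method names are made up and not actual duplicity code) of what I mean by 
fire-and-forget puts plus a barrier before we dare claim success, assuming a 
backend object with a put() method:

    import threading, queue

    class ConcurrentWriter:
        # Sketch: background volume uploads plus a completion barrier.
        def __init__(self, backend, workers=2):
            self.backend = backend
            self.tasks = queue.Queue()
            self.errors = []
            for _ in range(workers):
                threading.Thread(target=self._worker, daemon=True).start()

        def _worker(self):
            while True:
                source_path, remote_name = self.tasks.get()
                try:
                    self.backend.put(source_path, remote_name)
                except Exception as e:
                    self.errors.append(e)
                finally:
                    self.tasks.task_done()

        def put(self, source_path, remote_name):
            # Fire-and-forget: the order chunks hit the wire does not matter.
            self.tasks.put((source_path, remote_name))

        def barrier(self):
            # Critical point: wait for every queued put to finish (and check
            # for failures) before the backup is declared successful.
            self.tasks.join()
            if self.errors:
                raise self.errors[0]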

I'll look into exactly what is necessary, but unless there are major surprises 
lurking in the code I don't foresee this being a sticking point.

> but 
> I imagine most people will be happy with a concurrency of 2 or so.  My
> observation is that the build and IO take about the same amount of time
> on a relatively fast network, so you could double your speed there quite
> easily.  On a slow network, you could at least guarantee that there was
> no dead time other than what the network demands.

It was particularly meant for high-latency cases; typically remote servers on 
the other side of the globe and such, where individual TCP connection 
throughput becomes an issue even if both ends happen to have a Gbit pipe 
(think rsync.net, remote backups of colocated machines in other countries, 
etc.).
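
To put rough numbers on it (back-of-the-envelope only): a single TCP stream is 
bounded by roughly window/RTT, so a 64 KiB window over a 150 ms round trip 
tops out around 3.5 Mbit/s regardless of how fat the pipe is, and N concurrent 
transfers scale that more or less linearly until the pipe itself becomes the 
limit:

    # Rough single-stream TCP throughput estimate: window / RTT.
    window_bytes = 64 * 1024   # assumed TCP window
    rtt = 0.150                # assumed intercontinental round-trip time (s)
    streams = 2                # concurrent transfers

    single = window_bytes / rtt                       # ~427 KiB/s, ~3.5 Mbit/s
    print("1 stream : %.0f KiB/s" % (single / 1024))
    print("%d streams: %.0f KiB/s" % (streams, streams * single / 1024))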

> For those that run interactively, this would be a good thing.  The only
> requirement I would impose on any implementation is that it must be able
> to run under cron, or similar, without human intervention.

Absolutely. Only when explicitly asked for would it wait for operator 
attention.

> That said, a cron job that somehow interactively prompts the operator
> for help may be a good thing, if it can be done without using the system
> console since some machines have none.  Your --retry-failure-* options
> may be able to feed a monitoring task, in which case, multiple
> duplicity's could be run at the same time to different directories.

Right. Initially I figured I would just implement the flag file that was 
recently discussed, but certainly, any notification mechanism can be added 
(e-mail, HTTP, running a script, etc.).
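
Purely as illustration (none of these option names or the config layout exist 
yet), the part that decides it needs operator attention could just dispatch to 
whichever channels were explicitly enabled:

    import subprocess
    import urllib.request

    def notify_operator(event, config):
        # Sketch only: fire whichever notifications the user asked for.
        if config.get("flag_file"):
            # The flag file recently discussed on-list.
            open(config["flag_file"], "w").close()
        if config.get("notify_url"):
            # Poke a monitoring endpoint over HTTP.
            urllib.request.urlopen(config["notify_url"])
        if config.get("notify_command"):
            # Run an arbitrary script, e.g. one that sends e-mail.
            subprocess.call([config["notify_command"], event])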

> All very good ideas and plans!  Do you need a branch on CVS so you can
> work on a separate tree, then merge later?  Or will it be just one big
> release sometime down the road?

Well, the retry changes should be fairly simple and self-contained, so those 
are no biggies; I'll just submit patches for them as usual.

For concurrency, that will be a bit more substantial, but not enough to 
become unmanageable as working-copy development. Though I normally like to do 
development in branches with frequent small commits, I suspect that with CVS 
it will just cause me more trouble than it would help, given that I want to 
keep up to date with HEAD at all times to minimize conflicts.

That said, of course if you would prefer that I work in a branch (e.g., for 
inspection of progress/work) I don't have a problem with it.

-- 
/ Peter Schuller

PGP userID: 0xE9758B7D or 'Peter Schuller <address@hidden>'
Key retrieval: Send an E-Mail to address@hidden
E-Mail: address@hidden Web: http://www.scode.org


