From: Henrik Bohre
Subject: [Duplicity-talk] Suggested improvement for large backup sets and slow processors
Date: Wed, 10 Dec 2008 22:34:33 +0100

I've run duplicity ssh backups daily for some years, and have recently begun getting failures:

sftp command: 'ls -1'
Timeout waiting for response

My investigation points to three contributing factors:
1. The response to 'ls -1' is quite large (around 1 MB).
2. run_sftp_command calls pexpect.expect without a searchwindowsize, so after every read the entire accumulated response is searched again for the sftp prompt (see the timing sketch below).
3. My backup server has a slow processor (a 200 MHz ARM).
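To make the cost concrete, here is a minimal standalone timing sketch (hypothetical, not duplicity code; the shell command just fakes a ~1 MB response ending in a sentinel). Without a search window, pexpect re-scans the whole accumulated buffer after every 2000-byte read, which is roughly quadratic in the response size:

import time
import pexpect

def timed_expect(searchwindowsize):
    # ~1 MB of output followed by a sentinel, mimicking a large
    # 'ls -1' response that ends with the sftp prompt.
    child = pexpect.spawn('/bin/sh', ['-c', 'seq 1 150000; echo PROMPT'],
                          timeout=120, maxread=2000,
                          searchwindowsize=searchwindowsize)
    start = time.time()
    child.expect(['PROMPT', pexpect.EOF])
    return time.time() - start

print('unbounded window: %.3fs' % timed_expect(None))  # whole buffer searched per read
print('2 KB window:      %.3fs' % timed_expect(2048))  # only the buffer tail searched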

Suggested improvement:
In run_sftp_command, find the longest pattern in the child.expect response pattern list and pass maxread plus that length as searchwindowsize, so the window is bounded but still covers any match that straddles a read boundary:

def run_sftp_command(...):  # existing signature unchanged, elided here
    ...
    maxread = 2000 # expect read buffer size, in bytes
    response_patterns = [self.pexpect.EOF,
                         ...
                         "(?i)no such file or directory"]
    # Longest string pattern; skip non-string sentinels such as
    # self.pexpect.EOF, which have no length.
    max_response_len = max([len(p) for p in response_patterns
                            if isinstance(p, str)])
    child = self.pexpect.spawn(commandline, timeout=globals.timeout,
                               maxread=maxread,
                               searchwindowsize=maxread+max_response_len)
    ...
    while 1:
        match = child.expect(response_patterns)
        ...
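The window is maxread + max_response_len rather than just maxread because a match can straddle a read boundary: the first bytes of the prompt may arrive at the end of one read and the rest in the next, and the window must still cover both halves. A hypothetical illustration of the arithmetic, with made-up patterns standing in for duplicity's real list:

# Made-up patterns standing in for the real response_patterns list;
# only string patterns contribute a length.
response_patterns = ["sftp> ",
                     "(?i)password:",
                     "(?i)no such file or directory"]
maxread = 2000
max_response_len = max(len(p) for p in response_patterns)  # 29
searchwindowsize = maxread + max_response_len              # 2029
print(searchwindowsize)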

   
Best regards,
/Henrik

