duplicity-talk

Re: [Duplicity-talk] Out of space error while restoring a file


From: edgar . soldin
Subject: Re: [Duplicity-talk] Out of space error while restoring a file
Date: Mon, 17 Sep 2012 13:17:46 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:15.0) Gecko/20120907 Thunderbird/15.0.1

On 17.09.2012 11:56, Laurynas Biveinis wrote:
> 2012/9/17  <address@hidden>:
>> On 17.09.2012 09:01, Laurynas Biveinis wrote:
>>> Hi!
>>>
>>> 2012/9/11  <address@hidden>:
>>>> On 11.09.2012 15:25, address@hidden wrote:
>>>>> On 11.09.2012 07:22, Laurynas Biveinis wrote:
>>>>>> 2012/9/7  <address@hidden>:
>>>>>>> On 07.09.2012 07:42, Laurynas Biveinis wrote:
>>>>>>>> 2012/9/4 Laurynas Biveinis <address@hidden>:
>>>>>>>>> 2012/9/4  <address@hidden>:
>>>>>>>>>> On 04.09.2012 20:04, Laurynas Biveinis wrote:
>>>>>>>>>>>> On 04.09.2012 16:03, Laurynas Biveinis wrote:
>>>>>>>>>>>>> 2012/9/4  <address@hidden>:
>>>>>>>>>>>>>> On 04.09.2012 10:00, Laurynas Biveinis wrote:
>>>>>>>>>>>>>>> Hi -
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I'm trying to restore a ~30GB file from backups. The 
>>>>>>>>>>>>>>>>>>>>>>> free space on the
>>>>>>>>>>>>>>>>>>>>>>> drive is about 80GB. Yet on restore I get the error 
>>>>>>>>>>>>>>>>>>>>>>> below. What would
>>>>>>>>>>>>>>>>>>>>>>> be causing this and how much free space do I actually 
>>>>>>>>>>>>>>>>>>>>>>> need to restore?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> $ duplicity -t2D --file-to-restore "path/to/file"
>>>>>>>>>>>>>>>>>>>>>>> --s3-european-buckets --s3-use-new-style s3+http://foo 
>>>>>>>>>>>>>>>>>>>>>>> $HOME/file
>>>>>>>>>>>>>>>>>>>>>>> Local and Remote metadata are synchronized, no sync 
>>>>>>>>>>>>>>>>>>>>>>> needed.
>>>>>>>>>>>>>>>>>>>>>>> Warning, found the following orphaned backup file:
>>>>>>>>>>>>>>>>>>>>>>> [duplicity-inc.20120319T102409Z.to.20120320T010946Z.manifest.part]
>>>>>>>>>>>>>>>>>>>>>>> Last full backup date: Thu Aug 16 00:00:27 2012
>>>>>>>>>>>>>>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/bin/duplicity", line 1403, in <module>
>>>>>>>>>>>>>>>>>>>>>>>     with_tempdir(main)
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/bin/duplicity", line 1396, in with_tempdir
>>>>>>>>>>>>>>>>>>>>>>>     fn()
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/bin/duplicity", line 1330, in main
>>>>>>>>>>>>>>>>>>>>>>>     restore(col_stats)
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/bin/duplicity", line 623, in restore
>>>>>>>>>>>>>>>>>>>>>>>     restore_get_patched_rop_iter(col_stats)):
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 522, in Write_ROPaths
>>>>>>>>>>>>>>>>>>>>>>>     for ropath in rop_iter:
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 495, in integrate_patch_iters
>>>>>>>>>>>>>>>>>>>>>>>     final_ropath = patch_seq2ropath( normalize_ps( patch_seq ) )
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 475, in patch_seq2ropath
>>>>>>>>>>>>>>>>>>>>>>>     misc.copyfileobj( current_file, tempfp )
>>>>>>>>>>>>>>>>>>>>>>>   File "/usr/lib/python2.7/dist-packages/duplicity/misc.py", line 170, in copyfileobj
>>>>>>>>>>>>>>>>>>>>>>>     outfp.write(buf)
>>>>>>>>>>>>>>>>>>>>>>> IOError: [Errno 28] No space left on device
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> The backup chain looks as follows
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Chain start time: Thu Aug 16 00:00:27 2012
>>>>>>>>>>>>>>>>>>>>>>> Chain end time: Thu Aug 30 00:00:23 2012
>>>>>>>>>>>>>>>>>>>>>>> Number of contained backup sets: 12
>>>>>>>>>>>>>>>>>>>>>>> Total number of contained volumes: 3477
>>>>>>>>>>>>>>>>>>>>>>>  Type of backup set:                            Time:      Num volumes:
>>>>>>>>>>>>>>>>>>>>>>>                 Full         Thu Aug 16 00:00:27 2012              2916
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Sun Aug 19 00:00:25 2012                96
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Mon Aug 20 00:00:28 2012                33
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Tue Aug 21 00:00:30 2012                37
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Wed Aug 22 00:00:25 2012                58
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Thu Aug 23 00:00:30 2012                62
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Fri Aug 24 00:00:31 2012                32
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Sat Aug 25 00:00:26 2012                81
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Sun Aug 26 00:00:28 2012                75
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Mon Aug 27 00:00:21 2012                 8
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Tue Aug 28 00:00:18 2012                21
>>>>>>>>>>>>>>>>>>>>>>>          Incremental         Thu Aug 30 00:00:23 2012                58
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Duplicity is 0.6.18.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Thanks in advance,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> you need 30GB (size of file to restore) plus the size of 
>>>>>>>>>>>>>>>>>>>>>> one volume in wherever TMP points to.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Thanks. But I have only one partition, TMP is unset (if I 
>>>>>>>>>>>>>>>>>>>>> understand
>>>>>>>>>>>>>>>>>>>>> correctly, then it defaults to /tmp), the volume size is 
>>>>>>>>>>>>>>>>>>>>> default, so
>>>>>>>>>>>>>>>>>>>>> I'd need 30GB + 25MB, and I have 80GB free, but 
>>>>>>>>>>>>>>>>>>>>> apparently that's not
>>>>>>>>>>>>>>>>>>>>> enough?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> should be... run the restore with maximum verbosity '-v9' 
>>>>>>>>>>>>>>>>>>>> and post the complete output to pastebin (obfuscate 
>>>>>>>>>>>>>>>>>>>> private info in it) and send the link. maybe i'll see 
>>>>>>>>>>>>>>>>>>>> something.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I'm attaching the compressed log. It's 2.5MB uncompressed, 
>>>>>>>>>>>>>>>>>>> that's too
>>>>>>>>>>>>>>>>>>> big for pastebin. Please let me know if I should send it 
>>>>>>>>>>>>>>>>>>> in some
>>>>>>>>>>>>>>>>>>> other way.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> you might want to monitor the disk usage during the 
>>>>>>>>>>>>>>>>>>>> restore. my guess would be that the final copy of 30GB 
>>>>>>>>>>>>>>>>>>>> fails. maybe duplicity keeps downloaded volumes in temp 
>>>>>>>>>>>>>>>>>>>> until finished?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I got the "low disk space, 200MB remaining" warning on the 
>>>>>>>>>>>>>>>>>>> volume
>>>>>>>>>>>>>>>>>>> which had 80GB free initially. Looking at the log file, I 
>>>>>>>>>>>>>>>>>>> guess it's
>>>>>>>>>>>>>>>>>>> the initial downloading that fails. But why does it have to 
>>>>>>>>>>>>>>>>>>> download
>>>>>>>>>>>>>>>>>>> so much?
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I will attach a volume with some 200GB free, point TMP to 
>>>>>>>>>>>>>>>>>>> it, and
>>>>>>>>>>>>>>>>>>> restart the restore now.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I have retried with TMP pointing to a volume with 244GB and 
>>>>>>>>>>>>>>>>>> the
>>>>>>>>>>>>>>>>>> restore still fails, although slightly differently. I have 
>>>>>>>>>>>>>>>>>> attached the
>>>>>>>>>>>>>>>>>> compressed log.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> can you verify that during the course of the restore
>>>>>>>>>>>>>>>>>  /media/Sandelys/tmp/duplicity-*-tempdir/
>>>>>>>>>>>>>>>>> fills up the containing file system?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> this is suggested by the debug output. trying to pinpoint 
>>>>>>>>>>>>>>>>> your issue here.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> unlikely but possible.. could you check that
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> - you have enough inodes free (df -i) also during the course 
>>>>>>>>>>>>>>>> of the restore
>>>>>>>>>>>>>>>> - the file systems are sane by fsck'ing them as a precaution
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks for your help and suggestions.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The big volume is formatted with NTFS. It was probably never 
>>>>>>>>>>>>>>> mounted
>>>>>>>>>>>>>>> in its native environment, so I fired up a Windows VM to check it. And
>>>>>>>>>>>>>>> indeed there were a few errors involving the restore process. The
>>>>>>>>>>>>>>> "lost+found" contained some downloaded volumes after the check:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> $ ls found.000/dir0000.chk/
>>>>>>>>>>>>>>> duplicity-inc.20110830T210003Z.to.20110831T210003Z.vol4.difftar.gpg
>>>>>>>>>>>>>>> duplicity-inc.20110830T210003Z.to.20110831T210003Z.vol7.difftar.gpg
>>>>>>>>>>>>>>> duplicity-inc.20110830T210003Z.to.20110831T210003Z.vol5.difftar.gpg
>>>>>>>>>>>>>>> duplicity-new-signatures.20110830T210003Z.to.20110831T210003Z.sigtar.gpg
>>>>>>>>>>>>>>> duplicity-inc.20110830T210003Z.to.20110831T210003Z.vol6.difftar.gpg
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> After fixing the volume I repeated the restore, monitoring the 
>>>>>>>>>>>>>>> free
>>>>>>>>>>>>>>> space and inodes by a script that does df -h and df -i every 15 
>>>>>>>>>>>>>>> seconds.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The restore failed in the same way as before, the log is 
>>>>>>>>>>>>>>> attached
>>>>>>>>>>>>>>> again. What's interesting is that according to df, the free 
>>>>>>>>>>>>>>> space
>>>>>>>>>>>>>>> did not change at all: http://pastebin.com/PhanxQUX (it 
>>>>>>>>>>>>>>> contains two
>>>>>>>>>>>>>>> partitions, originally it had only $TMP, I couldn't believe its
>>>>>>>>>>>>>>> result, so I added $HOME (same as /) too and retried).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> recheck your outputs, see below. around 09:16:58 
>>>>>>>>>>>>>> /home/laurynas/.Private overflows. my guess is that duplicity 
>>>>>>>>>>>>>> TMP is located there. use the other partition as 80GB is 
>>>>>>>>>>>>>> obviously not enough.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks. Something does not add up here. The restore script does
>>>>>>>>>>>>> export TMPDIR=/media/Sandėlys/tmp
>>>>>>>>>>>>>
>>>>>>>>>>>>> and the output of duplicity has
>>>>>>>>>>>>>
>>>>>>>>>>>>> Using temporary directory 
>>>>>>>>>>>>> /media/Sandėlys/tmp/duplicity-qD6e4q-tempdir
>>>>>>>>>>>>> Registering (mkstemp) temporary file
>>>>>>>>>>>>> /media/Sandėlys/tmp/duplicity-qD6e4q-tempdir/mkstemp-HzlUlf-1
>>>>>>>>>>>>> Temp has 261955923968 available, backup will use approx 34078720.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Could it be that TMPDIR is ignored somewhere?
>>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> find out which data on /home/laurynas fills it up. you are right, 
>>>>>>>>>>>> the output above suggests it uses the big partition 
>>>>>>>>>>>> (/media/Sande.lys) but obviously the other one fills up.
>>>>>>>>>>>
>>>>>>>>>>> So I might have found something useful: I started duplicity with
>>>>>>>>>>> strace. Then I did lsof while it was in progress and got one open 
>>>>>>>>>>> handle
>>>>>>>>>>> outside /media/Sandėlys:
>>>>>>>>>>>
>>>>>>>>>>> duplicity 31759 laurynas   44u   REG     8,1 529465344 15467022
>>>>>>>>>>> /tmp/tmpfxLhqjk (deleted)
>>>>>>>>>>>
>>>>>>>>>>> It is deleted and ever-growing. I looked for it in strace:
>>>>>>>>>>>
>>>>>>>>>>> 31759 stat("/tmp", {st_mode=S_IFDIR|S_ISVTX|0777, st_size=45056, 
>>>>>>>>>>> ...}) = 0
>>>>>>>>>>> 31759 open("/tmp/tmpfxLhqjk", O_RDWR|O_CREAT|O_EXCL, 0600) = 44
>>>>>>>>>>> 31759 unlink("/tmp/tmpfxLhqjk")         = 0
>>>>>>>>>>> 31759 fcntl(44, F_GETFL)                = 0x8002 (flags 
>>>>>>>>>>> O_RDWR|O_LARGEFILE)
>>>>>>>>>>> 31759 fstat(44, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
>>>>>>>>>>>
>>>>>>>>>>> The contents of the first write suggest that this is going to be the final 
>>>>>>>>>>> file:
>>>>>>>>>>>
>>>>>>>>>>> 31759 write(44, "<<< Oracle VM VirtualBox Disk Im"..., 65536 
>>>>>>>>>>> <unfinished ...>
>>>>>>>>>>>
>>>>>>>>>>> Any idea or pointers?
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> try setting all 3 env vars..
>>>>>>>>>> http://duplicity.nongnu.org/duplicity.1.html#sect7
>>>>>>>>>> maybe it's a bug deep down that does not respect TMPDIR alone
>>>>>>>>>
>>>>>>>>> export TEMP=/media/Sandėlys/tmp
>>>>>>>>> export TMP=/media/Sandėlys/tmp
>>>>>>>>> export TMPDIR=/media/Sandėlys/tmp
>>>>>>>>>
>>>>>>>>> Does not seem to help:
>>>>>>>>>
>>>>>>>>> duplicity 32278 laurynas   44u   REG     8,1 282394624 15467022
>>>>>>>>> /tmp/tmpf7oQNmc (deleted)
>>>>>>>>>
>>>>>>>>>> still i wonder what's up with /media/Sandėlys versus 
>>>>>>>>>> /media/Sande.lys .. can you explain?
>>>>>>>>>
>>>>>>>>> I am not sure what you mean by "Sande.lys"?.. It's "/media/Sandėlys".
>>>>>>>>> I have a diacritic letter in the path, which seems to work OK. I did check
>>>>>>>>> /media while the restore was running and there was no "Sandelys" (no
>>>>>>>>> diacritic), "Sande.lys", or anything else similar being created.
>>>>>>>>>
>>>>>>>>> Thanks again,
>>>>>>>>> --
>>>>>>>>> Laurynas
>>>>>>>>
>>>>>>>> Ping? Thanks.
>>>>>>>>
>>>>>>>
>>>>>>> maybe an issue with the special character in the path? .. try mounting 
>>>>>>> it on a plain path.
>>>>>>>
>>>>>>> '/media/Sandėlys versus /media/Sande.lys' meant that in the df output 
>>>>>>> you sent earlier a mount point called '/media/Sande.lys' was listed. 
>>>>>>> why was that?
>>>>>>
>>>>>> I am unable to find it. I see it in your message quoting from the
>>>>>> pastebin df output. But the pastebin http://pastebin.com/PhanxQUX does
>>>>>> not contain it.
>>>>>>
>>>>>>> ok, i think i found it.. please manually patch 'duplicity/patchdir.py' 
>>>>>>> around line 473
>>>>>>> http://bazaar.launchpad.net/~duplicity-team/duplicity/0.6-series/view/head:/duplicity/patchdir.py#L473
>>>>>>>
>>>>>>> from
>>>>>>>
>>>>>>>  # librsync needs true file
>>>>>>>  tempfp = os.tmpfile()
>>>>>>>  misc.copyfileobj( current_file, tempfp )
>>>>>>>
>>>>>>> to
>>>>>>>
>>>>>>>  # librsync needs true file
>>>>>>>  from duplicity import tempdir
>>>>>>>  tempfp_fd = tempdir.default().mkstemp()[0]
>>>>>>>  tempfp = os.fdopen(tempfp_fd,'w+b')
>>>>>>>  misc.copyfileobj( current_file, tempfp )
>>>>>>>
>>>>>>> i guess the os.tmpfile() is the culprit here and places everything in 
>>>>>>> /tmp.
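
for illustration, a minimal Python 2.7 sketch of the difference (the TMPDIR value
below is just a stand-in for the big partition, any existing writable dir works):

  import os, tempfile

  os.environ['TMPDIR'] = '/media/bigdisk/tmp'   # stand-in path; must exist and be writable

  # tempfile consults TMPDIR/TEMP/TMP, so this lands on the big partition
  fd, path = tempfile.mkstemp()
  print path                                    # e.g. /media/bigdisk/tmp/tmpXXXXXX
  os.close(fd)
  os.remove(path)

  # os.tmpfile() wraps C tmpfile(3): it creates and immediately unlinks a file
  # under the libc default (usually /tmp) and never looks at TMPDIR, which matches
  # the growing "(deleted)" /tmp/tmpf... entry lsof showed above
  f = os.tmpfile()
  f.close()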
>>>>>>
>>>>>> Patched and restarted. It still failed, the log is attached. I did not
>>>>>> catch any file in /tmp being opened with strace, but perhaps I looked
>>>>>> too early. Now I will get a full strace and peek around again.
>>>>>>
>>> TL;DR: it worked but there are some gotchas.
>>>
>>> Full story: I downloaded all of my backups from S3 to local storage. I
>>> replaced the previous fix with the "tmpspacefix" Launchpad MP patch. I ran
>>> the restore and it completed. Interestingly, it completed after around 750
>>> volumes processed out of 3400. Also, the last 100 volumes took about 24
>>> hours to complete.
>>
>> interesting. could you check which files took that long? do you still have 
>> the output? could you run a full restore with -v9 and post the output 
>> (private strings obfuscated)?
> 
> Attached.

will look at it later.

> 
>>> All in all, I got my files back (yay! Thanks! :), but some questions remain:
>>> 1) Is the free space estimate algorithm wrong? I.e. the one that says
>>> "you need the size of the restored file + one volume".
>>
>> no algorithm there. it just describes what happens. duplicity restores the 
>> oldest version of the file and then starts to patch it via librsync. might 
>> be that the patching doubles the space needed, but afaik it should happen 
>> directly on the temp file.
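
rough picture of that per-file patching, shown here with librsync's rdiff tool
just to illustrate the idea (duplicity does it in-process via its librsync
bindings, and the file names below are made up):

  cp file.full file.tmp                           # oldest copy of the file, from the Full set
  for d in delta.inc1 delta.inc2 delta.inc3; do   # one delta per incremental that touches the file
      rdiff patch file.tmp "$d" file.next && mv file.next file.tmp
  done
  mv file.tmp restored.file                       # fully patched result goes to the restore target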
> 
> So, "twice the size of the restored file + one volume?" But originally
> it failed with 80GB free for a 30GB file, so it's even more than that.

right, we would have to dissect filesystem usage during the restore to find out 
what is going on here. actually the -v9 dump tells you when temp files are 
created and deleted (though not their size) so you can work through that if you 
want.

alternatively you have to observe what happens in the tmp folder during the 
restore ;)
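
e.g. something like this in a second terminal (paths taken from earlier in this
thread, the log file name is just an example):

  # watch free space and duplicity's open temp files every 15s while the restore runs
  while true; do
      date
      df -h / "/media/Sandėlys/tmp"
      lsof -p "$(pgrep -f /usr/bin/duplicity | head -1)" 2>/dev/null | grep -i deleted
      sleep 15
  done

  # afterwards, pull the temp-file bookkeeping lines out of the -v9 log
  grep -n 'temporary file' restore-v9.log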

> 
>>> 2) There must be something quadratic or worse in the restore
>>> algorithm. Out of 750 volumes, the first 500 completed in perhaps 30
>>> minutes, the next 100 in one hour, and the last 100 in 24 hours.
>>
>> can you check whether system resources (cpu, memory) get used up so that the 
>> system starts swapping? there are known issues where gpg processes are 
>> sometimes not ended properly so that memory fills up. 
>> this is mostly an issue with debian/ubuntu if duplicity was installed from the 
>> distro's repository. they package a broken GnuPGInterface which does not 
>> cleanly collect and kill gpg instances.
> 
> It is installed from the ubuntu repos. The restore is running right now, at
> volume 670 (i.e. in the slow part). Top shows load average: 5.66,
> 5.70, 5.87, which is indeed high, "%MEM" does not show anything
> interesting for restore, "%CPU" has mount.ntfs at 8% (it is an NTFS
> volume where the backup and temp files live), duplicity at 3%.
> 
> The system feels responsive, vmstat shows a bit of swap taken but
> almost no current swapping activity (this is on 8GB of RAM):
> 
> $ sudo vmstat 15
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  2 368896 164768 2604236 3340404    4    5   838   917    3   15 10  5 69 16
>  1  3 368896 167880 2608624 3332464    0    0  7207  6472 5175 10289  3  3 46 48
>  0  3 368864 159860 2623384 3325276    2    0  5750  4349 3396 6740  3  2 45 50
>  4  3 368864 157756 2625628 3320288    0    0  7125  6949 6045 11899  7  3 47 43
>  0  2 368864 182048 2607320 3333820    0    0  6651  5969 4908 9608  8  3 46 44
>  0  2 368864 157084 2598944 3346484    0    0  8594  7252 6809 12655  9  4 44 43
>  1  1 368864 177984 2564588 3355944    0    0  8325  8526 6236 12376  4  3 51 42
>  1  1 368864 173236 2570524 3355360    0    0  7991  7596 5657 11252  3  2 51 44
>  1  4 368864 176464 2583108 3332656    0    0  7234  6700 5003 9942  3  2 49 46
>  0  2 368864 180924 2596580 3312436    0    0  8194  7451 6431 12680  7  4 49 41
> 
> $ ps aux | grep [g]pg2
> laurynas  9572  0.0  0.0  27208  1768 pts/2    SL+  06:49   0:01 gpg2
> --status-fd 26 --passphrase-fd 30 --logger-fd 23 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas  9573  0.0  0.0  27208  1768 pts/2    SL+  06:49   0:00 gpg2
> --status-fd 30 --passphrase-fd 34 --logger-fd 27 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas  9574  0.0  0.0  27208  1768 pts/2    SL+  06:49   0:00 gpg2
> --status-fd 34 --passphrase-fd 38 --logger-fd 31 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas  9577  0.0  0.0  27208  1768 pts/2    SL+  06:49   0:01 gpg2
> --status-fd 38 --passphrase-fd 42 --logger-fd 35 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas 11802  0.0  0.0  27208  1768 pts/2    SL+  07:39   0:01 gpg2
> --status-fd 5 --passphrase-fd 44 --logger-fd 3 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas 11976  0.0  0.0  27208  1772 pts/2    SL+  09:04   0:01 gpg2
> --status-fd 10 --passphrase-fd 45 --logger-fd 6 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas 12676  0.0  0.0  27208  1768 pts/2    SL+  10:26   0:00 gpg2
> --status-fd 14 --passphrase-fd 45 --logger-fd 11 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas 13963  0.0  0.0  27208  1772 pts/2    SL+  11:58   0:01 gpg2
> --status-fd 17 --passphrase-fd 45 --logger-fd 15 --batch --no-tty
> --no-secmem-warning --decrypt
> laurynas 14714  0.0  0.0  27208  1772 pts/2    SL+  12:04   0:01 gpg2
> --status-fd 20 --passphrase-fd 45 --logger-fd 18 --batch --no-tty
> --no-secmem-warning --decrypt
> 
> Can you confirm that this is the unkilled GPG issue?
> 

no. the issue i describe leaves many more (like 100+) gpg processes hanging 
around. btw. running a backup creates far fewer gpg processes than restoring, 
because restoring has to patch through every incremental.
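
to rule that out you can simply count them while the restore runs, e.g.:

  # print a timestamped count of gpg2 decrypt processes once a minute
  while sleep 60; do
      echo "$(date +%T)  $(ps aux | grep -c '[g]pg2 --status-fd') gpg2 processes"
  done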

..ede/duply.net


