Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?


From: Chris Friesen
Subject: Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?
Date: Fri, 18 Jul 2014 10:46:27 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 07/18/2014 10:30 AM, Andrey Korolyov wrote:
> On Fri, Jul 18, 2014 at 8:26 PM, Chris Friesen <address@hidden> wrote:
>> On 07/18/2014 09:54 AM, Andrey Korolyov wrote:

>>> On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen <address@hidden> wrote:

>>>> Hi,
>>>>
>>>> I've recently run up against an interesting issue where I had a number of
>>>> guests running, and when I started doing heavy disk I/O on a virtio disk
>>>> (backed via ceph rbd) the memory consumption spiked and triggered the
>>>> OOM-killer.
>>>>
>>>> I want to reserve some memory for I/O, but I don't know how much it can
>>>> use in the worst case.
>>>>
>>>> Is there a limit on the number of in-flight I/O operations?  (Preferably
>>>> as a configurable option, but even hard-coded would be good to know as well.)
>>>>
>>>> Thanks,
>>>> Chris


>>> Hi, are you using per-VM cgroups, or did this happen on a bare system?
>>> The Ceph backend has a writeback cache setting; maybe you are hitting it,
>>> but then it would have to be set enormously large.


>> This is without cgroups.  (I think we had tried cgroups and ran into some
>> issues.)  Would cgroups even help with iSCSI/rbd/etc.?
>>
>> The "-drive" parameter in qemu was using "cache=none" for the VMs in
>> question.  But I'm assuming it keeps the buffers around until they are acked
>> by the far end in order to be able to handle retries.
>>
>> Chris



> This is probably a bug even if legitimate mechanisms are causing it:
> the peak memory footprint of an emulator should be predictable. I've never
> hit something like this on any kind of workload; I'll try to reproduce it
> myself.

The drive parameter would have looked something like this:

-drive file=rbd:volumes/volume-7c1427d4-0758-4384-9431-653aab24a690:auth_supported=none:mon_host=192.168.205.3\:6789\;192.168.205.4\:6789\;192.168.205.5\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=7c1427d4-0758-4384-9431-653aab24a690,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
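
(A side note on the writeback cache theory: as far as I can tell, with cache=none the qemu rbd driver also tells librbd to disable its cache, and since the rbd: filename syntax above just passes key=value pairs through as librbd config options, the cache can be pinned off explicitly to rule it out, e.g. by appending

  :rbd_cache=false

to the option string alongside auth_supported and mon_host. I haven't confirmed that this makes any difference to the RSS growth.)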

When we started running dbench in the guest, the qemu RSS jumped significantly. Also, it stayed at the higher value even after the test was stopped, which is not ideal behaviour.
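
(For anyone trying to reproduce this: a quick way to watch the RSS is something along the lines of

  watch -n 5 'grep VmRSS /proc/$(pidof qemu-system-x86_64)/status'

adjusting the binary name and interval to suit; with more than one VM you would want the specific pid rather than pidof.)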

Chris
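
P.S. One partial workaround that comes to mind is qemu's per-drive I/O throttling (the iops=/bps= suboptions of -drive, which I believe have been available since qemu 1.1). That caps the request rate rather than the number of requests in flight, but it should at least bound how quickly a guest can pile up outstanding I/O. Roughly, with arbitrary numbers and the same rbd string as above:

  -drive file=rbd:...,if=none,id=drive-virtio-disk0,format=raw,cache=none,iops=200,bps=33554432

I haven't tested whether that actually keeps the RSS under control.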



