From: Josh Durgin
Subject: Re: [Qemu-devel] qemu vm big network latency when met heavy io
Date: Wed, 15 Jan 2014 18:25:58 -0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0
On 01/15/2014 01:40 AM, the original poster wrote:
Hi Josh, here are the results:

1. cache mode 'none' in the XML, 'rbd cache = true' unset in ceph.conf: the network latency issue does not show.
2. cache mode 'writethrough' in the XML, 'rbd cache = true' unset in ceph.conf: the issue does not show.
3. cache mode 'writeback' in the XML, 'rbd cache = true' set in ceph.conf: the issue shows.
4. cache mode 'writeback' in the XML, 'rbd cache = true' unset in ceph.conf: the issue shows.

According to the above, there must be something wrong in librbd's write cache. This must be a bug in librbd.
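For reference, the cache mode in the tests above is the cache attribute of the disk driver element in the libvirt domain XML. A minimal rbd disk sketch (the pool/image name, monitor host, and target device are illustrative placeholders, not from the thread):

```xml
<disk type='network' device='disk'>
  <!-- cache='writeback' is the mode that triggers the latency issue above;
       'none' and 'writethrough' did not -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```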
It could still be the rbd driver in QEMU, but it certainly only happens when writeback caching is used.

Could you verify that librbd's asynchronous flush is being used, by making sure rbd_aio_flush appears in the output of 'strings /path/to/qemu/binary | grep rbd_aio'?

If qemu isn't using rbd_aio_flush but the synchronous version, rbd_flush, that's the cause of the problem, and you just need to recompile qemu. Otherwise, it's a new bug, and we can try to figure out what is taking up the time by looking at a log from librbd. Can you add this to the [client.libvirt] section of ceph.conf and attach the generated logs to a new issue on http://tracker.ceph.com:
debug ms = 1
debug objectcacher = 20
debug rbd = 20
log file = /path/writeable/by/user/running/qemu.$pid.log

Thanks,
Josh
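Josh's symbol check can be scripted. The sketch below runs against a stand-in file so it is self-contained; in practice, point QEMU_BIN at your actual qemu binary (e.g. /usr/bin/qemu-system-x86_64 — that path is an assumption, not from the thread):

```shell
# Stand-in for a real qemu binary, for demonstration only.
QEMU_BIN=/tmp/fake_qemu_binary
printf 'rbd_aio_write\0rbd_aio_flush\0' > "$QEMU_BIN"

# Look for the asynchronous flush symbol in the binary's strings.
if strings "$QEMU_BIN" | grep -q rbd_aio_flush; then
    echo "async flush present: qemu references librbd's rbd_aio_flush"
else
    echo "rbd_aio_flush missing: qemu uses the synchronous rbd_flush; recompile qemu"
fi
```

If the first branch fires, writeback latency is a new bug worth a tracker issue with the debug logs above; if the second fires, rebuilding qemu against a librbd that provides rbd_aio_flush is the fix Josh describes.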