
Re: [Qemu-devel] iSCSI support for QEMU


From: ronnie sahlberg
Subject: Re: [Qemu-devel] iSCSI support for QEMU
Date: Thu, 21 Apr 2011 19:47:27 +1000

Christoph,

I think you misread my test.
My test is pure reading:

sudo time dd if=/dev/sda of=/dev/null bs=1M

There are no writes involved in this test at all, only a huge number
of READ10 commands being sent to the target, or, in the case of QEMU
on top of an open-iscsi-mounted LUN, sometimes being served out of the
page cache of the host.
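
(If you want to rule host caching in or out explicitly, one way to do
it, assuming a Linux host, is to drop the page cache before re-running
the read test:

sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo time dd if=/dev/sda of=/dev/null bs=1M

With the cache dropped, every READ10 has to go across the wire, so any
remaining difference between the two setups cannot be a caching
artifact.)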


Since open-iscsi-mounted LUNs by default perform so very poorly
compared to libiscsi, I assume that there are very few blocks being
served out of the cache of the host.
This is based on the fact that a block served out of cache would have
significantly lower access latency, by several orders of magnitude,
than a block that needs to be fetched across a 1GbE network.
Since open-iscsi performs so much worse than libiscsi in this case, I
can only speculate that very few blocks are delivered by cache hits.
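
As a rough back-of-envelope comparison (these figures are generic
assumptions, not measurements from my setup):

page cache hit:          ~ a few microseconds
1GbE round trip:         ~ 0.1 - 0.5 milliseconds
1 MiB transfer on 1GbE:  1 MiB / ~117 MB/s  ~=  9 milliseconds

So a block served from the host cache comes back several orders of
magnitude faster than one that has to be fetched from the target.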




I have absolutely no idea why QEMU+open-iscsi would perform so
much better for a read-intensive workload like this when setting
cache=none,aio=native.  That is for the QEMU developers to explain.


Maybe doing READ10 through open-iscsi is very expensive? Maybe
something else in the Linux kernel makes reads very expensive unless
you use "cache=none,aio=native"?
Who knows?
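
For reference, the two open-iscsi-based configurations being compared
look schematically like this (/dev/sdX is a placeholder for whatever
device node the open-iscsi LUN got):

# default cache mode (writethrough), thread-pool AIO:
qemu-system-x86_64 -drive file=/dev/sdX,format=raw ...

# bypassing the host page cache, using Linux native AIO:
qemu-system-x86_64 -drive file=/dev/sdX,format=raw,cache=none,aio=native ...

cache=none opens the device with O_DIRECT and aio=native submits I/O
through the kernel's native AIO interface instead of a userspace
thread pool, so the two invocations take quite different I/O paths
through the host kernel.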

I have no idea, other than that without "cache=none,aio=native", QEMU
performance for read-intensive tasks is significantly worse than that
of QEMU doing the exact same reads using libiscsi.


I really don't care why QEMU+open-iscsi performs so badly either; that
is of very little interest to me. As long as libiscsi is not
significantly worse than open-iscsi, I care very little about why.


regards
ronnie sahlberg


On Thu, Apr 21, 2011 at 7:09 PM, Christoph Hellwig <address@hidden> wrote:
>> In my patch, there are NO data integrity issues.
>> Data is sent out on the wire immediately as the guest issues the write.
>> Once the guest issues a flush call, the flush call will not terminate
>> until the SYNCCACHE10 task has completed.
>
> No guest will ever issue a cache flush, as we claim to be WCE=0 by default.
> Now if your target has WCE=1 it will cache data internally, and your
> iSCSI initiator will never flush it out to disk.
>
> We only claim WCE=1 to the guest if cache=writeback or cache=none is
> set.  So, ignoring the issue of having a cache on the initiator side,
> you must implement stable writes for the default cache=writethrough
> behaviour by either setting the FUA bit on your writes, or doing
> a cache flush after every write in case the target does not support FUA.
>
>
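
(For what it is worth, the target-side write cache setting and an
explicit flush can both be driven from the host with standard tools,
assuming sdparm and sg3_utils are installed:

# does the target report a write-back cache? (WCE bit, Caching mode page)
sdparm --get=WCE /dev/sdX

# send an explicit SYNCHRONIZE CACHE(10) to the device
sg_sync /dev/sdX

If the target reports WCE=1, writes it has acknowledged may still sit
in its volatile cache until a SYNCHRONIZE CACHE or a FUA write pushes
them to media, which is exactly the integrity concern raised above.)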


