On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
Michael S. Tsirkin wrote:
On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
Write Results:
==============
I see degraded IO performance when doing sequential IO write
tests with vhost-blk compared to virtio-blk.
# time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
I get ~110MB/sec with virtio-blk, but only ~60MB/sec with
vhost-blk. Wondering why?
Try looking at the number of interrupts and/or the number of exits.
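For example, something along these lines should show the difference
(assuming the guest disk is vda and debugfs is mounted on the host;
the exact counter names may vary):

    # guest: snapshot virtio-blk interrupt counts before and after the dd run
    grep virtio /proc/interrupts
    # host: kvm exit counters exported via debugfs
    grep . /sys/kernel/debug/kvm/*exits*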
I checked interrupts and IO exits - there is no significant difference
between the vhost-blk and virtio-blk scenarios.
It could also be that you are overrunning some queue.
I don't see any exit mitigation strategy in your patch:
when there are already lots of requests in a queue, it's usually
a good idea to disable notifications and poll the
queue as requests complete. That could help performance.
Do you mean polling the eventfd for new requests instead of waiting for
new notifications?
Where do you do that in the vhost-net code?
vhost_disable_notify does this.
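Roughly, the handler loop looks like this (a simplified sketch of the
pattern, not the literal vhost-net code; exact signatures may differ):

    /* Kicks from the guest are masked while the handler drains the ring;
     * notifications are re-enabled only when the ring looks empty, and a
     * final recheck closes the race with a buffer added in between. */
    vhost_disable_notify(vq);
    for (;;) {
            head = vhost_get_vq_desc(dev, vq, vq->iov, ARRAY_SIZE(vq->iov),
                                     &out, &in, NULL, NULL);
            if (head == vq->num) {
                    /* ring looks empty: re-enable kicks, then recheck */
                    if (unlikely(vhost_enable_notify(vq))) {
                            vhost_disable_notify(vq);
                            continue;
                    }
                    break;
            }
            /* ... submit the request described by head ... */
            vhost_add_used_and_signal(dev, vq, head, 0);
    }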
Unlike a network socket, the file we are dealing with has no ->poll
support, so I can't poll for data. Also, the issue I am having is on
the write() side.
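For reference, vhost's poll helper hooks the backend file's ->poll method
to register its wait queue, roughly like this (paraphrased from memory,
not the literal code), which is why a plain file doesn't fit:

    /* vhost_poll_start(): only works for backends such as sockets or
     * eventfds that implement f_op->poll; a regular file has nothing
     * useful to report here. */
    mask = file->f_op->poll(file, &poll->table);
    if (mask)
            vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);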
Not sure I understand.
I looked at it some more - I see 512K write requests on the virtio
queue in both the vhost-blk and virtio-blk cases. Both qemu and vhost
do synchronous writes to the page cache (there is no write batching in
qemu affecting this case). I am still puzzled why virtio-blk
outperforms vhost-blk.
Thanks,
Badari
If you say the number of requests is the same, we are left with:
- requests are smaller for some reason?
- something is causing retries?
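One way to check both in the guest (assuming blktrace and sysstat are
installed) is to watch the average request size and the per-request
trace during the dd run in each setup:

    # average request size (in sectors) and per-device rates, once a second
    iostat -x 1
    # finer-grained: trace /dev/vda for 10 seconds while dd is running
    blktrace -d /dev/vda -w 10 -o vda_trace
    blkparse -i vda_trace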