From: Logan Gunthorpe
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH] Added iopmem device emulation
Date: Tue, 8 Nov 2016 09:46:47 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Icedove/45.4.0

Hey,

On 08/11/16 08:58 AM, Stefan Hajnoczi wrote:
> My concern with the current implementation is that a PCI MMIO access
> invokes a synchronous blk_*() call.  That can pause vcpu execution while
> I/O is happening and therefore lead to unresponsive guests.  QEMU's
> monitor interface is also blocked during blk_*(), making it impossible to
> troubleshoot QEMU if it gets stuck due to a slow/hung I/O operation.
> 
> Device models need to use blk_aio_*() so that control is returned while
> I/O is running.  There are a few legacy devices left that use
> synchronous I/O but new devices should not use this approach.

That's fair. I wasn't aware of this and must have copied from a legacy
device. We can certainly make that change in our patch.
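
For anyone following along, here's a minimal sketch of the asynchronous
pattern Stefan describes, using QEMU's blk_aio_pwritev() interface on the
MMIO write path. The IopmemReq struct and handler names below are made up
for illustration, not taken from the actual patch; the read path needs
more care, since an MMIO read handler has to return a value:

    #include "qemu/osdep.h"
    #include "exec/hwaddr.h"
    #include "sysemu/block-backend.h"

    typedef struct IopmemReq {      /* hypothetical per-request state */
        QEMUIOVector qiov;
        uint64_t val;
    } IopmemReq;

    /* Runs from the event loop once the I/O completes; the vcpu was
     * never blocked waiting for it. */
    static void iopmem_write_cb(void *opaque, int ret)
    {
        IopmemReq *req = opaque;

        qemu_iovec_destroy(&req->qiov);
        g_free(req);
    }

    /* MemoryRegionOps .write handler for the BAR. */
    static void iopmem_bar_write(void *opaque, hwaddr addr,
                                 uint64_t val, unsigned size)
    {
        BlockBackend *blk = opaque;
        IopmemReq *req = g_new0(IopmemReq, 1);

        req->val = val;
        qemu_iovec_init(&req->qiov, 1);
        qemu_iovec_add(&req->qiov, &req->val, size);

        /* Returns immediately; iopmem_write_cb fires on completion.
         * A synchronous blk_pwrite() here would stall the vcpu. */
        blk_aio_pwritev(blk, addr, &req->qiov, 0, iopmem_write_cb, req);
    }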

> Regarding the hardware design, I think the PCI BAR approach to nvdimm is
> inefficient for virtualization because each memory load/store requires a
> guest<->host transition (vmexit + vmenter).  A DMA approach (i.e.
> message passing or descriptor rings) is more efficient because it
> requires fewer vmexits.
> 
> On real hardware the performance characteristics are different, so it
> depends on what your target market is.

The performance of the virtual device is completely unimportant. This
isn't something I'd expect anyone to use except to test drivers. On real
hardware, with real applications, DMA would almost certainly be used --
but it would be the DMA engine in another device. For example, an IB NIC
would DMA directly from the PCI BAR of the iopmem device. That completely
bypasses the CPU, so there are no loads or stores to be concerned about.
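
To make that concrete, here's a rough userspace sketch of the flow I
mean, assuming the iopmem driver exposes its BAR as a mmap()able
character device (the /dev/iopmem0 node is hypothetical) and that the
kernel allows registering such a mapping with the RDMA stack -- which
is exactly what the iopmem work is meant to enable. The libibverbs
calls themselves are the standard ones; error checking is omitted:

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        /* Map a window of the iopmem device's BAR into our address
         * space.  /dev/iopmem0 is a hypothetical device node. */
        int fd = open("/dev/iopmem0", O_RDWR);
        size_t len = 1 << 20;
        void *bar = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        struct ibv_device **devs = ibv_get_device_list(NULL);
        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Registering the BAR mapping as a memory region lets the NIC
         * DMA straight to/from the device's memory; the CPU never
         * touches the data path. */
        struct ibv_mr *mr = ibv_reg_mr(pd, bar, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* ... set up a QP and post RDMA reads/writes against mr ... */

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        munmap(bar, len);
        close(fd);
        return 0;
    }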

Thanks,

Logan


