Re: [virtio-dev] Re: guest / host buffer sharing ...
From: Dr. David Alan Gilbert
Subject: Re: [virtio-dev] Re: guest / host buffer sharing ...
Date: Thu, 7 Nov 2019 11:16:18 +0000
User-agent: Mutt/1.12.1 (2019-06-15)
* Gerd Hoffmann (address@hidden) wrote:
> Hi,
>
> > > This is not about host memory, buffers are in guest ram, everything else
> > > would make sharing those buffers between drivers inside the guest (as
> > > dma-buf) quite difficult.
> >
> > Given it's just guest memory, can the guest just have a virt queue on
> > which it places pointers to the memory it wants to share as elements in
> > the queue?
>
> Well, good question. I'm actually wondering what the best approach is
> to handle long-living, large buffers in virtio ...
>
> virtio-blk (and others) use the approach you describe. They put a
> pointer to the io request header, followed by pointer(s) to the io
> buffers, directly into the virtqueue. That works great for storage,
> for example. The queue entries are tagged as "in" or "out" (driver to
> device or vice versa), so the virtio transport can set up dma mappings
> accordingly, or even transparently copy data if needed.
>
> For long-living buffers where data can potentially flow both ways this
> model doesn't fit very well though. So what virtio-gpu does instead is
> transfer the scatter list as virtio payload. That does feel a bit
> unclean, as it doesn't really fit the virtio architecture. It assumes
> the host can directly access guest memory, for example (which is usually
> the case but explicitly not required by virtio). It also requires
> quirks in virtio-gpu to handle VIRTIO_F_IOMMU_PLATFORM properly, which
> in theory should be handled fully transparently by the virtio-pci
> transport.
>
> We could instead have a "create-buffer" command which adds the buffer
> pointers as elements to the virtqueue, as you describe, then simply
> continue using the buffer even after the "create-buffer" command has
> completed. That isn't exactly clean either: it would likewise assume
> direct access to guest memory, and it would likewise need quirks for
> VIRTIO_F_IOMMU_PLATFORM, as the virtio-pci transport tears down the dma
> mappings for the virtqueue entries after command completion.
>
> Comments, suggestions, ideas?
What about not completing the command while the device is using the
memory?
Dave
> cheers,
> Gerd
>
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK