Re: [Qemu-block] [PATCH 3/4] util: Add VFIO helper library
From: Fam Zheng
Subject: Re: [Qemu-block] [PATCH 3/4] util: Add VFIO helper library
Date: Thu, 22 Dec 2016 00:19:34 +0800
User-agent: Mutt/1.7.1 (2016-10-04)
On Wed, 12/21 16:46, Paolo Bonzini wrote:
>
>
> On 20/12/2016 17:31, Fam Zheng wrote:
> > + hbitmap_iter_init(&iter, s->free_chunks, 1);
> > + if (contiguous) {
> > + while (true) {
> > + bool satisfy = true;
> > + next = hbitmap_iter_next(&iter);
> > + if (next < 0) {
> > + return NULL;
> > + }
> > + for (i = 1; i < chunks; i++) {
> > + if (!hbitmap_get(s->free_chunks, next + i)) {
> > + satisfy = false;
> > + break;
> > + }
> > + }
> > + if (satisfy) {
> > + break;
> > + }
> > + }
> > + hbitmap_reset(s->free_chunks, next, chunks);
> > + r = g_new(IOVARange, 1);
> > + r->iova = next * pages_per_chunk * getpagesize();
> > + r->nr_pages = pages;
> > + QSIMPLEQ_INSERT_TAIL(&m.iova_list, r, next);
> > + } else {
> > + next = hbitmap_iter_next(&iter);
> > + while (pages) {
> > + uint64_t chunk;
> > + if (next < 0) {
> > + hbitmap_iter_init(&iter, s->free_chunks, 1);
> > + next = hbitmap_iter_next(&iter);
> > + }
> > + assert(next >= 0);
> > + chunk = next;
> > + DPRINTF("using chunk %ld\n", chunk);
> > + next = hbitmap_iter_next(&iter);
> > + hbitmap_reset(s->free_chunks, chunk, 1);
> > + if (r && r->iova + r->nr_pages == chunk * pages_per_chunk) {
> > + r->nr_pages += MIN(pages, pages_per_chunk);
> > + } else {
> > + r = g_new(IOVARange, 1);
> > + r->iova = chunk * pages_per_chunk * getpagesize();
> > + r->nr_pages = MIN(pages, pages_per_chunk);
> > + QSIMPLEQ_INSERT_TAIL(&m.iova_list, r, next);
> > + }
> > + pages -= MIN(pages, pages_per_chunk);
> > + }
>
> I'm not sure HBitmap tracking is useful. If we exhaust the IOVA space,
> we can just throw everything away with a single VFIO_IOMMU_UNMAP_DMA.
> Then replay the RAMBlockNotifier mappings (we need to add this anyway
> for hotplug support) and keep on mapping lazily whatever comes later.
It's clever! It'd be a bit more complicated than that, though. Long-lived
mappings, like the queues in block/nvme.c, have to survive the reset, and if we
already ensure that, RAM blocks can be preserved similarly; but bounce buffers
can indeed be handled that way. I still need to think about how to make sure
none of the invalidated IOVA addresses is still in use by another in-flight
request.
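Roughly, the accounting could look like this (only a sketch; IOVAState,
temp_iova_put() and the qemu_vfio_reset_temp_window() helper are made-up names,
not from this series). Each request holding a temporary IOVA takes a reference,
and the wholesale invalidation is deferred until the last user drops its
reference:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        int      temp_iova_users; /* in-flight users of temporary mappings */
        bool     reset_pending;   /* window exhausted; reset once drained  */
        uint64_t next_temp_iova;  /* bump allocator inside the window      */
    } IOVAState;

    static void temp_iova_put(IOVAState *s, int container_fd)
    {
        assert(s->temp_iova_users > 0);
        if (--s->temp_iova_users == 0 && s->reset_pending) {
            /* No request can still reference an invalidated IOVA, so
             * unmapping the whole temporary window is safe now. */
            qemu_vfio_reset_temp_window(container_fd); /* sketched below */
            s->next_temp_iova = QEMU_VFIO_TEMP_IOVA_BASE;
            s->reset_pending = false;
        }
    }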
Also, I wonder how expensive the huge VFIO_IOMMU_UNMAP_DMA would be. In the
worst case, the "throwaway" IOVAs can be limited to a small range.
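With the throwaway range reserved up front, the reset itself becomes a single
VFIO_IOMMU_UNMAP_DMA over just that window, something like this (again only a
sketch; the QEMU_VFIO_TEMP_IOVA_* bounds are hypothetical placeholders):

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Hypothetical bounds of the sub-range reserved for short-lived
     * bounce-buffer mappings; permanent mappings (queues, RAM blocks)
     * live outside it and are never torn down. */
    #define QEMU_VFIO_TEMP_IOVA_BASE 0x100000000ULL
    #define QEMU_VFIO_TEMP_IOVA_SIZE 0x100000000ULL

    static int qemu_vfio_reset_temp_window(int container_fd)
    {
        struct vfio_iommu_type1_dma_unmap unmap = {
            .argsz = sizeof(unmap),
            .iova  = QEMU_VFIO_TEMP_IOVA_BASE,
            .size  = QEMU_VFIO_TEMP_IOVA_SIZE,
        };

        /* Type1 removes every mapping fully contained in
         * [iova, iova + size), so one ioctl throws away all the
         * temporary mappings at once. */
        if (ioctl(container_fd, VFIO_IOMMU_UNMAP_DMA, &unmap) < 0) {
            return -errno;
        }
        return 0;
    }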
Fam
>
> Thanks,
>
> Paolo