qemu-devel

Re: [PATCH] virtio-pci: fix up config interrupt handling


From: Cédric Le Goater
Subject: Re: [PATCH] virtio-pci: fix up config interrupt handling
Date: Mon, 10 Jan 2022 08:20:48 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.3.0

On 1/9/22 18:49, Michael S. Tsirkin wrote:
> Fixes a couple of issues with irqfd use by config interrupt:
> - Rearrange initialization so cleanup happens in the reverse order
> - Don't use irqfd for config when not in use for data path
>
> I am not sure this is a complete fix though: I think we
> are better off limiting the effect to vdpa devices
> with config interrupt support. Or even bypass irqfd
> for config completely and inject into KVM using ioctl?
> The advantage would be less FDs used.
> This would mean mostly reverting the patchset though.
>
> Fixes: d5d24d859c ("virtio-pci: add support for configure interrupt")
> Cc: "Cindy Lu" <lulu@redhat.com>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
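
The ioctl route mentioned above could look roughly like the sketch below. This is purely an illustration, not part of this patch: the wrapper name is made up, it reuses existing helpers (virtio_bus_get_device(), msix_get_message(), kvm_irqchip_send_msi(), msix_notify()), and it skips the mask/pending handling a real implementation would need.

/*
 * Hypothetical sketch, not part of the patch: deliver the config
 * interrupt through the KVM_SIGNAL_MSI ioctl path instead of
 * reserving an irqfd/virq for it, falling back to the userspace
 * MSI-X notify when there is no in-kernel irqchip.
 */
static void virtio_pci_inject_config_msi(VirtIOPCIProxy *proxy)
{
    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
    uint16_t vector = vdev->config_vector;

    if (vector == VIRTIO_NO_VECTOR || !msix_enabled(&proxy->pci_dev)) {
        return;
    }
    if (kvm_irqchip_in_kernel()) {
        /* One-shot injection; no persistent eventfd/virq is consumed. */
        kvm_irqchip_send_msi(kvm_state, msix_get_message(&proxy->pci_dev, vector));
    } else {
        msix_notify(&proxy->pci_dev, vector);
    }
}

The upside would indeed be fewer file descriptors; the trade-off is an ioctl on every config change notification instead of an eventfd write.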

On a KVM guest with vhost, I am still seeing an issue at reboot :/

../hw/pci/msix.c:622: msix_unset_vector_notifiers: Assertion `dev->msix_vector_use_notifier && dev->msix_vector_release_notifier'
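
(For reference, that assertion is the guard at the top of msix_unset_vector_notifiers() in hw/pci/msix.c; an abridged sketch, with the per-vector teardown elided:)

/* Abridged sketch of hw/pci/msix.c; per-vector handling elided. */
void msix_unset_vector_notifiers(PCIDevice *dev)
{
    /* Trips when unset runs without a matching earlier
     * msix_set_vector_notifiers(), e.g. a double teardown or a
     * failed setup path that never installed the notifiers. */
    assert(dev->msix_vector_use_notifier &&
           dev->msix_vector_release_notifier);

    /* ... unmask/release handling for each vector elided ... */

    dev->msix_vector_use_notifier = NULL;
    dev->msix_vector_release_notifier = NULL;
}

So the reboot path apparently reaches this either a second time, or after a setup that bailed out before the notifiers were installed.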


C.

> ---
>  hw/virtio/virtio-pci.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 98fb5493ae..b77cd69f97 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -1130,15 +1130,15 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>              proxy->vector_irqfd =
>                  g_malloc0(sizeof(*proxy->vector_irqfd) *
>                            msix_nr_vectors_allocated(&proxy->pci_dev));
> +            r = kvm_virtio_pci_vector_config_use(proxy);
> +            if (r < 0) {
> +                goto config_error;
> +            }
>              r = kvm_virtio_pci_vector_use(proxy, nvqs);
>              if (r < 0) {
>                  goto config_assign_error;
>              }
>          }
> -        r = kvm_virtio_pci_vector_config_use(proxy);
> -        if (r < 0) {
> -            goto config_error;
> -        }
>          r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
>                                        virtio_pci_vector_mask,
>                                        virtio_pci_vector_poll);
> @@ -1155,7 +1155,9 @@ notifiers_error:
>          kvm_virtio_pci_vector_release(proxy, nvqs);
>      }
>  config_error:
> -    kvm_virtio_pci_vector_config_release(proxy);
> +    if (with_irqfd) {
> +        kvm_virtio_pci_vector_config_release(proxy);
> +    }
>  config_assign_error:
>      virtio_pci_set_guest_notifier(d, VIRTIO_CONFIG_IRQ_IDX, !assign,
>                                    with_irqfd);
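
As a side note on the "cleanup happens in the reverse order" point in the commit log, the error-label structure these hunks restore has the general shape below. The acquire_*/release_* names are placeholders for illustration, not QEMU functions; each label undoes only the steps that succeeded before the failure point.

#include <stdio.h>

/* Placeholder setup/teardown steps, purely for illustration. */
static int  acquire_a(void) { puts("acquire A"); return 0; }
static int  acquire_b(void) { puts("acquire B"); return 0; }
static int  acquire_c(void) { puts("acquire C"); return -1; } /* simulated failure */
static void release_a(void) { puts("release A"); }
static void release_b(void) { puts("release B"); }

/* Acquire A, then B, then C; on failure, release whatever was
 * already acquired, in reverse order. */
static int setup_example(void)
{
    int r;

    r = acquire_a();
    if (r < 0) {
        goto err_a;
    }
    r = acquire_b();
    if (r < 0) {
        goto err_b;
    }
    r = acquire_c();
    if (r < 0) {
        goto err_c;
    }
    return 0;

err_c:
    release_b();
err_b:
    release_a();
err_a:
    return r;
}

int main(void)
{
    /* With acquire_c() failing, this releases B and then A. */
    return setup_example() < 0 ? 1 : 0;
}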




