From: Stefan Hajnoczi
Subject: [Qemu-devel] [RFC] memory: emulate ioeventfd
Date: Thu, 23 Jul 2015 11:58:47 +0100

The ioeventfd mechanism is used by vhost, dataplane, and virtio-pci to
turn guest MMIO/PIO writes into eventfd file descriptor events.  This
allows arbitrary threads to be notified when the guest writes to a
specific MMIO/PIO address.
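
For illustration, here is roughly what the producer side looks like with the
existing memory API (this is not part of this patch; the region, offset, and
doorbell value below are made up):

    /* Sketch: a device registers an ioeventfd on its MMIO region so that a
     * 4-byte guest write of the value 1 at offset 0x10 signals "notifier".
     */
    static EventNotifier notifier;

    static void example_register_doorbell(MemoryRegion *mr)
    {
        event_notifier_init(&notifier, 0);
        memory_region_add_eventfd(mr, 0x10, 4, true, 1, &notifier);
    }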

qtest and TCG do not support ioeventfd because memory writes are not
checked against registered ioeventfds in QEMU.  This patch implements
the check in memory_region_dispatch_write() so qtest can use ioeventfd.
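
Assuming a notifier registered as in the sketch above, a waiting thread or a
test could then observe TCG/qtest writes like this (hypothetical usage, not
from this patch):

    /* Once the dispatch path signals the notifier, its eventfd becomes
     * readable; a poller can also check and clear it directly:
     */
    if (event_notifier_test_and_clear(&notifier)) {
        /* the guest wrote the matching value to the doorbell offset */
    }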

This patch is suboptimal because the -machine accel=kvm case now
duplicates ioeventfd matching in QEMU userspace.  If kvm.ko didn't find
a match and we exited to userspace, then matching again in QEMU
userspace will fail too, so the extra scan is pure overhead there.
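
One possible refinement, which this patch does not attempt and which I am only
sketching as an assumption, would be to skip the userspace scan when in-kernel
matching has already run:

    /* Hypothetical guard in memory_region_dispatch_write(): for
     * guest-initiated writes under accel=kvm, kvm.ko already tried to match
     * this write, so re-scanning in userspace is redundant.
     */
    if (!kvm_enabled() || !kvm_eventfds_enabled()) {
        if (memory_region_dispatch_write_ioeventfds(mr, addr, data, size, attrs)) {
            return MEMTX_OK;
        }
    }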

Signed-off-by: Stefan Hajnoczi <address@hidden>
---
 memory.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

Marc-André asked about this patch so I'm sending it now.

This is a first step to making qtest work with vhost.  I haven't tested it.

I'm not sure about irqfd either.  Some work may be needed to make irqfd
work, but I think QEMU already implements that (at least in the virtio-pci
case).

diff --git a/memory.c b/memory.c
index 0acebb1..407fec4 100644
--- a/memory.c
+++ b/memory.c
@@ -1146,6 +1146,32 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     return r;
 }
 
+/* Return true if an ioeventfd was signalled */
+static bool memory_region_dispatch_write_ioeventfds(MemoryRegion *mr,
+                                                    hwaddr addr,
+                                                    uint64_t data,
+                                                    unsigned size,
+                                                    MemTxAttrs attrs)
+{
+    MemoryRegionIoeventfd ioeventfd = {
+        .addr = addrrange_make(int128_make64(addr), int128_make64(size)),
+        .data = data,
+    };
+    unsigned i;
+
+    for (i = 0; i < mr->ioeventfd_nb; i++) {
+        ioeventfd.match_data = mr->ioeventfds[i].match_data;
+        ioeventfd.e = mr->ioeventfds[i].e;
+
+        if (memory_region_ioeventfd_equal(ioeventfd, mr->ioeventfds[i])) {
+            event_notifier_set(ioeventfd.e);
+            return true;
+        }
+    }
+
+    return false;
+}
+
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
@@ -1159,6 +1185,10 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
 
     adjust_endianness(mr, &data, size);
 
+    if (memory_region_dispatch_write_ioeventfds(mr, addr, data, size, attrs)) {
+        return MEMTX_OK;
+    }
+
     if (mr->ops->write) {
         return access_with_adjusted_size(addr, &data, size,
                                          mr->ops->impl.min_access_size,
-- 
2.4.3



