Re: [PATCH v6 09/13] block: add BlockRAMRegistrar


From: David Hildenbrand
Subject: Re: [PATCH v6 09/13] block: add BlockRAMRegistrar
Date: Thu, 13 Oct 2022 20:07:20 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.3.1

On 13.10.22 20:05, Stefan Hajnoczi wrote:
On Fri, Oct 07, 2022 at 12:51:21PM +0200, Stefano Garzarella wrote:
On Thu, Oct 06, 2022 at 05:35:03PM -0400, Stefan Hajnoczi wrote:
Emulated devices and other BlockBackend users wishing to take advantage
of blk_register_buf() all have the same repetitive job: register
RAMBlocks with the BlockBackend using RAMBlockNotifier.

Add a BlockRAMRegistrar API to do this. A later commit will use this
from hw/block/virtio-blk.c.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
MAINTAINERS                          |  1 +
include/sysemu/block-ram-registrar.h | 37 ++++++++++++++++++
block/block-ram-registrar.c          | 58 ++++++++++++++++++++++++++++
block/meson.build                    |  1 +
4 files changed, 97 insertions(+)
create mode 100644 include/sysemu/block-ram-registrar.h
create mode 100644 block/block-ram-registrar.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 0dcae6168a..91aed2cdc7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2498,6 +2498,7 @@ F: block*
F: block/
F: hw/block/
F: include/block/
+F: include/sysemu/block-*.h
F: qemu-img*
F: docs/tools/qemu-img.rst
F: qemu-io*
diff --git a/include/sysemu/block-ram-registrar.h b/include/sysemu/block-ram-registrar.h
new file mode 100644
index 0000000000..d8b2f7942b
--- /dev/null
+++ b/include/sysemu/block-ram-registrar.h
@@ -0,0 +1,37 @@
+/*
+ * BlockBackend RAM Registrar
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef BLOCK_RAM_REGISTRAR_H
+#define BLOCK_RAM_REGISTRAR_H
+
+#include "exec/ramlist.h"
+
+/**
+ * struct BlockRAMRegistrar:
+ *
+ * Keeps RAMBlock memory registered with a BlockBackend using
+ * blk_register_buf() including hotplugged memory.
+ *
+ * Emulated devices or other BlockBackend users initialize a BlockRAMRegistrar
+ * with blk_ram_registrar_init() before submitting I/O requests with the
+ * BDRV_REQ_REGISTERED_BUF flag set.
+ */
+typedef struct {
+    BlockBackend *blk;
+    RAMBlockNotifier notifier;
+    bool ok;
+} BlockRAMRegistrar;
+
+void blk_ram_registrar_init(BlockRAMRegistrar *r, BlockBackend *blk);
+void blk_ram_registrar_destroy(BlockRAMRegistrar *r);
+
+/* Have all RAMBlocks been registered successfully? */
+static inline bool blk_ram_registrar_ok(BlockRAMRegistrar *r)
+{
+    return r->ok;
+}
+
+#endif /* BLOCK_RAM_REGISTRAR_H */
diff --git a/block/block-ram-registrar.c b/block/block-ram-registrar.c
new file mode 100644
index 0000000000..25dbafa789
--- /dev/null
+++ b/block/block-ram-registrar.c
@@ -0,0 +1,58 @@
+/*
+ * BlockBackend RAM Registrar
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include "sysemu/block-backend.h"
+#include "sysemu/block-ram-registrar.h"
+#include "qapi/error.h"
+
+static void ram_block_added(RAMBlockNotifier *n, void *host, size_t size,
+                            size_t max_size)
+{
+    BlockRAMRegistrar *r = container_of(n, BlockRAMRegistrar, notifier);
+    Error *err = NULL;
+
+    if (!r->ok) {
+        return; /* don't try again if we've already failed */
+    }
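
Taken together with the header above, the intended usage pattern comes out
roughly as the sketch below. This is illustrative only: MyDevice and the
my_*() functions are hypothetical names, not code from this series; only
the blk_ram_registrar_*() API, BDRV_REQ_REGISTERED_BUF, and
blk_aio_preadv() come from QEMU itself.

#include "qemu/osdep.h"
#include "qemu/iov.h"
#include "sysemu/block-backend.h"
#include "sysemu/block-ram-registrar.h"

/* Hypothetical device state embedding the registrar. */
typedef struct {
    BlockBackend *blk;
    BlockRAMRegistrar blk_ram_registrar;
} MyDevice;

static void my_read_complete(void *opaque, int ret)
{
    /* Hypothetical completion callback; handle ret here. */
}

static void my_device_realize(MyDevice *s)
{
    /* Registers existing RAMBlocks and keeps tracking hotplugged ones. */
    blk_ram_registrar_init(&s->blk_ram_registrar, s->blk);
}

static void my_device_submit_read(MyDevice *s, int64_t offset,
                                  QEMUIOVector *qiov)
{
    /*
     * Request the zero-copy path only if every RAMBlock was registered
     * successfully; otherwise submit a plain request.
     */
    BdrvRequestFlags flags = blk_ram_registrar_ok(&s->blk_ram_registrar)
                             ? BDRV_REQ_REGISTERED_BUF : 0;

    blk_aio_preadv(s->blk, offset, qiov, flags, my_read_complete, s);
}

static void my_device_unrealize(MyDevice *s)
{
    blk_ram_registrar_destroy(&s->blk_ram_registrar);
}

The point of the ok flag is graceful degradation: if any registration
fails, blk_ram_registrar_ok() returns false and the device can simply stop
asking for the registered-buf path, as the discussion below confirms.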

The segfault I was seeing is gone, though now I have a doubt.

Here we basically just report the error and prevent new regions from being
registered. The VM still starts, though, and the blkio driver works as if
nothing had happened.

For drivers that require all regions to be registered, this can cause
problems, so should we stop the VM on failure, or put the blkio driver into
a state where I/Os are not submitted?

Or maybe it's okay as it is, and the device will somehow report the error
when it can't find the mapped region?

The BlockDriver supports a fast path for registered bufs but also has a
slow path that uses bounce buffers. When buffer registration fails it
falls back to the bounce buffers; that's why the guest still boots.
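
In sketch form, that fallback looks something like the following. This is
illustrative only, not the actual libblkio driver code: the Driver type
and the driver_*() helpers are made-up names.

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct Driver Driver;  /* hypothetical driver state */

/* Hypothetical helpers: submit I/O on registered memory and manage a
 * pool of pre-registered bounce buffers. */
int driver_submit(Driver *d, void *buf, size_t len, uint64_t offset);
void *driver_alloc_bounce(Driver *d, size_t len);
void driver_free_bounce(Driver *d, void *buf, size_t len);

/* registered == true when BDRV_REQ_REGISTERED_BUF was set on the request */
static int submit_read(Driver *d, void *buf, size_t len, uint64_t offset,
                       bool registered)
{
    if (registered) {
        /* Fast path: buf already lies in registered memory, use it directly. */
        return driver_submit(d, buf, len, offset);
    }

    /* Slow path: go through a pre-registered bounce buffer. */
    void *bounce = driver_alloc_bounce(d, len);
    if (!bounce) {
        return -ENOMEM;
    }

    int ret = driver_submit(d, bounce, len, offset);
    if (ret == 0) {
        memcpy(buf, bounce, len); /* copy the read data back to the caller */
    }
    driver_free_bounce(d, bounce, len);
    return ret;
}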

So that means that if we hotplug memory and run out of slots, it would
still continue working? Would there be some kind of warning to let the
user know that this might come with a performance regression?

--
Thanks,

David / dhildenb