From: Stefan Hajnoczi
Subject: [PULL v4 15/27] multi-process: define MPQemuMsg format and transmission functions
Date: Wed, 10 Feb 2021 09:26:16 +0000

From: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Define MPQemuMsg, the message that is sent to the remote
process. This message is sent over a QIOChannel and is used to
command the remote process to perform various tasks.
Also define the transmission functions used by the proxy and by the
remote process.
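
For illustration only (not part of this patch), here is a minimal sketch of
how a caller might use these helpers, assuming the usual headers
(hw/remote/mpqemu-link.h, qapi/error.h, qemu/error-report.h), an already
connected QIOChannel "ioc", and a hypothetical MPQEMU_CMD_FOO command (the
command enum is populated by later patches in this series):

    /* Proxy side: send a command with a 64-bit inline payload and no fds. */
    static bool send_foo(QIOChannel *ioc, uint64_t val)
    {
        MPQemuMsg msg = { 0 };
        Error *local_err = NULL;

        msg.cmd = MPQEMU_CMD_FOO;         /* hypothetical command */
        msg.size = sizeof(msg.data.u64);  /* payload carried inline in data */
        msg.data.u64 = val;
        msg.num_fds = 0;                  /* no file descriptors shared */

        if (!mpqemu_msg_send(&msg, ioc, &local_err)) {
            error_report_err(local_err);
            return false;
        }
        return true;
    }

    /* Remote side: receive and sanity-check the next message. */
    static bool recv_foo(QIOChannel *ioc, MPQemuMsg *msg)
    {
        Error *local_err = NULL;

        if (!mpqemu_msg_recv(msg, ioc, &local_err)) {
            error_report_err(local_err);
            return false;
        }
        if (!mpqemu_msg_valid(msg)) {
            error_report("invalid message from remote peer");
            return false;
        }
        return true;
    }
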
Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 56ca8bcf95195b2b195b08f6b9565b6d7410bce5.1611938319.git.jag.raman@oracle.com
[Replace struct iovec send[2] = {0} with {} to make clang happy as
suggested by Peter Maydell <peter.maydell@linaro.org>.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
MAINTAINERS | 2 +
meson.build | 1 +
hw/remote/trace.h | 1 +
include/hw/remote/mpqemu-link.h | 63 ++++++++++
include/sysemu/iothread.h | 6 +
hw/remote/mpqemu-link.c | 205 ++++++++++++++++++++++++++++++++
iothread.c | 6 +
hw/remote/meson.build | 1 +
hw/remote/trace-events | 4 +
9 files changed, 289 insertions(+)
create mode 100644 hw/remote/trace.h
create mode 100644 include/hw/remote/mpqemu-link.h
create mode 100644 hw/remote/mpqemu-link.c
create mode 100644 hw/remote/trace-events
diff --git a/MAINTAINERS b/MAINTAINERS
index aad849196c..389693f59a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3211,6 +3211,8 @@ F: hw/pci-host/remote.c
F: include/hw/pci-host/remote.h
F: hw/remote/machine.c
F: include/hw/remote/machine.h
+F: hw/remote/mpqemu-link.c
+F: include/hw/remote/mpqemu-link.h
Build and test automation
-------------------------
diff --git a/meson.build b/meson.build
index c8c07df735..a923f249d8 100644
--- a/meson.build
+++ b/meson.build
@@ -1818,6 +1818,7 @@ if have_system
'net',
'softmmu',
'ui',
+ 'hw/remote',
]
endif
if have_system or have_user
diff --git a/hw/remote/trace.h b/hw/remote/trace.h
new file mode 100644
index 0000000000..5d5e3ac720
--- /dev/null
+++ b/hw/remote/trace.h
@@ -0,0 +1 @@
+#include "trace/trace-hw_remote.h"
diff --git a/include/hw/remote/mpqemu-link.h b/include/hw/remote/mpqemu-link.h
new file mode 100644
index 0000000000..cac699cb42
--- /dev/null
+++ b/include/hw/remote/mpqemu-link.h
@@ -0,0 +1,63 @@
+/*
+ * Communication channel between QEMU and remote device process
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MPQEMU_LINK_H
+#define MPQEMU_LINK_H
+
+#include "qom/object.h"
+#include "qemu/thread.h"
+#include "io/channel.h"
+
+#define REMOTE_MAX_FDS 8
+
+#define MPQEMU_MSG_HDR_SIZE offsetof(MPQemuMsg, data.u64)
+
+/**
+ * MPQemuCmd:
+ *
+ * MPQemuCmd enum type to specify the command to be executed on the remote
+ * device.
+ *
+ * This uses a private protocol between QEMU and the remote process. The
+ * vfio-user protocol is expected to supersede this in the future.
+ *
+ */
+typedef enum {
+ MPQEMU_CMD_MAX,
+} MPQemuCmd;
+
+/**
+ * MPQemuMsg:
+ * @cmd: The remote command
+ * @size: Size of the data to be shared
+ * @data: Structured data
+ * @fds: File descriptors to be shared with remote device
+ *
+ * MPQemuMsg is the format of the message sent to the remote device from QEMU.
+ *
+ */
+typedef struct {
+ int cmd;
+ size_t size;
+
+ union {
+ uint64_t u64;
+ } data;
+
+ int fds[REMOTE_MAX_FDS];
+ int num_fds;
+} MPQemuMsg;
+
+bool mpqemu_msg_send(MPQemuMsg *msg, QIOChannel *ioc, Error **errp);
+bool mpqemu_msg_recv(MPQemuMsg *msg, QIOChannel *ioc, Error **errp);
+
+bool mpqemu_msg_valid(MPQemuMsg *msg);
+
+#endif
diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
index 0c5284dbbc..f177142f16 100644
--- a/include/sysemu/iothread.h
+++ b/include/sysemu/iothread.h
@@ -57,4 +57,10 @@ IOThread *iothread_create(const char *id, Error **errp);
void iothread_stop(IOThread *iothread);
void iothread_destroy(IOThread *iothread);
+/*
+ * Returns true if executing within IOThread context,
+ * false otherwise.
+ */
+bool qemu_in_iothread(void);
+
#endif /* IOTHREAD_H */
diff --git a/hw/remote/mpqemu-link.c b/hw/remote/mpqemu-link.c
new file mode 100644
index 0000000000..0d1899fd94
--- /dev/null
+++ b/hw/remote/mpqemu-link.c
@@ -0,0 +1,205 @@
+/*
+ * Communication channel between QEMU and remote device process
+ *
+ * Copyright © 2018, 2021 Oracle and/or its affiliates.
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qemu-common.h"
+
+#include "qemu/module.h"
+#include "hw/remote/mpqemu-link.h"
+#include "qapi/error.h"
+#include "qemu/iov.h"
+#include "qemu/error-report.h"
+#include "qemu/main-loop.h"
+#include "io/channel.h"
+#include "sysemu/iothread.h"
+#include "trace.h"
+
+/*
+ * Send message over the ioc QIOChannel.
+ * This function is safe to call from:
+ * - main loop in co-routine context. Will block the main loop if not in
+ * co-routine context;
+ * - vCPU thread with no co-routine context and if the channel is not part
+ * of the main loop handling;
+ * - IOThread within co-routine context, outside of co-routine context
+ * will block IOThread;
+ * Returns true if no errors were encountered, false otherwise.
+ */
+bool mpqemu_msg_send(MPQemuMsg *msg, QIOChannel *ioc, Error **errp)
+{
+ ERRP_GUARD();
+ bool iolock = qemu_mutex_iothread_locked();
+ bool iothread = qemu_in_iothread();
+ struct iovec send[2] = {};
+ int *fds = NULL;
+ size_t nfds = 0;
+ bool ret = false;
+
+ send[0].iov_base = msg;
+ send[0].iov_len = MPQEMU_MSG_HDR_SIZE;
+
+ send[1].iov_base = (void *)&msg->data;
+ send[1].iov_len = msg->size;
+
+ if (msg->num_fds) {
+ nfds = msg->num_fds;
+ fds = msg->fds;
+ }
+
+ /*
+ * Don't use in IOThread out of co-routine context as
+ * it will block IOThread.
+ */
+ assert(qemu_in_coroutine() || !iothread);
+
+ /*
+ * Skip unlocking/locking iothread lock when the IOThread is running
+ * in co-routine context. Co-routine context is asserted above
+ * for IOThread case.
+ * Also skip lock handling while in a co-routine in the main context.
+ */
+ if (iolock && !iothread && !qemu_in_coroutine()) {
+ qemu_mutex_unlock_iothread();
+ }
+
+ if (!qio_channel_writev_full_all(ioc, send, G_N_ELEMENTS(send),
+ fds, nfds, errp)) {
+ ret = true;
+ } else {
+ trace_mpqemu_send_io_error(msg->cmd, msg->size, nfds);
+ }
+
+ if (iolock && !iothread && !qemu_in_coroutine()) {
+ /* See the above comment for why locking is skipped here. */
+ qemu_mutex_lock_iothread();
+ }
+
+ return ret;
+}
+
+/*
+ * Read message from the ioc QIOChannel.
+ * This function is safe to call from:
+ * - main loop in co-routine context. Will block the main loop if not in
+ *   co-routine context;
+ * - vCPU thread with no co-routine context and if the channel is not part
+ *   of the main loop handling;
+ * - IOThread within co-routine context, outside of co-routine context
+ *   will block IOThread;
+ */
+static ssize_t mpqemu_read(QIOChannel *ioc, void *buf, size_t len, int **fds,
+ size_t *nfds, Error **errp)
+{
+ ERRP_GUARD();
+ struct iovec iov = { .iov_base = buf, .iov_len = len };
+ bool iolock = qemu_mutex_iothread_locked();
+ bool iothread = qemu_in_iothread();
+ int ret = -1;
+
+ /*
+ * Don't use in IOThread out of co-routine context as
+ * it will block IOThread.
+ */
+ assert(qemu_in_coroutine() || !iothread);
+
+ if (iolock && !iothread && !qemu_in_coroutine()) {
+ qemu_mutex_unlock_iothread();
+ }
+
+ ret = qio_channel_readv_full_all_eof(ioc, &iov, 1, fds, nfds, errp);
+
+ if (iolock && !iothread && !qemu_in_coroutine()) {
+ qemu_mutex_lock_iothread();
+ }
+
+ return (ret <= 0) ? ret : iov.iov_len;
+}
+
+bool mpqemu_msg_recv(MPQemuMsg *msg, QIOChannel *ioc, Error **errp)
+{
+ ERRP_GUARD();
+ g_autofree int *fds = NULL;
+ size_t nfds = 0;
+ ssize_t len;
+ bool ret = false;
+
+ len = mpqemu_read(ioc, msg, MPQEMU_MSG_HDR_SIZE, &fds, &nfds, errp);
+ if (len <= 0) {
+ goto fail;
+ } else if (len != MPQEMU_MSG_HDR_SIZE) {
+ error_setg(errp, "Message header corrupted");
+ goto fail;
+ }
+
+ if (msg->size > sizeof(msg->data)) {
+ error_setg(errp, "Invalid size for message");
+ goto fail;
+ }
+
+ if (!msg->size) {
+ goto copy_fds;
+ }
+
+ len = mpqemu_read(ioc, &msg->data, msg->size, NULL, NULL, errp);
+ if (len <= 0) {
+ goto fail;
+ }
+ if (len != msg->size) {
+ error_setg(errp, "Unable to read full message");
+ goto fail;
+ }
+
+copy_fds:
+ msg->num_fds = nfds;
+ if (nfds > G_N_ELEMENTS(msg->fds)) {
+ error_setg(errp,
+ "Overflow error: received %zu fds, more than max of %d fds",
+ nfds, REMOTE_MAX_FDS);
+ goto fail;
+ }
+ if (nfds) {
+ memcpy(msg->fds, fds, nfds * sizeof(int));
+ }
+
+ ret = true;
+
+fail:
+ if (*errp) {
+ trace_mpqemu_recv_io_error(msg->cmd, msg->size, nfds);
+ }
+ while (*errp && nfds) {
+ close(fds[nfds - 1]);
+ nfds--;
+ }
+
+ return ret;
+}
+
+bool mpqemu_msg_valid(MPQemuMsg *msg)
+{
+ if (msg->cmd >= MPQEMU_CMD_MAX || msg->cmd < 0) {
+ return false;
+ }
+
+ /* Verify FDs. */
+ if (msg->num_fds >= REMOTE_MAX_FDS) {
+ return false;
+ }
+
+ if (msg->num_fds > 0) {
+ for (int i = 0; i < msg->num_fds; i++) {
+ if (fcntl(msg->fds[i], F_GETFL) == -1) {
+ return false;
+ }
+ }
+ }
+
+ return true;
+}
diff --git a/iothread.c b/iothread.c
index b9f2751382..7f086387be 100644
--- a/iothread.c
+++ b/iothread.c
@@ -369,3 +369,9 @@ IOThread *iothread_by_id(const char *id)
{
return IOTHREAD(object_resolve_path_type(id, TYPE_IOTHREAD, NULL));
}
+
+bool qemu_in_iothread(void)
+{
+ return qemu_get_current_aio_context() != qemu_get_aio_context();
+}
diff --git a/hw/remote/meson.build b/hw/remote/meson.build
index 197b038646..a2b2fc0e59 100644
--- a/hw/remote/meson.build
+++ b/hw/remote/meson.build
@@ -1,5 +1,6 @@
remote_ss = ss.source_set()
remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('machine.c'))
+remote_ss.add(when: 'CONFIG_MULTIPROCESS', if_true: files('mpqemu-link.c'))
softmmu_ss.add_all(when: 'CONFIG_MULTIPROCESS', if_true: remote_ss)
diff --git a/hw/remote/trace-events b/hw/remote/trace-events
new file mode 100644
index 0000000000..0b23974f90
--- /dev/null
+++ b/hw/remote/trace-events
@@ -0,0 +1,4 @@
+# multi-process trace events
+
+mpqemu_send_io_error(int cmd, int size, int nfds) "send command %d size %d, %d file descriptors to remote process"
+mpqemu_recv_io_error(int cmd, int size, int nfds) "failed to receive %d size %d, %d file descriptors to remote process"
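
(Editorial note, not part of the patch.) Regarding mpqemu_msg_send() and
mpqemu_msg_recv() above: only the fixed-size header (MPQEMU_MSG_HDR_SIZE
bytes) and the data union travel in the byte stream; any descriptors in
fds[]/num_fds are passed out of band by qio_channel_writev_full_all(),
typically as SCM_RIGHTS ancillary data on the UNIX socket channel, and
num_fds is filled in again on the receiving side in the copy_fds path. A
hypothetical caller sharing one descriptor might set the message up like
this (MPQEMU_CMD_FOO and "memfd" are assumptions, not names from this
series):

    MPQemuMsg msg = { 0 };

    msg.cmd = MPQEMU_CMD_FOO;         /* hypothetical command */
    msg.size = sizeof(msg.data.u64);
    msg.data.u64 = 0;                 /* e.g. an offset associated with the fd */
    msg.num_fds = 1;
    msg.fds[0] = memfd;               /* assumed valid file descriptor */
    /* then mpqemu_msg_send(&msg, ioc, errp) as in the earlier sketch */
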
--
2.29.2