From: Stefan Hajnoczi
Subject: Re: [PATCH v5 07/50] multi-process: define mpqemu-link object
Date: Tue, 10 Mar 2020 16:09:41 +0000

On Mon, Feb 24, 2020 at 03:54:58PM -0500, Jagannathan Raman wrote:
> +/*
> + * TODO: Dont use mpqemu link object since it is
> + * not needed to be created via -object.
> + */

Please investigate and resolve this TODO.

> +struct conf_data_msg {
> +    uint32_t addr;
> +    uint32_t val;
> +    int l;

Please use a self-explanatory field name.  I'm not sure what 'l' is.

conf_data_msg is not used in this patch.  Please introduce things when
they are needed to make the patch series easier to review in a linear
fashion.
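
For example (illustrative only; I'm guessing 'l' is meant to be the
access size in bytes):

    struct conf_data_msg {
        uint32_t addr; /* config space offset */
        uint32_t val;  /* value read or written */
        int len;       /* access size in bytes: 1, 2 or 4 */
    };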

> +/*
> + * TODO: make all communications asynchronous and run in the main
> + * loop or existing IOThread.
> + */

Please investigate and decide how to resolve this TODO.
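
One possible direction (sketch only, assuming the channel's fd can be
registered with the event loop) is a read handler instead of blocking
reads:

    /* Hypothetical handler name; reads and dispatches one message */
    static void mpqemu_chan_readable(void *opaque)
    {
        MPQemuChannel *chan = opaque;

        /* read header + payload without blocking, then dispatch */
    }

    qemu_set_fd_handler(chan->sock, mpqemu_chan_readable, NULL, chan);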

> +void mpqemu_msg_send(MPQemuMsg *msg, MPQemuChannel *chan)
> +{
> +    int rc;
> +    uint8_t *data;
> +    union {
> +        char control[CMSG_SPACE(REMOTE_MAX_FDS * sizeof(int))];
> +        struct cmsghdr align;
> +    } u;
> +    struct msghdr hdr;
> +    struct cmsghdr *chdr;
> +    int sock = chan->sock;
> +    QemuMutex *lock = &chan->send_lock;
> +
> +    struct iovec iov = {
> +        .iov_base = (char *) msg,
> +        .iov_len = MPQEMU_MSG_HDR_SIZE,
> +    };
> +
> +    memset(&hdr, 0, sizeof(hdr));
> +    memset(&u, 0, sizeof(u));
> +
> +    hdr.msg_iov = &iov;
> +    hdr.msg_iovlen = 1;
> +
> +    if (msg->num_fds > REMOTE_MAX_FDS) {
> +        qemu_log_mask(LOG_REMOTE_DEBUG, "%s: Max FDs exceeded\n", __func__);
> +        return;
> +    }
> +
> +    if (msg->num_fds > 0) {
> +        size_t fdsize = msg->num_fds * sizeof(int);
> +
> +        hdr.msg_control = &u;
> +        hdr.msg_controllen = sizeof(u);
> +
> +        chdr = CMSG_FIRSTHDR(&hdr);
> +        chdr->cmsg_len = CMSG_LEN(fdsize);
> +        chdr->cmsg_level = SOL_SOCKET;
> +        chdr->cmsg_type = SCM_RIGHTS;
> +        memcpy(CMSG_DATA(chdr), msg->fds, fdsize);
> +        hdr.msg_controllen = CMSG_SPACE(fdsize);
> +    }
> +
> +    qemu_mutex_lock(lock);
> +
> +    do {
> +        rc = sendmsg(sock, &hdr, 0);
> +    } while (rc < 0 && (errno == EINTR || errno == EAGAIN));
> +
> +    if (rc < 0) {
> +        qemu_log_mask(LOG_REMOTE_DEBUG, "%s - sendmsg rc is %d, errno is %d,"
> +                      " sock %d\n", __func__, rc, errno, sock);
> +        qemu_mutex_unlock(lock);
> +        return;
> +    }
> +
> +    if (msg->bytestream) {
> +        data = msg->data2;
> +    } else {
> +        data = (uint8_t *)msg + MPQEMU_MSG_HDR_SIZE;
> +    }
> +
> +    do {
> +        rc = write(sock, data, msg->size);
> +    } while (rc < 0 && (errno == EINTR || errno == EAGAIN));
> +
> +    qemu_mutex_unlock(lock);

Can this lock be avoided by using a single sendmsg(2) syscall instead of
sendmsg() + write()?  I have a feeling of deja vu here, as though I may
have raised this in a previous revision of this patch series.
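
Something along these lines (untested sketch, reusing the names from
this patch) would hand both the header and the payload to the kernel in
one call:

    struct iovec iov[2] = {
        { .iov_base = msg,  .iov_len = MPQEMU_MSG_HDR_SIZE },
        { .iov_base = data, .iov_len = msg->size },
    };

    hdr.msg_iov = iov;
    hdr.msg_iovlen = 2;

    do {
        rc = sendmsg(sock, &hdr, 0);
    } while (rc < 0 && (errno == EINTR || errno == EAGAIN));

The data pointer would have to be computed before sending, and it's
worth double-checking whether concurrent senders can still interleave
large payloads, but the header and payload would at least stay
together.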

> +    msg->num_fds = 0;
> +    for (chdr = CMSG_FIRSTHDR(&hdr); chdr != NULL;
> +         chdr = CMSG_NXTHDR(&hdr, chdr)) {
> +        if ((chdr->cmsg_level == SOL_SOCKET) &&
> +            (chdr->cmsg_type == SCM_RIGHTS)) {
> +            fdsize = chdr->cmsg_len - CMSG_LEN(0);
> +            msg->num_fds = fdsize / sizeof(int);
> +            if (msg->num_fds > REMOTE_MAX_FDS) {
> +                /*
> +                 * TODO: Security issue detected. Sender never sends more
> +                 * than REMOTE_MAX_FDS. This condition should be signaled to
> +                 * the admin
> +                 */

This TODO doesn't seem actionable.  The error is already handled.

> +                qemu_log_mask(LOG_REMOTE_DEBUG,
> +                              "%s: Max FDs exceeded\n", __func__);
> +                return -ERANGE;

The mutex must be released before returning on this error path.
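
For example, with a common exit label (sketch; assumes the receive path
has rc/lock locals like the send path, and elides the rest of the
function):

    if (msg->num_fds > REMOTE_MAX_FDS) {
        qemu_log_mask(LOG_REMOTE_DEBUG,
                      "%s: Max FDs exceeded\n", __func__);
        rc = -ERANGE;
        goto out;
    }

    /* ... rest of the receive path ... */

out:
    qemu_mutex_unlock(lock);
    return rc;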
