Re: [PATCH 10/13] virtiofsd: Custom threadpool for remote blocking posix locks requests


From: Stefan Hajnoczi
Subject: Re: [PATCH 10/13] virtiofsd: Custom threadpool for remote blocking posix locks requests
Date: Mon, 4 Oct 2021 15:54:31 +0100

On Thu, Sep 30, 2021 at 11:30:34AM -0400, Vivek Goyal wrote:
> Add a new custom threadpool, based on POSIX threads, that specifically
> services locking requests.
> 
> In the case of an fcntl(F_SETLKW) request, if the guest is waiting for
> one or more locks and issues a hard reboot through SYSRQ, virtiofsd
> unblocks the blocked threads by sending them a signal and waking them
> up.
> 
> The current threadpool (GThreadPool) is not adequate for servicing
> locking requests that cause a thread to block, because GLib does not
> provide an API to cancel a request while a thread is servicing it.
> In addition, a user might be running virtiofsd without a threadpool
> (--thread-pool-size=0), in which case a locking request that blocks
> would prevent the main virtqueue thread from servicing any other
> requests.
> 
> The only exception is a lock of type F_UNLCK. In this case the request
> is serviced by the main virtqueue thread or a GThreadPool thread, to
> avoid a deadlock when all the threads in the custom threadpool are
> blocked.
> 
> Then virtiofsd cleans up the state of the threads, releases them back
> to the system, and re-initializes the pool.
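
If I've understood the mechanism, the cancellation path boils down to
something like the sketch below: install a no-op handler for a dedicated
signal and pthread_kill() each blocked worker so its fcntl(F_SETLKW)
returns EINTR. This is my own illustration, not the patch's code, and
the names are made up:

  #include <errno.h>
  #include <fcntl.h>
  #include <pthread.h>
  #include <signal.h>
  #include <stdio.h>

  /* No-op handler: its only job is to make a blocked fcntl() return EINTR. */
  static void lk_wakeup_handler(int sig)
  {
      (void)sig;
  }

  static void install_wakeup_handler(void)
  {
      struct sigaction sa = { .sa_handler = lk_wakeup_handler };

      sigemptyset(&sa.sa_mask);
      sa.sa_flags = 0;
      sigaction(SIGUSR1, &sa, NULL);
  }

  static void *lock_worker(void *opaque)
  {
      int fd = *(int *)opaque;
      struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

      /* Blocks until the lock is granted or a signal interrupts the wait. */
      if (fcntl(fd, F_SETLKW, &fl) == -1 && errno == EINTR) {
          fprintf(stderr, "lock wait cancelled\n");
      }
      return NULL;
  }

  /* On a hard reboot the pool would do: pthread_kill(worker_thread, SIGUSR1); */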

Is there another way to cancel SETLKW without resorting to a new thread
pool? Since this only matters when shutting down or restarting, can we
close all plock->fd file descriptors to kick the GThreadPool workers out
of fcntl()?
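
Roughly what I have in mind is the sketch below; the container and field
names are guesses on my part rather than the actual passthrough_ll.c
layout, and whether close() really kicks a thread out of a blocked
fcntl(F_SETLKW) is exactly what I'm unsure about:

  /*
   * Hypothetical shutdown helper: walk the per-inode posix lock table
   * and close every lock fd, in the hope of aborting any pending
   * F_SETLKW waits in the GThreadPool workers.
   */
  static void drop_all_plock_fds(GHashTable *posix_locks)
  {
      GHashTableIter iter;
      gpointer key, value;

      g_hash_table_iter_init(&iter, posix_locks);
      while (g_hash_table_iter_next(&iter, &key, &value)) {
          struct lo_inode_plock *plock = value; /* assumed type/field names */

          if (plock->fd >= 0) {
              close(plock->fd);
              plock->fd = -1;
          }
      }
  }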

> 
> Signed-off-by: Ioannis Angelakopoulos <iangelak@redhat.com>
> Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
> ---
>  tools/virtiofsd/fuse_virtio.c         |  90 ++++++-
>  tools/virtiofsd/meson.build           |   1 +
>  tools/virtiofsd/passthrough_seccomp.c |   1 +
>  tools/virtiofsd/tpool.c               | 331 ++++++++++++++++++++++++++
>  tools/virtiofsd/tpool.h               |  18 ++
>  5 files changed, 440 insertions(+), 1 deletion(-)
>  create mode 100644 tools/virtiofsd/tpool.c
>  create mode 100644 tools/virtiofsd/tpool.h
> 
> diff --git a/tools/virtiofsd/fuse_virtio.c b/tools/virtiofsd/fuse_virtio.c
> index 3b720c5d4a..c67c2e0e7a 100644
> --- a/tools/virtiofsd/fuse_virtio.c
> +++ b/tools/virtiofsd/fuse_virtio.c
> @@ -20,6 +20,7 @@
>  #include "fuse_misc.h"
>  #include "fuse_opt.h"
>  #include "fuse_virtio.h"
> +#include "tpool.h"
>  
>  #include <sys/eventfd.h>
>  #include <sys/socket.h>
> @@ -612,6 +613,60 @@ out:
>      free(req);
>  }
>  
> +/*
> + * If the request is a locking request, use a custom locking thread pool.
> + */
> +static bool use_lock_tpool(gpointer data, gpointer user_data)
> +{
> +    struct fv_QueueInfo *qi = user_data;
> +    struct fuse_session *se = qi->virtio_dev->se;
> +    FVRequest *req = data;
> +    VuVirtqElement *elem = &req->elem;
> +    struct fuse_buf fbuf = {};
> +    struct fuse_in_header *inhp;
> +    struct fuse_lk_in *lkinp;
> +    size_t lk_req_len;
> +    /* The 'out' part of the elem is from qemu */
> +    unsigned int out_num = elem->out_num;
> +    struct iovec *out_sg = elem->out_sg;
> +    size_t out_len = iov_size(out_sg, out_num);
> +    bool use_custom_tpool = false;
> +
> +    /*
> +     * If notifications are not enabled, no point in using the custom lock
> +     * thread pool.
> +     */
> +    if (!se->notify_enabled) {
> +        return false;
> +    }
> +
> +    assert(se->bufsize > sizeof(struct fuse_in_header));
> +    lk_req_len = sizeof(struct fuse_in_header) + sizeof(struct fuse_lk_in);
> +
> +    if (out_len < lk_req_len) {
> +        return false;
> +    }
> +
> +    fbuf.mem = g_malloc(se->bufsize);
> +    copy_from_iov(&fbuf, out_num, out_sg, lk_req_len);

This looks inefficient: for every FUSE request we now malloc se->bufsize
and then copy lk_req_len bytes, only to free the memory again.

Is it possible to keep lk_req_len bytes on the stack instead?
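
Something along these lines (untested, mirroring the names used in the
patch) would avoid the per-request allocation entirely:

  /* Untested: size the buffer for exactly one lock request. */
  char lk_buf[sizeof(struct fuse_in_header) + sizeof(struct fuse_lk_in)];
  struct fuse_buf fbuf = {
      .mem = lk_buf,
      .size = sizeof(lk_buf),
  };

  copy_from_iov(&fbuf, out_num, out_sg, sizeof(lk_buf));
  /* inhp/lkinp can then point into lk_buf as in the current code. */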
