
Re: [Qemu-block] [PATCH] block/file-posix: ignore fail on unlock bytes


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [PATCH] block/file-posix: ignore fail on unlock bytes
Date: Thu, 28 Mar 2019 07:13:59 +0000

27.03.2019 15:49, Vladimir Sementsov-Ogievskiy wrote:
> bdrv_replace_child() calls bdrv_check_perm() with error_abort when
> loosening permissions. However, file-locking operations may fail even
> in this case, for example on NFS, and this leads to a QEMU crash.
> 
> Let's ignore such errors, as we already do on permission update commit
> and abort.
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> ---
>   block/file-posix.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/block/file-posix.c b/block/file-posix.c
> index db4cccbe51..403e67fe90 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -815,6 +815,20 @@ static int raw_handle_perm_lock(BlockDriverState *bs,
>   
>       switch (op) {
>       case RAW_PL_PREPARE:
> +        if ((s->perm | new_perm) == s->perm &&
> +            (~s->shared_perm | ~new_perm) == ~s->shared_perm)

I was debating whether or not to resend when I noticed a much funnier
mistake in the second line: s/new_perm/new_shared :)))) So I decided
to resend.

> +        {
> +            /*
> +             * We are only going to unlock bytes, so this should not fail.
> +             * If it does fail, just report and ignore it, like we do for
> +             * ABORT and COMMIT anyway.
> +             */
> +            ret = raw_check_lock_bytes(s->fd, new_perm, new_shared, &local_err);
> +            if (local_err) {
> +                error_report_err(local_err);
> +            }
> +            return 0;
> +        }
>           ret = raw_apply_lock_bytes(s, s->fd, s->perm | new_perm,
>                                      ~s->shared_perm | ~new_shared,
>                                      false, errp);
> 
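
For reference, here is a minimal standalone sketch (not the actual QEMU
code) of the "only unlocking bytes" condition with the s/new_perm/new_shared
fix applied; the bit masks in main() are made-up values used only to
exercise the logic:

    /*
     * Standalone sketch, not QEMU code: the "only unlocking bytes" check
     * with the second line corrected to use new_shared instead of new_perm.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    /*
     * True iff the update only drops permissions we hold and only widens
     * what we allow others to use, i.e. we would only unlock bytes.
     */
    static bool only_unlocking(uint64_t cur_perm, uint64_t cur_shared,
                               uint64_t new_perm, uint64_t new_shared)
    {
        return (cur_perm | new_perm) == cur_perm &&        /* new_perm is a subset of cur_perm */
               (~cur_shared | ~new_shared) == ~cur_shared; /* cur_shared is a subset of new_shared */
    }

    int main(void)
    {
        /* Made-up bit masks: drop one taken bit, share one extra bit. */
        printf("%d\n", only_unlocking(0x3, 0x4, 0x1, 0x6)); /* prints 1 */
        /* Requesting an additional permission bit is not a pure unlock. */
        printf("%d\n", only_unlocking(0x3, 0x4, 0x7, 0x4)); /* prints 0 */
        return 0;
    }

When the check holds, the PREPARE branch only verifies the new locks and
reports (rather than propagates) any failure, which is what keeps the
error_abort path in bdrv_replace_child() from crashing.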


-- 
Best regards,
Vladimir
