
From: Yuval Shaia
Subject: Re: [Qemu-devel] [PATCH v5 07/10] hw/rdma: Free all receive buffers when QP is destroyed
Date: Tue, 24 Mar 2020 12:04:33 +0200

On Tue, 24 Mar 2020 at 11:56, Yuval Shaia <address@hidden> wrote:

On Mon, 23 Mar 2020 at 12:32, Peter Maydell <address@hidden> wrote:
On Sun, 10 Mar 2019 at 09:25, Yuval Shaia <address@hidden> wrote:
> When a QP is destroyed, the backend QP is destroyed as well. This
> ensures we clean up all the receive buffers we posted to it.
> However, the contexts of these buffers still remain in the device.
> Fix it by maintaining a list of buffer contexts and freeing them when
> the QP is destroyed.
> Signed-off-by: Yuval Shaia <address@hidden>
> Reviewed-by: Marcel Apfelbaum <address@hidden>

Hi; Coverity has just raised an issue on this code (CID 1421951):

> diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
> index 0a8abe572d..73f279104c 100644
> --- a/hw/rdma/rdma_utils.c
> +++ b/hw/rdma/rdma_utils.c
> @@ -90,3 +90,32 @@ int64_t rdma_protected_qlist_pop_int64(RdmaProtectedQList *list)
>      return qnum_get_uint(qobject_to(QNum, obj));
>  }
> +
> +void rdma_protected_gslist_init(RdmaProtectedGSList *list)
> +{
> +    qemu_mutex_init(&list->lock);
> +}
> +
> +void rdma_protected_gslist_destroy(RdmaProtectedGSList *list)
> +{
> +    if (list->list) {
> +        g_slist_free(list->list);
> +        list->list = NULL;
> +    }

Coverity wonders whether this function should take the list->lock
before freeing the list, because the other places which manipulate
list->list take the lock.

> +}

This is one of those Coverity checks which is quite prone to
false positives because it's just heuristically saying "you
look like you take the lock when you modify this field elsewhere,
maybe this place should take the lock too". Does this function
need to take a lock, or does the code that uses it guarantee
that it's never possible for another thread to be running
with access to the structure once we decide to destroy it?

It hit a real error here.

Will fix and post a patch soon.


For clarification: the code that uses this API makes sure not to access the list after it is destroyed, *but* we cannot trust that future callers will not do that.

-- PMM
