

From: Yuval Shaia
Subject: Re: [Qemu-devel] [PATCH 3/5] pvrdma: check number of pages when creating rings
Date: Tue, 11 Dec 2018 17:38:31 +0200
User-agent: Mutt/1.10.1 (2018-07-13)

On Tue, Dec 11, 2018 at 06:56:40PM +0530, P J P wrote:
> From: Prasad J Pandit <address@hidden>
> 
> When creating CQ/QP rings, an object can have up to
> PVRDMA_MAX_FAST_REG_PAGES=128 pages. Check 'npages' parameter
> to avoid excessive memory allocation or a null dereference.
> 
> Reported-by: Li Qiang <address@hidden>
> Signed-off-by: Prasad J Pandit <address@hidden>
> ---
>  hw/rdma/vmw/pvrdma_cmd.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/hw/rdma/vmw/pvrdma_cmd.c b/hw/rdma/vmw/pvrdma_cmd.c
> index 4faeb21631..ee2888259c 100644
> --- a/hw/rdma/vmw/pvrdma_cmd.c
> +++ b/hw/rdma/vmw/pvrdma_cmd.c
> @@ -273,6 +273,10 @@ static int create_cq_ring(PCIDevice *pci_dev , PvrdmaRing **ring,
>          pr_dbg("Failed to map to CQ page table\n");
>          goto out;
>      }
> +    if (!nchunks || nchunks > PVRDMA_MAX_FAST_REG_PAGES) {
> +        pr_dbg("invalid nchunks: %d\n", nchunks);
> +        goto out;
> +    }
>  
>      r = g_malloc(sizeof(*r));
>      *ring = r;
> @@ -389,6 +393,11 @@ static int create_qp_rings(PCIDevice *pci_dev, uint64_t pdir_dma,
>          pr_dbg("Failed to map to CQ page table\n");
>          goto out;
>      }
> +    if (!spages || spages > PVRDMA_MAX_FAST_REG_PAGES
> +        || !rpages || rpages > PVRDMA_MAX_FAST_REG_PAGES) {
> +        pr_dbg("invalid pages: %d, %d\n", spages, rpages);
> +        goto out;
> +    }
>  

This check (along with the one in create_cq_ring) would be better placed
before the mapping to the page table.

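For illustration, a rough sketch of the reordering meant here, using only the
names visible in the hunk above (the surrounding code and the -EINVAL return
value are assumptions, not the exact file contents):

    /* Validate the guest-supplied page count before any page-directory/
     * page-table mapping or ring allocation is attempted. */
    if (!nchunks || nchunks > PVRDMA_MAX_FAST_REG_PAGES) {
        pr_dbg("invalid nchunks: %d\n", nchunks);
        return -EINVAL;
    }

    /* ... only then map the CQ page directory/table and g_malloc() the ring ... */

Doing the check first also means nothing has to be unwound on the error path.
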
With or without the suggested change, the fix LGTM.

Reviewed-by: Yuval Shaia <address@hidden>

>      sr = g_malloc(2 * sizeof(*rr));
>      rr = &sr[1];
> -- 
> 2.19.2
> 


