From: David Gibson
Subject: Re: [Qemu-devel] [PATCH v5 1/4] net/rocker: Remove the dead error handling
Date: Thu, 25 May 2017 11:02:16 +1000

On Wed, 24 May 2017 08:01:47 -0400 (EDT)
Marcel Apfelbaum <address@hidden> wrote:

> ----- Original Message -----
> > From: "Markus Armbruster" <address@hidden>
> > To: "Philippe Mathieu-Daudé" <address@hidden>
> > Cc: address@hidden, "Mao Zhongyi" <address@hidden>, address@hidden,
> > address@hidden, "Michael S. Tsirkin" <address@hidden>,
> > "Marcel Apfelbaum" <address@hidden>
> > Sent: Wednesday, May 24, 2017 8:35:04 AM
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] net/rocker: Remove the dead error 
> > handling
> > 
> > Philippe Mathieu-Daudé <address@hidden> writes:
> >   
> > > Hi Markus,
> > >
> > > On 05/23/2017 06:27 AM, Markus Armbruster wrote:
> > > [...]  
> > >> There's one more cleanup opportunity:
> > >>  
> > > [...]  
> > >>>      if (pci_dma_read(dev, le64_to_cpu(info->desc.buf_addr), info->buf,
> > >>>                       size)) {
> > >>>          return NULL;
> > >>>      }  
> > >>
> > >> None of the pci_dma_read() calls outside rocker check the return value.
> > >> Just as well, because it always returns 0.  Please clean this up in a
> > >> separate followup patch.  
> > >
> > > It may be the correct way to do it, but this sounds like we are missing
> > > something somewhere... pci_dma_read() calls pci_dma_rw(), which always
> > > returns 0. Why not let it return void? It is inlined and its address is
> > > never taken. Otherwise we should document why returning 0 is correct,
> > > and what the reason is for not using a void prototype.
> > >
> > > pci_dma_rw() calls dma_memory_rw(), which does return a boolean value:
> > > false on success (MEMTX_OK) and true on error
> > > (MEMTX_ERROR/MEMTX_DECODE_ERROR).
> > 
> > PCI question.  Michael, Marcel?
> >   
> 
> Hi Markus,
> 
> Looking at the git history, pci_dma_rw used to call cpu_physical_memory_rw,
> which at that time (commit ec17457) returned void. Since the interface
> dictated returning an int, 0 was returned as "always OK".
> 
> The callers of pci_dma_read did not bother to check the result, for obvious
> reasons (even if they should).
> 
> In the meantime the memory API has changed to allow returning errors, but
> since the callers of pci_dma_rw don't check the return value, why bother to
> update the PCI DMA helpers?
> 
> History aside (and my speculation above), it seems the right move is to
> update the return value and have the callers check it, but honestly I have
> no idea whether the emulated devices expect PCI DMA to fail.
> Adding Paolo and David for more insights.
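
For reference, the helpers in question look roughly like this (paraphrased
from include/hw/pci/pci.h; a sketch, not exact code from the tree):

/* Sketch: pci_dma_rw() drops whatever dma_memory_rw() reports and returns a
 * hard-coded 0, so the int returned by pci_dma_read() carries no information. */
static inline int pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                             void *buf, dma_addr_t len, DMADirection dir)
{
    dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
    return 0;
}

static inline int pci_dma_read(PCIDevice *dev, dma_addr_t addr,
                               void *buf, dma_addr_t len)
{
    return pci_dma_rw(dev, addr, buf, len, DMA_DIRECTION_TO_DEVICE);
}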

It seems to me that PCI DMA transfers ought to be able to fail, and
devices ought to be able to handle that (to a limited extent).

After all, what will happen if you try to DMA to PCI addresses that
simply aren't mapped?  Or ones which are in the domain of a vIOMMU that
either hasn't mapped those addresses, or has them mapped read-only
(meaning host-to-device only in this context)?
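
A minimal sketch of what a call site could look like if pci_dma_rw() and
pci_dma_read() were changed to propagate the MemTxResult they get from
dma_memory_rw() (hypothetical; not what the tree does today), reusing the
rocker snippet quoted above:

    if (pci_dma_read(dev, le64_to_cpu(info->desc.buf_addr), info->buf,
                     size) != MEMTX_OK) {
        /* The buffer address is unmapped or was rejected by the vIOMMU:
         * fail the descriptor instead of processing garbage data. */
        return NULL;
    }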

-- 
David Gibson <address@hidden>
Principal Software Engineer, Virtualization, Red Hat


