qemu-stable

Re: [PULL 1/1] hw/nvme: fix endianness issue for shadow doorbells


From: Peter Maydell
Subject: Re: [PULL 1/1] hw/nvme: fix endianness issue for shadow doorbells
Date: Thu, 20 Jul 2023 09:51:16 +0100

On Thu, 20 Jul 2023 at 09:49, Klaus Jensen <its@irrelevant.dk> wrote:
>
> On Jul 20 09:43, Peter Maydell wrote:
> > On Wed, 19 Jul 2023 at 21:13, Michael Tokarev <mjt@tls.msk.ru> wrote:
> > >
> > > 19.07.2023 10:36, Klaus Jensen wrote:
> > > >       uint64_t eis_addr = le64_to_cpu(req->cmd.dptr.prp2);
> > > > +    uint32_t v;
> > >
> > > >           if (sq) {
> > > > +            v = cpu_to_le32(sq->tail);
> > >
> > > > -            pci_dma_write(pci, sq->db_addr, &sq->tail, sizeof(sq->tail));
> > > > +            pci_dma_write(pci, sq->db_addr, &v, sizeof(sq->tail));
> > >
> > > This and similar cases hurt my eyes.
> > >
> > > Why do we pass the address of v here, but use sizeof(sq->tail)?
> > >
> > > Yes, I know the two should in theory be the same size, but this is
> > > puzzling at best and confusing in the general case.
> > >
> > > Dunno how it slipped through review; it instantly caught my eye
> > > in a row of applied patches..
> > >
> > > Also, why is v computed a few lines before it is used, with
> > > other statements between the assignment and the use?
> > >
> > > How about the following patch:
> >
> > If you're going to change this, better to take the approach
> > Philippe suggested in review of using stl_le_pci_dma().
> >
> > https://lore.kernel.org/qemu-devel/376e5e45-a3e7-0029-603a-b7ad9673fac4@linaro.org/
> >
>
> Yup, that was my plan for the next step. But the original patch was already
> verified on hardware and by multiple testers, so I wanted to go with that for
> the "fix".
>
> But yes, I will refactor this to use the much nicer stl/ldl API.

FWIW, I don't think this bug fix was so urgent that we
needed to go with a quick fix and a follow-up -- we're
not yet that close to the 8.1 release.

thanks
-- PMM
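
For context, the bug being fixed is that the shadow doorbell values were
written to guest memory in host byte order, which breaks on big-endian hosts.
Below is a minimal, self-contained model of the pattern used by the applied
patch, not the actual hw/nvme code; the names and values are illustrative only.
The tail value is converted to little-endian into a temporary before the raw
byte copy that stands in here for pci_dma_write(), and using sizeof(v) also
addresses the readability concern raised above.

/*
 * Minimal stand-alone model of the applied pattern (not the actual hw/nvme
 * code).  The tail value must land in guest memory as little-endian, so it
 * is converted into a temporary before the byte copy.
 */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t tail = 0x1234;           /* host-endian SQ tail (made-up value) */
    uint8_t guest_mem[4] = { 0 };     /* stands in for the guest shadow doorbell */

    uint32_t v = htole32(tail);       /* analogous to cpu_to_le32(sq->tail) */
    memcpy(guest_mem, &v, sizeof(v)); /* analogous to pci_dma_write(..., &v, sizeof(v)) */

    /* Prints "34 12 00 00" regardless of host endianness. */
    printf("%02x %02x %02x %02x\n",
           guest_mem[0], guest_mem[1], guest_mem[2], guest_mem[3]);
    return 0;
}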
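
The follow-up Klaus mentions would replace that temporary-plus-pci_dma_write()
pattern with QEMU's stl_le_pci_dma() helper, which performs the little-endian
32-bit store in a single call. A rough, untested sketch of what the shadow
doorbell update could then look like (names taken from the quoted diff; the
exact helper signature, including the MemTxAttrs argument, depends on the QEMU
version):

        if (sq) {
            /* Store sq->tail to guest memory as a little-endian 32-bit value. */
            stl_le_pci_dma(pci, sq->db_addr, sq->tail, MEMTXATTRS_UNSPECIFIED);
        }

With this form the explicit cpu_to_le32() temporary and the sizeof()
bookkeeping disappear, which is the readability win Michael is asking for.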


