Re: [PATCH 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support


From: Bin Meng
Subject: Re: [PATCH 2/2] hw/ssi: xilinx_spips: Implement basic QSPI DMA support
Date: Sun, 7 Feb 2021 23:46:39 +0800

Hi Peter,

On Sat, Feb 6, 2021 at 11:28 PM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Sat, 6 Feb 2021 at 14:38, Bin Meng <bmeng.cn@gmail.com> wrote:
> >
> > From: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> >
> > ZynqMP QSPI supports SPI transfer using DMA mode, but currently this
> > is unimplemented. When QSPI is programmed to use DMA mode, QEMU will
> > crash. This is observed when testing VxWorks 7.
> >
> > Add a basic implementation of QSPI DMA functionality.
> >
> > Signed-off-by: Xuzhou Cheng <xuzhou.cheng@windriver.com>
> > Signed-off-by: Bin Meng <bin.meng@windriver.com>
>
> > +static size_t xlnx_zynqmp_gspips_dma_push(XlnxZynqMPQSPIPS *s,
> > +                                          uint8_t *buf, size_t len, bool eop)
> > +{
> > +    hwaddr dst = (hwaddr)s->regs[R_GQSPI_DMA_ADDR_MSB] << 32
> > +                 | s->regs[R_GQSPI_DMA_ADDR];
> > +    uint32_t size = s->regs[R_GQSPI_DMA_SIZE];
> > +    uint32_t mlen = MIN(size, len) & (~3); /* Size is word aligned */
> > +
> > +    if (size == 0 || len <= 0) {
> > +        return 0;
> > +    }
> > +
> > +    cpu_physical_memory_write(dst, buf, mlen);
> > +    size = xlnx_zynqmp_gspips_dma_advance(s, mlen, dst);
> > +
> > +    if (size == 0) {
> > +        xlnx_zynqmp_gspips_dma_done(s);
> > +        xlnx_zynqmp_qspips_update_ixr(s);
> > +    }
> > +
> > +   return mlen;
> > +}
>
> > @@ -861,7 +986,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
> >          recv_fifo = &s->rx_fifo;
> >      }
> >      while (recv_fifo->num >= 4
> > -           && stream_can_push(rq->dma, xlnx_zynqmp_qspips_notify, rq))
> > +           && xlnx_zynqmp_gspips_dma_can_push(rq))
> >      {
> >          size_t ret;
> >          uint32_t num;
> > @@ -874,7 +999,7 @@ static void xlnx_zynqmp_qspips_notify(void *opaque)
> >
> >          memcpy(rq->dma_buf, rxd, num);
> >
> > -        ret = stream_push(rq->dma, rq->dma_buf, num, false);
> > +        ret = xlnx_zynqmp_gspips_dma_push(rq, rq->dma_buf, num, false);
> >          assert(ret == num);
> >          xlnx_zynqmp_qspips_check_flush(rq);
> >      }
>
> This seems to be removing the existing handling of DMA to the
> TYPE_STREAM_SINK via the stream_* functions -- that doesn't look
> right. I don't know any of the details of this device, but if it
> has two different modes of DMA then we need to support both of them,
> surely ?

This DMA engine is built into the controller and dedicated to QSPI, so I
think there is no need to go through the stream_* functions.

> If the device really should be doing its own DMA memory
> accesses, please don't use cpu_physical_memory_write() for
> this. The device should take a TYPE_MEMORY_REGION link property,
> and the board code should set this to tell the device what
> its view of the world that it is doing DMA to is. Then the
> device in its realize method calls address_space_init() to create
> an AddressSpace for this MemoryRegion, and does memory accesses
> using functions like address_space_read()/address_space_write()/
> address_space_ld*()/etc. (Examples in hw/dma, eg pl080.c.)
> Note that the address_space* functions have a return value
> indicating whether the access failed, which you should handle.
> (The pl080 code doesn't do that, but that's because it's older code.)

Sure, I will switch to using a DMA AddressSpace in v2.
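
Roughly what I have in mind for v2, following the pl080.c pattern you
mentioned (untested sketch; the "dma" property name, the dma_mr/dma_as
fields and the error handling are placeholders, not final):

    /* New fields in XlnxZynqMPQSPIPS (names tentative):
     *     MemoryRegion *dma_mr;
     *     AddressSpace dma_as;
     */

    static Property xlnx_zynqmp_qspips_properties[] = {
        /* Board code sets this link to the region the device DMAs into */
        DEFINE_PROP_LINK("dma", XlnxZynqMPQSPIPS, dma_mr,
                         TYPE_MEMORY_REGION, MemoryRegion *),
        DEFINE_PROP_END_OF_LIST(),
    };

    static void xlnx_zynqmp_qspips_realize(DeviceState *dev, Error **errp)
    {
        XlnxZynqMPQSPIPS *s = XLNX_ZYNQMP_QSPIPS(dev);

        if (!s->dma_mr) {
            error_setg(errp, "xlnx-zynqmp-qspips: 'dma' link not set");
            return;
        }
        /* Build the device's own view of memory for DMA accesses */
        address_space_init(&s->dma_as, s->dma_mr, "zynqmp-qspips-dma");
    }

    /* In the push path, cpu_physical_memory_write() would become: */
        if (address_space_write(&s->dma_as, dst, MEMTXATTRS_UNSPECIFIED,
                                buf, mlen) != MEMTX_OK) {
            qemu_log_mask(LOG_GUEST_ERROR,
                          "%s: DMA write to 0x%" HWADDR_PRIx " failed\n",
                          __func__, dst);
            return 0;
        }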

Regards,
Bin
