From: Yuval Shaia
Subject: Re: [Qemu-devel] [PATCH PULL v2 09/10] hw/rdma: Implementation of PVRDMA device
Date: Sun, 29 Apr 2018 12:38:04 +0300
User-agent: Mutt/1.9.2 (2017-12-15)

On Fri, Apr 27, 2018 at 10:36:33PM +0300, Marcel Apfelbaum wrote:
> On 27/04/2018 17:49, Peter Maydell wrote:
> > On 19 February 2018 at 11:43, Marcel Apfelbaum <address@hidden> wrote:
> >> From: Yuval Shaia <address@hidden>
> >>
> >> PVRDMA is the QEMU implementation of VMware's paravirtualized RDMA device.
> >> It works with its Linux Kernel driver AS IS, no need for any special
> >> guest modifications.
> >>
> >> While it is compatible with the VMware device, it can also communicate
> >> with bare-metal RDMA-enabled machines and does not require an RDMA HCA
> >> in the host; it can work with Soft-RoCE (rxe).
> >>
> >> It does not require the whole guest RAM to be pinned, allowing memory
> >> over-commit, and, although not implemented yet, migration support will
> >> be possible with some HW assistance.
> >>
> >> The implementation is divided into two components: general RDMA
> >> functions and structures, and PVRDMA-specific ones.
> >>
> >> The second PVRDMA sub-module handles interaction with the PCI layer:
> >> - Device configuration and setup (MSI-X, BARs, etc.).
> >> - Setup of the DSR (Device Shared Resources).
> >> - Setup of the device ring.
> >> - Device management.
> >>
> >> Reviewed-by: Dotan Barak <address@hidden>
> >> Reviewed-by: Zhu Yanjun <address@hidden>
> >> Signed-off-by: Yuval Shaia <address@hidden>
> >> Signed-off-by: Marcel Apfelbaum <address@hidden>
> > 
> > Hi; this is something odd I noticed reading through the code.
> > 
> >> +static void init_bars(PCIDevice *pdev)
> >> +{
> >> +    PVRDMADev *dev = PVRDMA_DEV(pdev);
> >> +
> >> +    /* BAR 0 - MSI-X */
> >> +    memory_region_init(&dev->msix, OBJECT(dev), "pvrdma-msix",
> >> +                       RDMA_BAR0_MSIX_SIZE);
> >> +    pci_register_bar(pdev, RDMA_MSIX_BAR_IDX, PCI_BASE_ADDRESS_SPACE_MEMORY,
> >> +                     &dev->msix);
> >> +
> >> +    /* BAR 1 - Registers */
> >> +    memset(&dev->regs_data, 0, sizeof(dev->regs_data));
> >> +    memory_region_init_io(&dev->regs, OBJECT(dev), &regs_ops, dev,
> >> +                          "pvrdma-regs", RDMA_BAR1_REGS_SIZE);
> > 
> > RDMA_BAR1_REGS_SIZE is used in pvrdma.h to declare the regs array:
> >     uint32_t regs_data[RDMA_BAR1_REGS_SIZE];
> > and that and the way that get_reg_val/set_reg_val do "addr >> 2" to
> > convert from an address to an array index suggest that it is the
> > size of the BAR in 32-bit words. However here we're passing it
> > as the size parameter of memory_region_init_io(), which wants a
> > size in bytes. Which is correct?
> > 
> 
> I think RDMA_BAR1_REGS_SIZE (256) is the size in bytes;
> this is the register layout (Linux kernel):
> 
> /* Register offsets within PCI resource on BAR1. */
> #define PVRDMA_REG_VERSION      0x00    /* R: Version of device. */
> #define PVRDMA_REG_DSRLOW       0x04    /* W: Device shared region low PA. */
> #define PVRDMA_REG_DSRHIGH      0x08    /* W: Device shared region high PA. */
> #define PVRDMA_REG_CTL          0x0c    /* W: PVRDMA_DEVICE_CTL */
> #define PVRDMA_REG_REQUEST      0x10    /* W: Indicate device request. */
> #define PVRDMA_REG_ERR          0x14    /* R: Device error. */
> #define PVRDMA_REG_ICR          0x18    /* R: Interrupt cause. */
> #define PVRDMA_REG_IMR          0x1c    /* R/W: Interrupt mask. */
> #define PVRDMA_REG_MACL         0x20    /* R/W: MAC address low. */
> #define PVRDMA_REG_MACH         0x24    /* R/W: MAC address high. */
> 
> So we don't need 256 32-bit words.
> Yuval can you please confirm?

Correct, will make it smaller while taking care of the size reported in the
call to memory_region_init.

Yuval

> 
> 
> Thanks,
> Marcel
> 
> > thanks
> > -- PMM
> > 
> 


