
Re: [Qemu-devel] [Qemu-ppc] [PATCH] spapr_pci: Advertise MSI quota


From: Alexander Graf
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH] spapr_pci: Advertise MSI quota
Date: Wed, 11 Jun 2014 10:10:47 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.5.0


On 11.06.14 10:06, Alexander Graf wrote:

On 11.06.14 10:05, Alexey Kardashevskiy wrote:
From: Badari Pulavarty <address@hidden>

Hotplug of multiple disks fails due to the MSI vector quota check.
The number of MSI vectors defaults to 8, allowing only 4 devices.
This happens on RHEL6.5 guests; RHEL7 and SLES11 guests fall back
to INTx.

One way to work around the issue is to increase the total number of MSIs,
so that the MSI quota check allows us to hotplug multiple disks.

Signed-off-by: Badari Pulavarty <address@hidden>
Signed-off-by: Alexey Kardashevskiy <address@hidden>
---
  hw/ppc/spapr_pci.c | 2 ++
  1 file changed, 2 insertions(+)

diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
index ddfd8bb..ebd92fd 100644
--- a/hw/ppc/spapr_pci.c
+++ b/hw/ppc/spapr_pci.c
@@ -831,6 +831,7 @@ int spapr_populate_pci_dt(sPAPRPHBState *phb,
      int bus_off, i, j;
      char nodename[256];
      uint32_t bus_range[] = { cpu_to_be32(0), cpu_to_be32(0xff) };
+    uint16_t nmsi = 64;

Why 64?


Alex

      struct {
          uint32_t hi;
          uint64_t child;
@@ -879,6 +880,7 @@ int spapr_populate_pci_dt(sPAPRPHBState *phb,
_FDT(fdt_setprop(fdt, bus_off, "ranges", &ranges, sizeof(ranges)));
      _FDT(fdt_setprop(fdt, bus_off, "reg", &bus_reg, sizeof(bus_reg)));
_FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pci-config-space-type", 0x1)); + _FDT(fdt_setprop(fdt, bus_off, "ibm,pe-total-#msi", &nmsi, sizeof(nmsi)));

Also this value will get written with the wrong endianness on an LE host.
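
A minimal sketch of an endian-safe variant (not from the posted patch; it assumes the property is meant to be a single 32-bit cell, like the neighbouring properties):

    /* Sketch only: device-tree properties are big-endian, so convert
     * the value explicitly before writing it ... */
    uint32_t nmsi = cpu_to_be32(64);
    _FDT(fdt_setprop(fdt, bus_off, "ibm,pe-total-#msi", &nmsi, sizeof(nmsi)));

    /* ... or let libfdt perform the byte swap via fdt_setprop_cell(). */
    _FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pe-total-#msi", 64));

Either form keeps the property correct regardless of host endianness.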

Alexey, I'm not happy with you just forwarding random patches from people. It's on you to properly review them before you send them to the list if they go through your hands.


Alex



