qemu-devel

Re: acpi_pcihp_eject_slot() bug if passed 'slots == 0'


From: Michael S. Tsirkin
Subject: Re: acpi_pcihp_eject_slot() bug if passed 'slots == 0'
Date: Thu, 26 Mar 2020 09:31:09 -0400

On Thu, Mar 26, 2020 at 09:28:27AM -0400, Michael S. Tsirkin wrote:
> On Thu, Mar 26, 2020 at 02:23:17PM +0100, Igor Mammedov wrote:
> > On Thu, 26 Mar 2020 11:52:36 +0000
> > Peter Maydell <address@hidden> wrote:
> > 
> > > Hi; Coverity spots that if hw/acpi/pcihp.c:acpi_pcihp_eject_slot()
> > > is passed a zero 'slots' argument then ctz32(slots) will return 32,
> > > and then the code that does '1U << slot' is C undefined behaviour
> > > because it's an oversized shift. (This is CID 1421896.)
> > > 
> > > Since the pci_write() function in this file can call
> > > acpi_pcihp_eject_slot() with an arbitrary value from the guest,
> > > I think we need to handle 'slots == 0' safely. But what should
> > > the behaviour be?
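
[For reference, a minimal sketch of the guard being asked about, at the
top of acpi_pcihp_eject_slot(); the signature is assumed to match the
one in hw/acpi/pcihp.c, and the early return is an illustration, not a
committed patch:

    static void acpi_pcihp_eject_slot(AcpiPciHpState *s, unsigned bsel,
                                      unsigned slots)
    {
        int slot;

        /* A guest can write 0 to the eject register; ctz32(0) returns 32
         * and the later "1U << slot" would be an oversized shift, which
         * is undefined behaviour in C, so bail out early. */
        if (slots == 0) {
            return;
        }

        slot = ctz32(slots);
        /* ... rest of the function unchanged ... */
    }
]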
> > 
> > it also uncovers a bug where we are not able to eject slot 0 on a bridge,
> 
> 
> And that is by design. A standard PCI SHPC register can support up to 31
> hotpluggable slots, so we chose slot 0 as non-hotpluggable.
> It's consistent across SHPC and PCI-E, so I made ACPI match.

Sorry, I was confused. It's a PCI thing; PCI-E does not have
slot numbers for downstream ports at all.

> You can't hot-add it either.
> 
> > can be reproduced with:
> > 
> >  -enable-kvm -m 4G -device pci-bridge,chassis_nr=1 -global 
> > PIIX4_PM.acpi-pci-hotplug-with-bridge-support=on -device 
> > virtio-net-pci,bus=pci.1,addr=0,id=netdev12
> > 
> > (monitor) device_del netdev12
> > (monitor) qtree # still shows the device
> > 
> > the reason is that in acpi_pcihp_eject_slot()
> >    if (PCI_SLOT(dev->devfn) == slot) { /* doesn't match (0 != 32) */
> > 
> > so the device is not deleted
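
[To see the mismatch concretely, a tiny standalone demo; ctz32() is
reimplemented here with the semantics QEMU's version documents (an input
of 0 maps to 32), and PCI_SLOT() is expanded inline:

    #include <stdio.h>
    #include <stdint.h>

    /* Same contract as QEMU's ctz32(): returns 32 for an input of 0,
     * unlike raw __builtin_ctz(), which is undefined for 0. */
    static inline int ctz32(uint32_t val)
    {
        return val ? __builtin_ctz(val) : 32;
    }

    int main(void)
    {
        unsigned devfn = 0;       /* device at addr=0 on the bridge */
        uint32_t slots = 0;       /* value reaching the eject handler */
        int slot = ctz32(slots);  /* == 32 */

        /* PCI_SLOT(devfn) is ((devfn) >> 3) & 0x1f, i.e. 0 here, so the
         * "PCI_SLOT(dev->devfn) == slot" test above compares 0 with 32
         * and never matches. */
        printf("PCI_SLOT=%u slot=%d -> %s\n", (devfn >> 3) & 0x1f, slot,
               ((devfn >> 3) & 0x1f) == (unsigned)slot ? "eject" : "skip");
        return 0;
    }
]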
> 
> We should probably teach QEMU that some slots aren't hotpluggable
> even if the device in it is hotpluggable in theory. But that is
> a separate issue.
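
[A hypothetical sketch of what such a check might look like; the hook
name and where it would be wired up are assumptions here, and only
PCI_SLOT() and error_setg() are existing QEMU APIs:

    /* Hypothetical: refuse hot-add/eject for slot 0 behind a
     * hotplug-capable bridge, matching the SHPC/PCI-E convention
     * described above.  Illustrative only, not the committed fix. */
    static void acpi_pcihp_check_hotpluggable(PCIDevice *pdev, Error **errp)
    {
        if (PCI_SLOT(pdev->devfn) == 0) {
            error_setg(errp, "slot 0 does not support hotplug");
        }
    }
]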
> 
> > > thanks
> > > -- PMM
> > > 



