qemu-devel

Re: [Qemu-devel] [PATCH V14 2/3] pc: add a Virtual Machine Generation ID device


From: David Gibson
Subject: Re: [Qemu-devel] [PATCH V14 2/3] pc: add a Virtual Machine Generation ID device
Date: Wed, 11 Mar 2015 16:35:41 +1100

On Wed, 4 Mar 2015 20:12:31 +0100
"Michael S. Tsirkin" <address@hidden> wrote:

> On Wed, Mar 04, 2015 at 05:33:42PM +0100, Igor Mammedov wrote:
> > On Wed, 4 Mar 2015 16:31:39 +0100
> > "Michael S. Tsirkin" <address@hidden> wrote:
> > 
> > > On Wed, Mar 04, 2015 at 04:14:44PM +0100, Igor Mammedov wrote:
> > > > On Wed, 4 Mar 2015 14:49:00 +0100
> > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > 
> > > > > On Wed, Mar 04, 2015 at 02:12:32PM +0100, Igor Mammedov wrote:
> > > > > > On Wed, 4 Mar 2015 13:11:48 +0100
> > > > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > > > 
> > > > > > > On Tue, Mar 03, 2015 at 09:33:51PM +0100, Igor Mammedov wrote:
> > > > > > > > On Tue, 3 Mar 2015 18:35:39 +0100
> > > > > > > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > > > > > > 
> > > > > > > > > On Tue, Mar 03, 2015 at 05:18:14PM +0100, Igor Mammedov wrote:
> > > > > > > > > > Based on Microsoft's specification (the paper can be
> > > > > > > > > > downloaded from
> > > > > > > > > > http://go.microsoft.com/fwlink/?LinkId=260709), add a device
> > > > > > > > > > description to the SSDT ACPI table and its implementation.
> > > > > > > > > > 
> > > > > > > > > > The GUID is set using the "vmgenid.uuid" property.
> > > > > > > > > > 
> > > > > > > > > > Example of using vmgenid device:
> > > > > > > > > >  -device 
> > > > > > > > > > vmgenid,id=FOO,uuid="324e6eaf-d1d1-4bf6-bf41-b9bb6c91fb87"
> > [...]
> > 
> > > > > 
> > > > > > > 
> > > > > > > 
> > > > > > > BTW, why do we need to stick vmgen_buf_paddr in the info?
> > > > > > Because according to the MS spec the device should have an ADDR
> > > > > > object with the physical buffer address packed in a Package(2),
> > > > > > so that Windows can read the value from there.
> > > > > > 
> > > > > > [...]
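
(For illustration only, not part of the patch under review: a rough sketch of
the ADDR object Igor describes, using QEMU's aml_* build helpers from
hw/acpi/aml-build.h. The MS spec expects ADDR to evaluate to a Package of two
integers holding the low and high 32 bits of the buffer's physical address;
the function name and the "VGEN" device name below are assumptions.)

    /* Sketch only: build a device whose ADDR object evaluates to
     * Package(2) { low 32 bits, high 32 bits } of the GUID buffer's
     * guest-physical address. */
    static Aml *build_vmgenid_dev(uint64_t buf_paddr)
    {
        Aml *dev = aml_device("VGEN");
        Aml *pkg = aml_package(2);

        aml_append(pkg, aml_int(buf_paddr & 0xffffffffULL)); /* ADDR[0]: low dword  */
        aml_append(pkg, aml_int(buf_paddr >> 32));           /* ADDR[1]: high dword */
        aml_append(dev, aml_name_decl("ADDR", pkg));
        return dev;
    }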
> > > > > 
> > > > > Yes but why not read the property when and where we
> > > > > need it?
> > > > It's basically to fit the style used in acpi-build.c, where we
> > > > collect info by reading properties in acpi_get_pm_info(),
> > > > acpi_get_misc_info(), acpi_get_pci_info(), etc., and then just use
> > > > pm, misc and pci in build_ssdt(). Should we drop all of the above
> > > > and just inline it in build_ssdt()?
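
(For illustration only: a sketch of the acpi-build.c pattern Igor mentions,
collecting the vmgenid state up front and consuming it later in build_ssdt().
The struct, function and "addr" property names are hypothetical, not taken
from the patch.)

    /* Sketch only: gather vmgenid info by reading object properties,
     * mirroring acpi_get_pm_info()/acpi_get_misc_info(). */
    typedef struct AcpiVmGenIdInfo {
        bool present;
        uint64_t buf_paddr;   /* guest-physical address of the GUID buffer */
    } AcpiVmGenIdInfo;

    static void acpi_get_vmgenid_info(AcpiVmGenIdInfo *info)
    {
        Object *o = object_resolve_path_type("", "vmgenid", NULL);

        info->present = (o != NULL);
        if (o) {
            info->buf_paddr = object_property_get_int(o, "addr", NULL);
        }
    }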
> > > 
> > > The issue is you have two items to track here:
> > > - addr - you stick that in the info struct
> > > - full object address - you don't,
> > > which is an inconsistency that I dislike.
> > What is "full object address"?
> 
> where you look up the vmgen id pci device.
> 
> > 
> > > > > > > > > > +    name = g_strdup_printf("PCI0%s.S%.02X_", name ? name : "", pdev->devfn);
> > > > > > > > > > +    g_free(last);
> > > > > > > > > > +    return name;
> > > > > > > > > > +}
> > > > > > > > > 
> > > > > > > > > Looks tricky, and duplicates logic for device naming.
> > > > > > > > > All this won't be necessary if you just add this as child
> > > > > > > > > of the correct device, without playing with scope.
> > > > > > > > > Why not do it?
> > > > > > > > Since the vmgenid PCI device is located somewhere on the PCI
> > > > > > > > bus we don't have a fixed path to it, and we need the full
> > > > > > > > path to it to send a Notify from the "\\_GPE" scope; see
> > > > > > > > "aml_notify(aml_name("\\_SB.%s", vgid_path)" below.
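
(For illustration only: roughly what that Notify could look like when wired
into a GPE handler. The "_E00" event method, the 0x80 notification value and
the vgid_path/ssdt variables are assumptions, and the aml_* helper signatures
are approximate.)

    /* Sketch only: a GPE event method that notifies the vmgenid ACPI
     * device by its full path under \_SB. */
    Aml *gpe = aml_scope("\\_GPE");
    Aml *method = aml_method("_E00", 0);

    aml_append(method, aml_notify(aml_name("\\_SB.%s", vgid_path),
                                  aml_int(0x80)));
    aml_append(gpe, method);
    aml_append(ssdt, gpe);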
> > > > > > > 
> > > > > > > I see. Still - can't this function return the full aml_name?
> > > > > > It's possible, but I'd prefer to go back to 2 ACPI devices as it
> > > > > > was in v13, since Windows sees 2 devices anyway even if they are
> > > > > > merged into one PCI device description (which is probably wrong,
> > > > > > but Windows handles it because the PCI Standard RAM Controller is
> > > > > > driverless), and get rid of the acpi_get_pci_dev_scope_name()
> > > > > > thing.
> > > > > 
> > > > > OK but I think it should be under PCI0 at least,
> > > > > since that one claims the relevant resource in its CRS.
> > > > The vmgenid device doesn't claim any resource if we use PCI for its
> > > > implementation, since the corresponding PCI device claims its BAR.
> > > > But I don't see any problem in putting the VGID device into the PCI0
> > > > scope.
> > > > 
> > > > > 
> > > > > > It will also help if vmgenid becomes part of a multifunction
> > > > > > device, which the current build_append_pci_bus_devices() ignores
> > > > > > for now (i.e. it describes only function 0 devices on a slot).
> > > > > > 
> > > > > > [...]
> > > > > 
> > > > > OK, though we might need to add the description for the PCI device
> > > > > anyway, e.g. in order to mark it hidden.
> > > > Experiments show that Windows ignores _STA for PCI devices; it looks
> > > > like it completely ignores SXX devices in ACPI for devices present at
> > > > boot, except for _EJ().
> > > > BTW: I've already tried it, and it doesn't hide anything.
> > > >  
> > > > [...]
> > > 
> > > So it boils down to the fact that Windows thinks it's RAM,
> > It thinks it's a PCI Standard RAM Controller, not RAM itself.
> > 
> > > so it binds a generic driver to it, but then we get
> > According to Device Manager no driver is bound to it, and none is needed.
> > 
> > > lucky and it does not try to use it as RAM.
> > Video cards also use a bunch of "PCI Standard RAM Controller" devices,
> > I guess to expose additional VRAM. That doesn't mean that the BARs are
> > to be used by the OS as conventional RAM; they are rather for use by the
> > vendor's driver.
> > The same goes for ivshmem, which also exposes a RAM BAR and has the same
> > class ID; the BAR's RAM is used only by means of the ivshmem driver.
> > 
> > But yes, we get lucky that Windows has a stub device description.
> 
> OK. So if you are going to rely on this,
> I think it's a good idea to get an ack from David to confirm
> this is solvable for pseries.

I've looked into this a bit more.  We've confirmed it's definitely a
bug in SLOF - but fixing it is a bit more subtle than I thought.

Basically, SLOF is setting the device_type property for all PCI devices
based on the PCI class code - it's device_type = "memory" that causes
the kernel to erroneously pick up the PCI device as regular RAM.

In fact, device_type is supposed to indicate the capabilities of the OF
driver attached to the device, so it should only be set by an actual OF
driver binding to the device, not generically in the PCI code.

The catch is whether we'll break any existing SLOF-supported devices if
we remove the setting of device_type.  This will need some testing.
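
(For illustration only, neither SLOF nor kernel code: a minimal sketch of the
failure mode described above, where a device_type derived from the PCI class
code makes a "RAM controller" node look like system memory to a client that
trusts device_type.)

    #include <stdint.h>
    #include <string.h>

    /* Deriving device_type from the PCI class code: base class 0x05
     * ("memory controller", which covers "RAM controller") becomes
     * device_type = "memory". */
    static const char *device_type_from_class(uint32_t class_code)
    {
        return ((class_code >> 16) == 0x05) ? "memory" : NULL;
    }

    /* A client that registers every device_type == "memory" node as
     * system RAM will then wrongly claim the device's BAR space. */
    static int treat_as_system_ram(const char *device_type)
    {
        return device_type && strcmp(device_type, "memory") == 0;
    }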

-- 
David Gibson <address@hidden>
Senior Software Engineer, Virtualization, Red Hat
