Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine


From: Claudio Fontana
Subject: Re: [Qemu-devel] [PATCH 3/4] arm: Add PCIe host bridge in virt machine
Date: Mon, 12 Jan 2015 17:23:02 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Thunderbird/24.0.1

On 08.01.2015 16:01, Claudio Fontana wrote:
> On 08.01.2015 14:26, Alexander Graf wrote:
>>
>>
>> On 08.01.15 13:55, Claudio Fontana wrote:
>>> (added cc: Alvise which I mistakenly assumed was in Cc: already)
>>
>> He was in CC :). Now he's there twice.
>>
>>>
>>> On 07.01.2015 22:47, Alexander Graf wrote:
>>>>
>>>>
>>>> On 07.01.15 16:52, Claudio Fontana wrote:
>>>>> On 06.01.2015 17:03, Alexander Graf wrote:
>>>>>> Now that we have a working "generic" PCIe host bridge driver, we can plug
>>>>>> it into ARM's virt machine to always have PCIe available to normal ARM VMs.
>>>>>>
>>>>>> I've successfully managed to expose a Bochs VGA device, XHCI and an e1000
>>>>>> into an AArch64 VM with this and they all lived happily ever after.
>>>>>>
>>>>>> Signed-off-by: Alexander Graf <address@hidden>
>>>>>>
>>>>>> ---
>>>>>>
>>>>>> Linux 3.19 only supports the generic PCIe host bridge driver for 32-bit ARM
>>>>>> systems. If you want to use it with AArch64 guests, please apply the
>>>>>> following patch or wait until upstream has cleaned up the code properly:
>>>>>>
>>>>>>   http://csgraf.de/agraf/pci/pci-3.19.patch
>>>>>> ---
>>>>>>  default-configs/arm-softmmu.mak |  2 +
>>>>>>  hw/arm/virt.c                   | 83 ++++++++++++++++++++++++++++++++++++++---
>>>>>>  2 files changed, 80 insertions(+), 5 deletions(-)
>>>>>>
>>
>> [...]
>>
>>>>>> +    dev = qdev_create(NULL, TYPE_GPEX_HOST);
>>>>>> +
>>>>>> +    qdev_prop_set_uint64(dev, "mmio_window_size", size_mmio);
>>>>>> +    qdev_init_nofail(dev);
>>>>>> +
>>>>>> +    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, base_ecam);
>>>>>> +    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_ioport);
>>>>>> +
>>>>>> +    /* Map the MMIO window at the same spot in bus and cpu layouts */
>>>>>> +    mmio_alias = g_new0(MemoryRegion, 1);
>>>>>> +    mmio_reg = sysbus_mmio_get_region(SYS_BUS_DEVICE(dev), 1);
>>>>>> +    memory_region_init_alias(mmio_alias, OBJECT(dev), "pcie-mmio",
>>>>>> +                             mmio_reg, base_mmio, size_mmio);
>>>>>> +    memory_region_add_subregion(get_system_memory(), base_mmio, mmio_alias);
>>>>>> +
>>>>>> +    sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, pic[irq]);
>>>>>> +
>>>>>> +    nodename = g_strdup_printf("/address@hidden" PRIx64, base);
>>>>>> +    qemu_fdt_add_subnode(vbi->fdt, nodename);
>>>>>> +    qemu_fdt_setprop_string(vbi->fdt, nodename,
>>>>>> +                            "compatible", "pci-host-ecam-generic");
>>>>>
>>>>> Is this the only compatible string we should set here? Isn't this legacy
>>>>> PCI compatible? In other device trees I see this specified as, e.g.,
>>>>> compatible = "arm,versatile-pci-hostbridge", "pci"; would it be sensible
>>>>> to make it a list and include "pci" as well?
>>>>
>>>> I couldn't find anything that defines what a "pci" compatible should
>>>> look like. We definitely don't implement the legacy PCI config space
>>>> accessor registers.
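
For what it's worth, a device tree compatible list is just a sequence of
NUL-separated strings, so if "pci" were ever added it could be emitted along
these lines (a sketch only, reusing the patch's vbi->fdt and nodename):

    /* "compatible" as a list: qemu_fdt_setprop() takes a raw byte blob, and
     * sizeof(compat) includes both terminating NULs */
    static const char compat[] = "pci-host-ecam-generic\0pci";
    qemu_fdt_setprop(vbi->fdt, nodename, "compatible", compat, sizeof(compat));
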
>>>
>>> I see; I had assumed this patch would support both CAM and ECAM as
>>> configuration methods, but now I understand that Alvise's patches support
>>> only CAM, while these support only ECAM. So basically I should look at the
>>> compatible string and then choose the configuration method accordingly.
>>> I wonder if I should deal with the case where the compatible string
>>> contains both ECAM and CAM.
>>
>> Well, original PCI didn't even do CAM. You only had two 32-bit ioport
>> registers that all config space accesses are tunnelled through.
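
To spell that out: this is legacy configuration mechanism #1, roughly as below
(a sketch only; io_write32()/io_read32() are hypothetical stand-ins for
whatever port I/O primitives the guest provides):

    #include <stdint.h>

    #define PCI_CONFIG_ADDRESS 0xCF8   /* address register */
    #define PCI_CONFIG_DATA    0xCFC   /* data register */

    /* hypothetical port I/O helpers supplied by the guest OS */
    extern void     io_write32(uint16_t port, uint32_t val);
    extern uint32_t io_read32(uint16_t port);

    static uint32_t pci_legacy_cfg_read32(uint8_t bus, uint8_t devfn, uint8_t reg)
    {
        /* enable bit, then bus/devfn/register; reg is dword-aligned */
        uint32_t addr = (1u << 31) | (bus << 16) | (devfn << 8) | (reg & 0xfc);

        io_write32(PCI_CONFIG_ADDRESS, addr);
        return io_read32(PCI_CONFIG_DATA);
    }
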
>>
>>>
>>>>>
>>>>>> +    qemu_fdt_setprop_string(vbi->fdt, nodename, "device_type", "pci");
>>>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#address-cells", 3);
>>>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#size-cells", 2);
>>>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "bus-range", 0, 1);
>>>>>> +
>>>>>> +    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "reg",
>>>>>> +                                 2, base_ecam, 2, size_ecam);
>>>>>> +    qemu_fdt_setprop_sized_cells(vbi->fdt, nodename, "ranges",
>>>>>> +                                 1, 0x01000000, 2, 0,
>>>>>> +                                 2, base_ioport, 2, size_ioport,
>>>>>> +
>>>>>> +                                 1, 0x02000000, 2, base_mmio,
>>>>>> +                                 2, base_mmio, 2, size_mmio);
>>>>>> +
>>>>>> +    qemu_fdt_setprop_cell(vbi->fdt, nodename, "#interrupt-cells", 1);
>>>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map",
>>>>>> +                           0, 0, 0, /* device */
>>>>>> +                           0,       /* PCI irq */
>>>>>> +                           gic_phandle, GIC_FDT_IRQ_TYPE_SPI, irq,
>>>>>> +                             GIC_FDT_IRQ_FLAGS_LEVEL_HI /* system irq */);
>>>>>> +    qemu_fdt_setprop_cells(vbi->fdt, nodename, "interrupt-map-mask",
>>>>>> +                           0, 0, 0, /* device */
>>>>>> +                           0        /* PCI irq */);
>>>>>
>>>>> Interrupt map does not seem to work for me; incidentally this ends up 
>>>>> being the same kind of undocumented blob that Alvise posted in his series.
>>>>
>>>> How exactly is this undocumented? The "mask" is a mask over the first
>>>> fields of an interrupt-map row plus an IRQ offset. So the mask above
>>>> means "Any device with any function and any IRQ on it, map to device
>>>> IRQ 0" which maps to vbi->irqmap[VIRT_PCIE] (IRQ 3).
>>>
>>> (see my answer to Peter below in thread)
>>>
>>> This is a bit different from what Alvise's series is doing, I think
>>> (see later).
>>
>> Yes, but it's easier :).
>>
>>>
>>>>
>>>>> Can you add a good comment about what the ranges property contains? The
>>>>> 0x01000000 and 0x02000000 presumably mean I/O vs. MMIO, IIRC, but there
>>>>> is no need to be cryptic about it.
>>>>
>>>> You're saying you'd prefer a define?
>>>
>>> Yes that would be helpful :)
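
For illustration, such defines might look like this (the values are the
standard IEEE 1275 PCI binding space codes in phys.hi; the names here are
made up):

    /* space code (bits 24-25 of phys.hi) of a PCI "ranges" entry */
    #define PCI_RANGE_IOPORT      0x01000000  /* I/O space */
    #define PCI_RANGE_MMIO        0x02000000  /* 32-bit memory space */
    #define PCI_RANGE_MMIO_64BIT  0x03000000  /* 64-bit memory space */
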
>>>
>>>>
>>>>> How does your interrupt map implementation differ from the patchset 
>>>>> posted by Alvise? I ask because that one works for me (tm).
>>>>
>>>> His implementation explicitly listed every PCI slot, while mine actually
>>>> makes use of the mask and simply routes everything to a single IRQ line.
>>>>
>>>> The benefit of masking devfn out is that you don't need to worry about
>>>> the number of slots you support - and anything performance critical
>>>> should go via MSI-X anyway ;).
>>>
>>> The benefit for me (but probably for me only) is that with one IRQ per
>>> slot I didn't have to implement shared IRQs and MSI/MSI-X in the guest
>>> yet. But that should be done eventually anyway.
>>
>> You most likely wouldn't get one IRQ per slot anyway. Most PHBs expose 4
>> outgoing IRQ lines, so you'll need to deal with sharing IRQs regardless.
>>
>> Also, sharing IRQ lines isn't black magic; it's quite a fundamental PCI
>> concept, so I'm glad I'm pushing you in the direction of implementing it
>> early on :).


OK, I have tentatively implemented this and tested both Alvise's series and
yours; both work for my use case.

Note that I did not test any MSI/MSI-X, only the INTx method.
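
For reference, with only four outgoing lines the conventional INTx swizzle on
the guest side looks roughly like this (a sketch, not taken from either
series):

    /* map a device's interrupt pin (1..4 = INTA..INTD) in a given slot to
     * one of the host bridge's four outgoing lines */
    static inline int pci_swizzle_intx(int slot, int pin)
    {
        return (pin - 1 + slot) % 4;
    }
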

>>
> 
> damn open source making me work lol :)
> 
> It's actually good to have these two implementations, so I can validate my 
> guest support against both series and ensure that I don't hardcode too many 
> shortcuts.
> 
> Thanks,
> 
> Claudio
> 
>>>
>>>>
>>>> So what exactly do you test it with? I've successfully received IRQs
>>>> with a Linux guest, so I'm slightly puzzled you're running into problems.
>>>
>>> Of course, I am just implementing PCI guest support for OSv/AArch64.
>>> It works with the legacy PCI support, using Alvise's series to do
>>> virtio-pci with block, net, rng, etc., but I am now considering going for
>>> PCIe support directly, although that means more implementation work for me.
>>
>> Both should look quite similar from your point of view. If you do the
>> level-triggered interrupt handling, IRQ masking and window handling
>> correctly, the only difference should be CAM vs ECAM.
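
As seen from the guest, that difference boils down to the config space offset
computation, roughly as below (a sketch with my own names):

    #include <stdint.h>

    /* ECAM: bus[27:20], devfn[19:12], 4 KiB of registers per function */
    static inline uint64_t ecam_offset(uint8_t bus, uint8_t devfn, uint16_t reg)
    {
        return ((uint64_t)bus << 20) | ((uint32_t)devfn << 12) | (reg & 0xfff);
    }

    /* CAM: bus[23:16], devfn[15:8], 256 bytes of registers per function */
    static inline uint64_t cam_offset(uint8_t bus, uint8_t devfn, uint16_t reg)
    {
        return ((uint64_t)bus << 16) | ((uint32_t)devfn << 8) | (reg & 0xff);
    }
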
>>
>> I'm glad to hear that it breaks your implementation, though, and not
>> something already existing (like Linux, BSD, etc.). That means there's a
>> good chance my patches are correct and you just took some shortcuts which
>> can be easily fixed :).
>>
>>
>> Alex
>>
> 
> 


-- 
Claudio Fontana
Server Virtualization Architect
Huawei Technologies Duesseldorf GmbH
Riesstraße 25 - 80992 München

office: +49 89 158834 4135
mobile: +49 15253060158


