From: Ard Biesheuvel
Subject: Re: [Qemu-arm] [Qemu-devel] [PATCH v2 2/2] hw/arm: Add Arm Enterprise machine type
Date: Thu, 26 Jul 2018 13:15:15 +0200

On 26 July 2018 at 13:11, Andrew Jones <address@hidden> wrote:
> On Thu, Jul 26, 2018 at 12:35:08PM +0200, Ard Biesheuvel wrote:
>> On 26 July 2018 at 12:28, Andrew Jones <address@hidden> wrote:
>> > On Thu, Jul 26, 2018 at 05:22:14PM +0800, Hongbo Zhang wrote:
>> >> On 25 July 2018 at 19:26, Andrew Jones <address@hidden> wrote:
>> >> > On Wed, Jul 25, 2018 at 06:22:17PM +0800, Hongbo Zhang wrote:
>> >> >> On 25 July 2018 at 17:54, Andrew Jones <address@hidden> wrote:
>> >> >> > On Wed, Jul 25, 2018 at 01:30:52PM +0800, Hongbo Zhang wrote:
>> >> >> >> For AArch64 there is one machine, 'virt'; it is primarily meant to
>> >> >> >> run on KVM and execute virtualization workloads, but we need an
>> >> >> >> environment as faithful as possible to physical hardware, to support
>> >> >> >> firmware and OS development for physical AArch64 machines.
>> >> >> >>
>> >> >> >> This patch introduces a new machine type, 'Enterprise', with these
>> >> >> >> main features:
>> >> >> >>  - Based on the 'virt' machine type.
>> >> >> >>  - Redesigned memory map.
>> >> >> >>  - EL2 and EL3 enabled by default.
>> >> >> >>  - GIC version 3 by default.
>> >> >> >>  - AHCI controller attached to the system bus, to which CD-ROM and
>> >> >> >>    hard disk can be added.
>> >> >> >>  - EHCI controller attached to the system bus, with USB mouse and
>> >> >> >>    keyboard installed by default.
>> >> >> >>  - E1000E ethernet card on the PCIe bus.
>> >> >> >>  - VGA display adaptor on the PCIe bus.
>> >> >> >>  - Default CPU type cortex-a57, 4 cores, and 1 GB of memory.
>> >> >> >>  - No virtio functions enabled, since this is meant to emulate real
>> >> >> >>    hardware.
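
[Illustration, not part of the quoted patch: a minimal sketch of
starting such a machine with the stated defaults, assuming the
hypothetical machine name 'enterprise' taken from the subject line;
the final name and option set were still under discussion.

  qemu-system-aarch64 -M enterprise -cpu cortex-a57 -smp 4 -m 1G
]
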
>> >> >> >
>> >> >> > In the last review it was pointed out that using virtio-pci should
>> >> >> > still be "real" enough, so there's not much reason to avoid it.
>> >> >> > Well, unless there's some concern as to what drivers are available
>> >> >> > in the firmware and guest kernel. But that concern usually only
>> >> >> > applies to legacy firmwares and kernels, and therefore shouldn't
>> >> >> > apply to AArch64.
>> >> >> >
>> >> >> In real physical Arm hardware, the *HCI controllers are system memory
>> >> >> mapped, not on PCIe. We need a QEMU platform like that, so that
>> >> >> firmware developed on this QEMU platform can run on real hardware
>> >> >> without change (or with only a minor change).
>> >> >
>> >> > virtio-pci has nothing to do with *HCI. You're adding an E1000e to the
>> >> > PCIe bus instead of a virtio-pci nic. Why?
>> >> >
>> >> No virtio devices are needed on this platform, so no virtio-pci either.
>> >> On real Arm server hardware, a NIC is inserted into a PCIe slot, and
>> >> the E1000E is a typical one.
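
[Illustration only: to the platform, the disputed choice is a single
-device option on the QEMU command line; both of the following are
ordinary PCIe functions from the machine's point of view.

  # emulated Intel NIC, as proposed in the patch
  -device e1000e,netdev=net0 -netdev user,id=net0

  # paravirtualized NIC, which Andrew argues is equally "real" PCIe
  -device virtio-net-pci,netdev=net0 -netdev user,id=net0
]
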
>> >
>> > It is possible to make a real piece of hardware that really goes in a PCIe
>> > slot which knows how to talk VirtIO. The fact that an E1000e driver will
>> > drive an E1000e QEMU model instead of a VirtIO driver driving a VirtIO
>> > backend is, to me, pretty arbitrary. The only reason it should matter for
>> > the guest firmware/kernel is whether or not the firmware/kernel will have
>> > VirtIO drivers available. Do we know that? Is it documented somewhere
>> > that the guest firmware/kernel is guaranteed to have E1000e drivers, but
>> > VirtIO drivers are optional, or even forbidden? If so, where's that
>> > document?
>> >
>>
>> It is not arbitrary at all: one is paravirtualization and one is not.
>
> But the paravirtness is a driver detail, not a platform detail. The
> virtio-pci device is just a PCIe device to the platform. Drive it or
> not, drive it with knowledge that it's paravirt or not, the platform
> doesn't care.
>

That may be true. But we'll still end up with a UEFI build that has
OVMF virtio bus drivers and device drivers included, blurring the line
between emulation and virtualization.

>>
>> >>
>> >> >> The concern is not only the available firmwares; the emphasis is on
>> >> >> new firmwares to be developed on this platform (the target is
>> >> >> developing firmware for hardware, using QEMU when the hardware is
>> >> >> temporarily unavailable). If a virtio device is used, the newly
>> >> >> developed firmware must include virtio front-end code, which isn't
>> >> >> needed when it eventually runs on real hardware.
>> >> >
>> >> > Right. The new firmwares and kernels would need to include virtio-pci
>> >> > nic and scsi drivers. Is that a problem? Anyway, this is all the more
>> >> > reason not to hard code peripherals. If a particular peripheral is a
>> >> > problem for a given firmware, then simply don't add it to the command
>> >> > line; add a different one.
>> >> >
>> >> Yes, that is a problem: for newly developed firmwares, extra effort
>> >> would be wasted on front-end code (or a glue layer, whatever we call
>> >> it). We want firmwares developed on this platform to run easily on
>> >> real hardware, without such changes.
>> >> The requirement is that some Linaro members want a QEMU platform as
>> >> true to real hardware as possible. There should be no problem with
>> >> that requirement; the problem is that the 'virt' machine cannot
>> >> satisfy it, so a new one is needed.
>> >
>> > It sounds like somebody knows what drivers are available and what
>> > drivers aren't. If that's not already documented, then it should
>> > be, and a pointer to it should be in this patch series.
>> >
>>
>> Available where?
>
> Available in UEFI, ARM-TF, and the target guest kernel. What software
> stack is this machine model targeting? I get the impression people
> know what they need, but knowing and specifying with a document are
> two different things.
>

Right.

>>
>> UEFI has drivers for ?HCI industry-standard hardware. As for the
>> networking side, we should review whether the E1000e is the most
>> appropriate choice for this, given the lack of open source drivers.
>> However, I do agree that discoverable hardware should not be
>> hardcoded, and we should even try to use the emulated option ROM to
>> provide a UEFI driver.
>
> Amen to that.
>
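
[Illustration only: QEMU's PCI devices have a generic romfile
property, and QEMU ships EFI option ROM images for its emulated NICs;
treat the exact ROM file name below as an assumption.

  -device e1000e,netdev=net0,romfile=efi-e1000e.rom -netdev user,id=net0
]
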
>>
>> >>
>> >> >>
>> >> >> >>  - No paravirtualized fw_cfg device either.
>> >> >> >>
>> >> >> >> Arm Trusted Firmware and UEFI have been ported to this platform
>> >> >> >> accordingly.
>> >> >> >>
>> >> >> >
>> >> >> > How will UEFI get the ACPI tables from QEMU without fw-cfg? I didn't
>> >> >> > see any sort of reserved ROM region in the patch for them.
>> >> >> >
>> >> >> UEFI gets the ACPI tables and the kernel from the network or mass
>> >> >> storage, just like on real hardware.
>> >> >
>> >> > Hmm. I thought for real hardware that the ACPI tables were built into
>> >> > the firmware. So, assuming UEFI knows how to read ACPI tables from
>> >> > some storage, then how do the QEMU generated ACPI tables get into that
>> >> > storage?
>> >> >
>> >> I should say "mass storage and flash".
>> >> There was fw_cfg in the v1 patch; it is removed in v2.
>> >> Without fw_cfg, just like real hardware, UEFI should include ACPI
>> >> support for this SBSA platform, and UEFI/ACPI is loaded via -pflash;
>> >> the QEMU built-in ACPI then isn't used.
>> >> But there are side effects too: the command 'qemu -bios uefi -kernel'
>> >> won't work. I need extra time to evaluate this change.
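
[Illustration only: a sketch of booting firmware, with its built-in
ACPI tables, entirely from emulated flash, with no fw_cfg channel; the
machine name and flash image file names are placeholders.

  qemu-system-aarch64 -M enterprise -cpu cortex-a57 -smp 4 -m 1G \
      -drive if=pflash,file=SBSA_EFI.fd,format=raw,readonly=on \
      -drive if=pflash,file=SBSA_VARS.fd,format=raw
]
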
>> >
>> > Right. Neither ACPI nor '-bios ... -kernel ...' can work without fw-cfg.
>> > This patch either needs to keep fw-cfg, or to remove the ACPI changes.
>> > I can't see how an SBSA reference platform would be much use without ACPI
>> > though.
>> >
>>
>> Even if mach-virt's ACPI code depends on fw_cfg currently, there is no
>> reason whatsoever that this sbsa machine should not implement it like
>> real hardware does, i.e., hard coded tables.
>>
>
> I don't disagree, but there's no point in making QEMU ACPI generation
> code changes that will never be consumed. This patch adds tables for
> the hard coded ?HCI controllers to ACPI. We don't need those changes for
> the virt machine and, without fw-cfg, you can't use them on the reference
> machine.
>

Ah, indeed. I missed that bit.

We should not include any changes that modify the DT node or ACPI
table generation for mach-virt.


