
Re: [Qemu-devel] ipxe and arm


From: Laszlo Ersek
Subject: Re: [Qemu-devel] ipxe and arm
Date: Thu, 12 May 2016 18:13:54 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.0

On 05/12/16 15:29, Shannon Zhao wrote:
> On 05/11/16 22:44, Laszlo Ersek wrote:
>> On 05/11/16 15:51, Shannon Zhao wrote:
>>> On 05/11/16 21:38, Laszlo Ersek wrote:
>>>> On 05/11/16 15:03, Gerd Hoffmann wrote:
>>>>>   Hi,
>>>>>
>>>>> ipxe gained support for arm and aarch64 efi platforms.  So we could add
>>>>> support to our nic pci roms with the next ipxe update.
>>>>>
>>>>> But: The question is whether that makes sense in the first place.
>>>>> Support for virtio-net is in edk2, so that is covered already.  The
>>>>> other pci nics are not, but given that virtio-net predates arm
>>>>> virtualization all guests should be able to handle virtio-net just
>>>>> fine.
>>>>> And I doubt anybody seriously prefers rtl8139 or e1000 over virtio-net
>>>>> unless the lack of guest driver support mandates it ...
>>>>>
>>>>> Comments anyone?
>>>>
>>>> AFAIK all aarch64 OS installers will come with virtio-net drivers on the
>>>> install media.
>>>>
>>> But if the user doesn't specify a virtio-net nic, then ipxe will fail,
>>> right?
>> I don't understand the question, sorry. How can ipxe fail if ipxe is not
>> made available to the guest, in any NIC's PCI option ROM BAR?
> What I meant to say is that on x86 a user can use rtl8139 or e1000 with
> ipxe. If UEFI doesn't support rtl8139 or e1000, the user can't use ipxe
> with UEFI when he only uses rtl8139 or e1000.

Netbooting is a multi-stage process.

* In the first stage, you have two things:
- platform firmware,
- a PCI expansion ROM on your NIC.

* In the second stage, you can have whatever your first stage managed to
boot for you, over the network.

Let me list a few common configurations.

(1)

* First stage:
  - platform firmware: legacy BIOS
  - NIC oprom: complete PXE implementation from the NIC's vendor
* Second stage:
  - full-fledged iPXE build

In this case, you use your physical hardware as-is to chain-load iPXE
via TFTP, and then you can netboot whatever your heart desires, with
iPXE, over iSCSI, HTTPS and so on.

http://ipxe.org/howto/chainloading
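
As an aside, the chain-loading itself is configured on the DHCP/TFTP
server. Here is a minimal sketch using dnsmasq (the paths and the
boot.example.com URL are hypothetical):

  # Hand the iPXE binary to the NIC's vendor PXE ROM first; once iPXE
  # runs, it re-DHCPs and identifies itself via DHCP option 175, at
  # which point it is given the real boot script instead.
  enable-tftp
  tftp-root=/var/lib/tftpboot
  dhcp-boot=undionly.kpxe
  dhcp-match=set:ipxe,175
  dhcp-boot=tag:ipxe,http://boot.example.com/boot.ipxe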

(2)

* First stage:
  - platform firmware: legacy BIOS
  - NIC oprom: full-fledged iPXE build
* Second stage:
  - whatever your heart desires

In this case, a full-fledged iPXE binary (built for legacy BIOS) is
physically flashed on your NIC. The platform firmware will dispatch it
from the NIC's oprom, and then you can immediately netboot whatever you
want, with the many features iPXE has.

http://ipxe.org/howto/romburning

Notably, this is what you get when QEMU runs SeaBIOS, and the iPXE
binary (matching the NIC) is automatically presented to the guest in the
NIC's ROM BAR.
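
For example, netbooting such a guest needs nothing beyond pointing
QEMU's built-in TFTP server at a boot file (a sketch; the paths are
hypothetical):

  qemu-system-x86_64 \
    -netdev user,id=n0,tftp=/srv/tftp,bootfile=/boot.ipxe \
    -device e1000,netdev=n0 \
    -boot n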

(3)

* First stage:
  - platform firmware: UEFI
  - NIC oprom: low level UEFI network driver (SNP driver) for the NIC,
               from the NIC vendor
* Second stage:
  - full-fledged iPXE build

In this case, the UEFI platform firmware contains the higher level
network protocol implementations (DHCP, PXE, UDP, IP, TCP, HTTP), and
the NIC's option ROM provides just a low-level network driver. The
platform firmware will use the low-level SNP driver (from the NIC's
vendor) to boot a second stage boot loader via TFTP (or HTTP). That
second stage boot loader can be a full-fledged iPXE build, which can
boot absolutely anything over anything.
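
Such a second-stage iPXE binary is typically driven by a small script;
a sketch (the menu URL is hypothetical):

  #!ipxe
  dhcp
  chain http://boot.example.com/menu.ipxe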

(4)

* First stage:
  - platform firmware: UEFI
  - NIC oprom: full-fledged iPXE build
* Second stage:
  - whatever your heart desires

In this case, the full-fledged iPXE UEFI binary, from the NIC's oprom,
inserts itself into the UEFI boot process, and offers you the full
functionality of iPXE immediately. On the other hand, it can expose (and
has exposed) incompatibilities between iPXE and UEFI, and/or your UEFI
environment may no longer be considered a proper UEFI environment (iPXE
"hijacks" the UEFI boot process).

(5)

* First stage:
  - platform firmware: UEFI
  - NIC oprom: stripped down iPXE build (only SNP driver)
* Second stage:
  - full-fledged iPXE build

This is identical to case (3), except that the low-level NIC driver is
not shipped by the NIC's manufacturer; instead, you flash the stripped
down UEFI build of iPXE to the NIC. iPXE is only used to provide
low-level UEFI network drivers.

This is the case with QEMU, when QEMU runs OVMF (as platform firmware),
and you allow QEMU to automatically load the bundled iPXE binary into
the virtual NIC's option ROM BAR.

For this, the iPXE binary is built with CONFIG=qemu.

The supported NICs are e1000, ne2k_pci, pcnet, rtl8139, and virtio-net-pci.
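
Roughly, this is how QEMU's roms/Makefile drives the build (a sketch,
shown for virtio-net, whose PCI IDs are 1af4:1000); the two halves are
then combined into a single ROM image (more on that below):

  make -C ipxe/src CONFIG=qemu bin/1af41000.rom                # legacy half
  make -C ipxe/src CONFIG=qemu bin-x86_64-efi/1af41000.efidrv  # UEFI SNP half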

The UEFI platform firmware (for example, OVMF) can then be used to
netboot a second stage boot loader (over PXE or HTTP): for example, a
full-fledged iPXE binary, or "shim" (-> grub), etc.

(6)

* First stage:
  - platform firmware: UEFI
  - NIC oprom: absent
* Second stage:
  - full-fledged iPXE build

In this case, we assume that the platform firmware itself contains an
SNP driver that is compatible with your NIC. In other words, the
platform firmware provides the complete network stack. It can netboot
the second stage boot loader "without help" (over PXE or HTTP): a
full-fledged iPXE build, or shim / grub, etc.

This is what you get when QEMU runs OVMF, and you pass "rombar=0" to the
virtio-net-pci device. OVMF's builtin VirtioNetDxe driver provides an
SNP interface to the virtio-net NIC.
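
A sketch of such an invocation (the firmware paths are distro-specific;
these are typical of Fedora):

  qemu-system-x86_64 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -netdev user,id=n0 \
    -device virtio-net-pci,netdev=n0,rombar=0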

----*----

Okay, after seeing the common use cases, here's a side point first, then
we'll come to your question.

The side point is: the iPXE binaries that are bundled with QEMU are
/combined/ PCI expansion ROMs. Meaning, per NIC type, you have just one
(combined) binary that QEMU loads into the ROM BAR, and the platform
firmware (SeaBIOS vs. OVMF) will dispatch just the half that is
appropriate (this is what tells cases (2) and (5) apart).
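
If you have edk2's BaseTools built, you can see the two halves with the
EfiRom utility (the ROM path is distro-specific):

  EfiRom -d /usr/share/qemu/efi-virtio.rom

The dump lists a legacy x86 image and a UEFI driver image inside the
same option ROM.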

Getting to your question:

Thus far iPXE has only been available for x86 guests. *Plus*, on aarch64,
there's only UEFI (at least, nothing else that we're willing to call
"firmware"). This filters out cases (1) through (5), and leaves only case
(6).

Namely, if you wanted to netboot an aarch64 virtual machine, you had to
use a virtio-net NIC (over virtio-mmio or virtio-pci), use AAVMF, and
rely on AAVMF's VirtioNetDxe driver to provide the lowest level NIC
driver. And for the second stage, you could only use "shim" / "grub",
etc.; not a full-fledged iPXE.
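
In concrete terms (a sketch; the AAVMF paths are distro-specific):

  qemu-system-aarch64 -M virt -cpu cortex-a57 \
    -drive if=pflash,format=raw,readonly,file=/usr/share/AAVMF/AAVMF_CODE.fd \
    -drive if=pflash,format=raw,file=AAVMF_VARS.fd \
    -netdev user,id=n0 \
    -device virtio-net-pci,netdev=n0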

This has now changed. With iPXE gaining aarch64 support, cases (4) and
(5) have opened up. The question is whether QEMU wants to support them.

Case (4) should not be supported for aarch64 VMs for the exact same
reason as for x86 VMs: the UEFI boot process should not be interfered
with; we want low level network drivers and nothing more.

Case (5) *could* be supported -- but what for? The only advantage (5)
offers over (6) would be that you could netboot with the following
(virtual) NICs: e1000, ne2k_pci, pcnet, rtl8139.

Why would anyone want to use such virtual NICs in an aarch64 virtual
machine? For x86 guests, those NICs make sense, because your guest OS
might have (internal or even external) drivers for those NICs only, and
not for virtio-net. Fine.

But aarch64 guest OSes will come with builtin virtio-net drivers, so (6)
should be enough for everything.

In case you want full-fledged iPXE in an aarch64 VM, (6) can perfectly
accommodate that, same as in x86 guests: use virtio-net, and
chain-load the full-fledged iPXE binary. QEMU need not bundle aarch64
iPXE binaries for that.

Thanks
Laszlo


