
From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [libvirt] How to best handle the reoccurring of rom changes breaking cross version migrations?
Date: Thu, 2 Nov 2017 15:34:24 +0000
User-agent: Mutt/1.9.1 (2017-09-22)

On Thu, Nov 02, 2017 at 04:14:06PM +0100, Christian Ehrhardt wrote:
> Ping - since there wasn't any reply so far - are there any best practices
> one could share?
> Let me add a TL;DR:
> - a bump of the ipxe ROM version changes the size of virtio-net-pci.rom
> - that breaks migration with a "Length mismatch" error
> I'd guess the size of that ROM has to be fixed up on the fly, but whether
> that is really OK, and how/where to do it, is the question.

The actual ROM contents will be transferred in the migration stream, so
the fact that the target host has ROMs with different content is not
important. The key thing that matters is that QEMU on the target host
loads the ROMs at the same location, so that when the ROM contents are
overwritten with data from the incoming migration stream, everything ends
up at the same place as it was on the source.
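As a sketch of what that implies for the target side (paths, netdev wiring
and machine type here are illustrative assumptions, not from the original
mail): the `romfile` property of virtio-net-pci can point QEMU at a ROM
image of the same size as the one the source host used, so the device's
ROM region matches before the incoming data arrives:

```shell
# Hypothetical target-side invocation: use a ROM image whose size matches
# the source host's ROM so the ROM region lines up and the
# "Length mismatch" migration error is avoided.
qemu-system-x86_64 \
    -machine pc-i440fx-2.9 \
    -device virtio-net-pci,netdev=n0,romfile=/usr/share/ipxe/1af41000.rom \
    -netdev user,id=n0 \
    -incoming tcp:0:4444
```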

Getting this to happen requires pre-planning when building the ROMs. By
the time you hit the size change during migration it is too late and your
VM is basically doomed. When building, you need to add padding. IIUC, for
RHEL we artificially increased the size of the seabios and ipxe ROMs to
256k, so that when later RHEL updates ship a new seabios/ipxe, it still
fits in the 256k region previously allowed for.
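A minimal sketch of such a build-time padding step, assuming GNU coreutils
(`truncate`, `stat -c`); the function name is made up here, and the 256 KiB
figure just follows the RHEL example above:

```shell
#!/bin/sh
# pad_rom: zero-extend a ROM image to a fixed 256 KiB, so that future
# builds of equal or smaller size occupy the same region and
# cross-version migration does not hit a size mismatch.
pad_rom() {
    rom="$1"
    pad_size=$((256 * 1024))            # fixed padded size: 256 KiB
    actual=$(stat -c %s "$rom")         # current ROM size in bytes
    if [ "$actual" -gt "$pad_size" ]; then
        echo "ROM $rom ($actual bytes) exceeds $pad_size bytes" >&2
        return 1
    fi
    truncate -s "$pad_size" "$rom"      # grow file with zeros to pad_size
}
```

Once every shipped build is padded this way, the ROM size seen by QEMU
stays constant across updates, which is what makes the migration stream's
ROM data land in a region of the expected size.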

... QEMU could really benefit from more formal docs around migration to
describe how users / vendors can protect themselves from the many pitfalls
like this...

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
