From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH v4 0/7] Fix QEMU crash during memory hotplug with vhost=on
Date: Thu, 16 Jul 2015 13:24:35 +0300

On Thu, Jul 16, 2015 at 11:42:36AM +0200, Igor Mammedov wrote:
> On Thu, 16 Jul 2015 10:35:33 +0300
> "Michael S. Tsirkin" <address@hidden> wrote:
> 
> > On Thu, Jul 16, 2015 at 09:26:21AM +0200, Igor Mammedov wrote:
> > > On Wed, 15 Jul 2015 19:32:31 +0300
> > > "Michael S. Tsirkin" <address@hidden> wrote:
> > > 
> > > > On Wed, Jul 15, 2015 at 05:12:01PM +0200, Igor Mammedov wrote:
> > > > > On Thu,  9 Jul 2015 13:47:17 +0200
> > > > > Igor Mammedov <address@hidden> wrote:
> > > > > 
> > > > > There is also yet another issue with vhost-user: it too has a
> > > > > very low limit on the number of memory regions (8, if I recall
> > > > > correctly), and it's possible to trigger it even without memory
> > > > > hotplug. One just needs to start QEMU with several -numa memdev=
> > > > > options to create enough memory regions to hit it.
> > > > > 
> > > > > A low-risk option to fix it would be increasing the limit in the
> > > > > vhost-user backend.
> > > > > 
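> > > > > As a rough sketch of where that limit lives (abridged; the
> > > > > structures below follow the vhost-user protocol layout of the
> > > > > time, as in hw/virtio/vhost-user.c):
> > > > > 
> > > > >   #include <stdint.h>
> > > > > 
> > > > >   #define VHOST_MEMORY_MAX_NREGIONS 8   /* the hardcoded limit */
> > > > > 
> > > > >   typedef struct VhostUserMemoryRegion {
> > > > >       uint64_t guest_phys_addr;
> > > > >       uint64_t memory_size;
> > > > >       uint64_t userspace_addr;
> > > > >       uint64_t mmap_offset;
> > > > >   } VhostUserMemoryRegion;
> > > > > 
> > > > >   typedef struct VhostUserMemory {
> > > > >       uint32_t nregions;
> > > > >       uint32_t padding;
> > > > >       /* fixed-size table: a 9th region has nowhere to go */
> > > > >       VhostUserMemoryRegion regions[VHOST_MEMORY_MAX_NREGIONS];
> > > > >   } VhostUserMemory;
> > > > > 
> > > > > so raising the limit means bumping that constant (and the message
> > > > > size that goes with it) on both sides of the protocol.
> > > > > 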
> > > > > Another option is disabling vhost and falling back to virtio,
> > > > > but I don't know enough about vhost to say whether it's possible
> > > > > to switch it off without losing packets the guest was sending at
> > > > > that moment, and whether it would work at all with vhost.
> > > > 
> > > > With vhost-user you can't fall back to virtio: it's
> > > > not an accelerator, it's the backend.
> > > > 
> > > > Updating the protocol to support a bigger table
> > > > is possible but old remotes won't be able to support it.
> > > > 
> > > It looks like increasing the limit is the only option left.
> > > 
> > > It's not ideal that old remotes /with a hardcoded limit/ might not
> > > be able to handle a bigger table, but at least new ones, and ones
> > > that handle the VhostUserMsg payload dynamically, would be able to
> > > work without crashing.
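> > > 
> > > As a minimal sketch of the dynamic approach (names like read_exact()
> > > and VhostUserHeader are illustrative, not actual API), a remote
> > > sizes the payload from the message header instead of assuming a
> > > fixed struct:
> > > 
> > >   /* the vhost-user message starts with request, flags and size */
> > >   struct VhostUserHeader {
> > >       uint32_t request;
> > >       uint32_t flags;
> > >       uint32_t size;     /* payload length in bytes */
> > >   } hdr;
> > > 
> > >   read_exact(fd, &hdr, sizeof(hdr));
> > > 
> > >   /* allocate exactly what the sender declared, so a memory table
> > >      with more than 8 regions still fits */
> > >   void *payload = malloc(hdr.size);
> > >   read_exact(fd, payload, hdr.size);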
> > 
> > I think we need a way for hotplug to fail gracefully.  As long as we
> > don't implement the hva trick, it's needed for old kernels with vhost in
> > kernel, too.
> I don't see a reliable way to fail hotplug, though.
> 
> In the hotplug case, the failure path comes from the memory listener,
> which can't fail by design; yet it does fail in the vhost case, i.e.
> the vhost side doesn't follow the protocol.
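> 
> For reference, the listener callbacks return void (abridged from
> include/exec/memory.h of the time), so there is no channel to report
> an error upward:
> 
>   struct MemoryListener {
>       void (*region_add)(MemoryListener *listener,
>                          MemoryRegionSection *section);
>       void (*region_del)(MemoryListener *listener,
>                          MemoryRegionSection *section);
>       /* ... more void hooks ... */
>   };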
> 
> We have already considered the idea of querying vhost for its limit
> from the memory hotplug handler before mapping the memory region
> (see the sketch after this list), but it has drawbacks:
>  1. the number of memory ranges changes during the guest's lifecycle
>    as it initializes different devices, which leads to a case where
>    we can hotplug more pc-dimms than we can cold-plug.
>    That in turn makes it impossible to migrate a guest with
>    hotplugged pc-dimms, since the target QEMU won't start with that
>    number of dimms from the source, due to hitting the limit.
>  2. from a modeling point of view it's an ugly hack to query some
>    random 'vhost' entity when plugging a dimm device, but we can
>    live with it if it helps QEMU not to crash.
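> 
> A hypothetical shape of that check at dimm pre-plug time (names are
> illustrative, not settled API):
> 
>   static void dimm_pre_plug(Error **errp)
>   {
>       /* ask the vhost layer whether one more memory region still
>          fits before the memory listener gets to map the new dimm */
>       if (!vhost_has_free_slot()) {
>           error_setg(errp,
>                      "a used vhost backend has no free memory slots left");
>       }
>   }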
> 
> If it's acceptable to break/ignore issue #1, I can post the related
> QEMU patches that I have; at least QEMU won't crash with old vhost
> backends.

Old kvm has a lower limit on the number of slots as well. How is this handled?
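
For reference, roughly how QEMU probes the in-kernel limit (abridged
from kvm_init() in kvm-all.c; kernels too old for the capability
report 0):

  s->nr_slots = kvm_check_extension(s, KVM_CAP_NR_MEMSLOTS);
  if (s->nr_slots == 0) {
      s->nr_slots = 32;  /* conservative default for old kernels */
  }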

-- 
MST


