Re: [Qemu-devel] Hotplug ram and vhost-user


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] Hotplug ram and vhost-user
Date: Thu, 7 Dec 2017 18:56:19 +0200

On Thu, Dec 07, 2017 at 05:35:13PM +0100, Maxime Coquelin wrote:
> 
> 
> On 12/07/2017 04:56 PM, Michael S. Tsirkin wrote:
> > On Thu, Dec 07, 2017 at 04:52:18PM +0100, Maxime Coquelin wrote:
> > > Hi David,
> > > 
> > > On 12/05/2017 06:41 PM, Dr. David Alan Gilbert wrote:
> > > > Hi,
> > > >     Since I'm reworking the memory map update code I've been
> > > > trying to test it with hot adding RAM; but even on upstream
> > > > I'm finding that hot adding RAM causes the guest to stop passing
> > > > packets with vhost-user-bridge;  have either of you seen the same
> > > > thing?
> > > 
> > > No, I have never tried this.
> > > 
> > > > I'm doing:
> > > > ./tests/vhost-user-bridge -u /tmp/vubrsrc.sock
> > > > $QEMU -enable-kvm -m 1G,maxmem=2G,slots=4 -smp 2 \
> > > >   -object memory-backend-file,id=mem,size=1G,mem-path=/dev/shm,share=on \
> > > >   -numa node,memdev=mem -mem-prealloc \
> > > >   -trace events=vhost-trace-file \
> > > >   -chardev socket,id=char0,path=/tmp/vubrsrc.sock \
> > > >   -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
> > > >   -device virtio-net-pci,netdev=mynet1 $IMAGE -net none
> > > > 
> > > > (with a f27 guest) and then doing:
> > > > (qemu) object_add memory-backend-file,id=mem1,size=256M,mem-path=/dev/shm
> > > > (qemu) device_add pc-dimm,id=dimm1,memdev=mem1
> > > > 
> > > > but then not getting any responses inside the guest.
> > > > 
> > > > I can see the code sending another set-mem-table with the
> > > > extra chunk of RAM and fd, and I think I can see the bridge
> > > > mapping it.
> > > 
> > > I think there are at least two problems.
> > > The first one is that vhost-user-bridge does not support the vhost-user
> > > protocol's reply-ack feature. So when QEMU sends a request, it cannot
> > > know whether/when it has been handled by the backend.
> > > 
> > > It had been fixed by sending a GET_FEATURES request to make sure the
> > > SET_MEM_TABLE had been handled, since messages are processed in order.
> > > The problem is that this caused some test failures when using TCG, so
> > > it got reverted.
> > > 
> > > The initial fix:
> > > 
> > > commit 28ed5ef16384f12500abd3647973ee21b03cbe23
> > > Author: Prerna Saxena <address@hidden>
> > > Date:   Fri Aug 5 03:53:51 2016 -0700
> > > 
> > >      vhost-user: Attempt to fix a race with set_mem_table.
> > > 
> > > The revert:
> > > 
> > > commit 94c9cb31c04737f86be29afefbff401cd23bc24d
> > > Author: Michael S. Tsirkin <address@hidden>
> > > Date:   Mon Aug 15 16:35:24 2016 +0300
> > > 
> > >      Revert "vhost-user: Attempt to fix a race with set_mem_table."
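
A simplified sketch of that ordering trick, for illustration only: the
message names come from the vhost-user spec, but vu_send()/vu_recv_reply()
and the trimmed-down VhostUserMsg below are stand-ins, not the code from
the commit above.

    #include <stdint.h>

    enum { VHOST_USER_GET_FEATURES = 1, VHOST_USER_SET_MEM_TABLE = 5 };

    typedef struct {
        uint32_t request;
        /* flags/size/payload omitted for brevity */
    } VhostUserMsg;

    /* Assumed transport helpers: send a message (optionally with fds) over
     * the vhost-user socket, and block until the backend replies. */
    int vu_send(int sock, const VhostUserMsg *msg, const int *fds, int nfds);
    int vu_recv_reply(int sock, VhostUserMsg *reply);

    /* After SET_MEM_TABLE, use GET_FEATURES as a barrier: the backend
     * handles messages in order and always replies to GET_FEATURES, so
     * once that reply arrives the new memory table must already have
     * been processed. */
    static int set_mem_table_and_wait(int sock, const VhostUserMsg *mem_msg,
                                      const int *fds, int nfds)
    {
        VhostUserMsg probe = { .request = VHOST_USER_GET_FEATURES };

        if (vu_send(sock, mem_msg, fds, nfds) < 0) {
            return -1;
        }
        if (vu_send(sock, &probe, NULL, 0) < 0) {
            return -1;
        }
        return vu_recv_reply(sock, &probe);
    }
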
> > 
> > It's a question of stress-testing it and finding out why it caused
> > tests to fail, especially when run within a container.
> 
> Actually, I did work on fixing it last year and proposed the series below:
> http://lists.gnu.org/archive/html/qemu-devel/2016-09/msg01704.html
> 
> It fell through the cracks though. Maybe we could just revert your
> revert (patch 1 of my series) now that TCG is no longer used by
> vhost-user-test?
> 
> Maxime

Can you please explain this:

Analysis of the race shows that it would happen only when QEMU relies
on TCG.

What is the reason it only happens with TCG?

> > > 
> > > Another problem is that memory mmapped by the previous call does not
> > > seem to be unmapped, but that should not cause problems other than
> > > leaking virtual memory.
> > > 
> > > Maxime
> > > > Dave
> > > > 
> > > > --
> > > > Dr. David Alan Gilbert / address@hidden / Manchester, UK
> > > > 
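
On the second problem noted above (each SET_MEM_TABLE leaking the mappings
created for the previous table), this is the kind of cleanup the backend
would need before mapping the new regions. The MemRegion layout below is a
simplified stand-in, not vhost-user-bridge's actual structures.

    #include <stdint.h>
    #include <sys/mman.h>

    typedef struct {
        void    *mmap_addr;   /* base returned by mmap() */
        uint64_t mmap_offset; /* offset of the region inside the mapping */
        uint64_t size;        /* guest-visible size of the region */
    } MemRegion;

    /* Before mmap()ing the regions from a new SET_MEM_TABLE message, drop
     * the mappings created for the previous table so virtual address space
     * is not leaked on every memory hotplug. */
    static void unmap_old_regions(MemRegion *regions, unsigned *nregions)
    {
        for (unsigned i = 0; i < *nregions; i++) {
            if (regions[i].mmap_addr) {
                munmap(regions[i].mmap_addr,
                       regions[i].size + regions[i].mmap_offset);
                regions[i].mmap_addr = NULL;
            }
        }
        *nregions = 0;
    }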


