Re: [Qemu-devel] [RFC] vhost: Can we change synchronize_rcu to call_rcu


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC] vhost: Can we change synchronize_rcu to call_rcu in vhost_set_memory() in vhost kernel module?
Date: Mon, 12 May 2014 13:08:14 +0300

On Mon, May 12, 2014 at 11:57:32AM +0200, Paolo Bonzini wrote:
> On 12/05/2014 11:28, Gonglei (Arei) wrote:
> >From the previous discussion:
> >https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg04925.html
> >we know that you are going to replace RCU in KVM_SET_GSI_ROUTING with SRCU.
> >Although SRCU is considerably better than the original RCU, in our test case
> >it still cannot satisfy our needs. Our VMs work in a telecom scenario: each
> >VM reports its CPU and memory usage to a balance node every second, and the
> >balance node dispatches work to the VMs according to their load. Since this
> >balancing needs high accuracy, the IRQ affinity settings in the VMs also
> >need high accuracy, so we rebalance IRQ affinity every 0.5s. For this
> >telecom scenario, the KVM_SET_GSI_ROUTING ioctl therefore needs much
> >optimization. In the live migration case, VHOST_SET_MEM_TABLE also needs
> >attention.
> >
> >We tried to change synchronize_rcu() to call_rcu() with a rate limit, but
> >the rate limit is not easy to configure. Do you have better ideas for
> >achieving this? Thanks.
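
(For concreteness: the change being discussed amounts to something like the
sketch below. The struct and field names are illustrative, not the actual
vhost code; in particular, struct vhost_memory would need an rcu_head
embedded in it for call_rcu() to work this way.)

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Illustrative types only; the real struct vhost_memory / struct vhost_dev
 * differ.  The point is the switch from synchronize_rcu() to call_rcu(). */
struct vhost_memory_example {
        struct rcu_head rcu;            /* assumed: embedded for call_rcu() */
        /* ... memory region descriptors ... */
};

struct vhost_dev_example {
        struct vhost_memory_example __rcu *memory;
};

static void vhost_memory_free_rcu(struct rcu_head *head)
{
        kfree(container_of(head, struct vhost_memory_example, rcu));
}

/* Today: the ioctl path blocks for a full grace period before freeing. */
static void set_memory_blocking(struct vhost_dev_example *dev,
                                struct vhost_memory_example *newmem)
{
        struct vhost_memory_example *oldmem =
                rcu_dereference_protected(dev->memory, 1); /* dev mutex held */

        rcu_assign_pointer(dev->memory, newmem);
        synchronize_rcu();
        kfree(oldmem);
}

/* Proposed: publish the new table and return at once; the old table is
 * freed from an RCU callback after all readers have finished. */
static void set_memory_deferred(struct vhost_dev_example *dev,
                                struct vhost_memory_example *newmem)
{
        struct vhost_memory_example *oldmem =
                rcu_dereference_protected(dev->memory, 1); /* dev mutex held */

        rcu_assign_pointer(dev->memory, newmem);
        call_rcu(&oldmem->rcu, vhost_memory_free_rcu);
}

The trade-off is that deferred frees can pile up if the ioctl is issued
faster than grace periods complete, which is why the rate-limit question
above comes into play.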
> 
> Perhaps we can check for cases where only the address is changing,
> and poke at an existing struct kvm_kernel_irq_routing_entry without
> doing any RCU synchronization?

I suspect interrupts can get lost then: e.g. if the address didn't match any
CPUs before, but now it matches some. No?

> As long as kvm_set_msi_irq only reads address_lo once, it should work.
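
(A sketch of what "reads address_lo once" means in practice; the helper below
is hypothetical and only follows the shape of kvm_kernel_irq_routing_entry,
it is not the actual kvm_set_msi_irq() code.)

#include <linux/kvm_host.h>

static u32 example_msi_dest_id(struct kvm_kernel_irq_routing_entry *e)
{
        /* One load of the field, so an in-place update of the entry cannot
         * be observed half old / half new within a single delivery. */
        u32 address_lo = ACCESS_ONCE(e->msi.address_lo);

        /* On x86 the MSI destination ID sits in bits 19:12 of the address;
         * every later decision should reuse this local, never re-read e. */
        return (address_lo >> 12) & 0xff;
}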
> 
> VHOST_SET_MEM_TABLE is a different problem.  What happens in
> userspace that leads to calling that ioctl?  Can we remove it
> altogether, or delay it until after the destination has started
> running?
> 
> Paolo
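
(For context on that question: the userspace side of the ioctl looks roughly
like the sketch below. The region values are placeholders and QEMU's real
memory-listener code differs, but a call like this is what ends up in
vhost_set_memory() and hence in synchronize_rcu() today.)

#include <linux/vhost.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper: hand the vhost device a one-region memory table. */
static int push_mem_table(int vhost_fd, void *host_ram, uint64_t ram_size)
{
        struct vhost_memory *mem;
        int ret;

        mem = calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
        if (!mem)
                return -1;

        mem->nregions = 1;
        mem->regions[0].guest_phys_addr = 0;
        mem->regions[0].memory_size     = ram_size;
        mem->regions[0].userspace_addr  = (uint64_t)(uintptr_t)host_ram;

        ret = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
        free(mem);
        return ret;
}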


