From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC][PATCH 28/45] qemu-kvm: msix: Drop tracking of used vectors
Date: Tue, 18 Oct 2011 13:58:13 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Mon, Oct 17, 2011 at 09:28:12PM +0200, Jan Kiszka wrote:
> On 2011-10-17 17:48, Michael S. Tsirkin wrote:
> > On Mon, Oct 17, 2011 at 11:28:02AM +0200, Jan Kiszka wrote:
> >> This optimization was only required to keep KVM route usage low. Now
> >> that we solve that problem via lazy updates, we can drop the field. We
> >> still need interfaces to clear pending vectors, though (and we have to
> >> make use of them more broadly - but that's unrelated to this patch).
> >>
> >> Signed-off-by: Jan Kiszka <address@hidden>
> > 
> > Lazy updates should be an implementation detail.
> > IMO resource tracking of vectors makes sense
> > as an API. Making devices deal with pending
> > vectors as a concept, IMO, does not.
> 
> There is really no use for tracking the vector lifecycle once we have
> lazy updates (except for static routes). It's a way too invasive
> concept, and it's not needed for anything but KVM.
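(Toy illustration of the lazy-update idea from the changelog above -- all
names are invented and this is not code from the series; the point is
only that a routing slot gets allocated on first delivery, so
enabled-but-idle vectors cost no table space.)

#include <stdio.h>

#define MAX_ROUTES 4                   /* stand-in for a small KVM table */

static int routes_in_use;

struct vector {
    int route;                         /* -1 until lazily allocated */
};

static int deliver(struct vector *v, int data)
{
    if (v->route < 0) {
        if (routes_in_use == MAX_ROUTES) {
            return -1;                 /* table full; flushing/reuse elided */
        }
        v->route = routes_in_use++;    /* allocate on first delivery */
    }
    printf("deliver via route %d, data 0x%x\n", v->route, data);
    return 0;
}

int main(void)
{
    struct vector vecs[8];

    for (int i = 0; i < 8; i++) {
        vecs[i].route = -1;            /* vector enabled, no route yet */
    }
    deliver(&vecs[2], 0x42);           /* only this vector gets a route */
    printf("%d of %d routes in use\n", routes_in_use, MAX_ROUTES);
    return 0;
}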

I think it's needed. The PCI spec states that when the device does not
need an interrupt anymore, it should clear the pending bit. The
use/unuse calls are IMO a decent API for this because they follow a
familiar resource-tracking concept. Exposing MSI-X pending-bit details
to devices seems like a worse API.
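To make the contrast concrete, a sketch of the two styles as device
code -- not taken from the series. msix_vector_use()/msix_vector_unuse()
are the existing helpers in hw/msix.c; msix_clr_pending() below is a
hypothetical stand-in for a "clear pending vectors" interface:

#include "hw/msix.h"

/* Style 1: resource tracking.  The device marks the vector used while
 * it may still fire and unuses it once it cannot; the msix core can
 * drop any pending state at unuse time, and the device never deals
 * with pending bits at all. */
static void dev_queue_start(PCIDevice *pdev, unsigned vector)
{
    msix_vector_use(pdev, vector);
}

static void dev_queue_stop(PCIDevice *pdev, unsigned vector)
{
    msix_vector_unuse(pdev, vector);
}

/* Style 2: expose pending bits to the device, which then has to know
 * when to clear them itself (hypothetical helper). */
static void dev_queue_stop_alt(PCIDevice *pdev, unsigned vector)
{
    msix_clr_pending(pdev, vector);
}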

> 
> If you want an example, check
> http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/70915 and
> compare it to the changes done to hpet in this series.
> 
> Jan
> 

This seems to be a general argument that lazy updates are good?
I have no real problem with them, besides the fact that
we need an API to reserve space in the routing
table so that device setup can fail upfront.
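Shape of what I have in mind, as a self-contained toy -- every name here
is invented and nothing like it exists today; the point is just that a
device with N vectors can fail its init cleanly when the table is too
small:

#include <errno.h>
#include <stdio.h>

#define ROUTE_TABLE_SIZE 1024          /* stand-in for the KVM route limit */

static int routes_reserved;

/* Claim room for 'n' vectors at device init time, or fail upfront. */
static int irq_routes_reserve(int n)
{
    if (routes_reserved + n > ROUTE_TABLE_SIZE) {
        return -ENOSPC;
    }
    routes_reserved += n;
    return 0;
}

static void irq_routes_unreserve(int n)
{
    routes_reserved -= n;
}

int main(void)
{
    /* A device with 64 MSI-X vectors reserves its share before it is
     * exposed to the guest; -ENOSPC here aborts device setup cleanly. */
    if (irq_routes_reserve(64) < 0) {
        fprintf(stderr, "no routing table space left\n");
        return 1;
    }
    printf("reserved 64 routes, %d total reserved\n", routes_reserved);
    irq_routes_unreserve(64);
    return 0;
}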

-- 
MST


