From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
Date: Wed, 14 Mar 2012 08:57:01 +0000

On Wed, Mar 14, 2012 at 12:30 AM, Amos Kong <address@hidden> wrote:
> ----- Original Message -----
>> On Tue, Mar 13, 2012 at 2:47 PM, Amos Kong <address@hidden> wrote:
>
> ...
>
> Hi, Stefan
>
>> > diff --git a/kvm-all.c b/kvm-all.c
>> > index 77eadf6..7157e78 100644
>> > --- a/kvm-all.c
>> > +++ b/kvm-all.c
>> > @@ -771,6 +771,8 @@ static void kvm_io_ioeventfd_add(MemoryRegionSection *sectio
>> >
>> >     r = kvm_set_ioeventfd_pio_word(fd, section->offset_within_address_space,
>> >                                    data, true);
>> > +    if (r == -ENOSPC)
>> > +        return;
>>
>> The caller needs a way to detect the failure.
>
> Yes, this is a memory API issue.
>
>> >     if (r < 0) {
>> >         abort();
>> >     }
>> >
>> >
>> >
>> >> Basically we need a way for ioeventfd to fail if we hit rlimit or the
>> >> in-kernel io bus device limit.  Suggestions?
>> >
>> >
>> >
>> >> (The reason I don't like using has_many_ioeventfds() is because it's
>> >> ugly and inefficient to open and then close file descriptors -
>> >> especially in a multithreaded program where file descriptor
>> >> availability can change while we're executing.)
>> >
>> > I admit it's too ugly ;)
>> > but I want to keep it for older kernels (where the iobus limit is 6).
>> >
>> > Can we remove the check for old kernels (iobus limit of 6)?
>> > Users can disable ioeventfd through the qemu command line:
>> >  virtio-net-pci.ioeventfd=on/off
>> >  virtio-blk-pci.ioeventfd=on/off
>>
>> Why do you want to remove the iobus limit 6 check?  The point of that
>> check is to allow vhost-net to always work since it requires an
>> ioeventfd.
>
>
> ### -device virtio-blk-pci,ioeventfd=off,drive=drive-virtio0-0-0,id=id1 
> -drive ...
>     this block device will not use ioeventfd for I/O notification.
>
> ### -device virtio-net-pci,netdev=he,ioeventfd=off -netdev tap,id=he
>     this network device will not use ioeventfd.
>
> ### -device virtio-net-pci,netdev=he -netdev tap,id=he,vhost=on
>     this device will consume 2 ioeventfds (servicing notifications from the rx/tx queues).
>
> The ioeventfd parameter is a way for the user to limit ioeventfd usage.
>
>
> ### qemu-kvm -net none -device 
> virtio-blk-pci,ioeventfd=on,drive=drive-virtio0-0-0,id=id1 -drive ...
> Some users don't use networking at all but need better disk I/O
> performance; we cannot satisfy them if we keep the check for 6 iobus
> devices.
>
> The strategy should be decided by the user.

We're discussing the wrong thing here: the 6-ioeventfd workaround is
unrelated to the problem you are trying to solve.

Graceful fallback on ioeventfd failure used to exist, but it seems the
new memory API broke it.  What needs fixing is the regression
introduced by the new memory API code.

An error code path needs to be added to the memory API ioeventfd code
so that virtio-pci.c can deal with failure and QEMU no longer aborts.
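
Something along these lines would do (sketch only: the error-returning
variant memory_region_add_eventfd_err() is hypothetical, since today's
memory_region_add_eventfd() returns void, which is exactly why the KVM
listener can do nothing but abort()):

    static int virtio_pci_assign_host_notifier(VirtIOPCIProxy *proxy, int n)
    {
        VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
        EventNotifier *notifier = virtio_queue_get_host_notifier(vq);
        int r;

        r = event_notifier_init(notifier, 1);
        if (r < 0) {
            return r;
        }
        /* Hypothetical variant of memory_region_add_eventfd() that
         * reports failure instead of letting kvm-all.c abort(): */
        r = memory_region_add_eventfd_err(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY,
                                          2, true, n, notifier);
        if (r < 0) {
            /* e.g. -ENOSPC from the in-kernel io bus or an fd limit:
             * clean up so the caller can fall back to servicing the
             * virtqueue notify in userspace instead of aborting. */
            event_notifier_cleanup(notifier);
        }
        return r;
    }

With an error code propagated out of the memory API, virtio-pci.c can
stop ioeventfd for the device and fall back to userspace notification,
which is the graceful behaviour we had before the regression.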

Stefan


