From: Catalin Vasile
Subject: Re: [Qemu-devel] [virtio guest] vring_need_event() from virtqueue_kick_prepare()
Date: Tue, 7 Jul 2015 14:43:06 +0300

My vhost module follows the pattern vhost-net uses:

/* <code> summary (arguments omitted) */
mutex_lock(&vq->mutex);
vhost_disable_notify();
for (;;) {
        head = vhost_get_vq_desc();
        if (head == vq->num) {
                /* ring empty: re-arm notifications, then re-check in case
                 * the guest added a buffer while they were disabled */
                if (unlikely(vhost_enable_notify())) {
                        vhost_disable_notify();
                        continue;
                }
                break;
        }
        /* process the descriptor, then: */
        vhost_add_used_and_signal();
}
mutex_unlock(&vq->mutex);
/* </code> */

I have added a lot of printk() calls and the first job gets processed
completely, going through all of those calls:
1. It goes through the loop once and processes the first job (get
descriptor, work with the descriptor, add used and signal).
2. On the second iteration it hits head == vq->num and goes back to
listening for notifications, successfully: it does not take the fallback
path (vhost_enable_notify() returning nonzero because the guest slipped a
buffer in while notifications were off, which would re-disable them and
continue polling).

Now in the guest:
1. It sends the first job, and the parameters used to call vring_need_event()
are: vring_avail_event=0, new=1, old=0, which makes the function evaluate
to "0 < 1", i.e. true.
2. The queue is kicked and vhost does its job.
3. The guest driver reaches the end of the first job and lets the following
job take its course, only this time vring_need_event() receives the
parameters vring_avail_event=0, new=2, old=1, which makes it evaluate to
"1 < 1", i.e. false, so a kick is not actually sent.
From what I can see of the definition of vring_need_event() (quoted below),
it does not look at the flags at all: "if (vq->event) {" evaluates to true
in both cases, so it always compares those indexes and never takes the
branch that checks the flags.
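
For reference, vring_need_event() is defined in include/uapi/linux/virtio_ring.h
essentially as below (I only changed __u16 to uint16_t so the snippet compiles
on its own; the main() is just my own check plugging in the two sets of
parameters from above, it is not driver code):

/* <code> */
#include <stdint.h>
#include <stdio.h>

/* from include/uapi/linux/virtio_ring.h, with __u16 replaced by uint16_t */
static inline int vring_need_event(uint16_t event_idx, uint16_t new_idx,
                                   uint16_t old)
{
        return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
}

int main(void)
{
        /* first job:  event_idx=0, new=1, old=0  ->  0 < 1  -> kick    */
        printf("%d\n", vring_need_event(0, 1, 0));      /* prints 1 */
        /* second job: event_idx=0, new=2, old=1  ->  1 < 1  -> no kick */
        printf("%d\n", vring_need_event(0, 2, 1));      /* prints 0 */
        return 0;
}
/* </code> */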
I am also pretty sure the jobs are serialized in the guest driver, and
do not cross each other's path. One of the reasons is that every
function that sends a job must hold a mutex that protects the
virtqueue.
The guest driver blocks waiting for an interrupt signalling that the job
has finished, but vhost never gets woken up to process that job in the
first place: no notification is actually triggered, for the reason
explained above.

On Tue, Jul 7, 2015 at 1:17 PM, Stefan Hajnoczi <address@hidden> wrote:
> On Mon, Jul 06, 2015 at 06:13:29PM +0300, Catalin Vasile wrote:
>> What is the logic behind vring_need_event() when used with
>> virtqueue_kick_prepare()?
>> What does the keyword >>just<< refer to from the following context:
>> /* The following is used with USED_EVENT_IDX and AVAIL_EVENT_IDX */
>> /* Assuming a given event_idx value from the other side, if
>>  * we have just incremented index from old to new_idx,
>>  * should we trigger an event? */
>> ?
>
> "just" means since the last time the host/guest-visible index field was
> changed.  After avail or used rings have been processed, the index field
> for that ring is published to the host/guest.  At that point a check is
> made whether the other side needs to be kicked.
>
>> I am sending 2 jobs, one after another, and the second one just does
>> not want to kick, although the first one finished completely and the
>> backend went back to interrupt mode, all because vring_need_event()
>> returns false.
>
> Maybe the vhost driver called vhost_disable_notify() and hasn't
> re-enabled notify yet?
>
> This could happen if the guest adds buffers to the virtqueue while the
> host is processing the virtqueue.  Take a look at the vhost_net code for
> how to correctly disable and re-enable notify without race conditions on
> the host.
>
> The idea behind disabling notify is to eliminate unnecessary
> vmexits/notifications since the host is already processing the virtqueue
> and will see new buffers.  It's like a polling vs interrupt mode.
>
> If the vhost driver on the host doesn't implement it correctly, then the
> device could stop responding to the avail ring.


