From: Maxime Coquelin
Subject: Re: [Qemu-devel] [RFC v2 8/8] virtio: guest driver reload for vhost-net
Date: Thu, 20 Sep 2018 22:39:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1
Hi Wei, Jason,

On 06/19/2018 09:53 AM, Wei Xu wrote:
> On Wed, Jun 06, 2018 at 11:48:19AM +0800, Jason Wang wrote:
>> On 06/06/2018 03:08, address@hidden wrote:
>>> From: Wei Xu <address@hidden>
>>>
>>> last_avail, avail_wrap_count, used_idx and used_wrap_count are needed
>>> to support the vhost-net backend. All of these are either 16-bit or
>>> bool variables; since state.num is 64 bits wide, it is possible to
>>> pack them into 'num' without introducing a new case in the ioctl
>>> handling.
>>>
>>> An unload/reload test has been done successfully with a patch in the
>>> vhost kernel.
>>
>> You need a patch to enable vhost. And I think you can only do it for
>> vhost-kernel now, since the vhost-user protocol needs some extension,
>> I believe.
>
> OK.
>
>>> Signed-off-by: Wei Xu <address@hidden>
>>> ---
>>>  hw/virtio/virtio.c | 42 ++++++++++++++++++++++++++++++++++--------
>>>  1 file changed, 34 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>>> index 4543974..153f6d7 100644
>>> --- a/hw/virtio/virtio.c
>>> +++ b/hw/virtio/virtio.c
>>> @@ -2862,33 +2862,59 @@ hwaddr virtio_queue_get_used_size(VirtIODevice *vdev, int n)
>>>      }
>>>  }
>>>
>>> -uint16_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n)
>>> +uint64_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n)
>>>  {
>>> -    return vdev->vq[n].last_avail_idx;
>>> +    uint64_t num;
>>> +
>>> +    num = vdev->vq[n].last_avail_idx;
>>> +    if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
>>> +        num |= ((uint64_t)vdev->vq[n].avail_wrap_counter) << 16;
>>> +        num |= ((uint64_t)vdev->vq[n].used_idx) << 32;
>>> +        num |= ((uint64_t)vdev->vq[n].used_wrap_counter) << 48;
>>
>> So s.num is 32 bits, I don't think this can even work.
>
> I mistakenly checked that s.num is 64 bits, will add a new case in the
> next version.
Wouldn't it be enough to just get/set avail_wrap_counter? Something like
this, so that it fits into 32 bits:

    if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
        num |= ((uint64_t)vdev->vq[n].avail_wrap_counter) << 31;
    }

Regards,
Maxime
> Wei
>
>> Thanks
>
>>> +    }
>>> +
>>> +    return num;
>>>  }
>>>
>>> -void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
>>> +void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint64_t num)
>>>  {
>>> -    vdev->vq[n].last_avail_idx = idx;
>>> -    vdev->vq[n].shadow_avail_idx = idx;
>>> +    vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx = (uint16_t)(num);
>>> +
>>> +    if (virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
>>> +        vdev->vq[n].avail_wrap_counter = (uint16_t)(num >> 16);
>>> +        vdev->vq[n].used_idx = (uint16_t)(num >> 32);
>>> +        vdev->vq[n].used_wrap_counter = (uint16_t)(num >> 48);
>>> +    }
>>>  }
>>>
>>>  void virtio_queue_restore_last_avail_idx(VirtIODevice *vdev, int n)
>>>  {
>>>      rcu_read_lock();
>>> -    if (vdev->vq[n].vring.desc) {
>>> +    if (!vdev->vq[n].vring.desc) {
>>> +        goto out;
>>> +    }
>>> +
>>> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
>>>          vdev->vq[n].last_avail_idx = vring_used_idx(&vdev->vq[n]);
>>> -        vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx;
>>>      }
>>> +    vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx;
>>> +
>>> +out:
>>>      rcu_read_unlock();
>>>  }
>>>
>>>  void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
>>>  {
>>>      rcu_read_lock();
>>> -    if (vdev->vq[n].vring.desc) {
>>> +    if (!vdev->vq[n].vring.desc) {
>>> +        goto out;
>>> +    }
>>> +
>>> +    if (!virtio_vdev_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
>>>          vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
>>>      }
>>> +
>>> +out:
>>>      rcu_read_unlock();
>>>  }