
Re: Libvirt driver iothread property for virtio-scsi disks


From: Sergio Lopez
Subject: Re: Libvirt driver iothread property for virtio-scsi disks
Date: Wed, 4 Nov 2020 17:42:21 +0100

On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
> 
> - The optional iothread attribute assigns the disk to an IOThread as defined 
> by
>   the range for the domain iothreads value. Multiple disks may be assigned to
>   the same IOThread and are numbered from 1 to the domain iothreads value.
>   Available for a disk device target configured to use "virtio" bus and "pci"
>   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> 
> Does it mean that virtio-scsi disks do not use iothreads?

virtio-scsi disks can use iothreads, but they are configured on the
scsi controller, not on the disk itself. All disks attached to the
same controller share that controller's iothread, but you can also
attach multiple controllers.
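In libvirt domain XML that looks roughly like the following (a minimal
sketch — the iothread count, controller index, and file paths are
illustrative, not taken from any particular setup):

```xml
<domain type='kvm'>
  <!-- allocate a pool of iothreads for the domain -->
  <iothreads>2</iothreads>

  <!-- the iothread is assigned on the virtio-scsi controller -->
  <controller type='scsi' index='0' model='virtio-scsi'>
    <driver iothread='1'/>
  </controller>

  <!-- any disk with bus='scsi' rides on that controller -->
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2'/>
    <source file='/var/lib/libvirt/images/disk0.qcow2'/>
    <target dev='sda' bus='scsi'/>
  </disk>
</domain>
```

A second controller with index='1' and iothread='2' would let you
spread disks across both threads.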

> I'm experiencing a horrible performance using nested vms (up to 2 levels of
> nesting) when accessing NFS storage running on one of the VMs. The NFS
> server is using scsi disk.
> 
> My theory is:
> - Writing to NFS server is very slow (too much nesting, slow disk)
> - Not using iothreads (because we don't use virtio?)
> - Guest CPU is blocked by slow I/O

I would rule out the lack of iothreads as the culprit. They do improve
performance, but even without them performance should be quite
decent. Something else is probably causing the trouble.

I would do a step-by-step analysis, testing the NFS performance from
outside the VM first, and then working upwards from there.
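As a sketch of that first step (paths and sizes here are illustrative
— point TESTDIR at the NFS mount you actually want to measure, and
repeat at each nesting level):

```shell
# Measure raw sequential write throughput at the current layer before
# blaming iothreads. TESTDIR defaults to a temp dir for a dry run;
# set it to your NFS mount point for the real test.
TESTDIR=${TESTDIR:-$(mktemp -d)}

# 64 MiB sequential write, fsync'd at the end so the result reflects
# actual storage latency rather than the page cache:
dd if=/dev/zero of="$TESTDIR/ddtest" bs=1M count=64 conv=fsync

# For random/mixed workloads, fio (if installed) gives a fuller picture:
#   fio --name=nfswrite --directory="$TESTDIR" --rw=randwrite \
#       --bs=4k --size=256m --runtime=30 --time_based
```

If the number is already bad on the bare host, no amount of iothread
tuning inside the guest will recover it.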

Sergio.

> Does this make sense?
> 
> [1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
> 
> Nir
> 
