qemu-block

Re: Libvirt driver iothread property for virtio-scsi disks


From: Nir Soffer
Subject: Re: Libvirt driver iothread property for virtio-scsi disks
Date: Wed, 4 Nov 2020 20:00:50 +0200

On Wed, Nov 4, 2020 at 6:54 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
>
> On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> > The docs[1] say:
> >
> > - The optional iothread attribute assigns the disk to an IOThread as
> >   defined by the range for the domain iothreads value. Multiple disks
> >   may be assigned to the same IOThread and are numbered from 1 to the
> >   domain iothreads value. Available for a disk device target configured
> >   to use "virtio" bus and "pci" or "ccw" address types. Since 1.2.8
> >   (QEMU 2.1)
> >
> > Does it mean that virtio-scsi disks do not use iothreads?
> >
> > I'm experiencing horrible performance with nested VMs (up to 2 levels
> > of nesting) when accessing NFS storage running on one of the VMs. The
> > NFS server is using a scsi disk.
>
> When you say 2 levels of nesting, do you definitely have KVM enabled
> at all levels, or are you ending up using TCG emulation? The latter
> would certainly explain terrible performance.

Good point, I'll check that out, thanks.
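For reference, a minimal sketch of the L1 guest CPU config I understand
is needed to keep KVM available one level down, assuming an Intel host
(the feature is vmx; on AMD it is svm). If the virtualization extension
is not exposed to the L1 guest, the L2 guest ends up on TCG emulation:

    <!-- L1 guest XML: pass the host CPU through so vmx/svm is
         visible inside the guest and nested KVM can be used. -->
    <cpu mode='host-passthrough'/>

    <!-- Or require the extension explicitly with host-model
         (Intel shown; hypothetical snippet, not from this thread): -->
    <cpu mode='host-model'>
      <feature policy='require' name='vmx'/>
    </cpu>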

> > My theory is:
> > - Writing to the NFS server is very slow (too much nesting, slow disk)
> > - Not using iothreads (because we don't use virtio?)
> > - The guest CPU is blocked by slow I/O
>
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>
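For reference, a minimal sketch of where the iothread attribute can go,
assuming <iothreads> is defined in the domain. The disk-level form
matches the docs quoted above (virtio bus, i.e. virtio-blk); for
virtio-scsi, libvirt's formatdomain docs put the iothread on the
controller's <driver> element rather than the disk, so a bus='scsi'
disk gets its IOThread from its controller. Paths and index values
here are made up for illustration:

    <domain type='kvm'>
      <iothreads>2</iothreads>
      <!-- ... -->
      <devices>
        <!-- virtio-blk disk: iothread goes on the disk's driver -->
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' iothread='1'/>
          <source file='/var/lib/libvirt/images/data.qcow2'/>
          <target dev='vdb' bus='virtio'/>
        </disk>

        <!-- virtio-scsi: iothread goes on the controller; disks
             attached to it (bus='scsi') are served by that thread -->
        <controller type='scsi' index='0' model='virtio-scsi'>
          <driver iothread='2'/>
        </controller>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/nfs-export.qcow2'/>
          <target dev='sda' bus='scsi'/>
        </disk>
      </devices>
    </domain>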



