
From: Daniel Berrange
Subject: [Qemu-devel] [Bug 1686980] Re: qemu is very slow when adding 16,384 virtio-scsi drives
Date: Fri, 28 Apr 2017 10:30:21 -0000

The first place where it takes an insane amount of time is simply
processing the -drive options. The stack trace I see is this:

(gdb) bt
#0  0x00005583b596719a in drive_get (address@hidden, address@hidden, address@hidden) at blockdev.c:223
#1  0x00005583b59679bd in drive_new (all_opts=0x5583b890e080, block_default_type=<optimized out>) at blockdev.c:996
#2  0x00005583b5971641 in drive_init_func (opaque=<optimized out>, opts=<optimized out>, errp=<optimized out>) at vl.c:1154
#3  0x00005583b5c1149a in qemu_opts_foreach (list=<optimized out>, func=0x5583b5971630 <drive_init_func>, opaque=0x5583b9980030, errp=0x0) at 
#4  0x00005583b5830d30 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4499

We're iterating over every -drive option. Now, because we're using if=none, and
thus unit==0, line 996 of blockdev.c loops calling drive_get() until it finds a
free unit number to assign. So we have a loop over every drive, calling
drive_new(), which loops over every drive calling drive_get(), which in turn
loops over every drive. So it's roughly O(N*N*N).
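To make the shape of that loop nest concrete, here is a minimal, self-contained
C sketch of the pattern just described (the names unit_taken/add_drive and the
data layout are illustrative stand-ins, not QEMU's actual code):

/*
 * Sketch of the -drive processing pattern described above: for every
 * -drive option, drive_new probes unit numbers one by one, and every
 * probe (drive_get) walks the whole list of drives created so far.
 * With N drives that is roughly N * N * N list-element visits.
 */
#include <stdio.h>

#define NUM_DRIVES 2000         /* try 1000/2000/4000 to see the cubic growth */

static int units[NUM_DRIVES];   /* unit numbers of the drives created so far */
static int ndrives;

/* stand-in for drive_get(): linear scan over every existing drive */
static int unit_taken(int unit)
{
    for (int i = 0; i < ndrives; i++) {
        if (units[i] == unit) {
            return 1;
        }
    }
    return 0;
}

/* stand-in for drive_new(): find the first free unit by repeated probing */
static void add_drive(void)
{
    int unit = 0;
    while (unit_taken(unit)) {  /* up to N probes ...            */
        unit++;                 /* ... each of them an O(N) scan */
    }
    units[ndrives++] = unit;
}

int main(void)
{
    for (int i = 0; i < NUM_DRIVES; i++) {  /* outer loop: every -drive option */
        add_drive();
    }
    printf("created %d drives\n", ndrives);
    return 0;
}

Timing this for a few values of NUM_DRIVES shows the roughly cubic growth; at
the 16,384 drives from the bug report the probing alone amounts to hundreds of
billions of list-element visits, which is consistent with drive_get dominating
the perf profile below.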

  qemu is very slow when adding 16,384 virtio-scsi drives

Status in QEMU:

Bug description:
  qemu runs very slowly when adding many virtio-scsi drives.  I have
  attached a small reproducer shell script which demonstrates this.

  Using perf shows the following stack trace taking all the time:

      72.42%    71.15%  qemu-system-x86  qemu-system-x86_64       [.] drive_get
      21.70%    21.34%  qemu-system-x86  qemu-system-x86_64       [.] 
       3.65%     3.59%  qemu-system-x86  qemu-system-x86_64       [.] blk_next
