
From: Abel Gordon
Subject: Re: [Qemu-devel] Re: Re: Re: Re: question about performance of dataplane
Date: Mon, 8 Apr 2013 14:04:17 +0300

Zhangleiqiang <address@hidden> wrote on 08/04/2013 12:06:17 PM:

> I think maybe Anthony is right. In previous benchmarks, maybe the
> non-dataplane already reached the physical disk's IOPS upper limit.

Yep, agree. Try running the same benchmark on the host to measure
the bare-metal performance of your system (the upper limit) and see
how far dataplane and non-dataplane are from that value.
Note that you are currently focusing on throughput, but you should
also consider latency and CPU utilization.
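As a hedged sketch of how such a bare-metal baseline could be driven, the
snippet below builds a fio command line approximating the guest workload
described further down. Assumptions: fio is the benchmark tool (the original
spec reads like IOmeter, so the flag mapping is mine), and /dev/sdX is a
placeholder target device.

```python
# Sketch: construct a fio invocation approximating the guest workload
# (8 workers, 16K blocks, 25% read, 100% random, 50 outstanding I/Os).
# The flag mapping from the IOmeter-style spec is an assumption.
def fio_cmd(target, read_pct=25, bs="16k", jobs=8, iodepth=50):
    return [
        "fio",
        "--name=baremetal-baseline",
        "--filename=%s" % target,        # placeholder device/file
        "--rw=randrw",                   # 100% random, mixed read/write
        "--rwmixread=%d" % read_pct,     # 25% reads / 75% writes
        "--bs=%s" % bs,                  # 16K I/O size
        "--numjobs=%d" % jobs,           # 8 workers
        "--iodepth=%d" % iodepth,        # 50 outstanding I/Os per job
        "--ioengine=libaio",
        "--direct=1",                    # bypass the page cache
        "--runtime=60", "--time_based",
    ]

print(" ".join(fio_cmd("/dev/sdX")))
```

Running the same command inside the guest and on the host gives directly
comparable numbers for the gap analysis.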

> So I did another benchmark which ensures the number of vCPUs is
> less than the host's cores, and also applies continuous I/O
> pressure from one VM while testing in the other. The results
> showed that dataplane did have some advantage over non-dataplane.
>
> 1. IO Pressure Mode:  8 worker, 16K IO size, 25% Read, 100% Random,
> and 50 outstanding IO
> 2. Benchmark Mode:  8 worker, 16K IO size, 0% Read,  100% Random,
> and 50 outstanding IO
> 3. Testing Results:
>    a). IOPS:     178.324867 (non-dataplane)  vs  230.956328 (dataplane)
>    b). MBPS:     2.786326 (non-dataplane)  vs  3.608693 (dataplane)
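As a quick consistency check on the numbers above (a hedged sketch; the
16 KiB block size is taken from the benchmark spec, and MBPS is assumed to
mean MiB/s), the reported throughput should equal IOPS times the block size:

```python
# Sanity check: MBPS should equal IOPS x 16 KiB block size.
BLOCK_KIB = 16  # from the benchmark spec above

def mbps_from_iops(iops, block_kib=BLOCK_KIB):
    return iops * block_kib / 1024.0   # MiB/s

for label, iops, reported in [
    ("non-dataplane", 178.324867, 2.786326),
    ("dataplane",     230.956328, 3.608693),
]:
    derived = mbps_from_iops(iops)
    print("%s: derived %.6f MiB/s, reported %.6f" % (label, derived, reported))

gain = 230.956328 / 178.324867 - 1
print("dataplane IOPS gain: %.1f%%" % (gain * 100))
```

Both derived values match the reported MBPS, and the gain works out to
roughly 29.5% in this run.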

Note that running another VM just to "synthetically" degrade the
performance of the system may cause side effects and confound the
results (e.g., the "other" VM may stress the system differently and
exert more pressure when you use dataplane than when you don't).

Last thing, IMHO, you should also evaluate scalability:
how do dataplane and non-dataplane perform when you run multiple VMs?

For example,
  first  1 VM  with 2 VCPUs
  then   2 VMs with 2 VCPUs each
  then   3 VMs with 2 VCPUs each
  ...
  up to 12 VMs with 2 VCPUs each
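The sweep above could be scripted along the following lines. This is a
sketch only: the x-data-plane=on property matches the experimental dataplane
syntax of QEMU 1.4-era builds, and the image paths, memory size, and drive
options are placeholders, not your actual configuration.

```python
# Sketch: generate QEMU command lines for the scalability sweep
# (1..12 VMs, 2 vCPUs each). Paths and sizes are placeholders;
# x-data-plane=on follows the QEMU 1.4-era experimental syntax
# (which required scsi=off and a write-through-disabled cache).
def qemu_cmd(vm_id, dataplane=True):
    return [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-smp", "2",                     # 2 vCPUs per VM
        "-m", "2048",
        "-drive", "if=none,id=drive0,cache=none,aio=native,"
                  "format=raw,file=/images/vm%d.raw" % vm_id,
        "-device", "virtio-blk-pci,drive=drive0,scsi=off,"
                   "config-wce=off,x-data-plane=%s"
                   % ("on" if dataplane else "off"),
    ]

for n_vms in range(1, 13):
    print("--- run with %d VM(s), %d vCPUs total ---" % (n_vms, n_vms * 2))
    for vm_id in range(n_vms):
        print(" ".join(qemu_cmd(vm_id)))
```

Repeating the sweep once with dataplane=True and once with dataplane=False,
and running the same fio workload inside every VM concurrently, would show
where each configuration stops scaling.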

It seems like you unintentionally tested what happens with 2 VMs when
you added the "other" VM to create I/O pressure.



