qemu-devel


From: Nan Wang
Subject: Re: [External] Re: [PATCH] hostmem: change default prealloc threads number
Date: Tue, 28 Sep 2021 00:47:01 +0800
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:91.0) Gecko/20100101 Thunderbird/91.1.1



On 2021/9/27 11:16, David Hildenbrand wrote:
On 27.09.21 15:19, Nan Wang wrote:
From: "wangnan.light" <wangnan.light@bytedance.com>

The default number of prealloc threads is 1; for a huge memory backend
file, touching pages with a single thread is really slow.
We can adjust the thread number via the prealloc-threads property, but
updating the default value to MachineState::smp::cpus may be better.
For example, in old versions of qemu (before prealloc-threads was
introduced), the number of threads was MachineState::smp::cpus, so
starting an old version and the current version of qemu with the same
command line leads to different behaviors.

The patch that introduced it mentions:

commit ffac16fab33bb42f17e47624985220c1fd864e9d
Author: Igor Mammedov <imammedo@redhat.com>
Date:   Wed Feb 19 11:09:50 2020 -0500

     hostmem: introduce "prealloc-threads" property

     the property will allow user to specify number of threads to use
     in pre-allocation stage. It also will allow to reduce implicit
     hostmem dependency on current_machine.
     On object creation it will default to 1, but via machine
     compat property it will be updated to MachineState::smp::cpus
     to keep current behavior for hostmem and main RAM (which is
     now also hostmem based).

So it looks like we want to do the latter via compat properties eventually.

However, I'd like to note that more prealloc threads might be good for large backends, and might be bad for small backends. To me, it feels like a workload that relies on this should really do this manually. So I am still not sure if this is the right thing to do.
Yes, I agree with you that "more prealloc threads are good for large backends, and bad for small backends". But I think in most situations large backends come with large vCPU counts and small backends with small vCPU counts, because most users will not create a VM with many vCPUs but little memory.



Note that qapi/qom.json:

"@prealloc-threads: number of CPU threads to use for prealloc (default: 1)", so that doc would be wrong now.

Why exactly can't a workload that cares simply set this manually? Performance tuning smells like something to be done manually for a specific workload.

Letting the workload set the prealloc threads manually is a simple way: for large backends it sets many prealloc threads, and for small backends it sets 1. Yes, a workload can manually set prealloc threads to 1 rather than rely on the default value of 1. But when a workload wants to use the default value (or maybe just forgets to specify the prealloc-threads property), I think MachineState::smp::cpus may be better than 1.
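
For reference, this is what the manual tuning being discussed looks like on the command line; an invocation along these lines (the path, sizes and thread count are made up for illustration) pins the thread count per backend:

```
qemu-system-x86_64 \
    -m 64G \
    -object memory-backend-file,id=mem0,size=64G,mem-path=/dev/hugepages,prealloc=on,prealloc-threads=16 \
    -numa node,memdev=mem0
```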



Signed-off-by: wangnan.light <wangnan.light@bytedance.com>
---
  backends/hostmem.c | 2 +-
  hw/core/machine.c  | 5 +++++
  2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/backends/hostmem.c b/backends/hostmem.c
index 4c05862ed5..c4a249b7e6 100644
--- a/backends/hostmem.c
+++ b/backends/hostmem.c
@@ -273,7 +273,7 @@ static void host_memory_backend_init(Object *obj)
      backend->merge = machine_mem_merge(machine);
      backend->dump = machine_dump_guest_core(machine);
      backend->reserve = true;
-    backend->prealloc_threads = 1;
+    backend->prealloc_threads = machine_smp_cpus(machine);
  }
  static void host_memory_backend_post_init(Object *obj)
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 067f42b528..95ba5b1477 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -1065,6 +1065,11 @@ bool machine_dump_guest_core(MachineState *machine)
      return machine->dump_guest_core;
  }
+unsigned int machine_smp_cpus(MachineState *machine)
+{
+    return machine->smp.cpus;
+}
+
  bool machine_mem_merge(MachineState *machine)
  {
      return machine->mem_merge;
