Re: [Qemu-devel] [RFC PATCH V6 13/18] cpu: introduce async_run_safe_work_on_cpu.


From: Frederic Konrad
Subject: Re: [Qemu-devel] [RFC PATCH V6 13/18] cpu: introduce async_run_safe_work_on_cpu.
Date: Fri, 26 Jun 2015 18:09:27 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.5.0

On 26/06/2015 17:35, Paolo Bonzini wrote:
On 26/06/2015 16:47, address@hidden wrote:
diff --git a/cpu-exec.c b/cpu-exec.c
index de256d6..d6442cd 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
Nice solution.  However, I still have a few questions that need
clarification.

@@ -382,6 +382,11 @@ int cpu_exec(CPUArchState *env)
      volatile bool have_tb_lock = false;
  #endif
+ if (async_safe_work_pending()) {
+        cpu->exit_request = 1;
+        return 0;
+    }
Perhaps move this to cpu_can_run()?
Yes, why not; one possible version is sketched after this hunk.


      if (cpu->halted) {
          if (!cpu_has_work(cpu)) {
              return EXCP_HALTED;
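
For reference, moving the check into cpu_can_run() could look like this (a
sketch against the cpus.c of this era, assuming its existing cpu->stop /
cpu_is_stopped() checks; not part of the posted patch):

static bool cpu_can_run(CPUState *cpu)
{
    if (cpu->stop) {
        return false;
    }
    if (cpu_is_stopped(cpu)) {
        return false;
    }
    if (async_safe_work_pending()) {
        /* Keep the vCPU out of cpu_exec() until the safe work
         * queue has been drained. */
        return false;
    }
    return true;
}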
diff --git a/cpus.c b/cpus.c
index 5f13d73..aee445a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -75,7 +75,7 @@ bool cpu_is_stopped(CPUState *cpu)
bool cpu_thread_is_idle(CPUState *cpu)
  {
-    if (cpu->stop || cpu->queued_work_first) {
+    if (cpu->stop || cpu->queued_work_first || cpu->queued_safe_work_first) {
          return false;
      }
      if (cpu_is_stopped(cpu)) {
@@ -892,6 +892,69 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
      qemu_cpu_kick(cpu);
  }
+void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
+                                void *data)
+{
Do you need a mutex to protect this data structure?  I would use one
even if not strictly necessary, to avoid introducing new BQL-protected
structures.

For the moment it's called by tb_invalidate and tb_flush_safe; the second lacks a tb_lock/unlock, which should be added. I don't think I need another mutex unless this is
used elsewhere? (A mutex-protected version of the enqueue path is sketched after this hunk.)

Also, can you add a count of how many such work items exist in the whole
system, in order to speed up async_safe_work_pending?

Yes, that makes sense (a counter is sketched after the async_safe_work_pending() hunk below).
+    struct qemu_work_item *wi;
+
+    wi = g_malloc0(sizeof(struct qemu_work_item));
+    wi->func = func;
+    wi->data = data;
+    wi->free = true;
+    if (cpu->queued_safe_work_first == NULL) {
+        cpu->queued_safe_work_first = wi;
+    } else {
+        cpu->queued_safe_work_last->next = wi;
+    }
+    cpu->queued_safe_work_last = wi;
+    wi->next = NULL;
+    wi->done = false;
+
+    CPU_FOREACH(cpu) {
+        qemu_cpu_kick_thread(cpu);
+    }
+}
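
On the mutex question, here is a minimal sketch of the enqueue path guarded
by a dedicated lock (qemu_safe_work_mutex is a made-up name, and the flush
side would need to take the same lock; the kick loop also uses its own
iterator so the cpu argument is not clobbered by CPU_FOREACH):

static QemuMutex qemu_safe_work_mutex;  /* qemu_mutex_init() at startup */

void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
                                void *data)
{
    struct qemu_work_item *wi = g_malloc0(sizeof(struct qemu_work_item));
    CPUState *other_cpu;

    wi->func = func;
    wi->data = data;
    wi->free = true;
    wi->done = false;
    wi->next = NULL;

    qemu_mutex_lock(&qemu_safe_work_mutex);
    if (cpu->queued_safe_work_first == NULL) {
        cpu->queued_safe_work_first = wi;
    } else {
        cpu->queued_safe_work_last->next = wi;
    }
    cpu->queued_safe_work_last = wi;
    qemu_mutex_unlock(&qemu_safe_work_mutex);

    /* Wake every vCPU so each one sees the pending safe work. */
    CPU_FOREACH(other_cpu) {
        qemu_cpu_kick_thread(other_cpu);
    }
}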
+
+static void flush_queued_safe_work(CPUState *cpu)
+{
+    struct qemu_work_item *wi;
+    CPUState *other_cpu;
+
+    if (cpu->queued_safe_work_first == NULL) {
+        return;
+    }
+
+    CPU_FOREACH(other_cpu) {
+        if (other_cpu->tcg_executing != 0) {
This causes the thread to busy wait until everyone has exited, right?
Not a big deal, but worth a comment.

Right; see the sketch after this hunk for one possible wording.

Fred
Paolo

+            return;
+        }
+    }
+
+    while ((wi = cpu->queued_safe_work_first)) {
+        cpu->queued_safe_work_first = wi->next;
+        wi->func(wi->data);
+        wi->done = true;
+        if (wi->free) {
+            g_free(wi);
+        }
+    }
+    cpu->queued_safe_work_last = NULL;
+    qemu_cond_broadcast(&qemu_work_cond);
+}
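
For the busy-wait, one possible comment on the early-return path (a sketch,
not the final patch):

    CPU_FOREACH(other_cpu) {
        if (other_cpu->tcg_executing != 0) {
            /* Another vCPU is still executing; return and retry on the
             * next pass.  Every thread re-enters this function, so this
             * effectively busy waits until all vCPUs have exited their
             * execution loop. */
            return;
        }
    }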
+
+bool async_safe_work_pending(void)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        if (cpu->queued_safe_work_first) {
+            return true;
+        }
+    }
+
+    return false;
+}
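
And for the global count, a sketch using the atomic helpers from
qemu/atomic.h (safe_work_pending_count is a made-up name; the increment and
decrement would sit in the queue and flush paths shown above):

static int safe_work_pending_count;

/* in async_run_safe_work_on_cpu(), after queuing the item:
 *     atomic_inc(&safe_work_pending_count);
 * in flush_queued_safe_work(), after each wi->func(wi->data):
 *     atomic_dec(&safe_work_pending_count);
 */

bool async_safe_work_pending(void)
{
    return atomic_read(&safe_work_pending_count) != 0;
}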
+



