From: Emilio G. Cota
Subject: Re: [Qemu-devel] [RFC 10/48] exec: export do_tb_flush
Date: Mon, 26 Nov 2018 18:56:49 -0500
User-agent: Mutt/1.9.4 (2018-02-28)

On Mon, Nov 26, 2018 at 11:11:53 +0000, Alex Bennée wrote:
> 
> Emilio G. Cota <address@hidden> writes:
> 
> > On Thu, Nov 22, 2018 at 17:09:22 +0000, Alex Bennée wrote:
> >>
> >> Emilio G. Cota <address@hidden> writes:
> >>
> >> > This will be used by plugin code to flush the code cache as well
> >> > as do other bookkeeping in a safe work environment.
> >>
> >> This seems a little excessive given the plugin code could just call
> >> tb_flush() directly. Wouldn't calling tb_flush after scheduling the
> >> plugin_destroy be enough?
> >>
> >> If there is a race condition here, maybe we could build some sort of
> >> awareness into tb_flush as to the current run state. But having two
> >> entry points to this rather fundamental action seems likely to be
> >> either misused or misunderstood.
> >
> > We have to make sure that no callback left in the generated code is
> > called once a plugin has been uninstalled. To me, using the same safe
> > work window to both flush the TBs and uninstall the plugin seems the
> > simplest way to do this.
> 
> I still think making tb_flush() aware that it can run in an exclusive
> period would be a better solution than exposing two functions for the
> operation. So tb_flush could be something like:
> 
>   void tb_flush(CPUState *cpu)
>   {
>       if (tcg_enabled()) {
>           unsigned tb_flush_count = atomic_mb_read(&tb_ctx.tb_flush_count);
>           if (cpu_current_and_exclusive(cpu)) {
>               do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count));
>           } else {
>               async_safe_run_on_cpu(cpu, do_tb_flush,
>                                     RUN_ON_CPU_HOST_INT(tb_flush_count));
>           }
>       }
>   }
> 
> Or possibly push that logic down into async_safe_run_on_cpu()?

The latter option would be much harder, because in async_safe_run_on_cpu
we always queue the work and kick the CPU (which could be ourselves).
IOW the job is always asynchronous, as the name implies.
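
For reference, async_safe_run_on_cpu() in cpus-common.c boils down to
the below (paraphrased from memory, so the exact field names might
differ):

    void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
                               run_on_cpu_data data)
    {
        struct qemu_work_item *wi = g_malloc0(sizeof(struct qemu_work_item));

        wi->func = func;
        wi->data = data;
        wi->free = true;
        wi->exclusive = true;

        /* Enqueue the item and kick the target vCPU. The item only
         * runs later, when that vCPU drains its work queue -- see the
         * cpus-common.c hunk below.
         */
        queue_work_on_cpu(cpu, wi);
    }

There is no synchronous path in there, so the work always runs after
we have returned to the caller.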

I've thus implemented the former in v2, as follows (I'm using a hole
in struct CPUState to add the bool):

@@ -1277,8 +1277,13 @@ void tb_flush(CPUState *cpu)
 {
     if (tcg_enabled()) {
         unsigned tb_flush_count = atomic_mb_read(&tb_ctx.tb_flush_count);
-        async_safe_run_on_cpu(cpu, do_tb_flush,
-                              RUN_ON_CPU_HOST_INT(tb_flush_count));
+
+        if (cpu_in_exclusive_work_context(cpu)) {
+            do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count));
+        } else {
+            async_safe_run_on_cpu(cpu, do_tb_flush,
+                                  RUN_ON_CPU_HOST_INT(tb_flush_count));
+        }
     }
 }

+++ b/cpus-common.c
@@ -386,7 +386,9 @@ static void process_queued_cpu_work_locked(CPUState *cpu)
                 qemu_mutex_unlock_iothread();
             }
             start_exclusive();
+            cpu->in_exclusive_work_context = true;
             wi->func(cpu, wi->data);
+            cpu->in_exclusive_work_context = false;
             end_exclusive();

I've also fixed a couple of unrelated bugs when uninstalling a plugin
with memory callbacks enabled.

Thanks,

                Emilio


