qemu-devel
From: Emilio G. Cota
Subject: Re: [PATCH v2 3/6] cputlb: ensure we save the IOTLB data in case of reset
Date: Sun, 21 Jun 2020 16:33:07 -0400

On Wed, Jun 10, 2020 at 16:55:06 +0100, Alex Bennée wrote:
> Any write to a device might cause a re-arrangement of memory
> triggering a TLB flush and potential re-size of the TLB invalidating
> previous entries. This would cause users of qemu_plugin_get_hwaddr()
> to see the warning:
> 
>   invalid use of qemu_plugin_get_hwaddr
> 
> because of the failed tlb_lookup which should always succeed. To
> prevent this we save the IOTLB data in case it is later needed by a
> plugin doing a lookup.
> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> 
> ---
> v2
>   - save the entry instead of re-running the tlb_fill.
> 
> squash! cputlb: ensure we save the IOTLB entry in case of reset
> ---
>  accel/tcg/cputlb.c | 63 ++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 61 insertions(+), 2 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index eb2cf9de5e6..9bf9e479c7c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1058,6 +1058,47 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      return val;
>  }
>  
> +#ifdef CONFIG_PLUGIN
> +
> +typedef struct SavedIOTLB {
> +    struct rcu_head rcu;
> +    struct SavedIOTLB **save_loc;
> +    MemoryRegionSection *section;
> +    hwaddr mr_offset;
> +} SavedIOTLB;
> +
> +static void clean_saved_entry(SavedIOTLB *s)
> +{
> +    atomic_rcu_set(s->save_loc, NULL);

This will race with the CPU thread that sets saved_for_plugin in
save_iotlb_data(): clean_saved_entry() runs from the RCU thread after
the grace period, so the atomic_rcu_set() here can clear a newer entry
that the vCPU thread has published in the meantime.

> +    g_free(s);
> +}
> +
> +static __thread SavedIOTLB *saved_for_plugin;

Apologies if this has been discussed, but why is this using TLS
variables and not state embedded in CPUState?
I see that qemu_plugin_get_hwaddr does not take a cpu_index, but
maybe it should? We could then just embed the RCU pointer in CPUState.
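To sketch what I mean (saved_iotlb is a hypothetical field name):

    /* include/hw/core/cpu.h -- hypothetical addition */
    struct CPUState {
        /* ... */
        SavedIOTLB *saved_iotlb; /* RCU pointer, written only by this vCPU */
    };

tlb_plugin_lookup() already takes the CPUState, so it could read the
entry with atomic_rcu_read(&cpu->saved_iotlb) inside the same RCU
critical section, with no TLS indirection needed.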

> +
> +/*
> + * Save a potentially trashed IOTLB entry for later lookup by plugin.
> + *
> + * We also need to track the thread storage address because the RCU
> + * cleanup that runs when we leave the critical region (the current
> + * execution) is actually in a different thread.
> + */
> +static void save_iotlb_data(MemoryRegionSection *section, hwaddr mr_offset)
> +{
> +    SavedIOTLB *s = g_new(SavedIOTLB, 1);
> +    s->save_loc = &saved_for_plugin;
> +    s->section = section;
> +    s->mr_offset = mr_offset;
> +    atomic_rcu_set(&saved_for_plugin, s);
> +    call_rcu(s, clean_saved_entry, rcu);

Here we could just publish the new pointer and g_free_rcu the old
one, if any.
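Something along these lines (a sketch against the patch's SavedIOTLB,
with the save_loc field and the clean_saved_entry() callback dropped):

    static void save_iotlb_data(MemoryRegionSection *section, hwaddr mr_offset)
    {
        SavedIOTLB *old, *new = g_new(SavedIOTLB, 1);

        new->section = section;
        new->mr_offset = mr_offset;
        old = atomic_rcu_read(&saved_for_plugin);
        atomic_rcu_set(&saved_for_plugin, new);
        if (old) {
            /* reclaimed once all current RCU readers are done with it */
            g_free_rcu(old, rcu);
        }
    }

That way only the vCPU thread ever stores to saved_for_plugin, and the
RCU machinery is used purely for deferred reclamation.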

> +}
> +
> +#else
> +static void save_iotlb_data(MemoryRegionSection *section, hwaddr mr_offset)
> +{
> +    /* do nothing */
> +}
> +#endif
> +
>  static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                        int mmu_idx, uint64_t val, target_ulong addr,
>                        uintptr_t retaddr, MemOp op)
> @@ -1077,6 +1118,12 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      }
>      cpu->mem_io_pc = retaddr;
>  
> +    /*
> +     * The memory_region_dispatch may trigger a flush/resize
> +     * so for plugins we save the iotlb_data just in case.
> +     */
> +    save_iotlb_data(section, mr_offset);
> +
>      if (mr->global_locking && !qemu_mutex_iothread_locked()) {
>          qemu_mutex_lock_iothread();
>          locked = true;
> @@ -1091,6 +1138,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                                 MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
>                                 retaddr);
>      }
> +

Stray whitespace change.

>      if (locked) {
>          qemu_mutex_unlock_iothread();
>      }
> @@ -1366,8 +1414,11 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
>   * in the softmmu lookup code (or helper). We don't handle re-fills or
>   * checking the victim table. This is purely informational.
>   *
> - * This should never fail as the memory access being instrumented
> - * should have just filled the TLB.
> + * This almost never fails as the memory access being instrumented
> + * should have just filled the TLB. The one corner case is io_writex
> + * which can cause TLB flushes and potential resizing of the TLBs
> + * losing the information we need. In those cases we need to recover
> + * data from a thread local copy of the io_tlb entry.
>   */
>  
>  bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
> @@ -1391,6 +1442,14 @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
>              data->v.ram.hostaddr = addr + tlbe->addend;
>          }
>          return true;
> +    } else {
> +        SavedIOTLB *saved = atomic_rcu_read(&saved_for_plugin);
> +        if (saved) {
> +            data->is_io = true;
> +            data->v.io.section = saved->section;
> +            data->v.io.offset = saved->mr_offset;
> +            return true;
> +        }

Shouldn't we check that the contents of the saved IOTLB match the
parameters of the lookup? Otherwise passing a random address is likely
to land here.
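That check would need the lookup key recorded in SavedIOTLB at
io_writex() time, e.g. (addr and mmu_idx being hypothetical new fields):

    SavedIOTLB *saved = atomic_rcu_read(&saved_for_plugin);
    if (saved && saved->addr == addr && saved->mmu_idx == mmu_idx) {
        data->is_io = true;
        data->v.io.section = saved->section;
        data->v.io.offset = saved->mr_offset;
        return true;
    }
    return false;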

Thanks,
                Emilio


