Re: [qemu-s390x] [Qemu-devel] [PATCH v1 8/9] exec.c: Factor out core logic of check_watchpoint()

From: David Hildenbrand
Subject: Re: [qemu-s390x] [Qemu-devel] [PATCH v1 8/9] exec.c: Factor out core logic of check_watchpoint()
Date: Fri, 23 Aug 2019 20:25:06 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0

On 23.08.19 18:09, Richard Henderson wrote:
> On 8/23/19 4:27 AM, David Hildenbrand wrote:
>> On 23.08.19 12:07, David Hildenbrand wrote:
>>> We want to perform the same checks in probe_write() to trigger a cpu
>>> exit before doing any modifications. We'll have to pass a PC.
>>>
>>> Signed-off-by: David Hildenbrand <address@hidden>
>>> ---
>>>  exec.c                | 23 +++++++++++++++++------
>>>  include/hw/core/cpu.h |  2 ++
>>>  2 files changed, 19 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/exec.c b/exec.c
>>> index 1df966d17a..d233a4250b 100644
>>> --- a/exec.c
>>> +++ b/exec.c
>>> @@ -2810,12 +2810,10 @@ static const MemoryRegionOps notdirty_mem_ops = {
>>>      },
>>>  };
>>>  
>>> -/* Generate a debug exception if a watchpoint has been hit.  */
>>> -static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
>>> +void cpu_check_watchpoint(CPUState *cpu, vaddr vaddr, int len,
>>> +                          MemTxAttrs attrs, int flags, uintptr_t ra)
>>>  {
>>> -    CPUState *cpu = current_cpu;
>>>      CPUClass *cc = CPU_GET_CLASS(cpu);
>>> -    target_ulong vaddr;
>>>      CPUWatchpoint *wp;
>>>  
>>>      assert(tcg_enabled());
>>> @@ -2826,7 +2824,7 @@ static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
>>>          cpu_interrupt(cpu, CPU_INTERRUPT_DEBUG);
>>>          return;
>>>      }
>>> -    vaddr = (cpu->mem_io_vaddr & TARGET_PAGE_MASK) + offset;
>>> +
>>>      vaddr = cc->adjust_watchpoint_address(cpu, vaddr, len);
>>>      QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
>>>          if (cpu_watchpoint_address_matches(wp, vaddr, len)
>>> @@ -2851,11 +2849,14 @@ static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
>>>                  if (wp->flags & BP_STOP_BEFORE_ACCESS) {
>>>                      cpu->exception_index = EXCP_DEBUG;
>>>                      mmap_unlock();
>>> -                    cpu_loop_exit(cpu);
>>> +                    cpu_loop_exit_restore(cpu, ra);
>>>                  } else {
>>>                      /* Force execution of one insn next time.  */
>>>                      cpu->cflags_next_tb = 1 | curr_cflags();
>>>                      mmap_unlock();
>>> +                    if (ra) {
>>> +                        cpu_restore_state(cpu, ra, true);
>>> +                    }
>>>                      cpu_loop_exit_noexc(cpu);
>>>                  }
>>>              }
>>> @@ -2865,6 +2866,16 @@ static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
>>>      }
>>>  }
>>>  
>>> +/* Generate a debug exception if a watchpoint has been hit.  */
>>> +static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
>>> +{
>>> +    CPUState *cpu = current_cpu;
>>> +    vaddr vaddr;
>>> +
>>> +    vaddr = (cpu->mem_io_vaddr & TARGET_PAGE_MASK) + offset;
>>> +    cpu_check_watchpoint(cpu, vaddr, len, attrs, flags, 0);
>>> +}
>>> +
>>>  /* Watchpoint access routines.  Watchpoints are inserted using TLB tricks,
>>>     so these check for a hit then pass through to the normal out-of-line
>>>     phys routines.  */
>>> diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
>>> index 77fca95a40..3a2d76b32c 100644
>>> --- a/include/hw/core/cpu.h
>>> +++ b/include/hw/core/cpu.h
>>> @@ -1070,6 +1070,8 @@ static inline bool cpu_breakpoint_test(CPUState *cpu, vaddr pc, int mask)
>>>      return false;
>>>  }
>>>  
>>> +void cpu_check_watchpoint(CPUState *cpu, vaddr vaddr, int len,
>>> +                          MemTxAttrs attrs, int flags, uintptr_t ra);
>>>  int cpu_watchpoint_insert(CPUState *cpu, vaddr addr, vaddr len,
>>>                            int flags, CPUWatchpoint **watchpoint);
>>>  int cpu_watchpoint_remove(CPUState *cpu, vaddr addr,
>>>
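
Side note: the intended user of the new ra parameter is probe_write();
a rough, untested sketch of what that could look like (names and the
actual watchpoint lookup/plumbing are only illustrative, the real patch
may look different):

void probe_write(CPUArchState *env, target_ulong addr, int size,
                 int mmu_idx, uintptr_t retaddr)
{
    CPUState *cpu = env_cpu(env);

    /* ... TLB lookup / tlb_fill() as today ... */

    /* Trigger the cpu exit *before* any modification happens. */
    if (!QTAILQ_EMPTY(&cpu->watchpoints)) {
        cpu_check_watchpoint(cpu, addr, size, MEMTXATTRS_UNSPECIFIED,
                             BP_MEM_WRITE, retaddr);
    }
}
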
>>
>> As we will have bigger accesses with probe_write(), we should do
>>
>> diff --git a/exec.c b/exec.c
>> index d233a4250b..4f8cc62a5f 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -2834,7 +2834,7 @@ void cpu_check_watchpoint(CPUState *cpu, vaddr vaddr, int len,
>>              } else {
>>                  wp->flags |= BP_WATCHPOINT_HIT_WRITE;
>>              }
>> -            wp->hitaddr = vaddr;
>> +            wp->hitaddr = MAX(vaddr, wp->vaddr);
>>              wp->hitattrs = attrs;
>>              if (!cpu->watchpoint_hit) {
>>                  if (wp->flags & BP_CPU &&
>>
>> I guess, to make sure we actually indicate the watchpoint.
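
Concretely, with made-up numbers: a 0x100-byte probe at vaddr 0x1000
overlapping a watchpoint at wp->vaddr 0x1040 would otherwise report
hitaddr 0x1000, which lies outside the watched range; with the MAX()
we report 0x1040, the start of the watchpoint.
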
> 
> Yes, that looks right.
> 
> As for your changes to use cpu_loop_exit_restore...  Those are so right that I
> didn't even recognize how wrong this code is when I was looking through it the
> other day.  Watchpoints must not actually be working at all at the moment, 
> really.

We keep finding surprises ... I guess we paid the wrong area of the
QEMU source code a visit :)

> 
> I suspect that we need to use a page flag for this and not use I/O memory at
> all.  Or convert to read/write_with_attrs and use magic MemTxResult values, 
> but
> that seems sketchier than page flags.  Either way is the only way that we can
> get access to the host return address so that we can unwind and return to the
> main loop.

I wonder if we can pass the RA through from the callers somehow. But I
have to admit, how the whole MemOps (+MemTx) machinery fits together is
not yet 100% clear to me.
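
For the direct probe_write() callers it should be easy enough -- the
outermost TCG helper can grab GETPC() and hand it down, e.g. (sketch
only, helper name and arguments made up):

void HELPER(example_store)(CPUArchState *env, uint64_t addr, uint32_t len)
{
    /* host return address of the call out of generated code */
    uintptr_t ra = GETPC();

    probe_write(env, addr, len, cpu_mmu_index(env, false), ra);
    /* ... now perform the actual modification ... */
}

The part that is not obvious to me is the path through the memory API
(watch_mem_ops), where no such return address is available.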

> 
> But this is a good step, in the right direction.  We'll fix the rest later.
> 
> With the MAX,
> Reviewed-by: Richard Henderson <address@hidden>
> 

Thanks!

I'll fix up the smaller things and resend next week. I already have
another set of patches lying around (e.g., probe_read() and returning
void* from probe_write()). The s390x part is still in the works.

-- 

Thanks,

David / dhildenb


