Re: [PATCH] hw/intc/arm_gic: Allow to use QTest without crashing


From: Peter Maydell
Subject: Re: [PATCH] hw/intc/arm_gic: Allow to use QTest without crashing
Date: Thu, 28 Jan 2021 18:05:41 +0000

On Thu, 28 Jan 2021 at 17:46, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>
> On 1/28/21 6:18 PM, Alexander Bulekov wrote:
> > On 210128 1714, Philippe Mathieu-Daudé wrote:
> >> Alexander reported an issue in gic_get_current_cpu() using the
> >> fuzzer. Yet another "deref current_cpu with QTest" bug, reproducible
> >> doing:
> >>
> >>   $ echo readb 0xf03ff000 | qemu-system-arm -M npcm750-evb,accel=qtest -qtest stdio
> >>   [I 1611849440.651452] OPENED
> >>   [R +0.242498] readb 0xf03ff000
> >>   hw/intc/arm_gic.c:63:29: runtime error: member access within null pointer of type 'CPUState' (aka 'struct CPUState')
> >>   SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior hw/intc/arm_gic.c:63:29 in
> >>   AddressSanitizer:DEADLYSIGNAL
> >>   =================================================================
> >>   ==3719691==ERROR: AddressSanitizer: SEGV on unknown address 0x0000000082a0 (pc 0x5618790ac882 bp 0x7ffca946f4f0 sp 0x7ffca946f4a0 T0)
> >>   ==3719691==The signal is caused by a READ memory access.
> >>       #0 0x5618790ac882 in gic_get_current_cpu hw/intc/arm_gic.c:63:29
> >>       #1 0x5618790a8901 in gic_dist_readb hw/intc/arm_gic.c:955:11
> >>       #2 0x5618790a7489 in gic_dist_read hw/intc/arm_gic.c:1158:17
> >>       #3 0x56187adc573b in memory_region_read_with_attrs_accessor softmmu/memory.c:464:9
> >>       #4 0x56187ad7903a in access_with_adjusted_size softmmu/memory.c:552:18
> >>       #5 0x56187ad766d6 in memory_region_dispatch_read1 softmmu/memory.c:1426:16
> >>       #6 0x56187ad758a8 in memory_region_dispatch_read softmmu/memory.c:1449:9
> >>       #7 0x56187b09e84c in flatview_read_continue softmmu/physmem.c:2822:23
> >>       #8 0x56187b0a0115 in flatview_read softmmu/physmem.c:2862:12
> >>       #9 0x56187b09fc9e in address_space_read_full softmmu/physmem.c:2875:18
> >>       #10 0x56187aa88633 in address_space_read include/exec/memory.h:2489:18
> >>       #11 0x56187aa88633 in qtest_process_command softmmu/qtest.c:558:13
> >>       #12 0x56187aa81881 in qtest_process_inbuf softmmu/qtest.c:797:9
> >>       #13 0x56187aa80e02 in qtest_read softmmu/qtest.c:809:5
> >>
> >> current_cpu is NULL because QTest accelerator does not use CPU.
> >>
> >> Fix by skipping the check and returning the first CPU index when
> >> QTest accelerator is used, similarly to commit c781a2cc423
> >> ("hw/i386/vmport: Allow QTest use without crashing").
> >>
> >> Reported-by: Alexander Bulekov <alxndr@bu.edu>
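
(For context, a minimal sketch of the approach the quoted commit message describes: guard the current_cpu dereference with qtest_enabled() from "sysemu/qtest.h" and fall back to the first CPU index. The posted patch may differ in detail.)

    static inline int gic_get_current_cpu(GICState *s)
    {
        /* Under the QTest accelerator no CPU is executing, so
         * current_cpu is NULL; return the first CPU index instead. */
        if (!qtest_enabled() && s->num_cpu > 1) {
            return current_cpu->cpu_index;
        }
        return 0;
    }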
> >
> > Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
> >
> > For reference, some older threads about similar issues in the GDB stub
> > and monitor:
> > https://bugs.launchpad.net/qemu/+bug/1602247
>
> This one is different. I thought this issue was fixed by
> the series around commit 7cf48f6752e ("gdbstub: add multiprocess
> support to (f|s)ThreadInfo and ThreadExtraInfo").
>
> When using physical addresses with gdbstub, we should be able to
> select a particular address space.

Yes, but the problem with the GIC device is that it does not
use AddressSpaces to identify which CPU is accessing it.
We would either need to make it do that, or else add
support for using the MemTxAttrs requester_id to identify
which CPU is making a memory access and get the GIC to use
that instead. (This is more or less how the h/w does it.)
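
As a rough illustration of the requester_id idea (hypothetical, not current QEMU behaviour: it assumes each CPU tags its accesses with its cpu_index in MemTxAttrs), a with-attrs read handler could recover the accessing CPU from the transaction attributes instead of from the global current_cpu:

    static MemTxResult gic_dist_read(void *opaque, hwaddr offset,
                                     uint64_t *data, unsigned size,
                                     MemTxAttrs attrs)
    {
        GICState *s = opaque;
        int cpu = 0;

        /* Hypothetical: trust the bus-level requester ID as a CPU index. */
        if (!attrs.unspecified && attrs.requester_id < s->num_cpu) {
            cpu = attrs.requester_id;
        }

        /* ... dispatch the register read using 'cpu' rather than
         * gic_get_current_cpu(s) ... */
        return MEMTX_OK;
    }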

> Maybe this fixes pmemsave accessing MMIO here:
> https://bugs.launchpad.net/qemu/+bug/1751674

Nope, because the monitor pmemsave command goes via
cpu_physical_memory_rw(), which does an access to
address_space_memory. So unlike the gdbstub it's not
even trying to say which CPU it cares about. (The monitor
does have a "current CPU" concept, via mon_get_cpu(). But
it's not used in the pmemsave codepath.)

gdbstub direct-physical-memory-access is also via
cpu_physical_memory_rw(), incidentally.
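
(For reference, cpu_physical_memory_rw() is roughly a thin wrapper that always targets address_space_memory with unspecified attributes; see softmmu/physmem.c for the real definition:)

    void cpu_physical_memory_rw(hwaddr addr, void *buf, hwaddr len,
                                bool is_write)
    {
        /* Always the global memory address space, no per-CPU attributes,
         * so the device cannot tell which CPU (if any) is behind it. */
        address_space_rw(&address_space_memory, addr, MEMTXATTRS_UNSPECIFIED,
                         buf, len, is_write);
    }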

thanks
-- PMM


