Re: [Qemu-devel] qemu-system-aarch64 crash from kernel null pointer


From: Peter Maydell
Subject: Re: [Qemu-devel] qemu-system-aarch64 crash from kernel null pointer
Date: Fri, 29 Jun 2018 18:22:55 +0100

On 29 June 2018 at 00:30, Richard Henderson
<address@hidden> wrote:
> Given a standard Debian 4.16.0 kernel, the branch at
>
> https://github.com/rth7680/qemu/tree/tgt-arm-sve-c
>
> will crash QEMU:
>
> $ gdb --args ../bld/aarch64-softmmu/qemu-system-aarch64 \
>   -cpu max -M virt -m 4G -smp 8 \
>   -drive if=virtio,file=./deb-arm64.img,format=raw \
>   -bios /usr/share/edk2/aarch64/QEMU_EFI.fd
>
> (gdb) bt 5
> #0  0x00005555558017b3 in address_space_lookup_region (d=0x0, addr=0,
>     resolve_subpage=false) at /home/rth/work/qemu/qemu/exec.c:416
> #1  0x00005555558018dc in address_space_translate_internal (d=0x0, addr=0,
>     xlat=0x7fffdaefb478, plen=0x7fffdaefb540, resolve_subpage=false)
>     at /home/rth/work/qemu/qemu/exec.c:440
> #2  0x00005555558022b5 in address_space_translate_for_iotlb (cpu=0x7ffff7e2f010,
>     asidx=1, addr=0, xlat=0x7fffdaefb548, plen=0x7fffdaefb540, attrs=...,
>     prot=0x7fffdaefb520) at /home/rth/work/qemu/qemu/exec.c:753
> #3  0x000055555587c5a7 in tlb_set_page_with_attrs (cpu=0x7ffff7e2f010, vaddr=0,
>     paddr=0, attrs=..., prot=7, mmu_idx=3, size=4096)
>     at /home/rth/work/qemu/qemu/accel/tcg/cputlb.c:634
> #4  0x00005555559fe957 in arm_tlb_fill (cs=0x7ffff7e2f010, address=0,
>     access_type=MMU_INST_FETCH, mmu_idx=3, fi=0x7fffdaefb680)
>     at /home/rth/work/qemu/qemu/target/arm/helper.c:10446
> #5  0x00005555559e6e7c in tlb_fill (cs=0x7ffff7e2f010, addr=1536, size=0,
>     access_type=MMU_INST_FETCH, mmu_idx=3, retaddr=0)
>     at /home/rth/work/qemu/qemu/target/arm/op_helper.c:178

Yeah, we really shouldn't crash here, so we should investigate this.
I don't have a kernel binary, so it would save me a bit of time if
you provided it (or you could investigate the crash yourself ;-)).
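
For whoever looks at this first: from the trace, frame #2 hands a NULL
dispatch pointer down for asidx=1 (which I'd expect to be the Secure
address space index on ARM), and frame #0 then dereferences it without
checking. Here's a minimal standalone sketch of that shape, with made-up
names standing in for AddressSpaceDispatch and CPUAddressSpace; it is not
the real exec.c code, just an illustration of the unchecked dereference:

/* Standalone sketch (hypothetical names, not the real exec.c code) of
 * the failure shape above: the per-address-space dispatch pointer is
 * handed down and dereferenced without a NULL check, so an
 * uninitialised entry for asidx=1 faults inside the lookup, as in
 * frame #0. */
#include <stddef.h>
#include <stdio.h>

typedef struct Section { int id; } Section;

typedef struct Dispatch {          /* stands in for AddressSpaceDispatch */
    Section *mru_section;
} Dispatch;

typedef struct CPUAS {             /* stands in for CPUAddressSpace */
    Dispatch *memory_dispatch;
} CPUAS;

/* Analogue of the frame #0 lookup: trusts 'd' unconditionally. */
static Section *lookup_region(Dispatch *d, unsigned long addr)
{
    (void)addr;
    return d->mru_section;         /* d == NULL -> SIGSEGV, as in the trace */
}

int main(void)
{
    Section sec = { 0 };
    Dispatch ns_dispatch = { &sec };

    CPUAS cpu_ases[2] = {
        { &ns_dispatch },          /* non-secure AS: initialised */
        { NULL },                  /* secure AS: never initialised */
    };
    int asidx = 1;                 /* matches asidx=1 in frame #2 */

    Section *s = lookup_region(cpu_ases[asidx].memory_dispatch, 0);
    printf("section id %d\n", s->id);
    return 0;
}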

thanks
-- PMM


