From: Peter Maydell
Subject: Re: [PATCH] target/arm/helper: Fix timer interrupt masking when HCR_EL2.E2H == 0
Date: Fri, 7 Feb 2025 15:45:51 +0000

On Thu, 6 Feb 2025 at 15:41, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Tue, 20 Aug 2024 at 12:30, Florian Lugou <florian.lugou@provenrun.com> wrote:
> >
> > > > $ aarch64-none-elf-gcc -ffreestanding -nostdlib -T qemu/tests/tcg/aarch64/system/kernel.ld -o test test.S
> > > >
> > > > $ qemu-system-aarch64 \
> > > >         -machine virt,secure=on,gic-version=3 \
> > > >         -cpu cortex-a57 \
> > > >         -kernel test \
> > > >         -display none \
> > > >         -semihosting
> > > >
> > > > $ # Exits after ~1s
> > > >
> > > > $ qemu-system-aarch64 \
> > > >         -machine virt,secure=on,gic-version=3 \
> > > >         -cpu cortex-a57 \
> > > >         -kernel test \
> > > >         -display none \
> > > >         -semihosting \
> > > >         -icount shift=0,sleep=off
> > > >
> > > > ... (hangs until QEMU is killed)

> Somebody else reported a similar issue to this to me this week,
> which reminded me of a different bug that we'd found and fixed
> in the interim, which was enough of a hint that I figured out
> why this wasn't reproducible for me.
>
> This bug only reproduces if your QEMU binary isn't compiled
> with slirp support (which will happen if your host system
> doesn't have libslirp-dev or libslirp-devel installed).
> If slirp is present then the usermode networking backend
> will be present and it always has an active timer. Without
> slirp, the problem manifests when there are no active timers.
>
> You can also repro it on a build that includes slirp by adding
> "-net none" to the command line.
>
> I'll have a look at what the underlying bug is... thanks again
> for the handy test case.

So I've looked at this, and I can see *why* it hangs, but
it looks like a structural problem with icount, which I'm
not super familiar with. Richard, Alex, any suggestions?

The sequence of events is that the test case sets up the
timer with an expiry in the future, enables interrupts,
and then executes a WFI insn. What's supposed to happen is
that the interrupt fires and the test case exits.

In helper_wfi, we set cs->halted and do a cpu_loop_exit(),
so the WFI definitely goes to sleep.
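For reference, the relevant part of helper_wfi is roughly this
(paraphrased from target/arm/op_helper.c; the checks for WFI
trapping to a higher EL are elided):

    void HELPER(wfi)(CPUARMState *env, uint32_t insn_len)
    {
        CPUState *cs = env_cpu(env);

        /* ... trap checks elided ... */

        cs->exception_index = EXCP_HLT;  /* tell the main loop we halted */
        cs->halted = 1;                  /* the vCPU is now asleep */
        cpu_loop_exit(cs);               /* longjmp out of the generated code */
    }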

We do fire the timer's callback:
#0  arm_gt_stimer_cb (opaque=0x55870482d670) at ../../target/arm/helper.c:2962
#1  0x00005587013c3c86 in timerlist_run_timers (timer_list=0x5587042f5950) at ../../util/qemu-timer.c:566
#2  0x00005587013c3d3c in qemu_clock_run_timers (type=QEMU_CLOCK_VIRTUAL) at ../../util/qemu-timer.c:580
#3  0x0000558701102f4c in icount_notify_aio_contexts () at ../../accel/tcg/tcg-accel-ops-icount.c:73
#4  0x0000558701102faf in icount_handle_deadline () at ../../accel/tcg/tcg-accel-ops-icount.c:88
#5  0x0000558701103ac9 in rr_cpu_thread_fn (arg=0x55870482d670) at ../../accel/tcg/tcg-accel-ops-rr.c:234

and the timer count is correct for the timer expiry, so
this part is OK.

First the arm generic timer code reprograms the QEMU timer
(for an expiry at INT64_MAX, i.e. "far far future")
by calling timer_mod_ns(). timer_mod_ns() ends up calling
timerlist_rearm(), which calls icount_start_warp_timer().
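Schematically, the callback ends up doing this (heavily simplified
from arm_gt_stimer_cb()/gt_recalc_timer() in target/arm/helper.c;
the cval/ISTATUS computation is elided):

    static void arm_gt_stimer_cb(void *opaque)
    {
        ARMCPU *cpu = opaque;

        /* Step 1: rearm the QEMU timer for the next ISTATUS change;
         * with nothing left to wait for, that's INT64_MAX. This is the
         * timer_mod_ns() -> timerlist_rearm() -> icount_start_warp_timer()
         * path described above.
         */
        timer_mod_ns(cpu->gt_timer[GTIMER_SEC], INT64_MAX);

        /* Step 2: only now update the IRQ line; this goes through the
         * GIC and ends up calling cpu_interrupt() on the halted vCPU.
         */
        qemu_set_irq(cpu->gt_timer_outputs[GTIMER_SEC], 1);
    }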

This is where things go wrong -- icount_start_warp_timer()
notices that all CPU threads are currently idle, and
decides it needs to warp the timer forwards to the
next deadline, which is at the end of time -- INT64_MAX.
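The relevant logic is roughly (paraphrased from
accel/tcg/icount-common.c; the preliminary checks and the warp
timer setup are elided):

    void icount_start_warp_timer(void)
    {
        int64_t deadline;

        /* At this point the vCPU is halted and the interrupt has not
         * been raised yet, so every CPU thread looks idle ...
         */
        if (!all_cpu_threads_idle()) {
            return;
        }

        /* ... so we pick the next QEMU_CLOCK_VIRTUAL deadline to warp
         * to -- which the timer_mod_ns() above has just pushed out to
         * INT64_MAX.
         */
        deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
                                              ~QEMU_TIMER_ATTR_EXTERNAL);
        /* ... warp timer setup elided ... */
    }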

But once timer_mod_ns() returns, the generic timer code
is going to raise an interrupt (this goes through the GIC
code and comes back into the CPU which calls cpu_interrupt()),
so we don't want to warp the clock at all. The clock should
stay exactly at the value it has, because the CPU is about to
have more work to do.

How is this supposed to work? Shouldn't we only be doing
the "start moving the icount forward to the next deadline"
once we've completed all the "run timers and AIO stuff" that
icount_handle_deadline() triggers, not randomly in the middle
of that when this timer callback or some other one might do
something to trigger an interrupt?
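For reference, frames #2-#4 of the backtrace correspond to roughly
this (paraphrased from accel/tcg/tcg-accel-ops-icount.c, with
icount_notify_aio_contexts() inlined):

    void icount_handle_deadline(void)
    {
        int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL,
                                                      QEMU_TIMER_ATTR_ALL);

        if (deadline == 0) {
            /* Run every expired QEMU_CLOCK_VIRTUAL timer callback. Any
             * timer_mod() inside one of these callbacks can reach
             * icount_start_warp_timer() mid-run, before the remaining
             * callbacks (and any interrupt they raise) have happened.
             */
            qemu_clock_run_timers(QEMU_CLOCK_VIRTUAL);
            qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
        }
    }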

If the arm_gt_stimer_cb() was written in the other order,
so that it first raised the interrupt and then modified
the QEMU timer second, this would happen to work, because
raising the interrupt sets cpu->interrupt_request, which
means that arm_cpu_has_work() returns true, so when
icount_start_warp_timer() calls all_cpu_threads_idle() it
returns false and icount_start_warp_timer() returns without
doing anything. But I don't think there's any reason why
timer callbacks should be obliged to reprogram their timers
last, and in any case you can imagine scenarios where there
are multiple timer callbacks for different timers and it's
only the second timer that raises an interrupt...
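For illustration only (not a proposed fix), the "other order" version
would look like this, and would only work by accident:

    static void arm_gt_stimer_cb(void *opaque)
    {
        ARMCPU *cpu = opaque;

        /* Raising the interrupt first sets cpu->interrupt_request,
         * so all_cpu_threads_idle() now returns false ...
         */
        qemu_set_irq(cpu->gt_timer_outputs[GTIMER_SEC], 1);

        /* ... and this rearm no longer triggers the bogus warp,
         * because icount_start_warp_timer() bails out early.
         */
        timer_mod_ns(cpu->gt_timer[GTIMER_SEC], INT64_MAX);
    }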

thanks
-- PMM


