[PATCH v10 32/73] cpu: define cpu_interrupt_request helpers
From: Robert Foley
Subject: [PATCH v10 32/73] cpu: define cpu_interrupt_request helpers
Date: Wed, 17 Jun 2020 17:01:50 -0400
From: "Emilio G. Cota" <cota@braap.org>
Add a comment about how atomic_read works here. The comment refers to
a "BQL-less CPU loop", which will materialize toward the end
of this series.
Note that the modifications to cpu_reset_interrupt are there to
avoid deadlock during the CPU lock transition; once that is complete,
cpu_interrupt_request will be simple again.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Robert Foley <robert.foley@linaro.org>
---
hw/core/cpu.c | 27 +++++++++++++++++++++------
include/hw/core/cpu.h | 37 +++++++++++++++++++++++++++++++++++++
2 files changed, 58 insertions(+), 6 deletions(-)
diff --git a/hw/core/cpu.c b/hw/core/cpu.c
index 64a1bf3e92..d3223f6d42 100644
--- a/hw/core/cpu.c
+++ b/hw/core/cpu.c
@@ -99,14 +99,29 @@ static void cpu_common_get_memory_mapping(CPUState *cpu,
  * BQL here if we need to. cpu_interrupt assumes it is held.*/
 void cpu_reset_interrupt(CPUState *cpu, int mask)
 {
-    bool need_lock = !qemu_mutex_iothread_locked();
+    bool has_bql = qemu_mutex_iothread_locked();
+    bool has_cpu_lock = cpu_mutex_locked(cpu);
 
-    if (need_lock) {
-        qemu_mutex_lock_iothread();
+    if (has_bql) {
+        if (has_cpu_lock) {
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+        } else {
+            cpu_mutex_lock(cpu);
+            atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+            cpu_mutex_unlock(cpu);
+        }
+        return;
+    }
+
+    if (has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
-    cpu->interrupt_request &= ~mask;
-    if (need_lock) {
-        qemu_mutex_unlock_iothread();
+    qemu_mutex_lock_iothread();
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request & ~mask);
+    qemu_mutex_unlock_iothread();
+    if (!has_cpu_lock) {
+        cpu_mutex_unlock(cpu);
     }
 }
diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
index 92069ebc59..6f2c005171 100644
--- a/include/hw/core/cpu.h
+++ b/include/hw/core/cpu.h
@@ -522,6 +522,43 @@ static inline void cpu_halted_set(CPUState *cpu, uint32_t val)
     cpu_mutex_unlock(cpu);
 }
 
+/*
+ * When sending an interrupt, setters OR the appropriate bit and kick the
+ * destination vCPU. The latter can then read interrupt_request without
+ * acquiring the CPU lock, because once the kick-induced exit completes,
+ * they'll read an up-to-date interrupt_request.
+ * Setters always acquire the lock, which guarantees that (1) concurrent
+ * updates from different threads won't result in data races, and (2) the
+ * BQL-less CPU loop will always see an up-to-date interrupt_request, since
+ * the loop holds the CPU lock.
+ */
+static inline uint32_t cpu_interrupt_request(CPUState *cpu)
+{
+    return atomic_read(&cpu->interrupt_request);
+}
+
+static inline void cpu_interrupt_request_or(CPUState *cpu, uint32_t mask)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, cpu->interrupt_request | mask);
+    cpu_mutex_unlock(cpu);
+}
+
+static inline void cpu_interrupt_request_set(CPUState *cpu, uint32_t val)
+{
+    if (cpu_mutex_locked(cpu)) {
+        atomic_set(&cpu->interrupt_request, val);
+        return;
+    }
+    cpu_mutex_lock(cpu);
+    atomic_set(&cpu->interrupt_request, val);
+    cpu_mutex_unlock(cpu);
+}
+
 static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
 {
     unsigned int i;
--
2.17.1