[PULL 14/17] tcg: cpu_exec_{enter,exit} helpers
From: Eduardo Habkost
Subject: [PULL 14/17] tcg: cpu_exec_{enter,exit} helpers
Date: Thu, 17 Dec 2020 13:46:17 -0500
Move invocation of CPUClass.cpu_exec_*() to separate helpers,
to make it easier to refactor that code later.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Claudio Fontana <cfontana@suse.de>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20201212155530.23098-10-cfontana@suse.de>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
accel/tcg/cpu-exec.c | 23 ++++++++++++++++++-----
1 file changed, 18 insertions(+), 5 deletions(-)
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index c2c26489c7..58117f175a 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -236,9 +236,22 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
}
#endif
-void cpu_exec_step_atomic(CPUState *cpu)
+static void cpu_exec_enter(CPUState *cpu)
+{
+ CPUClass *cc = CPU_GET_CLASS(cpu);
+
+ cc->cpu_exec_enter(cpu);
+}
+
+static void cpu_exec_exit(CPUState *cpu)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
+
+ cc->cpu_exec_exit(cpu);
+}
+
+void cpu_exec_step_atomic(CPUState *cpu)
+{
TranslationBlock *tb;
target_ulong cs_base, pc;
uint32_t flags;
@@ -257,11 +270,11 @@ void cpu_exec_step_atomic(CPUState *cpu)
/* Since we got here, we know that parallel_cpus must be true. */
parallel_cpus = false;
- cc->cpu_exec_enter(cpu);
+ cpu_exec_enter(cpu);
/* execute the generated code */
trace_exec_tb(tb, pc);
cpu_tb_exec(cpu, tb);
- cc->cpu_exec_exit(cpu);
+ cpu_exec_exit(cpu);
} else {
/*
* The mmap_lock is dropped by tb_gen_code if it runs out of
@@ -713,7 +726,7 @@ int cpu_exec(CPUState *cpu)
rcu_read_lock();
- cc->cpu_exec_enter(cpu);
+ cpu_exec_enter(cpu);
/* Calculate difference between guest clock and host clock.
* This delay includes the delay of the last cycle, so
@@ -775,7 +788,7 @@ int cpu_exec(CPUState *cpu)
}
}
- cc->cpu_exec_exit(cpu);
+ cpu_exec_exit(cpu);
rcu_read_unlock();
return ret;
--
2.28.0