On 06/04/2014 02:09 PM, Alexander Graf wrote:
@@ -9030,13 +8820,10 @@ static inline void gen_evmwumi(DisasContext *ctx)
t1 = tcg_temp_new_i64();
/* t0 := rA; t1 := rB */
-#if defined(TARGET_PPC64)
- tcg_gen_ext32u_tl(t0, cpu_gpr[rA(ctx->opcode)]);
- tcg_gen_ext32u_tl(t1, cpu_gpr[rB(ctx->opcode)]);
-#else
tcg_gen_extu_tl_i64(t0, cpu_gpr[rA(ctx->opcode)]);
+ tcg_gen_ext32u_i64(t0, t0);
Better in one step:
tcg_gen_ext32u_i64(t0, cpu_gpr[rA(ctx->opcode)]);
@@ -9112,13 +8899,10 @@ static inline void gen_evmwsmi(DisasContext *ctx)
t1 = tcg_temp_new_i64();
/* t0 := rA; t1 := rB */
+ tcg_gen_extu_tl_i64(t0, cpu_gpr[rA(ctx->opcode)]);
+ tcg_gen_ext32s_i64(t0, t0);
+ tcg_gen_extu_tl_i64(t1, cpu_gpr[rB(ctx->opcode)]);
+ tcg_gen_ext32s_i64(t1, t1);
Similarly.
That said, I do wonder about using the actual 32-bit widening multiply
primitives, tcg_gen_mulu2_i32 / tcg_gen_muls2_i32.
I suspect the optimizer would clean up the unsigned version, but it has no
chance with the signed version.