From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH 1/6] target/arm: Fix SVE signed division vs x86 overflow exception
Date: Fri, 29 Jun 2018 09:29:42 +0100

On 29 June 2018 at 01:15, Richard Henderson
<address@hidden> wrote:
> We already check for the same condition within the normal integer
> sdiv and sdiv64 helpers.  Use a slightly different formation that
> does not require deducing the expression type.
>
> Fixes: f97cfd596ed
> Signed-off-by: Richard Henderson <address@hidden>
> ---
>  target/arm/sve_helper.c | 16 +++++++++++-----
>  1 file changed, 11 insertions(+), 5 deletions(-)
>
> diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
> index 790cbacd14..7d7fc90566 100644
> --- a/target/arm/sve_helper.c
> +++ b/target/arm/sve_helper.c
> @@ -369,7 +369,13 @@ void HELPER(NAME)(void *vd, void *vn, void *vm, void *vg, uint32_t desc) \
>  #define DO_MIN(N, M)  ((N) >= (M) ? (M) : (N))
>  #define DO_ABD(N, M)  ((N) >= (M) ? (N) - (M) : (M) - (N))
>  #define DO_MUL(N, M)  (N * M)
> -#define DO_DIV(N, M)  (M ? N / M : 0)
> +
> +/* The zero divisor case is architectural; the -1 divisor case works
> + * around the x86 INT_MIN / -1 overflow exception without having to
> + * deduce the minimum integer for the type of the expression.
> + */

It works around INT_MIN / -1 being C undefined behaviour: the
need to special-case this is not x86-specific... The required
answer for Arm is just as architectural as the required answer
for division-by-zero (which is also C UB).
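
For illustration, a minimal standalone sketch of the same guard pattern
(not the QEMU code; the names do_sdiv32 and do_udiv32 are invented for
this example). On Arm, x / 0 yields 0 and INT32_MIN / -1 wraps to
INT32_MIN; in C both expressions are undefined behaviour, and on an x86
host the latter additionally raises SIGFPE via the idiv overflow (#DE)
exception:

#include <stdint.h>

static int32_t do_sdiv32(int32_t n, int32_t m)
{
    if (m == 0) {
        return 0;                  /* architectural divide-by-zero result */
    }
    if (m == -1) {
        /* Avoid evaluating INT32_MIN / -1, which is UB in C. Negating via
         * the unsigned type wraps, and converting back yields INT32_MIN
         * on two's-complement implementations when n == INT32_MIN.
         */
        return (int32_t)-(uint32_t)n;
    }
    return n / m;
}

static uint32_t do_udiv32(uint32_t n, uint32_t m)
{
    return m == 0 ? 0 : n / m;     /* only divide-by-zero needs a guard */
}

With these guards, do_sdiv32(INT32_MIN, -1) returns INT32_MIN and
do_sdiv32(5, 0) returns 0, matching the architectural results without
relying on undefined behaviour.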

> +#define DO_SDIV(N, M) (unlikely(M == 0) ? 0 : unlikely(M == -1) ? -N : N / M)
> +#define DO_UDIV(N, M) (unlikely(M == 0) ? 0 : N / M)
>
>  DO_ZPZZ(sve_and_zpzz_b, uint8_t, H1, DO_AND)
>  DO_ZPZZ(sve_and_zpzz_h, uint16_t, H1_2, DO_AND)
> @@ -477,11 +483,11 @@ DO_ZPZZ(sve_umulh_zpzz_h, uint16_t, H1_2, do_mulh_h)
>  DO_ZPZZ(sve_umulh_zpzz_s, uint32_t, H1_4, do_mulh_s)
>  DO_ZPZZ_D(sve_umulh_zpzz_d, uint64_t, do_umulh_d)
>
> -DO_ZPZZ(sve_sdiv_zpzz_s, int32_t, H1_4, DO_DIV)
> -DO_ZPZZ_D(sve_sdiv_zpzz_d, int64_t, DO_DIV)
> +DO_ZPZZ(sve_sdiv_zpzz_s, int32_t, H1_4, DO_SDIV)
> +DO_ZPZZ_D(sve_sdiv_zpzz_d, int64_t, DO_SDIV)
>
> -DO_ZPZZ(sve_udiv_zpzz_s, uint32_t, H1_4, DO_DIV)
> -DO_ZPZZ_D(sve_udiv_zpzz_d, uint64_t, DO_DIV)
> +DO_ZPZZ(sve_udiv_zpzz_s, uint32_t, H1_4, DO_UDIV)
> +DO_ZPZZ_D(sve_udiv_zpzz_d, uint64_t, DO_UDIV)
>
>  /* Note that all bits of the shift are significant
>     and not modulo the element size.  */

Other than quibbling about the comment,
Reviewed-by: Peter Maydell <address@hidden>

thanks
-- PMM


