Re: [PATCH v5 26/45] target/arm: Implement FMOPA, FMOPS (widening)
From: Peter Maydell
Subject: Re: [PATCH v5 26/45] target/arm: Implement FMOPA, FMOPS (widening)
Date: Thu, 7 Jul 2022 10:50:05 +0100
On Wed, 6 Jul 2022 at 10:26, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> +static float32 f16_dotadd(float32 sum, uint32_t e1, uint32_t e2,
> +                          float_status *s_std, float_status *s_odd)
> +{
> +    float64 e1r = float16_to_float64(e1 & 0xffff, true, s_std);
> +    float64 e1c = float16_to_float64(e1 >> 16, true, s_std);
> +    float64 e2r = float16_to_float64(e2 & 0xffff, true, s_std);
> +    float64 e2c = float16_to_float64(e2 >> 16, true, s_std);
> +    float64 t64;
> +    float32 t32;
> +
> +    /*
> +     * The ARM pseudocode function FPDot performs both multiplies
> +     * and the add with a single rounding operation. Emulate this
> +     * by performing the first multiply in round-to-odd, then doing
> +     * the second multiply as fused multiply-add, and rounding to
> +     * float32 all in one step.
> +     */
I guess if we find we're not producing quite bit-accurate results
we can come back and revisit this :-)
> +    t64 = float64_mul(e1r, e2r, s_odd);
> +    t64 = float64r32_muladd(e1c, e2c, t64, 0, s_std);
> +
> +    /* This conversion is exact, because we've already rounded. */
> +    t32 = float64_to_float32(t64, s_std);
> +
> +    /* The final accumulation step is not fused. */
> +    return float32_add(sum, t32, s_std);
> +}
> +
> +void HELPER(sme_fmopa_h)(void *vza, void *vzn, void *vzm, void *vpn,
> +                         void *vpm, void *vst, uint32_t desc)
> +{
> +    intptr_t row, col, oprsz = simd_maxsz(desc);
> +    uint32_t neg = simd_data(desc) << 15;
> +    uint16_t *pn = vpn, *pm = vpm;
> +    float_status fpst_odd, fpst_std = *(float_status *)vst;
> +
> +    set_default_nan_mode(true, &fpst_std);
> +    fpst_odd = fpst_std;
> +    set_float_rounding_mode(float_round_to_odd, &fpst_odd);
> +
> +    for (row = 0; row < oprsz; ) {
> +        uint16_t pa = pn[H2(row >> 4)];
> +        do {
> +            void *vza_row = vza + tile_vslice_offset(row);
> +            uint32_t n = *(uint32_t *)(vzn + row);
More missing H macros.
> +
> +            n = f16mop_adj_pair(n, pa, neg);
> +
> +            for (col = 0; col < oprsz; ) {
> +                uint16_t pb = pm[H2(col >> 4)];
> +                do {
> +                    if ((pa & 0b0101) == 0b0101 || (pb & 0b0101) == 0b0101) {
Wrong condition again?
> +                        uint32_t *a = vza_row + col;
> +                        uint32_t m = *(uint32_t *)(vzm + col);
> +
> +                        m = f16mop_adj_pair(m, pb, neg);
> +                        *a = f16_dotadd(*a, n, m, &fpst_std, &fpst_odd);
> +
> +                        col += 4;
> +                        pb >>= 4;
> +                    }
> +                } while (col & 15);
> +            }
> +            row += 4;
> +            pa >>= 4;
> +        } while (row & 15);
> +    }
> +}
thanks
-- PMM