qemu-s390x

Re: [PATCH v3] target/s390x: support PRNO_TRNG instruction


From: David Hildenbrand
Subject: Re: [PATCH v3] target/s390x: support PRNO_TRNG instruction
Date: Wed, 20 Jul 2022 20:41:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.11.0

On 20.07.22 14:08, Jason A. Donenfeld wrote:
> In order for hosts running inside of TCG to initialize the kernel's
> random number generator, we should support the PRNO_TRNG instruction,
> backed in the usual way with the qemu_guest_getrandom helper. This is
> confirmed working on Linux 5.19-rc6.
> 
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Richard Henderson <richard.henderson@linaro.org>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Harald Freudenberger <freude@linux.ibm.com>
> Cc: Holger Dengler <dengler@linux.ibm.com>
> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
> ---
>  target/s390x/cpu_models.c        |  2 --
>  target/s390x/gen-features.c      |  2 ++
>  target/s390x/tcg/crypto_helper.c | 32 ++++++++++++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 2 deletions(-)
> 
> diff --git a/target/s390x/cpu_models.c b/target/s390x/cpu_models.c
> index 1a562d2801..90aac3d795 100644
> --- a/target/s390x/cpu_models.c
> +++ b/target/s390x/cpu_models.c
> @@ -421,8 +421,6 @@ static void check_consistency(const S390CPUModel *model)
>          { S390_FEAT_DFP_FAST, S390_FEAT_DFP },
>          { S390_FEAT_TRANSACTIONAL_EXE, S390_FEAT_STFLE_49 },
>          { S390_FEAT_EDAT_2, S390_FEAT_EDAT},
> -        { S390_FEAT_MSA_EXT_5, S390_FEAT_KIMD_SHA_512 },
> -        { S390_FEAT_MSA_EXT_5, S390_FEAT_KLMD_SHA_512 },
>          { S390_FEAT_MSA_EXT_4, S390_FEAT_MSA_EXT_3 },
>          { S390_FEAT_SIE_CMMA, S390_FEAT_CMM },
>          { S390_FEAT_SIE_CMMA, S390_FEAT_SIE_GSLS },
> diff --git a/target/s390x/gen-features.c b/target/s390x/gen-features.c
> index ad140184b9..3d333e2789 100644
> --- a/target/s390x/gen-features.c
> +++ b/target/s390x/gen-features.c
> @@ -749,6 +749,8 @@ static uint16_t qemu_V7_0[] = {
>   */
>  static uint16_t qemu_MAX[] = {
>      S390_FEAT_VECTOR_ENH2,
> +    S390_FEAT_MSA_EXT_5,
> +    S390_FEAT_PRNO_TRNG,
>  };
>  
>  /****** END FEATURE DEFS ******/
> diff --git a/target/s390x/tcg/crypto_helper.c b/target/s390x/tcg/crypto_helper.c
> index 138d9e7ad9..afd29f9cf0 100644
> --- a/target/s390x/tcg/crypto_helper.c
> +++ b/target/s390x/tcg/crypto_helper.c
> @@ -12,12 +12,38 @@
>  
>  #include "qemu/osdep.h"
>  #include "qemu/main-loop.h"
> +#include "qemu/guest-random.h"
>  #include "s390x-internal.h"
>  #include "tcg_s390x.h"
>  #include "exec/helper-proto.h"
>  #include "exec/exec-all.h"
>  #include "exec/cpu_ldst.h"
>  
> +static void fill_buf_random(CPUS390XState *env, uintptr_t ra,
> +                            uint64_t *buf_reg, uint64_t *len_reg)
> +{
> +        uint8_t tmp[256];
> +        uint64_t len = *len_reg;
> +        int reg_len = 64;
> +
> +        if (!(env->psw.mask & PSW_MASK_64)) {
> +                len = (uint32_t)len;
> +                reg_len = (env->psw.mask & PSW_MASK_32) ? 32 : 24;
> +        }
> +
> +        while (len) {
> +                size_t block = MIN(len, sizeof(tmp));
> +
> +                qemu_guest_getrandom_nofail(tmp, block);
> +                for (size_t i = 0; i < block; ++i) {
> +                cpu_stb_data_ra(env, wrap_address(env, *buf_reg), tmp[i], ra);
> +                *buf_reg = deposit64(*buf_reg, 0, reg_len, *buf_reg + 1);
> +                        --*len_reg;
> +                }
> +                len -= block;
> +        }

Yeah, that's better, although kind of hard to read.

We could process one guest page at a time, similar to how we handle
target/s390x/tcg/mem_helper.c:access_memset and friends nowadays.
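
Something along these lines, as a completely untested sketch; the helper
name, the probe_access() fast path, and the page-chunking details are only
assumptions on my side, not a drop-in replacement:

static void fill_buf_random_paged(CPUS390XState *env, uintptr_t ra,
                                  uint64_t *buf_reg, uint64_t *len_reg)
{
    int mmu_idx = cpu_mmu_index(env, false);
    uint64_t len = *len_reg;
    int reg_len = 64;

    if (!(env->psw.mask & PSW_MASK_64)) {
        len = (uint32_t)len;
        reg_len = (env->psw.mask & PSW_MASK_32) ? 32 : 24;
    }

    while (len) {
        uint8_t tmp[TARGET_PAGE_SIZE];
        uint64_t addr = wrap_address(env, *buf_reg);
        /* Stay within the current guest page for this chunk. */
        uint64_t in_page = TARGET_PAGE_SIZE - (addr & ~TARGET_PAGE_MASK);
        size_t block = MIN(len, in_page);
        void *host;

        qemu_guest_getrandom_nofail(tmp, block);
        /* Translate the whole chunk once; NULL means no direct RAM mapping. */
        host = probe_access(env, addr, block, MMU_DATA_STORE, mmu_idx, ra);
        if (host) {
            memcpy(host, tmp, block);
        } else {
            /* Fall back to byte-wise stores, e.g. for MMIO. */
            for (size_t i = 0; i < block; i++) {
                cpu_stb_data_ra(env, addr + i, tmp[i], ra);
            }
        }
        /* Note: this glosses over a 24/31-bit address wrap inside a chunk. */
        *buf_reg = deposit64(*buf_reg, 0, reg_len, *buf_reg + block);
        *len_reg -= block;
        len -= block;
    }
}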

But I won't force you to do that ;)

This here is good enough for now, with room for improvement regarding
efficiency.

I did not review the doc in detail once again; maybe I'll get to that later
this week.

-- 
Thanks,

David / dhildenb
