qemu-devel

From: Luis Fernando Fujita Pires
Subject: RE: [PATCH v3 19/28] tcg: Tidy split_cross_256mb
Date: Wed, 9 Jun 2021 14:59:02 +0000

From: Richard Henderson <richard.henderson@linaro.org>
> Return output buffer and size via output pointer arguments, rather than
> returning size via tcg_ctx->code_gen_buffer_size.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  tcg/region.c | 15 +++++++--------
>  1 file changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/tcg/region.c b/tcg/region.c
> index b44246e1aa..652f328d2c 100644
> --- a/tcg/region.c
> +++ b/tcg/region.c
> @@ -467,7 +467,8 @@ static inline bool cross_256mb(void *addr, size_t size)
>  /* We weren't able to allocate a buffer without crossing that boundary,
>     so make do with the larger portion of the buffer that doesn't cross.
>     Returns the new base of the buffer, and adjusts code_gen_buffer_size.  */ 
> -static inline void *split_cross_256mb(void *buf1, size_t size1)
> +static inline void split_cross_256mb(void **obuf, size_t *osize,
> +                                     void *buf1, size_t size1)

The comment needs to be updated, now that we're no longer adjusting
code_gen_buffer_size in here.
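
Maybe something along these lines (just a suggestion, feel free to reword;
obuf/osize being the new output parameters from this patch):

/* We weren't able to allocate a buffer without crossing that boundary,
   so make do with the larger portion of the buffer that doesn't cross.
   Returns the new base and size of the buffer via obuf and osize.  */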


> @@ -583,8 +583,7 @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
>              /* fallthru */
>          default:
>              /* Split the original buffer.  Free the smaller half.  */
> -            buf2 = split_cross_256mb(buf, size);
> -            size2 = tcg_ctx->code_gen_buffer_size;
> +            split_cross_256mb(&buf2, &size2, buf, size);

This will be fixed by patch 21 (tcg: Allocate code_gen_buffer into struct 
tcg_region_state), but shouldn't we update tcg_ctx->code_gen_buffer_size here?
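
I.e., something like this right after the call (untested, just to illustrate
what I mean):

            split_cross_256mb(&buf2, &size2, buf, size);
            tcg_ctx->code_gen_buffer_size = size2;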

Other than that,

Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>

--
Luis Pires
Instituto de Pesquisas ELDORADO
Aviso Legal - Disclaimer <https://www.eldorado.org.br/disclaimer.html>


