

From: Sergey Fedorov
Subject: Re: [Qemu-devel] [RFC PATCH 2/3] tcg: Add support for fence generation in x86 backend
Date: Wed, 25 May 2016 22:43:50 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 25/05/16 22:25, Alex Bennée wrote:
> Richard Henderson <address@hidden> writes:
>> On 05/24/2016 10:18 AM, Pranith Kumar wrote:
>>> Signed-off-by: Pranith Kumar <address@hidden>
>>> ---
>>>  tcg/i386/tcg-target.h     | 1 +
>>>  tcg/i386/tcg-target.inc.c | 9 +++++++++
>>>  tcg/tcg-opc.h             | 2 +-
>>>  tcg/tcg.c                 | 1 +
>>>  4 files changed, 12 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
>>> index 92be341..93ea42e 100644
>>> --- a/tcg/i386/tcg-target.h
>>> +++ b/tcg/i386/tcg-target.h
>>> @@ -100,6 +100,7 @@ extern bool have_bmi1;
>>>  #define TCG_TARGET_HAS_muls2_i32        1
>>>  #define TCG_TARGET_HAS_muluh_i32        0
>>>  #define TCG_TARGET_HAS_mulsh_i32        0
>>> +#define TCG_TARGET_HAS_fence            1
>> This has to be defined for all hosts.
>>
>> The default implementation should be a function call into tcg-runtime.c that
>> calls smp_mb().
> That would solve the problem of converting the various backends
> piecemeal - although obviously we should move to all backends having
> "native" support ASAP. However, by introducing expensive substitute
> functions we will slow down the translations as each front end is
> expanded to translate the target barrier ops.
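
As a point of reference, a minimal sketch of the generic fallback Richard
describes above: a tcg-runtime.c helper that simply issues a full host
barrier. The helper name and the DEF_HELPER_* declaration are assumptions
for illustration, not the code from this series:

    /* tcg/tcg-runtime.h (assumed declaration):
     *   DEF_HELPER_FLAGS_0(mb, TCG_CALL_NO_RWG, void)
     */

    /* tcg/tcg-runtime.c */
    #include "qemu/osdep.h"
    #include "qemu/atomic.h"   /* smp_mb() */

    /* Fallback for hosts without TCG_TARGET_HAS_fence: a full host
     * memory barrier stands in for the guest fence operation. */
    void helper_mb(void)
    {
        smp_mb();
    }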

I think it would be better not to defer native support for the operation:
it should be a relatively simple instruction in each backend. Otherwise we
could wind up deferring this indefinitely.
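
For the i386 backend that "native" support could be as small as emitting a
single mfence (0f ae f0) for the fence opcode. A rough sketch follows; the
function name and the INDEX_op_mb case are assumptions based on this RFC,
not the final code:

    /* tcg/i386/tcg-target.inc.c (sketch) */
    static void tcg_out_mb(TCGContext *s)
    {
        /* mfence: order all prior loads and stores on the host. */
        tcg_out8(s, 0x0f);
        tcg_out8(s, 0xae);
        tcg_out8(s, 0xf0);
    }

    /* ...and in tcg_out_op():
     *   case INDEX_op_mb:
     *       tcg_out_mb(s);
     *       break;
     */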

> Should we make the emitting of the function call/TCGop conditional on
> MTTCG being enabled? If we are running in round-robin mode there is no
> need to issue any fence operations.

Good idea.
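
Along those lines, a sketch of gating the emission in the front end on
MTTCG being enabled; the parallel_cpus flag and the tcg_gen_mb()/TCGBar
names are assumptions here, standing in for whatever "is MTTCG on?" test
the series settles on:

    /* tcg/tcg-op.c (sketch) */
    void tcg_gen_mb(TCGBar mb_type)
    {
        if (parallel_cpus) {
            /* MTTCG: guest barriers must be reflected on the host. */
            tcg_gen_op1(INDEX_op_mb, mb_type);
        }
        /* Round-robin mode: vCPUs never run concurrently, so no fence
         * needs to be emitted at all. */
    }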

Kind regards,
Sergey


