Re: [PATCH v2 6/9] tcg: implement mirror mapped JIT for iOS

From: Richard Henderson
Subject: Re: [PATCH v2 6/9] tcg: implement mirror mapped JIT for iOS
Date: Mon, 19 Oct 2020 17:19:56 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

On 10/19/20 3:39 PM, Joelle van Dyne wrote:
>> Explicit cast may not be needed here, so this could be a macro if calling it
>> differently helps, or why don't you just use tcg_mirror_prr_rw directly
>> everywhere?
> There is quite a bit of code that depends on the tcg_insn_unit * type, such as
> *tcg_code_ptr_rw(s, code_ptr) = insn;
> and
> (tcg_code_ptr_rw(s, p))[i] = NOP;
> I think it's cleaner to not have to manually cast in every one of 30+
> instances of this. In v1, I used a macro but was told to use an inline
> function instead.
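A minimal sketch of the inline-function shape being discussed; the struct layout and field name here are hypothetical stand-ins, not the actual patch:

```c
#include <stdint.h>
#include <stddef.h>

typedef uint32_t tcg_insn_unit;   /* 32-bit insn words, as on most TCG hosts */

/* Hypothetical context: rw_offset is the displacement from the executable
 * (RX) mapping to the writable (RW) mirror; zero when there is no split. */
typedef struct TCGContext {
    ptrdiff_t rw_offset;
} TCGContext;

/* Returning tcg_insn_unit * (rather than void *) lets callers write
 *   *tcg_code_ptr_rw(s, code_ptr) = insn;
 * without repeating a cast at each of the 30+ call sites. */
static inline tcg_insn_unit *tcg_code_ptr_rw(const TCGContext *s,
                                             const void *rx_ptr)
{
    return (tcg_insn_unit *)((uintptr_t)rx_ptr + s->rw_offset);
}
```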


>> Is that !defined or are you missing an implementation and #else here?
> No, `flush_dcache_range` is only needed when mirror mapped (after
> writing to the RW mirror). Now there is no iOS compatible compiler for
> any other arch than x86 and ARM. However, in the slim chance that
> Apple decides to change arch again in the future and moves to RISC-V
> or something, then we get a nice compiler error.

*shrug* As opposed to the nice compiler error you get for a missing function.

That said, I think __builtin___clear_cache() may be the target-independent
runtime function that you need.  Both GCC and LLVM support this, and I'd be
surprised if that doesn't carry through to iOS.
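For illustration, the builtin takes the start and end of the range that was just written; the wrapper name here is made up:

```c
#include <stddef.h>

/* Flush the given range after writing code into the RW mirror, so the
 * RX alias observes the new instructions.  __builtin___clear_cache is
 * supported by both GCC and Clang; on x86, where instruction and data
 * caches are coherent, it expands to (nearly) nothing. */
static void flush_code(void *rw_start, size_t len)
{
    __builtin___clear_cache((char *)rw_start, (char *)rw_start + len);
}
```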

>> Maybe this patch could be split up some more, making the RW offset
>> handling and cache management separate patches even if they don't work
>> separately just to make it easier to review.
> I can probably do that for v3 but imo most of the LOC here is because
> the same change has to be done to every TCG target. No matter how you
> split up the patches, it will look like a lot of changes.

It occurs to me that the majority of the code changes in patches 5 and 6 are
due to your choice that code_gen_buffer points to the RX copy and not the RW copy.

Swap the two, and instead have an inline function that produces the executable
pointer from the RW pointer, and suddenly there are far fewer changes.
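Under that inverted arrangement, the helper would run in the other direction; again a sketch with a hypothetical field name:

```c
#include <stddef.h>

/* Hypothetical context: rx_offset is the displacement from the RW buffer
 * to its executable alias; zero on hosts without a split mapping. */
typedef struct TCGContext {
    ptrdiff_t rx_offset;
} TCGContext;

/* With code_gen_buffer pointing at the RW mapping, only the few places
 * that publish an executable address need to call this. */
static inline const void *tcg_exec_ptr(const TCGContext *s, void *rw_ptr)
{
    return (const char *)rw_ptr + s->rx_offset;
}
```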

For the most part, tcg/$cpu/ generates pc-relative code, so it need not
consider the absolute address.  There are a few exceptions including,
obviously, 32-bit x86.  But the number of places that occurs is small.

There's the assignment to tb->tc.ptr, of course, and a few similar places
where the executable address itself is recorded.

In any case, each of these changes (generic, per tcg backend) can occur before
you finally add a non-zero displacement that actually separates the RX and RW
mappings.

Finally, I'd like to have this implemented on Linux as well, or I'm afraid the
feature will bit-rot.  This can be trivially done by either (1)
MREMAP_DONTUNMAP or (2) mapping from POSIX shared memory instead of MAP_ANON so
that you can map the same memory twice.  Thus virtually all of the ifdefs
should go away.
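Option (2) can be sketched as follows on Linux, using memfd_create (glibc 2.27+) as the anonymous shared-memory object; error handling is abbreviated and the function name is invented:

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>
#include <stddef.h>

/* Map the same shared-memory object twice: once writable (RW) and once
 * executable (RX).  Returns the RW view, stores the RX view in *rx_out. */
static void *alloc_mirror(size_t size, void **rx_out)
{
    int fd = memfd_create("tcg-jit", 0);
    if (fd < 0) {
        return NULL;
    }
    if (ftruncate(fd, size) < 0) {
        close(fd);
        return NULL;
    }
    void *rw = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    void *rx = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
    close(fd);  /* the mappings keep the object alive */
    if (rw == MAP_FAILED || rx == MAP_FAILED) {
        return NULL;
    }
    *rx_out = rx;
    return rw;
}
```

A store through the RW view is immediately visible through the RX view, since both alias the same physical pages.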

