
Re: [PATCH v2 6/9] tcg: implement mirror mapped JIT for iOS


From: Joelle van Dyne
Subject: Re: [PATCH v2 6/9] tcg: implement mirror mapped JIT for iOS
Date: Sun, 25 Oct 2020 12:46:19 -0700

On Mon, Oct 19, 2020 at 5:20 PM Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 10/19/20 3:39 PM, Joelle van Dyne wrote:
> >> Explicit cast may not be needed here, so this could be a macro if calling
> >> it differently helps; or why don't you just use tcg_mirror_ptr_rw directly
> >> everywhere?
> >
> > There is quite a bit of code that depends on the tcg_insn_unit * type, such as
> >
> > *tcg_code_ptr_rw(s, code_ptr) = insn;
> >
> > and
> >
> > (tcg_code_ptr_rw(s, p))[i] = NOP;
> >
> > I think it's cleaner not to have to cast manually in each of the 30+
> > instances of this. In v1, I used a macro but was told to use an inline
> > function instead.
>
> Yep.
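For context, the helper is essentially just an offset-adding inline
function that preserves the tcg_insn_unit * type. A minimal sketch (the
offset field name here is illustrative, not the actual struct member):

    static inline tcg_insn_unit *tcg_code_ptr_rw(TCGContext *s, void *rx_ptr)
    {
        /* Translate a pointer into the RX mapping to the same spot in the
         * RW mirror; 'code_rw_mirror_off' is a hypothetical field that is
         * 0 when no mirror mapping is in use. */
        return (tcg_insn_unit *)((char *)rx_ptr + s->code_rw_mirror_off);
    }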
>
> >> Is that !defined or are you missing an implementation and #else here?
> > No, `flush_dcache_range` is only needed when mirror-mapped (after
> > writing to the RW mirror). Currently there is no iOS-compatible compiler
> > for any arch other than x86 and ARM. However, on the slim chance that
> > Apple changes arch again in the future and moves to RISC-V or
> > something, we would get a nice compiler error.
>
> *shrug* As opposed to the nice compiler error you get for a missing function
> declaration?
>
> That said, I think __builtin___clear_cache() may be the target-independent
> runtime function that you need.  Both GCC and LLVM support this, and I'd be
> surprised if that doesn't carry through to iOS.
You would think so, but unfortunately iOS likes to be "special".
See: https://gitlab.haskell.org/ghc/ghc/-/issues/8561
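
For reference, the Darwin-specific path ends up looking something like
this sketch, which uses sys_dcache_flush() from <libkern/OSCacheControl.h>
on Apple platforms and falls back to the portable builtin elsewhere:

    #include <stdint.h>
    #if defined(__APPLE__)
    #include <libkern/OSCacheControl.h>
    #endif

    static inline void flush_dcache_range(uintptr_t start, uintptr_t stop)
    {
    #if defined(__APPLE__)
        /* Darwin's supported cache-maintenance primitive; the builtin
         * below has historically been problematic there (see the GHC
         * issue linked above). */
        sys_dcache_flush((void *)start, stop - start);
    #else
        /* Portable fallback; both GCC and Clang provide this. */
        __builtin___clear_cache((char *)start, (char *)stop);
    #endif
    }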

>
> >> Maybe this patch could be split up some more, making the RW offset
> >> handling and cache management separate patches, even if they don't work
> >> separately, just to make it easier to review.
> >
> > I can probably do that for v3, but IMO most of the LOC here is because
> > the same change has to be made to every TCG target. No matter how you
> > split up the patches, it will look like a lot of changes.
>
> It occurs to me that the majority of the code changes in patches 5 and 6 are
> due to your choice that code_gen_buffer points to the RX copy and not the RW 
> copy.
>
> Swap the two, and instead have an inline function that produces the executable
> pointer from the rw pointer, and suddenly there are very much fewer changes
> required.
In the original incarnation of this code from a year ago, this was the
chosen path, but we ran into other issues that unfortunately were not
recorded anywhere. I think one issue is that the mirror offset has to
be stored somewhere it can easily be retrieved; otherwise, code and
structs have to be modified to carry a copy of the value. If we default
to the RW copy, there may be fewer changed LOC, but the changes will be
more significant. This version has also been in use by iOS users for
the past year or so without issue, and a major change in the logic
would probably require a lot more testing.
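
To make the trade-off concrete, the suggested inversion would look
roughly like this (sketch only; 'rx_displacement' is an assumed name
for wherever the offset ends up being stored):

    /* code_gen_buffer points at the RW mapping; the executable address
     * is derived on demand, e.g. tb->tc.ptr = tcg_ptr_rx(s, buf). */
    static inline const void *tcg_ptr_rx(TCGContext *s, const void *rw_ptr)
    {
        return (const char *)rw_ptr + s->rx_displacement;
    }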

>
> For the most part, tcg/$cpu/ generates pc-relative code, so it need not
> consider the absolute address.  There are a few exceptions including,
> obviously, 32-bit x86.  But the number of places where that occurs is small.
>
> There's the assignment to tb->tc.ptr of course, and
> tcg_ctx.code_gen_prologue/epilogue.
>
> In any case, each of these changes (generic, per tcg backend) can occur before
> you finally add a non-zero displacement that actually separates the RX and RW
> mappings.
>
> Finally, I'd like to have this implemented on Linux as well, or I'm afraid
> the feature will bit-rot.  This can be trivially done by either (1)
> MREMAP_DONTUNMAP or (2) mapping from POSIX shared memory instead of
> MAP_ANON so that you can map the same memory twice.  Thus virtually all
> of the ifdefs should go away.
Okay, I will do that.
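
For the record, option (2) can be sketched on Linux roughly as follows
(memfd_create() requires Linux 3.17+ and glibc 2.27+; the function and
buffer names here are illustrative):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Map the same shared-memory object twice: once RW, once RX. */
    static int alloc_jit_mirror(size_t size, void **rw, void **rx)
    {
        int fd = memfd_create("tcg-jit", 0);
        if (fd < 0) {
            return -1;
        }
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return -1;
        }
        *rw = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        *rx = mmap(NULL, size, PROT_READ | PROT_EXEC, MAP_SHARED, fd, 0);
        close(fd);  /* the mappings keep the memory alive */
        return (*rw == MAP_FAILED || *rx == MAP_FAILED) ? -1 : 0;
    }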

-j

>
>
> r~


