Re: [Qemu-devel] [PATCH 01/16] tcg: Merge opcode arguments into TCGOp


From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH 01/16] tcg: Merge opcode arguments into TCGOp
Date: Mon, 26 Jun 2017 07:55:36 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.0

On 06/26/2017 07:44 AM, Alex Bennée wrote:
>> -/* The layout here is designed to avoid crossing of a 32-bit boundary.
>> -   If we do so, gcc adds padding, expanding the size to 12.  */
>> +/* The layout here is designed to avoid crossing of a 32-bit boundary.  */

> This isn't correct now? Do we mean we now aim to be cache-line aligned?

I still avoid having a bitfield cross a 32-bit boundary. Perhaps I should not have trimmed quite so much from the comment.
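
For what it's worth, the padding behaviour is easy to demonstrate with a standalone program. The field names and widths below are made up for illustration; this is not the real TCGOp layout:

#include <stdio.h>

/* 8 + 10 + 14 bits fill the first 32-bit unit exactly; 16 + 16 fill the
   second.  GCC packs this into 8 bytes. */
struct fits {
    unsigned opc  : 8;
    unsigned a    : 10;
    unsigned b    : 14;
    unsigned prev : 16;
    unsigned next : 16;
};

/* Here the 16-bit field 'b' would straddle the first 32-bit unit, so GCC
   starts a new storage unit for it and the struct grows. */
struct straddles {
    unsigned opc  : 8;
    unsigned a    : 10;
    unsigned b    : 16;
    unsigned prev : 16;
    unsigned next : 16;
};

int main(void)
{
    printf("fits:      %zu bytes\n", sizeof(struct fits));       /* typically 8 */
    printf("straddles: %zu bytes\n", sizeof(struct straddles));  /* typically 12 */
    return 0;
}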

>>   /* Make sure operands fit in the bitfields above.  */
>>   QEMU_BUILD_BUG_ON(NB_OPS > (1 << 8));
>> -QEMU_BUILD_BUG_ON(OPC_BUF_SIZE > (1 << 10));
>> -QEMU_BUILD_BUG_ON(OPPARAM_BUF_SIZE > (1 << 14));
>> -
>> -/* Make sure that we don't overflow 64 bits without noticing.  */
>> -QEMU_BUILD_BUG_ON(sizeof(TCGOp) > 8);
>> +QEMU_BUILD_BUG_ON(OPC_BUF_SIZE > (1 << 16));

> OPC_BUF_SIZE is statically assigned; we don't seem to be taking notice
> of sizeof(TCGOp) anymore here. In fact OPC_BUF_SIZE is really
> MAX_TCG_OPS, right?

Yes, I dropped the sizeof(TCGOp) check. I could perhaps adjust it, but the expression would be a bit unwieldy, since it'll vary by host now.

I suppose you could think of OPC_BUF_SIZE as MAX_TCG_OPS, yes; that might be a decent renaming as well.
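
As a rough sketch of what those checks do (this is not QEMU's actual QEMU_BUILD_BUG_ON, and the values here are hypothetical), a C11 static assertion gives the same compile-time failure when a limit is exceeded:

#include <assert.h>

/* Illustration only: not QEMU's macro, and the constants are made up. */
#define BUILD_BUG_ON(cond) static_assert(!(cond), "compile-time check failed: " #cond)

enum { NB_OPS = 200, OPC_BUF_SIZE = 640 };   /* hypothetical values */

BUILD_BUG_ON(NB_OPS > (1 << 8));        /* the opc bitfield is 8 bits wide */
BUILD_BUG_ON(OPC_BUF_SIZE > (1 << 16)); /* op buffer indices must fit in 16 bits */

int main(void) { return 0; }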

> I see TCGArg is currently target_ulong. Is this because we never leak
> the host size details into generated code save for the statically
> assigned env_ptr?

You misread.  TCGArg is tcg_target_ulong, which is a host-specific value.
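
A minimal sketch of the distinction, with the width macros hard-coded here for the common case of a 32-bit guest on a 64-bit host (this is not QEMU's real headers):

#include <stdint.h>
#include <stdio.h>

/* Illustration only: pretend we build a 32-bit guest (TARGET_LONG_BITS = 32)
   on a 64-bit host (TCG_TARGET_REG_BITS = 64). */
#define TARGET_LONG_BITS    32
#define TCG_TARGET_REG_BITS 64

#if TARGET_LONG_BITS == 64
typedef uint64_t target_ulong;       /* guest-visible register width */
#else
typedef uint32_t target_ulong;
#endif

#if TCG_TARGET_REG_BITS == 64
typedef uint64_t tcg_target_ulong;   /* host register width */
#else
typedef uint32_t tcg_target_ulong;
#endif

/* TCGArg follows the host-sized type, so it can always hold a host pointer
   such as env or a pointer handed to a helper. */
typedef tcg_target_ulong TCGArg;

int main(void)
{
    printf("target_ulong:     %zu bytes\n", sizeof(target_ulong));      /* 4 here */
    printf("tcg_target_ulong: %zu bytes\n", sizeof(tcg_target_ulong));  /* 8 here */
    return 0;
}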

> I mention this because, in looking at modelling SIMD registers, I'm going
> to need to carry a host pointer around in TCG registers that can be passed
> to helpers and the like.

You'll always be able to do that.


r~


