From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH] memory: make ram device read/write endian sensitive
Date: Tue, 21 Feb 2017 18:09:04 +0000

On 21 February 2017 at 16:34, Paolo Bonzini <address@hidden> wrote:
>
>
> On 21/02/2017 17:21, Alex Williamson wrote:
>> On Tue, 21 Feb 2017 14:46:55 +0800
>> Yongji Xie <address@hidden> wrote:
>>
>>> At the moment the ram device's memory regions are NATIVE_ENDIAN. This
>>> does not work on PPC64, because a VFIO PCI device is little-endian but
>>> PPC64 always defines the static macro TARGET_WORDS_BIGENDIAN.
>>>
>>> This fixes endianness for the ram device the same way as it was done
>>> for the VFIO region in commit 6758008e2c4e79fb6bd04fe8e7a41665fa583965.
>>
>> The referenced commit was to vfio code, where the endianness is fixed;
>> here you're modifying shared generic code to assume the same
>> endianness as vfio.  That seems wrong.
>
> Is the goal to have the same endianness as VFIO?  Or is it just a trick
> to ensure the number of swaps is always 0 or 2, so that they cancel out?
>
> In other words, would Yongji's patch just work if it used
> DEVICE_BIG_ENDIAN and beNN_to_cpu/cpu_to_beNN?  If so, then I think the
> patch is okay.

I think any patch that proposes adding or removing one or more
endianness-related swaps should come with a commit message that
states very clearly why this exact point in the stack is the
correct place to do these swaps, from a design point of view.
(This is so we can avoid the pitfall of putting in enough swaps
to cancel each other out, but at the wrong point in the design.)

In this instance I don't understand the patch. The ram_device
mem-ops are there to deal with memory regions backed by a
lump of RAM, right? Lumps of memory are always the endianness
of the host CPU by definition, so DEVICE_NATIVE_ENDIAN and
no swapping in the accessors seems like it ought to be the right
thing...
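
Peter's point can be sketched the same way: for a region backed by plain
host RAM, the accessor is just a host-order load, with no conversion in
either the accessor or (given DEVICE_NATIVE_ENDIAN) the memory core. The
code below is modelled loosely on QEMU's memory_region_ram_device_read();
RamDevice is a hypothetical stand-in for the real MemoryRegion/RAMBlock
plumbing, not an actual QEMU type.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct RamDevice {
    uint8_t *host;        /* host mapping of the RAM backing the region */
} RamDevice;

static uint64_t ram_device_read(void *opaque, uint64_t addr, unsigned size)
{
    RamDevice *rd = opaque;
    uint64_t data = ~(uint64_t)0;

    /* The bytes already live in host memory, so a host-order access is
     * "native endian" by definition: no leNN_to_cpu/beNN_to_cpu needed. */
    switch (size) {
    case 1: {
        uint8_t v;
        memcpy(&v, rd->host + addr, sizeof(v));
        data = v;
        break;
    }
    case 2: {
        uint16_t v;
        memcpy(&v, rd->host + addr, sizeof(v));
        data = v;
        break;
    }
    case 4: {
        uint32_t v;
        memcpy(&v, rd->host + addr, sizeof(v));
        data = v;
        break;
    }
    case 8:
        memcpy(&data, rd->host + addr, sizeof(data));
        break;
    }
    return data;
}

int main(void)
{
    uint8_t ram[8] = { 0x78, 0x56, 0x34, 0x12, 0xde, 0xad, 0xbe, 0xef };
    RamDevice rd = { .host = ram };

    /* Prints 0x12345678 on a little-endian host and 0x78563412 on a
     * big-endian one: the value tracks the host's own byte order. */
    printf("0x%08" PRIx64 "\n", ram_device_read(&rd, 0, 4));
    return 0;
}

With DEVICE_NATIVE_ENDIAN the guest sees the bytes exactly as they sit in
host RAM, which is right for RAM; the open question in this thread is what
should happen when the backing bytes have a fixed layout instead, as with
a little-endian VFIO PCI BAR.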

thanks
-- PMM