From: Peter Maydell
Subject: Re: [Qemu-devel] [PULL for-2.0 2/7] raven: Implement non-contiguous I/O region
Date: Mon, 7 Apr 2014 22:21:44 +0100

On 7 April 2014 21:40, Andreas Färber <address@hidden> wrote:
> Am 07.04.2014 21:32, schrieb Andreas Färber:
>> I tested .bswap = false - that fixes ppc64 host but breaks x86_64 host.
>
> Same results for the following patch (x86_64 broken, ppc64 fixed):
>
> diff --git a/hw/pci-host/prep.c b/hw/pci-host/prep.c
> index d3e746c..fd3956f 100644
> --- a/hw/pci-host/prep.c
> +++ b/hw/pci-host/prep.c
> @@ -177,7 +177,7 @@ static void raven_io_write(void *opaque, hwaddr addr,
>  static const MemoryRegionOps raven_io_ops = {
>      .read = raven_io_read,
>      .write = raven_io_write,
> -    .endianness = DEVICE_LITTLE_ENDIAN,
> +    .endianness = DEVICE_NATIVE_ENDIAN,
>      .impl.max_access_size = 4,
>      .valid.unaligned = true,
>  };

Unsurprisingly: both of those changes add or remove an extra
endianness swap uniformly for all host systems, so they merely
invert which host appears to work.
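As a rough standalone model of the rule (this is not QEMU's actual
code, but it mirrors what memory_region_wrong_endianness() in memory.c
decides): an access is byte-swapped iff the device's declared
endianness disagrees with the *target* byte order; the host byte order
never enters into it.

/*
 * Standalone sketch (not QEMU code) of the memory API's swap rule:
 * swap iff the declared device endianness disagrees with the target
 * (guest) byte order.  Host endianness plays no part.
 */
#include <stdbool.h>
#include <stdio.h>

enum device_endian {
    DEVICE_NATIVE_ENDIAN,
    DEVICE_BIG_ENDIAN,
    DEVICE_LITTLE_ENDIAN,
};

static bool swaps(enum device_endian e, bool target_is_big_endian)
{
    if (e == DEVICE_NATIVE_ENDIAN) {
        return false;           /* native-endian devices never swap */
    }
    return (e == DEVICE_LITTLE_ENDIAN) == target_is_big_endian;
}

int main(void)
{
    /* PReP is a ppc (big-endian) target: */
    printf("LITTLE_ENDIAN: swap=%d\n", swaps(DEVICE_LITTLE_ENDIAN, true));
    printf("NATIVE_ENDIAN: swap=%d\n", swaps(DEVICE_NATIVE_ENDIAN, true));
    return 0;
}

For a big-endian target like this one, going from DEVICE_LITTLE_ENDIAN
to DEVICE_NATIVE_ENDIAN removes a swap on every host alike, which is
exactly the "x86_64 broken, ppc64 fixed" flip you observed.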

What you're looking for is the point in the chain where
we do something that differs depending on the
endianness of the host. You could stick in debug printfs,
or just poke around in gdb, to find out whether, for instance,
the values being passed into the raven_io_read/write
functions are different on the two hosts: if so, the
problem is somewhere further up the call stack...
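For instance, something like this at the top of each callback (just a
sketch: these are the standard MemoryRegionOps callback signatures,
HWADDR_PRIx comes from exec/hwaddr.h and PRIx64 from <inttypes.h>;
the real function bodies in prep.c continue below the printfs):

#include <inttypes.h>
#include <stdio.h>

static uint64_t raven_io_read(void *opaque, hwaddr addr, unsigned size)
{
    fprintf(stderr, "raven_io_read:  addr=0x%" HWADDR_PRIx " size=%u\n",
            addr, size);
    /* ...existing read logic unchanged... */
}

static void raven_io_write(void *opaque, hwaddr addr, uint64_t val,
                           unsigned size)
{
    fprintf(stderr, "raven_io_write: addr=0x%" HWADDR_PRIx
            " size=%u val=0x%" PRIx64 "\n", addr, size, val);
    /* ...existing write logic unchanged... */
}

If the addr/size/val values logged there already differ between the
two hosts, the bug is upstream of these functions; if they match, the
raven code itself is the next suspect.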

thanks
-- PMM


