From: Thomas Huth
Subject: Re: [Qemu-devel] [qemu-s390x] [PATCH v2 2/3] hw/s390x/css: Remove QEMU_PACKED from struct SenseId
Date: Wed, 26 Sep 2018 11:09:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 2018-09-26 10:43, David Hildenbrand wrote:
> On 26/09/2018 10:36, Thomas Huth wrote:
>> On 2018-09-26 10:17, David Hildenbrand wrote:
>>> On 26/09/2018 10:09, Thomas Huth wrote:
>>>> On 2018-09-26 10:07, David Hildenbrand wrote:
>>>>> On 26/09/2018 10:04, David Hildenbrand wrote:
>>>>>> On 26/09/2018 09:38, Thomas Huth wrote:
>>>>>>> The uint16_t member cu_type of struct SenseId is not naturally aligned,
>>>>>>> and since the struct is marked with QEMU_PACKED, this can lead to
>>>>>>> unaligned memory accesses - which does not work on architectures like
>>>>>>> Sparc. Thus remove the QEMU_PACKED here and rather copy the struct
>>>>>>> byte by byte when we do copy_sense_id_to_guest().
>>>>>>>
>>>>>>> Signed-off-by: Thomas Huth <address@hidden>
>>>>>>> ---
>>>>>>>  hw/s390x/css.c         | 33 +++++++++++++++++----------------
>>>>>>>  include/hw/s390x/css.h |  2 +-
>>>>>>>  2 files changed, 18 insertions(+), 17 deletions(-)
>>>>>>>
>>>>>>> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
>>>>>>> index 5a9fe45..0e51b85 100644
>>>>>>> --- a/hw/s390x/css.c
>>>>>>> +++ b/hw/s390x/css.c
>>>>>>> @@ -750,20 +750,20 @@ static void sch_handle_halt_func(SubchDev *sch)
>>>>>>>  
>>>>>>>  }
>>>>>>>  
>>>>>>> -static void copy_sense_id_to_guest(SenseId *dest, SenseId *src)
>>>>>>> +static void copy_sense_id_to_guest(uint8_t *dest, SenseId *src)
>>>>>>>  {
>>>>>>>      int i;
>>>>>>>  
>>>>>>> -    dest->reserved = src->reserved;
>>>>>>> -    dest->cu_type = cpu_to_be16(src->cu_type);
>>>>>>> -    dest->cu_model = src->cu_model;
>>>>>>> -    dest->dev_type = cpu_to_be16(src->dev_type);
>>>>>>> -    dest->dev_model = src->dev_model;
>>>>>>> -    dest->unused = src->unused;
>>>>>>> -    for (i = 0; i < ARRAY_SIZE(dest->ciw); i++) {
>>>>>>> -        dest->ciw[i].type = src->ciw[i].type;
>>>>>>> -        dest->ciw[i].command = src->ciw[i].command;
>>>>>>> -        dest->ciw[i].count = cpu_to_be16(src->ciw[i].count);
>>>>>>> +    dest[0] = src->reserved;
>>>>>>> +    stw_be_p(dest + 1, src->cu_type);
>>>>>>> +    dest[3] = src->cu_model;
>>>>>>> +    stw_be_p(dest + 4, src->dev_type);
>>>>>>> +    dest[6] = src->dev_model;
>>>>>>> +    dest[7] = src->unused;
>>>>>>> +    for (i = 0; i < ARRAY_SIZE(src->ciw); i++) {
>>>>>>> +        dest[8 + i * 4] = src->ciw[i].type;
>>>>>>> +        dest[9 + i * 4] = src->ciw[i].command;
>>>>>>> +        stw_be_p(dest + 10 + i * 4, src->ciw[i].count);
>>>>>>
>>>>>>
>>>>>> Not really a fan of this, as we sacrifice readability because of a
>>>>>> single unaligned member. What about converting only the unaligned
>>>>>> members (e.g. cu_type) from uint16_t to uint8_t[2], and adding a
>>>>>> comment on why they are split? Then the structure is naturally
>>>>>> aligned even without QEMU_PACKED.
>>>>>>
>>>>>> We only have to fix up the places that check cu_type.
>>>>>>
>>>>>
>>>>> Just realized this was basically suggested by Peter. If it were as
>>>>> simple as splitting VMSTATE_UINT16 into two VMSTATE_UINT8 or
>>>>> similar, I would prefer that.
>>>>
>>>> It's not that simple - it would break migration from older versions
>>>> of QEMU due to endianness issues.
>>>
>>> Migration between hosts of different endianness (e.g. big to little)
>>> is not supported as far as I remember. But my head always hurts when
>>> thinking about endianness conversions, so I am pretty sure I am
>>> missing something here.
>>
>> I was not talking about migration between hosts with different
>> endianness, but e.g. migration from an x86 host to an x86 host. If you
>> want to send 0x1234, that would be 0x34 0x12 when using a 16-bit value,
>> but if you break it up into high and low bytes, it's 0x12 0x34 instead.
>>
>> Hmm, actually the migration code seems to properly convert 16-bit values
>> to network byte order, so maybe this could even work. But honestly, I
>> still think we should avoid QEMU_PACKED as much as possible and rather
>> fill in the memory in copy_sense_id_to_guest() via a byte array here. As
>> we've seen now, QEMU_PACKED can easily result in non-portable code, so
>> even if copy_sense_id_to_guest() looks a little bit uglier now than
>> before, it's certainly the better and more portable way to do this.
>>
>>  Thomas
>>
> 
> IMHO something like that looks much better (hope I am not messing up
> cu_type)
[...]
> diff --git a/include/hw/s390x/css.h b/include/hw/s390x/css.h
> index 9da5912921..592640f4dd 100644
> --- a/include/hw/s390x/css.h
> +++ b/include/hw/s390x/css.h
> @@ -38,6 +38,17 @@ typedef struct CIW {
>      uint16_t count;
>  } QEMU_PACKED CIW;
> 
> +/* Same as SenseId, but naturally packed (what the guest wants) */
> +typedef struct SenseIdPacked {
> +    uint8_t reserved;
> +    uint8_t cu_type[2];      /* split to keep natural alignment */
> +    uint8_t cu_model;
> +    uint16_t dev_type;
> +    uint8_t dev_model;
> +    uint8_t unused;
> +    CIW ciw[MAX_CIWS];
> +} SenseIdPacked;
> +
>  typedef struct SenseId {
>      /* common part */
>      uint8_t reserved;        /* always 0x'FF' */
> @@ -48,7 +59,7 @@ typedef struct SenseId {
>      uint8_t unused;          /* padding byte */
>      /* extended part */
>      CIW ciw[MAX_CIWS];       /* variable # of CIWs */
> -} QEMU_PACKED SenseId;
> +} SenseId;

... but then we have to define the SenseId struct twice, which is IMHO
also ugly.
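
To make the underlying problem explicit once more - a minimal,
cut-down sketch, not the actual QEMU structs, just to show the
mechanics:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct sense_id_demo {             /* hypothetical cut-down struct */
    uint8_t  reserved;
    uint16_t cu_type;              /* lands at offset 1 when packed */
} __attribute__((packed));

int main(void)
{
    struct sense_id_demo id = { 0xff, 0 };
    uint16_t *p = &id.cu_type;     /* the pointer silently loses the
                                    * "packed" information (newer GCC
                                    * at least warns about this) */

    printf("offsetof(cu_type) = %zu\n",
           offsetof(struct sense_id_demo, cu_type));

    *p = 0x1234;                   /* plain 16-bit store to an odd
                                    * address - SIGBUS on strict-
                                    * alignment hosts like Sparc */
    return 0;
}

Direct member accesses are rewritten by the compiler into safe
byte-wise accesses, but as soon as a pointer to the member escapes
(and that happens quickly in real code), the unaligned layout leaks
into code that has no idea the struct was packed.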

I know, packed structs look more readable in many cases, but from my
experience with various projects in the past, they easily lead to
portability issues unless you know very well what you are doing (e.g.
because your code will only run on one well-defined target
architecture). In a project like QEMU, which is supposed to run on a
great variety of host architectures, you're better off avoiding packed
structs and transferring the bytes individually when there is a need
for it (like copying structs to the guest).
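
The byte-wise pattern is cheap, too. Something like this - a generic
sketch; in QEMU, the stw_be_p() helper used in the patch does
essentially the same thing, safely, for unaligned pointers:

#include <stdint.h>

/* store a 16-bit value in big-endian byte order, one byte at a
 * time - no alignment requirement on dest at all, and the byte
 * order is explicit in the code instead of implicit in the host */
static inline void store_be16_bytewise(uint8_t *dest, uint16_t val)
{
    dest[0] = val >> 8;
    dest[1] = val & 0xff;
}

That works for any destination address and on hosts of any
endianness, e.g. store_be16_bytewise(dest + 1, src->cu_type) instead
of a store through a packed member - which is exactly what we want
when filling in guest memory.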

 Thomas


