From: Samuel Thibault
Subject: [Qemu-devel] [PULLv2 03/32] slirp: Don't mark struct ipq or struct ipasfrag as packed
Date: Tue, 5 Feb 2019 18:58:58 +0200

From: Peter Maydell <address@hidden>

There is no reason to mark the struct ipq and struct ipasfrag as
packed: they are naturally aligned anyway, and are not representing
any on-the-wire packet format.  Indeed they vary in size depending on
the size of pointers on the host system, because the 'struct qlink'
members include 'void *' fields.

Dropping the 'packed' annotation fixes clang -Waddress-of-packed-member
warnings and probably lets the compiler generate better code too.
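[Illustrative aside, not part of the patch: the warning class being silenced can be
reproduced with a minimal stand-alone struct; the names 'demo' and 'value_ptr' below
are made up for this sketch.]

    /* Hypothetical example, independent of the slirp code: clang emits
     * -Waddress-of-packed-member when the address of a member of a packed
     * struct is taken, since the resulting pointer may be under-aligned. */
    #include <stdint.h>

    struct demo {
        uint8_t  tag;
        uint32_t value;
    } __attribute__((packed));

    uint32_t *value_ptr(struct demo *d)
    {
        return &d->value;   /* warning: taking address of packed member */
    }

Dropping the packed attribute on a naturally aligned struct removes both the warning
and the possibility of misaligned member access.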

The only thing we do care about in the layout of the struct is
that the frag_link matches up with the ipf_link of the struct
ipasfrag, as documented in the comment on that struct; assert
at build time that this is the case.
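[Illustrative aside: the same build-time check, sketched with plain C11 static_assert
instead of QEMU's QEMU_BUILD_BUG_ON macro; the *_demo structs are simplified stand-ins
for the real slirp definitions.]

    /* Simplified stand-ins, for illustration only. */
    #include <assert.h>
    #include <stddef.h>

    struct qlink {
        void *next, *prev;
    };

    struct ipq_demo {
        struct qlink frag_link;   /* must line up with ipasfrag_demo.ipf_link */
        struct qlink ip_link;
    };

    struct ipasfrag_demo {
        struct qlink ipf_link;
    };

    /* Compilation fails if the two members ever drift apart. */
    static_assert(offsetof(struct ipq_demo, frag_link) ==
                  offsetof(struct ipasfrag_demo, ipf_link),
                  "frag_link and ipf_link must share an offset");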

Signed-off-by: Peter Maydell <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
Signed-off-by: Samuel Thibault <address@hidden>
---
 slirp/ip.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/slirp/ip.h b/slirp/ip.h
index 243b6c8b24..20614f3b53 100644
--- a/slirp/ip.h
+++ b/slirp/ip.h
@@ -217,7 +217,7 @@ struct ipq {
        uint8_t ipq_p;                  /* protocol of this fragment */
        uint16_t        ipq_id;                 /* sequence id for reassembly */
        struct  in_addr ipq_src,ipq_dst;
-} QEMU_PACKED;
+};
 
 /*
  * Ip header, when holding a fragment.
@@ -227,7 +227,10 @@ struct ipq {
 struct ipasfrag {
        struct qlink ipf_link;
        struct ip ipf_ip;
-} QEMU_PACKED;
+};
+
+QEMU_BUILD_BUG_ON(offsetof(struct ipq, frag_link) !=
+                  offsetof(struct ipasfrag, ipf_link));
 
 #define ipf_off      ipf_ip.ip_off
 #define ipf_tos      ipf_ip.ip_tos
-- 
2.20.1



