
From: Pedro Alves
Subject: [lwip-users] about alignment issues.
Date: Thu, 13 Apr 2006 19:05:05 +0100
User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)

Hi all,

Is there a reason the structures that need forced alignment in lwIP aren't declared something like this?

take pbuf.c for example:

typedef unsigned long LWIP_ALIGN_TYPE;

struct pool_pbuf {
   struct pbuf pbuf; /* "inherit" a pbuf. C guarantees the first member has the
                        same address as the container: for pool_pbuf b,
                        &b == (void *)&b.pbuf, so (struct pbuf *)&b is OK. */
   LWIP_ALIGN_TYPE force_align;
   u8_t payload_buf[MEM_ALIGN_SIZE(PBUF_POOL_BUFSIZE)];
};

static struct pool_pbuf mem[PBUF_POOL_SIZE];

Then in pbuf_init:

 u16_t i;
 struct pbuf *p;

 pbuf_pool = (struct pbuf *)mem; /* already aligned */

#if PBUF_STATS
 lwip_stats.pbuf.avail = PBUF_POOL_SIZE;
#endif /* PBUF_STATS */

 /* Set up ->next pointers to link the pbufs of the pool together */
 for (i = 0; i < PBUF_POOL_SIZE; ++i) {
   p = (struct pbuf *)&mem[i];
   p->next = (struct pbuf *)&mem[i + 1];
   p->len = p->tot_len = MEM_ALIGN_SIZE(PBUF_POOL_BUFSIZE);
   p->payload = mem[i].payload_buf;
   p->flags = PBUF_FLAG_POOL;
 }

 /* The ->next pointer of the last pbuf is NULL to indicate that there
    are no more pbufs in the pool */
 mem[PBUF_POOL_SIZE - 1].pbuf.next = NULL;

 pbuf_pool_alloc_lock = 0;
 pbuf_pool_free_lock = 0;
 pbuf_pool_free_sem = sys_sem_new(1);

and where an offset is requested:
p->payload = MEM_ALIGN(p->payload_buf + offset);

If there are compilers that don't support anonymous unions, we can use simple macros and named unions instead.

I never tested this, but I am sure this would save a few bytes of RAM on many architectures. Right now we allocate more than needed.
For example:
u8_t mem[MEM_ALIGNMENT - 1 + PBUF_POOL_SIZE * MEM_ALIGN_SIZE(PBUF_POOL_BUFSIZE + sizeof(struct pbuf))];

allocates MEM_ALIGNMENT - 1 bytes too much if mem ends up already aligned.

What do you think?

Pedro Alves
