Re: [lwip-devel] netifapi limitations

From: Joel Cunningham
Subject: Re: [lwip-devel] netifapi limitations
Date: Mon, 19 Mar 2018 17:07:20 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

On 03/17/2018 11:16 AM, Jonathan Larmour wrote:
> [ Gah, stupid Enigmail managed to muck up the formatting of the reply to
> Sylvain, as well as sending a mail prematurely. Sorry about the previous
>
> Thanks for the reply Joel.

> On 16/03/18 13:48, Joel Cunningham wrote:
>> On 03/15/2018 09:23 PM, Jonathan Larmour wrote:
>>> I've noticed a limitation in the netifapi (not the netif API!) : [snip]
>> This is definitely a limitation that has been discussed before on the list
>> (see https://savannah.nongnu.org/task/index.php?14724). To me, the netif
>> and netifapi APIs are really only useful from code that manages a struct
>> netif (owns the allocation/reference, so it can safely pass pointers). I
>> think of this as link layer management code, which typically adds/removes
>> the netif and applies configuration settings, not network application code
>> that lives above the
> But it's potentially not just application code, but also higher layer
> protocol code. Think of something like mDNS, which ought to sit on top of
> lwIP, but would need access to data which at the moment is only accessible
> in a netif.
>
> Now, I'm not intending at this point to provide an all-singing all-dancing
> abstraction and API to do that, but it would be good to be making steps in
> the right direction. At the moment, having the netifapi take
> 'struct netif *' seems like it would store up problems for the future, in
> which case it's better to address it sooner and switch to an index before
> things become too hard to change.
I'm not sure I'm for changing netifapi, since code managing the struct netif
can safely use it as-is.  I think I'd be more in favor of a new set of APIs.
>> I think for network application code, additional APIs are needed to do
>> things like get address information or list the interfaces.  lwIP doesn't
>> have any of the normal things like getifaddrs, and in my projects I've got
>> custom code to do these lookups in a safe way.
> It's exactly the sort of thing netifapi should be able to do though, maybe
> not immediately, but having the right sort of API for the future.
I agree we should have APIs that do this, but I don't have strong feelings
that it should be netifapi.  I will eventually want to implement things at
the socket level so I can re-enable interface code in some common OSS
applications that I've currently had to disable (use of getifaddrs, for
example).
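For context, the getifaddrs pattern such applications rely on is the standard
POSIX-style call below (plain Linux/BSD API, nothing lwIP-specific;
`list_interfaces` is just an illustrative wrapper name):

```c
#include <ifaddrs.h>
#include <stdio.h>
#include <sys/socket.h>

/* Enumerate every interface/address pair the way portable applications
 * typically do; lwIP has no equivalent call today. */
int list_interfaces(void)
{
    struct ifaddrs *ifa_list;

    if (getifaddrs(&ifa_list) != 0) {
        return -1;                      /* errno is set */
    }
    for (struct ifaddrs *ifa = ifa_list; ifa != NULL; ifa = ifa->ifa_next) {
        printf("%-16s family=%d\n", ifa->ifa_name,
               ifa->ifa_addr != NULL ? ifa->ifa_addr->sa_family : -1);
    }
    freeifaddrs(ifa_list);
    return 0;
}
```

Note that getifaddrs hands the caller a private, heap-allocated copy of the
interface list, which is the same snapshot semantics discussed further down.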

When we originally added if_nametoindex and if_indextoname, the only reason
netifapi was used was to avoid creating another API abstraction that dealt
with the non-core-locking case and MPU support.  This secondary
non-core-locking + MPU case is a big pain to support, and in newer
development we've just said non-core locking isn't supported for this
feature.

>> We made some progress implementing if_nametoindex/if_indextoname, but
>> encountered some implementation issues with if_nameindex (which enumerates
>> all interfaces).  See 
> Thanks, I hadn't seen that.

> I assume it's
> https://savannah.nongnu.org/task/?func=detailitem&item_id=14314#comment45
> in particular. I think the lwIP philosophy in general is to make the
> "native" interface simple and low overhead. Anything that wants to add a
> standards-compliant veneer that would add overheads should be the place for
> taking a hit in space/efficiency.

> So that implies to me we should either have a netifapi function as an
> iterator (my preference - lower memory and doesn't need dynamic mem alloc),
> or a function parallel to if_nameindex() but using netif types. The
> if_nameindex() veneer would use the underlying netifapi, but entail a bit
> more overhead. This fits with the lwIP philosophy in my mind.
The problem with an iterator is the inherent race in trying to iterate while
leaving the lwIP core between iterations: a netif_add/remove could happen
mid-iteration and cause a bunch of nasty cases.  I'm more a fan of a single
call that returns an atomic snapshot of the interfaces at the time of the
call, but this requires a dynamic memory allocation within netifapi.
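A minimal sketch of that snapshot idea, with a mock `struct netif` standing
in for lwIP's and no real core lock (`netif_snapshot` and
`struct netif_nameindex` are invented names; in real lwIP the list walk
would run under the core lock via tcpip_api_call):

```c
#include <stdio.h>
#include <stdlib.h>

/* Mock stand-in for lwIP's netif list; the real one lives in the stack. */
struct netif {
    struct netif *next;
    char name[2];               /* two-letter name, e.g. "en" */
    unsigned char num;          /* lwIP's interface index is num + 1 */
};

struct netif *netif_list;

/* One snapshot entry, modeled on POSIX struct if_nameindex. */
struct netif_nameindex {
    unsigned index;             /* 0 terminates the array */
    char name[8];               /* e.g. "en0" */
};

/* Copy name + index of every netif into one heap allocation.  The caller
 * frees the result; no pointer into the live list ever escapes, so a later
 * netif_add/remove can't invalidate the caller's walk. */
struct netif_nameindex *netif_snapshot(void)
{
    size_t n = 0, k = 0;
    struct netif *i;

    for (i = netif_list; i != NULL; i = i->next) {
        n++;
    }
    struct netif_nameindex *snap = calloc(n + 1, sizeof(*snap));
    if (snap == NULL) {
        return NULL;
    }
    for (i = netif_list; i != NULL; i = i->next, k++) {
        snap[k].index = (unsigned)i->num + 1u;
        snprintf(snap[k].name, sizeof(snap[k].name), "%c%c%u",
                 i->name[0], i->name[1], i->num);
    }
    return snap;                /* snap[n].index == 0 marks the end */
}
```

The price is the calloc inside the API call; the payoff is that the returned
array stays valid no matter what the core does afterwards.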

The other thing limiting the implementation back then was having to support
the non-core-locking + MPU case.  Now that we've set a precedent that we
don't have to support that case, we could implement if_nameindex()
exclusively in if_api.c, but only for the core-locking case.
>>> Next, if the numbers are part of the name, but are assigned by lwIP, how
>>> can netif API users even find an interface except in the most simple
>>> static cases, and making assumptions about interface initialisation
>>> order? It makes me think we need an iterator function in netifapi to
>>> report the next used index (which in practice just traverses the netif
>>> list). For example:
>> Given how it's implemented on git master with the index, you're right the
>> resulting name is non-deterministic. If we had something like
>> getifaddrs/if_nameindex, we could enumerate the names, but I think having
>> names which can be hardcoded into applications isn't a good practice.
> I'm not sure it's about hardcoding names necessarily[1], but being able to
> find information about all available netifs, whatever they are. The current
> situation is worse than a hardcoded name though - you need a hard-coded
> reference to a specific netif structure.
Agree, holding a reference to struct netif in code that doesn't own that instance is very dangerous.
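For comparison, the iterator shape Jon describes above ("report the next
used index" by traversing the netif list) could look roughly like this
sketch; `netifapi_next_index` is an invented name, the mock list stands in
for lwIP's, and each call is imagined as one trip into the core:

```c
#include <stddef.h>

/* Mock of lwIP's netif list. */
struct netif {
    struct netif *next;
    unsigned char num;          /* lwIP's interface index is num + 1 */
};

struct netif *netif_list;

/* Pass 0 to start, then the previous result; returns the next in-use
 * interface index, or 0 when there are none left.  Scanning for the
 * smallest index greater than 'prev' keeps the call stateless, so it
 * still terminates even if the list changes between calls. */
unsigned netifapi_next_index(unsigned prev)
{
    unsigned best = 0;

    for (struct netif *i = netif_list; i != NULL; i = i->next) {
        unsigned idx = (unsigned)i->num + 1u;
        if (idx > prev && (best == 0 || idx < best)) {
            best = idx;
        }
    }
    return best;
}
```

Each call leaves and re-enters the core, which is exactly the add/remove
race Joel points out; the snapshot variant trades one heap allocation for
atomicity.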


> [1] Although existing practice on UNIX-y systems shows that having a naming
> convention for names can be useful in practice, even if only a prefix, not a
I need to spin up more on naming under Linux, but I've seen kernel features
for different kinds of names depending on the driver, i.e. predictable vs
unpredictable naming.  At least on my Ubuntu 17.10 box, it's renaming the
devices to have long numbers, which appear to be based off the MAC:

$ ifconfig -a

wlx6466b31999f5: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 64:66:b3:19:99:f5  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
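That looks like systemd/udev's MAC-based naming policy: a "wlx" (wireless)
or "enx" (wired) prefix plus the MAC as lowercase hex, so 64:66:b3:19:99:f5
becomes wlx6466b31999f5. A sketch of the derivation (`mac_to_ifname` is an
illustrative name, not a real udev function):

```c
#include <stdio.h>

/* Build a udev-style MAC-based interface name: prefix + lowercase hex MAC. */
void mac_to_ifname(const unsigned char mac[6], const char *prefix,
                   char *out, size_t out_len)
{
    snprintf(out, out_len, "%s%02x%02x%02x%02x%02x%02x",
             prefix, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
}
```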

