From: Norberto R. de Goes Jr.
Subject: Re: [lwip-users] PPP [was: Netifs routing]
Date: Wed, 9 Mar 2016 16:38:48 -0300
On Wed, Mar 09, 2016 at 08:01:59PM +0100, Sylvain Rochet wrote:
> On Wed, Mar 09, 2016 at 03:28:32PM -0300, Norberto R. de Goes Jr. wrote:
> > I would like to understand what can be happening in my setup (attached
> > figure). I have three VMs (all running Linux), one with an echo-server
> > application, the others with an echo-client app.
> > "VM-lwip":
> > - eth0 and tty0 (physical networking and serial interfaces)
> > - running an echo-server app (lwIP user). This app creates two netifs:
> >   - netif-0: connected to eth0 - ip: 10.0.2.180, mask: 255.255.255.0
> >   - netif-1: connected to serial (PPP protocol) - ip: 10.0.3.183,
> >     mask: 255.255.255.0
> > - Then it binds to the "IP_ADDR_ANY" address to receive packets
> > (netconn_recv).
> > "VM-0":
> > - physical network interface (eth0) connected to the same subnetwork as
> > "VM-lwip".
> > "VM-1":
> > - physical serial interface (PPP) connected to "VM-lwip".
> > I do not use TUN/TAP or a bridge. The "access" components shown in the
> > attached figure can receive/transmit data from/to eth and serial. That
> > works fine, no problem.
> > When I run the echo-client from VM-0 sending packets to the IP address
> > associated with netif-0, all works fine, no problem.
> > But when I run the echo-client from VM-1 (PPP established) sending
> > packets to the netif-1 IP address, lwIP processes the packets but puts
> > the reply packets on netif-0.
> > I have been investigating and I think the problem is associated with
> > the PPP netif netmask (always 255.255.255.255). Please see the
> > "get_mask" function and the "netif_add" call, both in the
> > "lwip/src/netif/ppp/ppp.c" file.
> > Thus, the "ip4_route" function does not match the appropriate netif, in
> > this case netif-1, and returns the default netif (netif-0, associated
> > with eth0). The outgoing packets are then forwarded to eth and not to
> > serial. When the echo-client is run on VM-0 all works fine because the
> > match happens.
> > Please, is this correct?
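
(For context: the netif scan in ip4_route() is essentially the following,
simplified for illustration, where dest is the destination being routed:

  struct netif *netif;
  for (netif = netif_list; netif != NULL; netif = netif->next) {
    if (netif_is_up(netif)) {
      /* With a 255.255.255.255 netmask on the PPP netif, this only
       * matches dest == the netif's own address, never the peer's,
       * so the scan falls through to the default netif (eth0). */
      if (ip4_addr_netcmp(dest, netif_ip4_addr(netif),
                          netif_ip4_netmask(netif))) {
        return netif;
      }
    }
  }
  return netif_default;
)
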
> PPP links are point-to-point; as such there is no concept of a netmask
> because there is only one possible endpoint.
> Actually both endpoints can be in 2 totally different IP "classes" (it
> hurts writing this); for IP6CP you don't even have the choice because
> random link-local addresses are used.
> What used to be implemented on some hosts is forwarding when an
> interface available in the system matches the local PPP interface IP,
> i.e. another interface with a 192.168.10.0/24 subnet would automatically
> forward if the PPP interface local address matches this subnet, but this
> doesn't change at all that the PPP interface is netmaskless! This is
> what you are seeing in the get_mask() function. This is an ugly hack,
> I'm not sure it still works today, and it only works for the client
> side; it was used to provide multiple IPs to a customer without having
> to provide a loop network.
> It only works if the routing table selection is clever enough to select
> the netmaskful interface between one point-to-point interface
> (netmaskless) and one netmaskful interface when both interfaces are
> sharing the same subnet; this is dirty, stupid, and complete nonsense,
> especially in a -lightweight- stack.
> There is an IPCP mask request defined in the specs but pppd never
> supported it; it is meant to negotiate a pool of addresses with your
> peer, but your peer must be able to add a routing table entry, and there
> is still the shared-subnet problem after all.
> What you need here is a routing table, see LWIP_HOOK_IP4_ROUTE.
> If you need dynamic IP routing, feel free to add OSPF support to lwIP ;-)
> (that's a joke, it's a HUUUUUGE task and no one would ever need OSPF
> in lwIP :p)
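
For the setup above, a minimal LWIP_HOOK_IP4_ROUTE could hard-wire VM-1's
subnet to the PPP netif, something like this (just a sketch; my_route_hook
and ppp_netif are illustration names, not lwIP symbols):

  /* lwipopts.h:  #define LWIP_HOOK_IP4_ROUTE(dest)  my_route_hook(dest) */

  #include "lwip/netif.h"
  #include "lwip/ip4_addr.h"
  #include "lwip/def.h"

  extern struct netif ppp_netif;  /* netif-1 from the setup above */

  /* ip4_route() consults this hook; return the netif to use, or NULL
   * to keep the stack's default routing behaviour. */
  struct netif *
  my_route_hook(const ip4_addr_t *dest)
  {
    /* Send everything in 10.0.3.0/24 over the PPP link. */
    if ((ip4_addr_get_u32(dest) & PP_HTONL(0xffffff00UL)) ==
        PP_HTONL(0x0a000300UL)) {
      return &ppp_netif;
    }
    return NULL;
  }
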
Oh, and while we are at it, PPP routing (or any other point-to-point
protocol such as SLIP) to its /32 peer address was fixed in commit
8b2c73de4e ("ip4: routing: check peer for point to point interfaces").
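That is, ip4_route() now also matches the peer address of a point-to-point
netif during its scan; paraphrased (not the exact diff):

  /* inside the ip4_route() netif scan: */
  if (ip4_addr_netcmp(dest, netif_ip4_addr(netif),
                      netif_ip4_netmask(netif))) {
    return netif;  /* normal subnet match */
  }
  /* No broadcast flag => point-to-point; the gateway field holds the
   * peer address, so a destination equal to the peer matches too. */
  if (((netif->flags & NETIF_FLAG_BROADCAST) == 0) &&
      ip4_addr_cmp(dest, netif_ip4_gw(netif))) {
    return netif;
  }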