Subject: [lwip-users] WG: How to limit the UDP Rx packet size to avoid big RAM allocations
From: R. Diez
Date: Tue, 26 Jun 2018 15:15:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0
>> - Is there some constant to limit the size in bytes of reassembled
>> packets? I found IP_REASS_MAX_PBUFS, but that is not a byte limit.
>> If I set it too low, it may break, because the sender may choose
>> to send many small packets. If I set it too high, the sender could
>> send few but large frames and trigger big memory allocations.
> No, such a config option does not exist. However, you never
> trigger 'big memory allocations'. All you have is a linked
> list of all the small packets. The stack does not copy them
> into one big memory block. And from a DoS perspective,
> those many small allocations are not worse than getting
> many partly received fragments of different IP packets,
> or am I wrong there?
The following page:
https://blog.cloudflare.com/ip-fragmentation-is-broken/
specifically warns about such memory vulnerabilities:
"Before the re-assembly a host must hold partial, fragment
datagrams in memory. This opens an opportunity for
memory exhaustion attacks."
From your comments, I am starting to suspect that lwIP is somehow
susceptible to such attacks. Let's discuss this issue a little further.
My embedded system has very little RAM and I have limited the whole lwIP
memory usage to under 32 KiB. That seems to be enough for the kind of
small packets that my simple protocol uses. TCP also works fine; I even
have an HTTP server running there.
If an attacker sends random packets for random protocols (TCP, UDP,
etc.) and protocol port numbers, lwIP will quickly drop them. My
protocol handlers will also drop UDP packets that are too big. TCP data
will be processed in a streaming manner. Therefore, an attacker cannot
exhaust the 32 KiB memory limit so easily in this respect.
However, if we now consider IP reassembly, a single reassembled packet
can already exceed that 32 KiB memory limit, right? I believe that a
single UDP packet can be up to roughly 64 KiB (65,535 bytes) long. An
attacker could therefore exhaust the whole lwIP memory with just 1 or 2
big, fragmented packets. If an attacker sends them continuously, lwIP
processing will mostly grind to a halt, right?
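To put rough numbers on this claim (assuming an Ethernet MTU of 1,500
bytes, i.e. 1,480 bytes of IPv4 payload per fragment; fragments_for is
just my illustrative helper, not lwIP code):

```c
#include <assert.h>

/* How many fragments a datagram of total_len bytes needs on a link
 * with the given MTU (IPv4, 20-byte header, no options).
 * Illustrative only; not part of lwIP. */
unsigned int fragments_for(unsigned int total_len, unsigned int mtu)
{
  unsigned int payload_per_frag = mtu - 20; /* MTU minus IPv4 header */
  unsigned int data = total_len - 20;       /* datagram minus its header */
  return (data + payload_per_frag - 1) / payload_per_frag; /* round up */
}
```

A maximal 65,535-byte datagram thus arrives as 45 fragments, all of
which must sit in RAM before reassembly can complete: roughly 64 KiB of
buffered data, twice my whole 32 KiB lwIP budget.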
With an MTU of around 1,500 bytes, does it make sense for lwIP to try
to reassemble such huge packets? How about a new constant like
MAX_REASSEMBLED_PACKET_SIZE? If I set it to, say, 2,048 bytes, and lwIP
immediately drops any fragment that would take a reassembled packet
over that limit, wouldn't that at least mitigate such attacks?
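As a concrete sketch of the check I have in mind
(MAX_REASSEMBLED_PACKET_SIZE and frag_within_limit are made-up names,
not existing lwIP options; the offset and length would come from the
fragment's IP header, along the lines of lwIP's IPH_OFFSET/IPH_LEN
accessors):

```c
#include <assert.h>

/* Proposed (hypothetical) configuration option. */
#define MAX_REASSEMBLED_PACKET_SIZE 2048

/* Return 1 if a fragment carrying 'len' bytes of data at byte
 * 'offset' of the original datagram stays within the limit.
 * A fragment ending beyond the limit could be dropped immediately,
 * before any reassembly buffer is spent on it. */
int frag_within_limit(unsigned int offset, unsigned int len)
{
  return (offset + len) <= MAX_REASSEMBLED_PACKET_SIZE;
}
```

That would bound the per-datagram buffering at the configured limit
instead of ~64 KiB, while still letting my protocol's small packets
through.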
Or do you think that an effective defence is impossible anyway, because
the attacker can simply increase the number of IP packet fragments?
Those fragments could belong to many different IP packets, created for
example just by changing the source port number of the TCP packets.
I am no expert, but if the size of the reassembled packets can be kept
reasonably low, there seem to be strategies to deal with such attacks:
"Robust TCP Stream Reassembly In the Presence of Adversaries"
by Sarang Dharmapurikar and Vern Paxson:
"If we use a deterministic policy to evict the buffer, the
adversary may be able to take this into account in order to
protect its buffers from getting evicted at the time of
overflow (the inverse of an adversary willfully causing hash
collisions, as discussed in [9]). This leads us to instead
consider a randomized eviction policy."
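Their randomized eviction could look roughly like this toy sketch
(REASS_SLOTS and pick_victim are names I invented; a real
implementation would need a proper entropy source rather than rand()):

```c
#include <stdlib.h>

#define REASS_SLOTS 8 /* fixed pool of reassembly buffers */

/* When all reassembly slots are occupied and a new fragment arrives,
 * evict a uniformly random slot. A deterministic policy (e.g. always
 * evicting the oldest entry) would let an adversary time its traffic
 * so that only legitimate partial packets get evicted. */
int pick_victim(void)
{
  return rand() % REASS_SLOTS;
}
```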
Regards,
rdiez