Fix receive method to enforce total timeout, preventing excessive wait #88
Problem
If multiple iterations of `receive()` are needed to find an ICMP packet matching the sent packet's ID and sequence, the total timeout (e.g., `timeout=0.5` s) can be exceeded. The socket's `settimeout()` applies only to individual `recvfrom` calls, not to their cumulative duration. The original code attempted to track this manually with a `time_limit`, but the check occurred after packet receipt, allowing the total wait time to grow beyond the specified timeout.

Example
Consider three concurrent `ping` calls (e.g., via threads) using `icmplib.ping(host, count=1, interval=0, timeout=0.5, id=proc_num)`, targeting fictional hosts HostA, HostB, and HostC with a timeout of 0.5 s. Assume response times as seen by `icmplib`:

- HostA: replies at T=100 ms
- HostB: never replies
- HostC: replies at T=200 ms

Starting at T=0, the execution unfolds as follows:
- At T=0, each `ping` instance enters `receive()` in `icmplib/sockets.py`, setting the socket timeout to 0.5 s via `self._sock.settimeout(0.5)`. All three block at `self._sock.recvfrom(1024)`.
- At T=100 ms, HostA responds: each blocked `recvfrom` call succeeds (100 ms < 0.5 s). HostA's instance matches the ID/sequence and returns. The other two instances loop back in `receive()`, starting a new `recvfrom` with a fresh 0.5 s timeout. Total time is now 100 ms, but the next wait could push it past 0.5 s.
- At T=200 ms, HostC responds: the pending `recvfrom` calls succeed (a 100 ms wait from T=100 ms to T=200 ms < 0.5 s). HostC's instance matches and returns. HostB's instance loops back into another `recvfrom` with another 0.5 s timeout. At this point only 0.3 s should remain of the total 0.5 s budget, yet it gets a full 0.5 s.
- At T=700 ms, HostB times out: its `recvfrom` (started at T=200 ms) finally times out after 500 ms (T=200 ms + 0.5 s = T=700 ms). The `current_time > time_limit` check triggers, raising a timeout exception, but this is 200 ms overdue (total wait = 700 ms vs. the intended 500 ms).

Summary:
The bug allows each `recvfrom` to reset the 0.5 s clock, accumulating up to 0.5 s per iteration (e.g., 100 ms + 100 ms + 500 ms = 700 ms), far exceeding the intended total timeout of 0.5 s.
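The runaway accumulation can be modeled with a small timing sketch. This is a toy model of the loop described above, not icmplib's actual code; `buggy_wait` and `response_delays` are hypothetical names introduced for illustration:

```python
import time

def buggy_wait(response_delays, timeout=0.5):
    """Model the buggy loop: every wait gets a fresh, full timeout.

    response_delays lists, per iteration, how long recvfrom would block
    before returning a non-matching packet; None means no packet arrives,
    so that wait runs the full per-call timeout.
    """
    start = time.monotonic()
    for delay in response_delays:
        # Buggy behavior: a full `timeout` window per call,
        # instead of only the remaining budget.
        time.sleep(min(delay, timeout) if delay is not None else timeout)
        # The limit check happens *after* the wait, so it fires too late.
        if time.monotonic() - start > timeout:
            break
    return time.monotonic() - start

# HostB's instance: HostA's reply at 100 ms, HostC's reply 100 ms later,
# then nothing -- total elapsed is roughly 0.1 + 0.1 + 0.5 = 0.7 s,
# versus the intended 0.5 s cap.
elapsed = buggy_wait([0.1, 0.1, None], timeout=0.5)
```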
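A fix along the lines the PR title describes can be sketched by recomputing the remaining budget before each `recvfrom`. This is an illustrative sketch under stated assumptions, not the actual patch: `receive_with_total_timeout` and the `matches` predicate are hypothetical stand-ins for icmplib's `receive()` and its ID/sequence check:

```python
import socket
import time

def receive_with_total_timeout(sock, matches, timeout=0.5):
    """Cap the *total* wait across recvfrom() iterations.

    Before each recvfrom(), the remaining budget is recomputed and
    applied via settimeout(), so a new iteration can never reset
    the clock back to the full timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError("total timeout exceeded")
        sock.settimeout(remaining)  # shrinks on every pass
        try:
            packet, addr = sock.recvfrom(1024)
        except socket.timeout:
            raise TimeoutError("total timeout exceeded")
        if matches(packet):  # stand-in for the ID/sequence check
            return packet, addr
```

With this structure, the HostB instance in the example above would get 0.5 s, then 0.4 s, then 0.3 s windows, and the exception would fire at T=500 ms instead of T=700 ms.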