From: Pádraig Brady
Subject: [PATCH 1/2] split: ensure input is processed when filters exit early
Date: Sun, 19 Mar 2017 22:53:58 -0700

commit v8.25-4-g62e7af0 introduced the issue by breaking
out of the processing loop irrespective of the value of
new_file_flag, which indicates whether or not there is a
finite number of filters.

For example, this ran forever (as it should):
  $ yes | split --filter="head -c1 >/dev/null" -b 1000
However, this exited immediately because the EPIPE from the
exited filter was propagated back through cwrite, and the loop
did not consider that new filters could still be started:
  $ yes | split --filter="head -c1 >/dev/null" -b 100000

Similarly, processing would exit early with a bounded number of
output files, resulting in empty data being sent to all but the first:
  $ truncate -s10T big.in
  $ split --filter='head -c1 >$FILE' -n 2 big.in
  $ echo $(stat -c%s x??)
  1 0

I was alerted to this code by clang-analyzer,
which flagged dead assignments; that is often
an indication of code that hasn't considered all cases.

* src/split.c (bytes_split): Change the last condition in
the processing loop to also consider the number of opened
files before breaking out of the loop.
* tests/split/filter.sh: Add a test case.
---
 src/split.c           | 3 ++-
 tests/split/filter.sh | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/src/split.c b/src/split.c
index 9662336..85bc052 100644
--- a/src/split.c
+++ b/src/split.c
@@ -654,6 +654,7 @@ bytes_split (uintmax_t n_bytes, char *buf, size_t bufsize, size_t initial_read,
             {
               /* If filter no longer accepting input, stop reading.  */
               n_read = to_read = 0;
+              eof = true;
               break;
             }
           bp_out += w;
@@ -666,7 +667,7 @@ bytes_split (uintmax_t n_bytes, char *buf, size_t bufsize, size_t initial_read,
           opened += new_file_flag;
           to_write -= to_read;
           new_file_flag = false;
-          if (!cwrite_ok)
+          if (!cwrite_ok && opened == max_files)
             {
               /* If filter no longer accepting input, stop reading.  */
               n_read = 0;
diff --git a/tests/split/filter.sh b/tests/split/filter.sh
index 8d211c0..a85093c 100755
--- a/tests/split/filter.sh
+++ b/tests/split/filter.sh
@@ -62,4 +62,10 @@ if truncate -s$N zero.in; then
  timeout 10 sh -c 'split --filter="head -c1 >/dev/null" -n 1 zero.in' || fail=1
 fi
 
+# Ensure that "endless" input _is_ processed for unbounded number of filters
+for buf in 1000 1000000; do
+  returns_ 124 timeout .5 sh -c \
+    "yes | split --filter='head -c1 >/dev/null' -b $buf" || fail=1
+done
+
 Exit $fail
-- 
2.9.3
