From: Jirka Hladky
Subject: Re: Enhancement request for tee - please add the option to not quit on SIGPIPE when some other files are still opened
Date: Fri, 20 Nov 2015 02:10:28 +0100
On 19/11/15 23:09, Jirka Hladky wrote:
>> If you ignore SIGPIPE in tee in the above then what will terminate the
>> tee process? Since the input is not ever terminated.
>
>
> That's why I would like an option to suppress writing to stdout. With it, tee would finish as soon as all files are closed: without needing a >/dev/null redirection, it would run as long as at least one pipe is open.
>
> while (n_outputs)
>   {
>     read data;
>
>     /* Write to all NFILES + 1 descriptors.
>        Standard output is the first one.  */
>     for (i = 0; i < nfiles; i++)
>       if (descriptors[i]
>           && fwrite (buffer, bytes_read, 1, descriptors[i]) != 1)
>         {
>           /* exit on EPIPE error */
>           descriptors[i] = NULL;
>           n_outputs--;
>         }
>   }
>
>> Also, a Useless-Use-Of-Cat in the above too.
>
> Yes, it is. But anyway, it's not a real-world example. My real problem is testing an RNG with multiple tests. I need to test a huge amount of data (hundreds of GB), so storing the data on disk is not feasible. Each test will consume a different amount of data - some tests will stop after an RNG failure has been detected or some threshold for the maximum amount of processed data is reached; others will dynamically change the amount of tested data based on test results. The command I need to run is
>
> rng_generator | tee >(test1) >(test2) >(test3)
>
>
>> Already done in the previous v8.24 release:
> I have tried it but I'm not able to get desirable behavior. See these examples:
>
> A)
> tee --output-error=warn </dev/zero >(head -c100M | wc -c ) >(head -c1 | wc -c ) >/dev/null
> 1
> src/tee: /dev/fd/62: Broken pipe
> 104857600
> src/tee: /dev/fd/63: Broken pipe
>
> => it's almost there, except that it runs forever because of the >/dev/null redirection (stdout never fails, so tee keeps reading /dev/zero)
>
> B)
> src/tee --output-error=warn </dev/zero >(head -c100M | wc -c ) | (head -c1 | wc -c )
> 1
> src/tee: standard output: Broken pipe
> src/tee: /dev/fd/63: Broken pipe
>
> As you can see, the output from (head -c100M | wc -c) is missing.
>
> Conclusion:
> Case A) above is close to what I want to achieve, but there is a problem with writing to stdout. --output-error=warn is part of the functionality I was looking for. However, to make it usable for the scenario described here, we would need to add an option not to write to stdout. What do you think?
Right, the particular issue here is that the >(process substitutions)
are writing to stdout, and this is intermingled through the pipe
to what tee is writing to stdout.
Generally the process substitutions write somewhere else.
In my example I used stderr (>&2), or you could write to file,
or to /dev/tty for example. Is there any particular reason
the output from your process substitutions needs to go to stdout?
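For reference, the stderr variant reads something like this (a sketch, not a drop-in for the RNG setup; it assumes bash for the >(...) substitutions, coreutils >= 8.24 for --output-error, and uses seq as a stand-in finite input):

```shell
# Each process substitution sends its count to stderr (>&2), so it
# cannot intermingle with what tee writes to stdout.
seq 1000 |
  tee --output-error=warn \
      >(head -c7 | wc -c >&2) \
      >(wc -l >&2) \
      >/dev/null
```

Here the first reader stops after 7 bytes (tee warns about the broken pipe and carries on) while the second counts all 1000 lines; both counts land on stderr instead of being mixed into tee's stdout.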
The general question is, would it be useful to further
process the intermingled output from process substitutions?
Maybe if it was tagged, but there still is the issue
of atomic writes through pipes, so it would be of limited application.
So in summary, maybe there is the need for --no-stdout,
though I don't see it yet myself TBH.
cheers,
Pádraig