
Re: line buffering in pipes


From: Assaf Gordon
Subject: Re: line buffering in pipes
Date: Thu, 2 May 2019 13:51:59 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

Hello,

On 2019-05-02 1:22 p.m., Egmont Koblinger wrote:
> On Thu, May 2, 2019 at 9:14 PM Assaf Gordon <address@hidden> wrote:
>
>> The easiest way to avoid that is to use "stdbuf" (from coreutils),
>> forcing a flush after each line. Assuming the lines are short enough
>> (and file's output should be short enough), it should work:
>>
>>      find /usr -print0 | xargs -0r -P199 -n16 stdbuf -oL file | ...
>
> I don't think this is robust enough. If many "stdbuf -oL file"
> processes decide to produce a reasonably sized output pretty much at
> the same time, it might still suddenly clog the pipe and result in a
> short write in one of them. Or am I missing something?


That's exactly why I wrote "assuming the lines are short enough".

A filename (max 1024 bytes?) followed by a mime-type (reasonably short?)
should be enough, regardless of how many processes are writing to the
same pipe at the same time: with "stdbuf -oL" each line is flushed in a
single write(), and (as far as I remember) POSIX guarantees that writes
of at most PIPE_BUF bytes to a pipe are atomic (PIPE_BUF is at least
512 bytes, and 4096 on Linux), so such short lines are not interleaved.
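
A quick way to sanity-check a given run (just a rough sketch; the
pattern assumes the find root is /usr and that the filenames themselves
contain no ": ") is to count output lines that do not look like
"<filename>: <description>", expecting 0:

     find /usr -print0 | xargs -0r -P199 -n16 stdbuf -oL file \
         | grep -cv '^/usr[^:]*: '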

It will certainly fail for large outputs, but has worked well (for me)
in such (shorter) scenarios.
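
For larger outputs, one (untested) workaround would be to let each
xargs job write to its own temporary file and concatenate the results
afterwards, so pipe-write atomicity no longer matters (tmpdir and the
out.XXXXXX names are just illustrative):

     tmpdir=$(mktemp -d)
     find /usr -print0 \
         | xargs -0r -P199 -n16 sh -c 'file "$@" > "$(mktemp "$0"/out.XXXXXX)"' "$tmpdir"
     cat "$tmpdir"/out.*
     rm -rf "$tmpdir"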

I will have to dig further for exact details and justification.

regards,
 - assaf