fastcgipp-users

Re: [Fastcgipp-users] SIGPIPE causes SEGFAULT


From: Volker Schreiner
Subject: Re: [Fastcgipp-users] SIGPIPE causes SEGFAULT
Date: Wed, 10 Nov 2010 00:51:58 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.15) Gecko/20101027 Lightning/1.0b1 Thunderbird/3.0.10

The system I am working with is a Core2Duo with 2 GB of RAM running Ubuntu 10.04 LTS (Lucid Lynx) with the prerequisites Boost 1.40 and libmysqlclient16 installed. I am using nginx 0.7.65, which connects to the FastCGI application through a TCP/IP socket on 127.0.0.1 (localhost). Furthermore, nginx starts 10 worker_processes with up to 1024 concurrent worker_connections per process (10240 concurrent connections). The test I ran on localhost with Apache Benchmark issued 10000 requests with a concurrency of 1000. I think the 1000 concurrent requests, in combination with the busy system, caused nginx to generate a large number of SIGPIPE signals that need to be handled by the busy web application; this confuses the Transceiver and leads to the segmentation fault.
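
For reference, the failing condition is easy to reproduce outside of fastcgi++. The following standalone sketch (my own test code, not taken from the library) writes to a local socket whose peer end has already been closed; with the default signal disposition the process is killed by SIGPIPE, while with SIG_IGN the write() simply fails and sets errno to EPIPE:

#include <sys/socket.h>
#include <unistd.h>
#include <csignal>
#include <cerrno>
#include <cstdio>

int main()
{
    // With the default disposition the write() below would kill the process
    // with SIGPIPE; ignoring the signal lets us observe errno == EPIPE instead.
    std::signal(SIGPIPE, SIG_IGN);

    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
        return 1;

    close(fds[1]);  // the peer (think: nginx) drops the connection

    const char msg[] = "response data";
    if (write(fds[0], msg, sizeof msg) == -1 && errno == EPIPE)
        std::puts("write failed with EPIPE: peer already closed the socket");

    close(fds[0]);
    return 0;
}

For completeness, the benchmark above corresponds roughly to ab -n 10000 -c 1000 http://localhost/ (the URL is of course specific to my setup).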

On 09.11.2010 19:37, Eddie Carle wrote:
On Tue, 2010-11-09 at 18:38 +0100, Volker Schreiner wrote:
I am surprised, but I experienced the same behaviour with the Echo example. It also crashes with a segfault in freeRead.
Well, this is most definitely a bug in the way the Transceiver class handles the pipe once it has been closed on the other end. I'll have to fool around with it. It may be related to calling close() on a file descriptor that has already been closed, or perhaps to not properly handling the EPIPE errno that accompanies a SIGPIPE. Can you send any additional info on the system?
--
Eddie Carle
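
I have not looked at the actual Transceiver code, but a minimal sketch of the kind of guard described above might look like this (the helper names are hypothetical, not fastcgi++ API): keep track of whether each descriptor is still open so close() is never called twice, and treat EPIPE from write() as the peer having gone away rather than a fatal error:

#include <unistd.h>
#include <cerrno>
#include <cstddef>
#include <map>

// Hypothetical bookkeeping, not the library's data structure: fd -> still open?
static std::map<int, bool> openFds;

void safeClose(int fd)
{
    std::map<int, bool>::iterator it = openFds.find(fd);
    if (it == openFds.end() || !it->second)
        return;             // already closed (or never tracked): do nothing
    close(fd);
    it->second = false;
}

// Returns false when the peer has disappeared so the caller can discard the
// request instead of touching buffers that belong to a dead connection.
bool safeWrite(int fd, const char* data, std::size_t size)
{
    while (size > 0)
    {
        const ssize_t sent = write(fd, data, size);
        if (sent == -1)
        {
            if (errno == EINTR)
                continue;           // interrupted by a signal: retry
            if (errno == EPIPE)     // peer closed its end
                safeClose(fd);      // clean up exactly once
            return false;
        }
        data += sent;
        size -= static_cast<std::size_t>(sent);
    }
    return true;
}

This of course assumes SIGPIPE is ignored (or the data is sent with send() and MSG_NOSIGNAL) so that write() gets a chance to report EPIPE at all.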


-- 
Kind regards

Volker Schreiner
