dotgnu-general

Re: [CoreTeam]Re: [DotGNU]Potentially useful kit


From: Chris Smith
Subject: Re: [CoreTeam]Re: [DotGNU]Potentially useful kit
Date: Tue, 8 Jan 2002 00:10:47 +0000

On Monday 07 January 2002 18:41, you wrote:
> Yes, it's quite interesting.
>
> Did I read it correctly (the phoenix explanation) when it said that it
> maintains only one stateful connection per server?  Please explain any
> issues that have arisen from this in the context of deployment.  Are
> there any?

Did you mean 'per client' here?

Phoenix internally multiplexes connections over repeated calls to select(), 
dividing its IO-processing time among those connections.
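To give a flavour of the idea, here is a minimal Python sketch of one pass of a select()-based multiplex loop. This is illustrative only, not Phoenix code; the names (serve_one_pass, sessions) are mine:

```python
import select
import socket

def serve_one_pass(listener, sessions):
    # One pass of a select()-based multiplex loop.  'sessions' maps each
    # client socket to the per-connection state held for it.
    readable, _, _ = select.select([listener] + list(sessions), [], [], 0.5)
    for sock in readable:
        if sock is listener:
            client, _ = sock.accept()   # new connections are accepted at once
            client.setblocking(False)
            sessions[client] = b""      # fresh state for this connection
        else:
            data = sock.recv(4096)      # read one chunk's worth
            if data:
                sessions[sock] += data  # buffer it against the connection
            else:
                sock.close()            # client went away
                del sessions[sock]
```

Each pass services whichever connections have data ready, so no single slow client holds up the rest.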

(warning techie bit!)
Three brief steps in processing a connection:

1. For each new connection, Phoenix creates a new session object and 
associates it with the connection.

2. When there is data ready to be read from a connection, Phoenix invokes the 
service module's read_handler, passing it the session object for that 
connection.

3. As service modules store all state and buffer information unique to a 
connection within these session objects, being handed one at any time allows 
the module to resume IO with the client at the point it left off.

[Service module read_handlers read data from clients in 'chunks'. If there is 
 more than one chunk's worth of data to read from a client, a service module 
 must wait until it is 're-scheduled' to get the next chunk's worth.]

So, going back to your question, you'll see from the above that Phoenix 
maintains state for every connection it is processing.

The advantage of the internal multiplex design is that all connections get 
accepted and start being processed almost immediately.  There is no implicit 
blocking (unlike servers which block on accept()) up to the limit of 
available fds - which is of course tunable.

So far there have been no problems with deployment. Though I am aware of some 
issues that could arise from such a single process approach:

1. A very large number of clients simultaneously transferring large amounts 
of data could each perceive a reduction in throughput.  This will be made 
particularly obvious if Phoenix is continually pre-empted by the OS's process 
scheduler on a busy server.

2. No use is made of additional processors in multi-processor machines.

3. The amount of processing a Phoenix service module may perform on a 
connection in a single time-slice is limited, as the overall performance of 
Phoenix is governed by that of its slowest component.

So, by booting multiple Phoenix processes we can alleviate the issues 
associated with the OS's process scheduler (1) and can tie each process to a 
particular processor (2).
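A simple pre-fork pattern gives the flavour of booting several copies of a single-process server (an illustrative Python sketch, not how Phoenix actually boots; the CPU-pinning call shown in the comment is Linux-specific):

```python
import os

def prefork(num_workers, worker_fn):
    # Boot num_workers copies of a single-process server.  Each child
    # runs worker_fn(worker_index); the parent waits for them all and
    # returns their exit statuses.
    pids = []
    for i in range(num_workers):
        pid = os.fork()
        if pid == 0:
            # A child could pin itself to one CPU here (Linux-specific):
            #   os.sched_setaffinity(0, {i % os.cpu_count()})
            os._exit(worker_fn(i) or 0)
        pids.append(pid)
    return [os.waitpid(pid, 0)[1] for pid in pids]
```

Each child is an independent process, so the kernel can schedule them on separate processors.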

However, (3) is a different problem; but as long as the developer of a 
Phoenix service module is mindful of Phoenix's fundamental design criterion, 
"to service client requests as quickly as possible", then all will be well.

If request processing is likely to take a significant length of time, then it 
is best to hand the received request off to another process - Phoenix 
becoming a gateway in this case.  This is Phoenix's primary role within 
Goldwater - getting data in and out!
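In miniature, that handoff might look something like this (a hypothetical Python sketch using a child process and a pipe; Goldwater's actual IPC will of course differ):

```python
import os

def handoff(request, worker_fn):
    # Hand one slow request to a child process over a pipe, so the
    # gateway process itself stays free to keep accepting connections.
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                 # child: do the lengthy work
        os.close(r)
        os.write(w, worker_fn(request))
        os.close(w)
        os._exit(0)
    os.close(w)                  # parent: keep the read end for later
    return pid, r

def collect(pid, r):
    # Read the child's result and reap the child.
    result = b""
    while True:
        chunk = os.read(r, 4096)
        if not chunk:
            break
        result += chunk
    os.close(r)
    os.waitpid(pid, 0)
    return result
```

The gateway never blocks on the slow work itself; it only gathers the result once the child has finished.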

As an example, Phoenix is particularly good at serving HTTP requests for 
static data, especially if the HTTP service module employs page caching for 
the most frequently requested pages (to reduce disk IO).
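A page cache of that sort could be as simple as a small LRU map from path to page body (an illustrative Python sketch; the actual HTTP service module's cache may work quite differently):

```python
from collections import OrderedDict

class PageCache:
    # Tiny LRU cache for frequently requested static pages, to cut
    # disk IO.  'loader' would normally read the page from disk.
    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader
        self.pages = OrderedDict()

    def get(self, path):
        if path in self.pages:
            self.pages.move_to_end(path)        # mark as most recently used
        else:
            self.pages[path] = self.loader(path)
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # evict least recently used
        return self.pages[path]
```

Hot pages stay resident while rarely requested ones are evicted, so repeat requests for popular pages never touch the disk.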

Hopefully that answers your question :o)

Regards,
Chris

-- 
Chris Smith
  Technical Architect - netFluid Technology Limited.
  "Internet Technologies, Distributed Systems and Tuxedo Consultancy"
  E: address@hidden  W: http://www.nfluid.co.uk


