Re: [Chicken-users] Enterprise and scalability


From: Mor Phir
Subject: Re: [Chicken-users] Enterprise and scalability
Date: Sat, 24 Jan 2009 01:36:10 +0100

Here is an attempt to collect my thoughts so far, keeping in mind that we
want this simple ;)

http://omploader.org/vMTYweA

Right now, I have placed the REST handling within the form module, meaning that it also creates the URL path.
But of course, segregating the menu system and the path system into their own respective modules is trivial.
What is of great importance, however, is how the menu system should be tied to the path module (URL generator),
keeping in mind the requirement for REST. I really want consistency between URLs and menu names.

eg.

www.example.com/admin/plugins

Menu level 1: <a>Admin</a>
 |__ Menu level 2: <a>Plugins</a>
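
For illustration only (this is not lifted from my modules, and all the
names are made up), a rough sketch of the idea: one shared menu tree
drives both the URL paths and the menu markup, so the two cannot drift
apart.

  ;; One definition per entry: (segment label . children)
  (define menu-tree
    '(("admin" "Admin"
       ("plugins" "Plugins")
       ("users"   "Users"))))

  ;; "/admin/plugins" style paths built from the path segments.
  (define (path->url segments)
    (apply string-append
           (map (lambda (s) (string-append "/" s)) segments)))

  ;; Render nested <a> entries; each href is computed from the very
  ;; segments that produced the menu label.
  (define (render-menu tree prefix)
    (for-each
     (lambda (entry)
       (let ((segments (append prefix (list (car entry)))))
         (print (make-string (* 2 (length prefix)) #\space)
                "<a href=\"" (path->url segments) "\">" (cadr entry) "</a>")
         (render-menu (cddr entry) segments)))
     tree))

  (render-menu menu-tree '())
  ;; => <a href="/admin">Admin</a>
  ;;      <a href="/admin/plugins">Plugins</a>
  ;;      <a href="/admin/users">Users</a>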

I'm sketching and tinkering wildly here, so please don't think my modules are anything final.

/morphir

On Mon, Jan 19, 2009 at 10:59 PM, Jörg F. Wittenberger <address@hidden> wrote:
I've missed that posting (due to mail host failure).

Am Freitag, den 16.01.2009, 10:18 +0000 schrieb Alaric Snell-Pym:
> On 12 Jan 2009, at 12:20 pm, Mor Phir wrote:
>
> > I was wondering if anyone on the ml was using chicken in an
> > enterprise environment.
>
> *booming silence*
>
> I'm sure there was somebody!

More or less.

We recently ported askemos.org from rscheme to chicken.

Roll out is currently stalled (hence we do not _yet_ _deploy_ in an
enterprise environment, but hope to do so RSN) -- actually because we're
busy elsewhere here and I'm sort of waiting for Felix to include those
scheduler modifications I came up with -- which incidentally address some
of those "scalability" issues.  See
http://lists.gnu.org/archive/html/chicken-users/2008-12/msg00122.html
and related threads from the last three months.

There are a few more minor issues related to higher load, which I shall
post about as time permits (the one I currently have in mind: beware of
garbage collection between fork and exec if finalizers might come into
play, since those might fail in the child [[for instance because they
access file descriptors closed just before the fork]] and hence the gc
will loop forever).
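
For illustration (not from our actual code, and the helper name is made
up), the shape I mean, using process-fork and process-execute from the
posix unit:

  (use posix)

  ;; Build everything the child needs before forking, so the thunk
  ;; passed to process-fork keeps allocation in the child to a minimum
  ;; and a GC (and with it a failing finalizer) is unlikely to run
  ;; before exec replaces the process.
  (define (run-command path args)
    (process-fork
     (lambda ()
       (process-execute path args))))

  (run-command "/bin/ls" '("-l"))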

> > scenario:
> > You wanna utilize more cores and more cpus at your setup, how would
> > you go about this?
>
> Well, just run more chicken processes. Design your app so that lots of
> processes can cooperate.

I can second that.  Really.

(That is, I not only have some experience deploying chicken or -
highly similar wrt. multi-core utilisation - rscheme, but also with n:m
mapping from POSIX threads to user-level threads at the "pure C" level.)

Assuming that parallel read is never a real issue, since we know how to
do master-slave replication, all you need is replicated mutation [and
that's basically what you get for Scheme from askemos.org at this
time].

In this context network latency is going to be the bottleneck, not the
processing power at all.  The less frequently you need to save a
mutation -- no matter the actual size of the mutated state -- the faster
your application will fly.  That's the design issue to keep in mind.

> But don't be put off by the lack of 'native threads' in Chicken. That
> would let you use more cores in one chicken process, if you wrote
> threads, but it can't scale beyond more than one physical server,
> while being multi-process can;

As a rough estimate, based on the algorithm (byzantine agreement)
implemented in askemos, expect 2*P+S, where P is the "ping time at
application level" - including overhead for SSL and friends or whatever
you settle on above the wire - and S is the time spent in local
processing to commit the transaction to persistent storage.  The point
is: P >> S (read: P much larger than S); let alone the variance of P.
And *then* add clock deviation!  In fact we measure anything from
responses at 10ms past the request - which is less than the ICMP ping
time for the same pair of hosts - to 13 seconds, with a "felt" average of
250ms.  To sum it up: more cores will make your reads faster.  For write
performance, tweak your whole setup instead, especially the network.
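
To make that estimate concrete, a tiny sketch; the P and S values below
are assumed for illustration only, plugging rough figures into the
2*P+S rule of thumb, not a benchmark:

  ;; 2*P + S: two application-level round trips plus the local commit.
  ;; P and S are assumed values, not measurements.
  (define (write-latency p s) (+ (* 2 p) s))

  (write-latency 250 5)  ; P = 250ms "felt" average, S = 5ms => 505 ms
  (write-latency 10 5)   ; P = 10ms on a fast LAN, S = 5ms   =>  25 ms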

/Jörg


