
Re: [Gnumed-devel] pooling connections

From: Julio Jiménez
Subject: Re: [Gnumed-devel] pooling connections
Date: Sun, 01 Sep 2002 13:33:43 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.0) Gecko/20020623 Debian/1.0.0-0.woody.1

Horst Herb wrote:

Problem: database "commits" via client libraries are done *per connection* and not per cursor, unless server side cursors are used explicitly in the queries.
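The point above can be illustrated with a minimal sketch. It uses the stdlib sqlite3 module rather than psycopg, but both implement the Python DB-API, which has the same commit semantics: `commit()` is a method of the connection, not the cursor, so all cursors on one connection are committed together.

```python
import sqlite3

# DB-API commit semantics: commit() lives on the connection, not the
# cursor, so work done through *any* cursor of this connection is
# committed together.  (sqlite3 used here for illustration; psycopg
# behaves the same way in this respect.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (id INTEGER, note TEXT)")

cur_a = conn.cursor()
cur_b = conn.cursor()
cur_a.execute("INSERT INTO visits VALUES (1, 'from cursor A')")
cur_b.execute("INSERT INTO visits VALUES (2, 'from cursor B')")

conn.commit()  # one commit flushes both cursors' pending work

rows = conn.execute("SELECT id FROM visits ORDER BY id").fetchall()
```

This is exactly why sharing one connection among independent services is risky: a commit issued for one service silently commits everything the others have done on that connection.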

Shall we rely on the developers to diligently use server-side cursors for write access, or shall we return a separate database connection for each service request (a la psycopg)? I calculated for my current implementation that I would need about 20 connections per client to do it that way; even more if the services are distributed.
That's true: commits happen at the connection level. But I don't think you need 20 connections for that. You should isolate and serialize the transactions (don't execute cursors for insert, update, etc. until the data really needs to be written, then call connection.commit()). This is the way I usually do it.
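The approach described above might be sketched like this (a hypothetical helper, again illustrated with stdlib sqlite3; psycopg usage would be analogous): queue the write statements and only execute and commit them when the data really must be saved, so one shared connection never sits in a half-finished transaction.

```python
import sqlite3

class WriteQueue:
    """Buffer write statements; execute + commit them only on flush().

    Illustrative sketch of serialized write transactions on a single
    shared connection -- not GNUmed code.
    """

    def __init__(self, conn):
        self.conn = conn
        self.pending = []  # (sql, params) pairs not yet executed

    def add(self, sql, params=()):
        self.pending.append((sql, params))

    def flush(self):
        # Execute everything as one serialized transaction, then commit.
        cur = self.conn.cursor()
        for sql, params in self.pending:
            cur.execute(sql, params)
        self.conn.commit()
        self.pending.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (name TEXT)")

q = WriteQueue(conn)
q.add("INSERT INTO patients VALUES (?)", ("Ann",))
q.add("INSERT INTO patients VALUES (?)", ("Ben",))
q.flush()  # both inserts land in a single commit
```

Between flushes the connection carries no uncommitted writes, so read queries interleaved on the same connection cannot be caught up in an unrelated commit.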

Julio Jiménez

Or, shall we request separate write and read-only connections (my favourite), where the read connections share one single physical connection, and each write access gets a connection all for itself? Advantage: the vast majority of access attempts are read-only queries; write queries are few and far between. We would reduce the number of physical connections needed, which would reduce resource consumption and thus improve performance.
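A minimal sketch of that scheme might look as follows (class and method names are illustrative, not GNUmed API; sqlite3 stands in for the real backend driver): all read-only requests share one lazily opened physical connection, while each writer gets a dedicated connection, so its commit cannot flush anyone else's half-done work.

```python
import sqlite3

class SplitPool:
    """One shared read connection, a fresh connection per writer.

    Hypothetical sketch of the read/write split discussed above.
    """

    def __init__(self, dsn):
        self.dsn = dsn
        self._read_conn = None  # opened lazily, shared by all readers

    def get_read_connection(self):
        if self._read_conn is None:
            self._read_conn = sqlite3.connect(self.dsn)
        return self._read_conn  # same object every time

    def get_write_connection(self):
        # Each writer owns its connection, so its commit() is isolated.
        return sqlite3.connect(self.dsn)

pool = SplitPool(":memory:")
r1 = pool.get_read_connection()
r2 = pool.get_read_connection()
w1 = pool.get_write_connection()
w2 = pool.get_write_connection()
```

Since reads dominate, the steady-state cost is one physical connection plus a short-lived one per write, instead of ~20 permanent connections per client.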

Any thoughts welcome. I am just about to finalize asynchronous communication with the backend, and need to know which way to go.


