Re: [Gnumed-devel] run_commit2()


From: Karsten Hilbert
Subject: Re: [Gnumed-devel] run_commit2()
Date: Tue, 2 Nov 2004 17:40:07 +0100
User-agent: Mutt/1.3.22.1i

> My biggest criticism of run_commit is that it establishes a new
> read-write *connection* for every call (unless you manually create 
> connections,
Writes are rare compared to reads.

> however IMHO the main purpose of this function is to avoid having to do that) 
To me its main purpose is to abstract the
writing-transaction machinery behind a more accessible
API.

> Can we pool read/write connections as we do for read connections ?
We could, but that should be done transparently inside
gmPG.GetConnection() IMO. Also, we'd have to do it a bit
differently from the read-only connections: one would have
to make sure rw connections aren't handed out more than
once, eg. they are only reused after having been handed
back in explicitly. This is necessary for consistency
reasons. Still, one could "prefork" a few connections and
"fork" more when the pool runs low (a sketch of such a pool
follows the list below). Psycopg (which is likely "better"
than pyPgSQL) used to do that, but they seem to have moved
away from it if I read their news correctly. There are
three reasons why we don't use psycopg:

The good reason: Last I looked it didn't support NOTIFY/LISTEN.
The OK reason: We have quite some tested code for pyPgSQL.
The bad reason: I had trouble installing it.
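
To illustrate the pooling idea from above, here is a minimal
sketch of an exclusive read-write pool. The class and method
names are invented for illustration only; whatever ends up
inside gmPG may well look different:

  import threading

  class RWConnectionPool:
      # hypothetical exclusive pool of read-write connections (sketch only)

      def __init__(self, connect, min_conns=2):
          # connect: zero-argument callable returning a new rw connection
          self._connect = connect
          self._lock = threading.Lock()
          # "prefork" a few connections up front
          self._idle = [connect() for _ in range(min_conns)]

      def get_connection(self):
          # hand out an idle connection, or "fork" a new one when the
          # pool runs low; a connection is never handed out twice
          with self._lock:
              if self._idle:
                  return self._idle.pop()
          return self._connect()

      def return_connection(self, conn):
          # only after being handed back in explicitly may a
          # connection be reused - this is the consistency requirement
          with self._lock:
              self._idle.append(conn)

The point being that, unlike the read-only pool,
get_connection() never shares a connection between callers.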

> Even better, throw queries which have no result
>  onto a queue so a background thread can commit them 
This would be a conceptual change, not just a rectification
of run_commit. I shy away from it because of the complexity
versus gain and the synchronization issues.

> [which, with use of a subquery like "(select curr_val 
> ('the_table_just_inserted_pk_seq'))", would be most],
Well, INSERTs and UPDATEs are, generally speaking, fairly
evenly distributed for our purposes. INSERTs are perhaps
even a bit more likely (eg. we create more data than we
change). And INSERTs usually want to *return* the
"LAST_OID". This is common enough that the ability to
return values from run_commit*() was added in the first
place.
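
For illustration, this is roughly what a caller might hand
to run_commit*() in order to get the new row's key back
(the table, column and sequence names are invented):

  queries = [
      # the INSERT that creates the new row ...
      ("INSERT INTO clin_narrative (narrative) VALUES (%s)", ['some note']),
      # ... followed by the query whose result run_commit*() hands back:
      # currval() returns the value this session just drew from the sequence
      ("SELECT currval('clin_narrative_pk_seq')", []),
  ]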

> I still don't understand why we can't throw a python exception for database 
> errors.
Because it would *force* the calling code to *handle*
potential errors (some will say this is good ;-). From
memory it was you who once said that run_commit() is a
convenience function to free callers from having to redo
the tedious "try: except:" bureaucracy on every call.

I want to be able to say "I don't care whether this query
fails or not" and still not be forced to add an additional
level of indentation. Likely that's just me being lazy or
something.
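
Roughly what I mean, with stand-in stubs since the real
signatures may differ:

  class DatabaseError(Exception):
      # stand-in for whatever an exception-raising run_commit2() would throw
      pass

  def run_commit2(queries):
      # stub standing in for gmPG.run_commit2(); real signature may differ
      return True

  # return-flag style: a caller that does not care whether the write
  # succeeds simply ignores the result - no extra indentation
  run_commit2([("UPDATE foo SET bar = 1", [])])

  # exception style: the same "don't care" caller is forced into try/except
  try:
      run_commit2([("UPDATE foo SET bar = 1", [])])
  except DatabaseError:
      pass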

> AFAICT there are 5 types of error, so 5 classes of exception:
>       1 - bugs in the query
nothing we can do about it at runtime

>       2 - loss of connection or other backend catastrophe.
nothing we can do about it at runtime

>       3 - integrity constraint violation
indeed, this is useful to report, as it just may be possible
for the user to change the data somewhat and resubmit

>       4- access violation
as is this one; however, nothing can really be done about
it at runtime short of the user waiting at the error
message dialog while the admin fixes permissions on the
backend and then retrying the transaction - which is
unlikely given that users don't read error messages, let
alone act on them

>       5 - concurrency
this is the most useful one to report as there is a
rational response: let the user know about the conflict,
let her handle the merge, and resubmit

I will try to support more types of errors along the lines
of your suggestions. This is hard and error-prone, however,
as long as libpq doesn't support the new PostgreSQL error
reporting protocol, because it means we'll have to parse
the returned strings for keywords.
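
A rough idea of what that string parsing would look like.
The keywords below are what current backends happen to
emit; wording differs between PostgreSQL versions and
locales, which is exactly why this is brittle:

  def classify_backend_error(msg):
      # map a backend error string onto the error classes discussed
      # above; sketch only - real messages vary with server version
      msg = msg.lower()
      if 'syntax error' in msg:
          return 'query bug'              # type 1
      if 'could not connect' in msg or 'terminating connection' in msg:
          return 'backend catastrophe'    # type 2
      if 'violates' in msg and 'constraint' in msg:
          return 'integrity violation'    # type 3
      if 'permission denied' in msg:
          return 'access violation'       # type 4
      if 'could not serialize access' in msg or 'deadlock detected' in msg:
          return 'concurrency conflict'   # type 5
      return 'unknown'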

> 5 you have covered, for the others the client action is nearly always the 
> same: dump to the log (which run_commit already does) 
> and display an error dialog, which can be done by a single except: clause in 
> the GUI event handler, using a message supplied in the 
> exception. 
However, most such messages don't make much sense to Joe User.

> For 1,2 and 4 run_commit can generate an error message itself, for 3 you 
> would need to provide with the query, so <queries>
> would be a list of (query, [args], error_message)
Interesting idea.
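
The shape you propose would then look something like this
(table and message are invented for illustration):

  queries = [
      (
          "INSERT INTO allergy (fk_patient, substance) VALUES (%s, %s)",
          [12, 'penicillin'],
          "Cannot save allergy: an identical entry seems to exist already."
      ),
  ]
  # run_commit() would log the technical error as before but show the
  # caller-supplied message to the user - mainly useful for type-3 errors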

> I'm sure there are occasions where we do need specific action in response to 
> errors, but a try..except clause is much easier
> than surrounding *every* call to run_commit with "if/else",
Huh? Only if you care to handle any errors.

> just to do the same thing on error 90% of the time, additionally excepts need
> only catch the error type you are looking for.
Uhm, just like if/else? I see no difference there. One
advantage of try/except over if/else that I see is that
the former is said to be somewhat faster.
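
Side by side, with invented names and stubs, the two styles
for the one error a caller actually wants to act on (the
status string and exception class are assumptions, not the
current API):

  class IntegrityError(Exception):
      # invented exception class for type-3 (integrity) errors
      pass

  def run_commit2(queries):
      # stub; one variant is assumed to return a status string,
      # the other to raise - both behaviours are pretended here
      return 'ok'

  def show_merge_dialog():
      # stub for whatever the GUI would do on a conflict
      pass

  queries = [("UPDATE foo SET bar = 1", [])]

  # if/else style: check the returned status for the one error we care about
  if run_commit2(queries) == 'integrity error':
      show_merge_dialog()

  # try/except style: catch only that one exception type; anything else
  # propagates up to a generic handler in the GUI event loop
  try:
      run_commit2(queries)
  except IntegrityError:
      show_merge_dialog()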

Karsten
-- 
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346



