Subject: Re: [Gnumed-devel] Comments on 0.2
Date: Wed, 21 Jun 2006 11:59:04 +0800
Has anyone used savepoints, or does anyone know whether they work OK? The problem is that,
I think, you can only roll back to a savepoint when the rollback is issued explicitly, i.e. when
for some reason the client application wants to roll back before an abort condition occurs.
Once you hit the abort condition (e.g. a concurrent write detected too late), I don't think the
rollback goes back to the savepoint; rather, the whole transaction is aborted.
I just don't think any database transaction system is suited to integrity checking of frequently
updated shared data that can be checked out and checked in. CVS or Subversion is more like what's needed.
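To make the distinction concrete, here is a toy model of the semantics I mean, in plain Python. This is not PostgreSQL and the class names are mine; it only illustrates the difference between an explicit rollback-to-savepoint (transaction survives) and an abort condition (everything, savepoints included, is gone):

```python
# Toy model of transaction/savepoint semantics (illustrative only, not PostgreSQL).
class TxnAborted(Exception):
    pass

class Transaction:
    def __init__(self, db):
        self.db = db                 # committed state (a dict)
        self.work = dict(db)         # our uncommitted working copy
        self.savepoints = {}         # name -> snapshot of working copy
        self.aborted = False

    def write(self, key, value):
        if self.aborted:
            raise TxnAborted("transaction already aborted")
        self.work[key] = value

    def savepoint(self, name):
        self.savepoints[name] = dict(self.work)

    def rollback_to(self, name):
        # Explicit rollback: fine, we return to the savepoint and continue.
        if self.aborted:
            raise TxnAborted("cannot roll back to savepoint after abort")
        self.work = dict(self.savepoints[name])

    def abort(self):
        # Abort condition (e.g. a failed concurrent write): the WHOLE
        # transaction is discarded, savepoints included.
        self.aborted = True
        self.work = dict(self.db)
        self.savepoints.clear()

    def commit(self):
        if self.aborted:
            raise TxnAborted("cannot commit an aborted transaction")
        self.db.clear()
        self.db.update(self.work)

db = {"narrative": "v1"}
txn = Transaction(db)
txn.write("narrative", "v2")
txn.savepoint("sp1")
txn.write("narrative", "v3")
txn.rollback_to("sp1")           # explicit rollback: back to v2, still alive
assert txn.work["narrative"] == "v2"
txn.abort()                      # abort condition: everything is discarded
assert db["narrative"] == "v1"
```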
On Wed, Jun 21, 11:30, Syan Tan sent:
MVCC means no read will block, but a write may block or abort (or, if
configured, block and time out). You need to start the transaction at the
time of reading for MVCC to detect a concurrent write.
In the past, once a transaction aborted, all writes in the transaction aborted
with it; there was no hierarchical transaction control. The problem is that getting an EMR is a
group fetch of many items, any of which may get updated. What's needed is a subcommit
(a subtransaction) that tries to commit a couple of rows of a mass read from within
the enclosing transaction. Karsten really has implemented the only solution available if you can't
have compound transactions.
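The start-the-transaction-at-read-time point can be sketched in a toy first-committer-wins model (plain Python, names mine, not PostgreSQL internals): each transaction takes its snapshot when it begins, and a commit fails if someone else committed the same row after that snapshot.

```python
# Toy first-committer-wins MVCC check (illustrative only, not PostgreSQL).
class SerializationFailure(Exception):
    pass

class Store:
    def __init__(self):
        self.clock = 0
        self.rows = {}          # key -> (value, committed_at)

    def begin(self):
        self.clock += 1
        return Snapshot(self, self.clock)

class Snapshot:
    def __init__(self, store, start):
        self.store, self.start = store, start
        self.writes = {}

    def read(self, key):
        # Reads never block; we simply see the committed value.
        return self.store.rows[key][0]

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        for key in self.writes:
            _, committed_at = self.store.rows.get(key, (None, 0))
            if committed_at > self.start:
                # Someone committed this row after our snapshot was taken.
                raise SerializationFailure(key)
        self.store.clock += 1
        for key, value in self.writes.items():
            self.store.rows[key] = (value, self.store.clock)

store = Store()
store.rows["bp"] = ("120/80", 0)

t1 = store.begin()      # both transactions start at read time
t2 = store.begin()
t1.write("bp", "130/85")
t1.commit()             # first writer commits fine
t2.write("bp", "140/90")
# t2.commit() would now raise SerializationFailure:
# t1 committed "bp" after t2's snapshot was taken.
```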
On Wed, Jun 21, 7:30, Tim Churches sent:
Karsten Hilbert wrote:
> On Tue, Jun 20, 2006 at 10:38:29PM +1000, Tim Churches wrote:
>> What is wrong with just using a transaction? If the row is locked by
>> another transaction writing to it, then the transaction will abort. No
>> need to mess around with explicit row locks, is there?
> IF we want to minimize the time the row is locked.
> THEN we must
> 1) read the data and display it
> 2) do any amount of user interaction
> 3) only then lock the row and write it
My point is that there is no need to explicitly lock the row at step 3.
Just put a transaction around your update at step 3 and let the PG MVCC
features look after the atomicity of the transaction.
> Now, while I am idling within 2) someone else could have run
> through the entire cycle from 1) to 3). If I then go from 2)
> to 3) without checking for interim changes I will overwrite
> changes without detecting it. If I want to detect it I have
> to check XMIN (or maintain my own marker).
Yes, no argument there. The issue is what you do when you detect that
the data has been updated since you read it. That becomes a user
interface issue which is difficult to solve generically - but unless
you worry about what the user sees when you detect this condition, there
is little point in trying to detect it. Remember the old medical maxim:
Never do a test if you don't know how to interpret or treat the results.
>>> - this will prevent other transactions from *writing* to that row
>>> until I release that lock by "commit"
>> I am pretty sure that Postgres offers more fine-grained concurrency
>> control than this.
> No. Better-than-row-level was introduced in 8.1 or so.
Yeah, but MVCC has been in Postgres since the beginning.
>> Doing this row-level locking is ignoring the MVCC
>> feature of Postgres,
> Absolutely not.
I am not a Postgres expert, but I think you may be mistaken on this point.
>> I thought that SELECT FOR UPDATE is only supported in Postgres in order to
>> allow applications designed for less sophisticated database backends to
>> be ported to PG...
> Well, yes and no. Thinking about it we just *might* shove
> the XMIN checking into the UPDATE query itself and detect
> things that way. Let me think about this.
Or get rid of SELECT FOR UPDATE entirely...
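The idea of shoving the version check into the UPDATE itself can be sketched like this. SQLite stands in for PostgreSQL, and an explicit `row_version` column stands in for xmin (SQLite has no xmin), so this shows the shape of the technique, not GNUmed's actual code:

```python
# Optimistic concurrency check carried inside the UPDATE statement.
# SQLite + an explicit "row_version" column stand in for PostgreSQL + xmin.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE allergy (pk INTEGER PRIMARY KEY, substance TEXT, row_version INTEGER)"
)
conn.execute("INSERT INTO allergy VALUES (1, 'penicillin', 7)")

def read_row(pk):
    return conn.execute(
        "SELECT substance, row_version FROM allergy WHERE pk = ?", (pk,)
    ).fetchone()

def update_if_unchanged(pk, new_value, seen_version):
    # The concurrency check rides along in the WHERE clause: if someone
    # else bumped row_version since we read the row, rowcount is 0 and
    # we know an interim change happened, without any explicit row lock.
    cur = conn.execute(
        "UPDATE allergy SET substance = ?, row_version = row_version + 1 "
        "WHERE pk = ? AND row_version = ?",
        (new_value, pk, seen_version),
    )
    return cur.rowcount == 1

substance, version = read_row(1)   # user reads the data and starts editing
# ... meanwhile someone else saves a change:
conn.execute(
    "UPDATE allergy SET substance = 'amoxicillin', row_version = 8 WHERE pk = 1"
)
ok = update_if_unchanged(1, "ampicillin", version)
# ok is False: the interim change was detected, nothing was overwritten
```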
>> let's see if I can find the presentation in which I
>> read this... googles... yup, here it is:
> Page 10:
> Writers only block each other when updating the same row.
> Exactly what we are talking about.
Yeah, and PG looks after this automatically - no need to manually issue
a row lock as you are doing.
> Page 12:
> Serializable is logically cleaner. (but produces more work)
> Exactly what we do. We run serializable transactions. No
> bullshit in our database.
Yeah, and you can set that behaviour as a config option in PG... No need
for your GNUmed code to be second-guessing what PG already does.
>>> - 3) checking XMIN
>>> - after my "select for update" no one else can *write* that row
>>> until I commit
>>> - therefore, the timespan between "select for update" and commit
>>> should be minimized
>>> - therefore I only "select for update" right before actually writing
>> I am fairly sure that if you use a transaction PG handles all this for
>> you and will abort the transaction if another transaction has a
>> write-lock on the row in question.
> NO IT DOES NOT. It cannot. Because the concurrency is beyond
> *technical* concurrency. It is concurrency of user intent.
> It cannot because at the time the second transaction runs
> the first one DOES NOT HOLD A LOCK ANYMORE on the row in
> question. Because it has already commited. And that is what
> we need to detect - that something else was committed
> between our data retrieval and our attempt at storing
> modifications thereof.
Karsten, read what I said. I did not say that you did not need to use
XMIN or some other mechanism to check for modified data before you
update, just that you don't need to worry about issuing an explicit row
lock before updating.
>>> - however, the user has seen data on screen from much earlier, namely
>>> when the initial, simple "select" was done
>>> - other people might have overwritten that data in the database
>>> in the meantime (yes, I have seen it happen)
>>> - they, too, properly locked the data but the lock is released again
>>> because they already finished their write
>>> - now I want to write *my changes* to the backend
>>> - if no one else has locked the row I can do that *regardless*
>>> of *what* is actually in the database *right now*
>>> - that way I could overwrite other people's interim changes
>>> without ever knowing they existed
>>> - this is unacceptable
>>> - therefore I check XMIN before I write to the backend
>>> - it must be the same XMIN as when I initially read the data
>>> - or else someone has overwritten the data in the meantime
>>> - XMIN is a PostgreSQL internal "timestamp" telling me who
>>> last *modified* a given row
>> I thought XMIN was the transaction ID of the last or current transaction
>> for that row? Perhaps that's what you mean by "timestamp"?
> Gee, well, same thing. That's why I said "timestamp" and not timestamp.
>>> - if XMIN changed between me reading/displaying the data and me
>>> wanting to save changes I *know* someone else modified it
>>> - at which point I need to inform the user about that fact
>> You really need to show the user what the changes to the data were,
>> while still preserving the user's own changes.
> Precisely. Our base class detects this situation and offers all three versions of the data:
> - initial
> - our modifications
> - the current database state
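A minimal sketch of that three-version idea (the function and field names here are mine, not GNUmed's actual base class):

```python
# Minimal sketch of surfacing all three versions on a detected conflict.
# Names are illustrative; GNUmed's real base class differs.
def detect_conflict(initial, ours, current_in_db):
    """Return None if saving is safe, else the three versions for the user."""
    if current_in_db == initial:
        return None                       # nothing changed underneath us
    return {
        "initial": initial,               # what we read and displayed
        "ours": ours,                     # what the user typed
        "database": current_in_db,        # what someone else saved meanwhile
    }

conflict = detect_conflict(
    initial={"substance": "penicillin"},
    ours={"substance": "ampicillin"},
    current_in_db={"substance": "amoxicillin"},
)
# conflict holds all three versions; a UI can now let the user decide
```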
>> That becomes a very
>> difficult user interface issue, one which very few applications attempt
>> to solve (perhaps wisely), and instead they disallow multiple sessions
>> to edit the one record simultaneously,
> So here you are advocating *less* fine-grained locks
> contrary to what you seemed to suggest above. Why, of course
> it's difficult. No one said it's gonna be easy.
OK, it's your computer science project - I'm just an outside observer
offering unsolicited comments. But if people as smart as Ian are
bamboozled by GNUmed's complexity, then maybe it is time to step back,
fasten a new scalpel blade, and start cutting and simplifying... (and to
write some unit tests while you're at it)?