On Sun, Sep 5, 2010 at 7:21 PM, Maciej Jaros <egil(a)wp.pl> wrote:
> Maybe this could be done step by step by adding named transactions and
> some other method/parameter like commitToUnlockTables so the
> developers using it would know what it is used for ;-). Commit (in
> general) is obviously not always done to simply unlock tables. Some
> might not expect changes to be committed already and might want to
> roll back, but they would be unable to.
The thing is, we know the status quo works. If we change behavior so
that locks (these are row locks for InnoDB, BTW, not table locks)
aren't released quickly enough, the worst case is that it takes down
the site, or at least significantly increases deadlock errors. So it's
best to be a bit cautious here.
Maybe I'm being overly pessimistic, though. Perhaps Domas or Tim
could share an opinion.
> So until proper handling of transactions is implemented, warnings
> might be thrown if begin/commit is called without proper
> attributes. An additional warning might be thrown if more than,
> say, 3 nested transactions are started, as that would probably mean
> locking too many tables.
No, that's not how it works. The problem arises even if you run a
single query that locks a bunch of rows that are likely to be
accessed. For instance, if you run a query that updates a row of the
page table, then any other changes to that row will have to wait until
the transaction is committed, and if the wait is too long, InnoDB
might decide there's a deadlock and start rolling back transactions
with an error (which will typically get reported to the user).
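To make the scenario concrete, here is a hypothetical two-session
MySQL sketch (the specific row and timing are invented for
illustration; the error numbers are standard InnoDB behavior):

```sql
-- Session A
BEGIN;
UPDATE page SET page_touched = NOW() WHERE page_id = 1;  -- row lock taken
-- ... application code does other slow work before committing ...

-- Session B, meanwhile
UPDATE page SET page_touched = NOW() WHERE page_id = 1;
-- Blocks until A commits. If A takes too long, B fails after
-- innodb_lock_wait_timeout with ERROR 1205 "Lock wait timeout exceeded";
-- a genuine deadlock instead produces ERROR 1213, and InnoDB immediately
-- rolls one of the transactions back.
```

The longer session A keeps its transaction open, the longer that row
lock is held and the more likely session B is to hit one of these
errors.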
> if ( $this->trxLevel() > 0 && $this->isWriteQuery( $sql ) ) {
>     $this->mNumWritesSinceTran++; // set to 0 on tran end
>     if ( $this->mNumWritesSinceTran > TOO_MANY_WRITES_SINCE_TRAN ) {
The number of queries doesn't matter. One query that updates a lot of
rows in a key table could be a problem; dozens that update rows in
rarely-accessed tables might not be. Problem queries need to be
identified manually and/or by actually trying them, and the latter is
bad because it risks taking the site down.
Put it this way: what's being proposed here will have the overall
effect of making transactions longer. Longer transactions mean more
locks, which means more lock contention, which means worse
performance. Atomicity needs to be weighed against performance here
-- just removing all the extra transaction commits is not wise. I'm
thinking a good approach would be:
1) Give a $fname parameter to begin(), commit(), and rollback(), and
update everything to pass __METHOD__ (or __METHOD__ . '-something' if
there are multiples in the function). Keep the opener of the current
transaction in some member variable.
2) If begin() is called when a transaction is already in process, run
commit() first (so we match MySQL's current behavior on all DBMSes,
for consistency) and do wfWarn(), providing a list of the caller of
begin(), and the caller of begin() for the in-progress transaction.
If wfWarn() gets triggered too much, switch it to wfDebug().
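A toy sketch of points (1) and (2) above -- this is not the real
MediaWiki Database class, just a minimal stand-in with invented names
(`ToyDatabase`, a `$warnings` array in place of wfWarn()/wfDebug()) to
show the intended behavior:

```php
<?php
// Minimal sketch: begin()/commit() take an $fname, the opener of the
// current transaction is remembered, and begin() inside an already-open
// transaction commits it first (matching MySQL's implicit-commit
// behavior) while warning with both callers.

class ToyDatabase {
    private $trxLevel = 0;      // 0 = no transaction open
    private $trxFname = null;   // caller that opened the current transaction
    public $warnings = [];      // stand-in for wfWarn()/wfDebug()

    public function trxLevel() {
        return $this->trxLevel;
    }

    public function begin( $fname ) {
        if ( $this->trxLevel ) {
            // Match MySQL's current behavior on all DBMSes: an implicit
            // commit, but now with a warning naming both callers.
            $this->warnings[] =
                "$fname: committing transaction opened by {$this->trxFname}";
            $this->commit( $fname );
        }
        // The real class would issue $this->query( 'BEGIN' ) here.
        $this->trxLevel = 1;
        $this->trxFname = $fname;
    }

    public function commit( $fname ) {
        // The real class would issue $this->query( 'COMMIT' ) here.
        $this->trxLevel = 0;
        $this->trxFname = null;
    }
}

$db = new ToyDatabase();
$db->begin( 'Outer::doUpdate' );
$db->begin( 'Inner::addLink' );   // warns, then commits the outer transaction
$db->commit( 'Inner::addLink' );

echo $db->warnings[0], "\n";
```

With every caller passing __METHOD__, the warning log would point
directly at each premature-commit site, which is exactly the data
needed for the case-by-case cleanup.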
Then we can see what's causing premature transaction commits and work
on fixing them one by one. In many cases, it might be straightforward
to refactor them to remove the nesting without holding transactions
open for longer than they should be -- it would have to be decided on
a case-by-case basis. If it turns out there are lots of places where
we really want to use checkpoints or something that gives us
nested-transaction-like behavior, we can talk about that then.
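For reference, the "nested-transaction-like behavior" that savepoints
give us can be sketched as follows. This uses SQLite through PDO only
so the example is self-contained and runnable; InnoDB supports the
same SAVEPOINT / ROLLBACK TO / RELEASE statements inside a
transaction:

```php
<?php
// Savepoints as poor-man's nested transactions: the inner unit can be
// rolled back on its own without committing (or aborting) the outer one.

$pdo = new PDO( 'sqlite::memory:' );
$pdo->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );
$pdo->exec( 'CREATE TABLE page ( title TEXT )' );

$pdo->exec( 'BEGIN' );
$pdo->exec( "INSERT INTO page VALUES ( 'Outer_edit' )" );

$pdo->exec( 'SAVEPOINT inner' );                 // "nested begin"
$pdo->exec( "INSERT INTO page VALUES ( 'Inner_edit' )" );
$pdo->exec( 'ROLLBACK TO inner' );               // undo only the inner work
$pdo->exec( 'RELEASE inner' );                   // "nested commit"

$pdo->exec( 'COMMIT' );                          // the outer edit survives

$count = $pdo->query( 'SELECT COUNT(*) FROM page' )->fetchColumn();
echo $count, "\n";
```

Note this doesn't change the locking picture: locks taken before the
savepoint are still held until the outermost COMMIT, so it only helps
with atomicity, not with the contention concerns above.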