openerp-expert-framework team mailing list archive
Message #00821
Re: Exception TransactionRollbackError not correctly handled ?
To: Raphael Valyi <rvalyi@xxxxxxxxx>
From: Olivier Dony <odo@xxxxxxxxxxx>
Date: Thu, 03 May 2012 18:25:09 +0200
Cc: Openerp Expert Framework <openerp-expert-framework@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <CAArDTh8LSFALk1ZXHurFkxf-2d4oA3TA5qec-yfw5iPnEU7bZg@mail.gmail.com>
Organization: OpenERP s.a.
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120310 Thunderbird/11.0
On 05/03/2012 04:49 PM, Raphael Valyi wrote:
> The idea having OpenERP retry transactions that failed because of a
> serialization problem has been discussed in the past, and it sounds fine to me.
> However your implementation is too low-level: you are replaying only the last
> cr.execute() call, while it is the whole transaction that was rolled back. It
> will work for a few trivial cases (like the one-query transaction that updates
> the last login date), but a real transaction could consist in hundreds of
> queries, and discarding the previous queries will cause unpredictable madness.
>
> If you could move the retry logic higher in the stack and make it retry the
> whole RPC call, it should become a workable patch, but probably less simple.
>
>
> Hello, shouldn't OpenERP then come with a native "@retry" decorator so we
> could decide, method by method, whether it should automatically retry once or
> even several times in case of such an error? Waiting for your comments.
As with retrying at the level of a single query, I don't think retrying at the
method level would work well, because we'd be missing the context of a possibly
larger transaction. Retrying just a method would mean discarding all previous
changes made in the same transaction.
As transactions are always rolled back as a whole, it seems to me that the only
place where it is correct to retry them is where they are managed: at the RPC
level. objects_proxy.execute() and exec_workflow() of osv.py are the places
where I would start looking, in trunk.
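The retry loop described above could be sketched roughly as follows. This is a minimal, self-contained illustration, not actual OpenERP code: `execute_with_retry` is a hypothetical name, and the exception class is a stub standing in for psycopg2's `TransactionRollbackError`. The key point is that the callable replays the *entire* transaction body on each attempt, not just the last query.

```python
import random
import time


class TransactionRollbackError(Exception):
    """Stub standing in for psycopg2.extensions.TransactionRollbackError."""


def execute_with_retry(run_transaction, max_retries=3):
    """Run a whole transaction body, retrying on serialization failure.

    `run_transaction` must start a fresh transaction each time it is
    called, so a retry re-executes every query, not only the one that
    raised the error.
    """
    for attempt in range(max_retries + 1):
        try:
            return run_transaction()
        except TransactionRollbackError:
            if attempt == max_retries:
                raise  # give up: let the caller see the failure
            # brief randomized backoff before replaying the transaction
            time.sleep(random.uniform(0, 0.01) * (attempt + 1))
```

In a real server, this wrapper would sit at the RPC dispatch layer (around the equivalent of `objects_proxy.execute()`), where the transaction boundaries are actually managed.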
Also, I'm not sure finer-grained control over this mechanism makes sense: would
the logic really differ from transaction to transaction? If a transaction
failed because of a concurrent update, why would you want to skip at least one
automatic retry, given there is a chance it will work and save you the trouble
of handling the retry yourself?
It's really as if, by chance, your transaction had landed 2 microseconds later
and everything had worked directly...
You could possibly find this behavior annoying if your business code is not
fully transactional (e.g. it alters the outside, non-transactional world), as
the transactions will not be idempotent. But that's something you'd have to fix
anyway, because transactions *will* fail eventually, and ignoring that is
asking for trouble.
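One common way to make such code safe under retries is to queue non-transactional side effects (sending an e-mail, calling an external service) and run them only after the transaction has actually committed, so a rolled-back and replayed transaction never performs them twice. A toy sketch, with illustrative names that are not part of any OpenERP API:

```python
class Transaction:
    """Toy transaction that defers side effects until commit succeeds."""

    def __init__(self):
        self._post_commit = []   # callbacks to run only after commit
        self.committed = False

    def after_commit(self, callback):
        """Queue a side effect instead of performing it immediately."""
        self._post_commit.append(callback)

    def commit(self):
        # In a real system the database COMMIT would happen here; only
        # once it succeeds do we release the queued side effects.
        self.committed = True
        for callback in self._post_commit:
            callback()           # e.g. actually send the e-mail now
```

If the transaction is rolled back and retried instead of committed, the queued callbacks are simply discarded with it, so each retry starts from a clean slate.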