
mahara-contributors team mailing list archive

[Bug 685642] Re: Locked crons cause duplicate key errors in postgres logs


Hey Andrew,

I did that stuff using the nasty SQL exception because I wanted to be
confident that two copies of cron.php running at the same time can't
both grab the same lock, and thought I needed a single SQL statement
that either fails or inserts a row.

Your method will cut down the SQL errors in the postgres logs which is
nice, but I think it would also open the door for two copies of cron.php
to both do the SELECT (get_field) and find nothing before either of them
attempts the INSERT.  In that case one of them will throw an exception,
so we should still catch SQL exceptions there.

Maybe the best of both worlds is to leave your "if (!$started =
get_field..." wrapped around the insert, but put all of that inside the
existing try, set $started inside the catch too, and then, after the
catch, check the value of $started and do the logging, the 24-hour
checks, etc.
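
Roughly the structure I mean, sketched in Python with SQLite standing in for the DB layer (the table name, column names, and function shape are illustrative, not Mahara's actual get_field/insert_record API):

```python
import sqlite3

def cron_lock(db, name, start):
    """Try to grab the cron lock for `name`; return True if we got it.

    The INSERT into a table with a unique key is the atomic step: if two
    processes race past the SELECT, exactly one INSERT succeeds and the
    other raises an integrity error, which we catch rather than letting
    it reach the server's logs unexplained.
    """
    started = None
    try:
        # Cheap pre-check: avoids the noisy failed INSERT in the common
        # case where the lock is already held.
        row = db.execute(
            "SELECT locked FROM cron_lock WHERE name = ?", (name,)
        ).fetchone()
        if row is None:
            db.execute(
                "INSERT INTO cron_lock (name, locked) VALUES (?, ?)",
                (name, start),
            )
            db.commit()
            return True
        started = row[0]
    except sqlite3.IntegrityError:
        # Lost the race between SELECT and INSERT: another process
        # inserted first, so read back when it grabbed the lock.
        db.rollback()
        started = db.execute(
            "SELECT locked FROM cron_lock WHERE name = ?", (name,)
        ).fetchone()[0]
    # started is set on either path; this is where the logging and the
    # stale-lock (24-hour) checks would go.
    return False
```

Either way the failed-INSERT path only fires on a genuine race, so the duplicate-key errors all but disappear from the logs without reopening the window for two lock holders.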


By the way I'm not too worried about a few extra db calls in cron jobs because they're so infrequent, but it would be nice to get rid of a bunch of them at some stage by letting each function specify whether it needs a lock.  For many cron functions it doesn't matter if two copies run in parallel.

-- 
You received this bug notification because you are a member of Mahara
Contributors, which is subscribed to Mahara.
https://bugs.launchpad.net/bugs/685642

Title:
  Locked crons cause duplicate key errors in postgres logs

Status in Mahara ePortfolio:
  New

Bug description:
  This affects Postgres, and may affect MySQL too. Although this is currently the intended behaviour of lib/cron.php->cron_lock(), and the SQLException is caught in the PHP, the error is still seen by the database and logged in the database server's logs.

Although it's not actually an issue, it may confuse/upset admins and is probably best removed.

The easiest solution is to run a get_field before the insert attempt, but obviously this adds an additional database call to the mix. I imagine that this is the reason the current system was implemented in this manner, so I won't push my fix just yet.

Any thoughts..?




