Transactional INFORMATION_SCHEMA / status variables?
Would it make sense to have transactional behaviour for status variables,
and/or information_schema tables? Or does this break the code and/or user
expectations too much?
The motivation is to follow up on MWL#116, group commit, which introduces a
consistent commit order between storage engines and the binary log.
I want to use this to get a consistent binlog position for START TRANSACTION
WITH CONSISTENT SNAPSHOT, without taking any additional locks. Currently, I
believe it is common (eg. mysqldump --master-data --single-transaction) to
do something like this:
FLUSH TABLES WITH READ LOCK;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
UNLOCK TABLES;
<take consistent backup by dumping tables using the consistent snapshot>
and this takes a lock that, depending on circumstances, can severely affect
the server:
http://www.mysqlperformanceblog.com/2010/04/24/how-fast-is-flush-tables-with-read-lock/
One idea is to let the binlog storage engine participate in START TRANSACTION
WITH CONSISTENT SNAPSHOT, by installing a start_consistent_snapshot() method
in its handlerton. And then do something like this:
START TRANSACTION WITH CONSISTENT SNAPSHOT;
SELECT variable_value FROM INFORMATION_SCHEMA.SESSION_STATUS WHERE
variable_name IN ('binlog_master_file', 'binlog_master_position');
<dump other transactional tables>
If the SELECT of binlog_master_file and binlog_master_position could be
transactional, then the binlog engine could return the correct values
associated with the consistent snapshot, without blocking any other
transactions.
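
To make this concrete, here is a rough sketch of how the binlog "engine"
could install such a hook. The start_consistent_snapshot member does exist
in the handlerton (it is what InnoDB uses to assign its read view), but the
helpers get_current_binlog_pos() and thd_set_binlog_snapshot() are just
made-up names for whatever mechanism would actually capture and store the
coordinates:

  /* Sketch only; assumes the server plugin/handler headers. The helpers
     used below are hypothetical placeholders, not existing functions. */
  static int binlog_start_consistent_snapshot(handlerton *hton, THD *thd)
  {
    char file[FN_REFLEN];
    my_off_t pos;
    /* With MWL#116 commit ordering, these coordinates are consistent with
       the snapshots the transactional engines take at the same point. */
    get_current_binlog_pos(file, &pos);        /* hypothetical */
    thd_set_binlog_snapshot(thd, file, pos);   /* hypothetical */
    return 0;
  }

  static int binlog_hton_init(void *p)
  {
    handlerton *hton= (handlerton *)p;
    hton->start_consistent_snapshot= binlog_start_consistent_snapshot;
    return 0;
  }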
I like the simplicity of this idea, but I do not understand server
transaction handling well enough to be sure it will work well. Some concerns:
- Using SHOW STATUS / INFORMATION_SCHEMA.SESSION_STATUS like this could be
surprising to people who do not expect different parts of the results to
display different kinds of transactional behaviour. This could be helped by
instead using a separate INFORMATION_SCHEMA table for the binlog position
(see the sketch after this list).
- But this raises another concern: can an INFORMATION_SCHEMA table even be
transactional in this respect?
- Also, it seems to me that merely selecting from such a transactional
INFORMATION_SCHEMA table would then start a new transaction inside the
binlog engine. I wonder if this would cause any unpleasant side effects?
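
For the separate-table variant, the INFORMATION_SCHEMA plugin side could
look roughly like this. The table name BINLOG_SNAPSHOT_POSITION and the
accessor thd_get_binlog_snapshot() are invented for illustration; the
ST_FIELD_INFO / fill_table pattern is the usual one for I_S plugins:

  static ST_FIELD_INFO binlog_snapshot_fields[]=
  {
    {"File",     FN_REFLEN, MYSQL_TYPE_STRING,   0, 0, 0, SKIP_OPEN_TABLE},
    {"Position", 20,        MYSQL_TYPE_LONGLONG, 0, 0, 0, SKIP_OPEN_TABLE},
    {0, 0, MYSQL_TYPE_NULL, 0, 0, 0, 0}
  };

  static int binlog_snapshot_fill(THD *thd, TABLE_LIST *tables, COND *cond)
  {
    TABLE *table= tables->table;
    const char *file;
    ulonglong pos;
    /* Hypothetical accessor returning the coordinates captured by
       start_consistent_snapshot() above. */
    thd_get_binlog_snapshot(thd, &file, &pos);
    table->field[0]->store(file, strlen(file), system_charset_info);
    table->field[1]->store((longlong)pos, TRUE);
    return schema_table_store_record(thd, table);
  }

Whether this fill function should itself be what starts the transaction in
the binlog engine, or whether that must only happen via START TRANSACTION
WITH CONSISTENT SNAPSHOT, is part of the open question above.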
Any suggestions or comments?
----
If this does not work, I have another idea, which I think is more general, but
also more complicated to implement.
The idea is that every transaction has a local transaction ID, assigned at the
start (we already have this, in thd->transaction.xid_state.xid).
Each engine will transactionally store the local transaction ID of the last
transaction committed. The binary log will similarly store this ID along with
every transaction that is binlogged.
Then START TRANSACTION WITH CONSISTENT SNAPSHOT could optionally return the
local transaction ID of the last committed transaction visible in the
snapshot obtained. This local transaction ID could then be mapped to a
binlog position (by eg. mysqldump), and more generally any binlog plugin
could provide a way to map such a local transaction ID into its own global
transaction ID.
Similarly, after restoring an InnoDB hot backup or an LVM snapshot, one
could ask the engine for the last committed local transaction ID, and map
this to a binlog position / global transaction ID, so that the restored
backup can be used to provision a new slave.
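
Very roughly, the extra engine interface for this second idea might look
something like the following. This is purely speculative; neither member
exists in the current handlerton, and the names are made up:

  struct binlog_snapshot_ext     /* hypothetical extension */
  {
    /* Each transactional engine reports the local XID of the last
       committed transaction visible in thd's current snapshot. */
    int (*last_committed_local_xid)(handlerton *hton, THD *thd,
                                    XID *out_xid);

    /* The binlog (or any binlog/replication plugin) maps such a local
       XID to its own coordinates: binlog file/offset or a global
       transaction ID. */
    int (*local_xid_to_position)(handlerton *hton, const XID *xid,
                                 char *file_out, my_off_t *pos_out);
  };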
This would work with any storage engine and any binlog/replication
implementation, without any need for FLUSH TABLES WITH READ LOCK.
- Kristian.