Re: More suggestions for changing option names for optimistic parallel replication
On Tue, Dec 9, 2014 at 12:17 AM, Kristian Nielsen
<knielsen@xxxxxxxxxxxxxxx> wrote:
> Pavel Ivanov <pivanof@xxxxxxxxxx> writes:
>
>> This is not entirely true, right? Let's say master binlog has
>> transactions T1.1, T1.2, T1.3, T1.4, T2.1, T1.5, T2.2 (where T1.* have
>> domain_id = 1 and T2.* have domain_id = 2) and the slave has 3 parallel
>> threads. Then, as I understand it, threads will be assigned to execute
>> T1.1, T1.2 and T1.3. T2.1 won't be scheduled to execute until these 3
>> transactions (or at least 2 of them, T1.1 and T1.2) have been
>> committed. So streams from different domains are not completely
>> independent, right?
>
> One can use --slave-domain-parallel-threads to limit the number of threads
> that one domain in one multi-source connection can reserve. By default, things
> work as in your example. With e.g. --slave-parallel-threads=3
> --slave-domain-parallel-threads=2, two threads will be assigned to run T1.1,
> T1.2, T1.3, and T1.4, and one free thread will remain to run T2.1 in parallel
> with them.
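For concreteness, a minimal sketch of that configuration (assuming a
MariaDB slave recent enough to have both options; note that both
variables can only be changed while the slave threads are stopped):

    STOP SLAVE;
    SET GLOBAL slave_parallel_threads = 3;        -- total worker thread pool
    SET GLOBAL slave_domain_parallel_threads = 2; -- max threads one domain may reserve
    START SLAVE;

With that, domain 1 can occupy at most two workers for T1.1-T1.4,
leaving the third worker free to run T2.1 concurrently.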
So the slave coordinator (or whatever it is called) reads the relay
log ahead of the last executing transaction? I.e. it will read T1.1
and T1.2 and assign them to threads, then it will read T1.3, detect
that no threads are available for execution, but according to what
you said it will still put it in the queue for thread 1, right? How
long can this queue get? Does it keep all queued events in memory?
Does that depend on the size of the transactions (i.e. how much
memory can this queuing consume)?
>> I guess a big question I want to ask: why would someone want to use
>> multiple domains together with slave-parallel-domains = off? If it's a
>> kind of kill-switch to turn off the multi-domain feature completely when
>> it causes trouble for some reason, then I don't think it is baked deep
>> enough to actually work like that. But I don't understand what else
>> it could be used for.
>
> The original motivation for replication domains is multi-source
> replication. Suppose we have M1->S1, M2->S1, S1->S2, S1->S3:
>
> M1 --\           /--- S2
>       +-- S1 ---+
> M2 --/           \--- S3
>
> Configuring different domains for M1 and M2 is necessary to be able to
> reconfigure the replication hierarchy, for example to M1->S2, M2->S2;
> or to S2->S3:
>
> M1 --\    /--- S2
>       +--+
> M2 --/    \--- S1 --- S3
>
>
> M1 --\           /--- S2 --- S3
>       +-- S1 ---+
> M2 --/
>
> This requires a way to track the position in the binlog streams of M1 and M2
> independently, hence the need for domain_id.
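For concreteness, a sketch of such a setup (assuming MariaDB
GTID-based multi-source replication; the hostnames and connection
names below are made up for illustration):

    -- On M1, give its binlog stream its own domain:
    SET GLOBAL gtid_domain_id = 1;
    -- On M2:
    SET GLOBAL gtid_domain_id = 2;
    -- On S1, one named connection per master:
    CHANGE MASTER 'm1' TO MASTER_HOST='m1.example.com',
        MASTER_USE_GTID=slave_pos;
    CHANGE MASTER 'm2' TO MASTER_HOST='m2.example.com',
        MASTER_USE_GTID=slave_pos;
    START ALL SLAVES;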
>
> The domains can also be used for parallel replication; this is needed to allow
> S2 and S3 to have the same parallelism as S1. However, this kind of parallel
> replication requires support from the application to avoid conflicts. Now
> concurrent changes on M1 and M2 have to be conflict-free not just on S1, but
> on _all_ slaves in the hierarchy.
>
> I think that such a feature, which can break replication unless the user
> carefully designs the application to avoid it, requires a switch to turn it on
> or off.
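As a concrete illustration of such a switch (using the option name
from this thread, which may not be the final spelling):

    -- Proposed kill-switch: apply different domains serially
    -- instead of in parallel (name taken from this discussion):
    SET GLOBAL slave_parallel_domains = OFF;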
Could there really be cases where multi-domain parallel application of
transactions is safe on S1, but not on S2 or S3?
>> Both seem to be very narrow use cases, hardly worth adding a flag
>> that can significantly hurt the majority of other use cases. I think
>
> I see your point. Another thing that makes the use case even narrower is that
> it will be kind of random if we actually get the lock wait in T2 on the
> master. So even if delaying T2 would significantly improve performance on the
> slave, it is not a reliable mechanism.
>
>> this feature will be useful only if the master somehow leaves
>> information about which transaction T2 was in conflict with, and the
>> slave then makes sure that T2 is not started until T1 has finished.
>> Though this sounds over-complicated already.
>
> Yeah, it does.
>
> What I really need is to get some results from testing optimistic parallel
> replication, to understand how many retries will be needed in various
> scenarios, and if those retries are a bottleneck for performance.
Then I'd suggest not adding any special processing for such a use
case, but adding something that makes it easy to monitor what
happens, e.g. some status variables that could be plotted over time
and show (or at least hint at) whether this is a significant
performance bottleneck or not. This could be something like the total
time (in both wall time and accumulated CPU time) spent executing
transactions in parallel, the time spent rolling back transactions
due to this lock conflict, the time spent rolling back transactions
for other reasons (e.g. due to STOP SLAVE or a reconnect after a
master crash), and maybe also the time spent waiting in one parallel
thread while a transaction is executing in another thread, etc.
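To make that concrete, here is a sketch of what such monitoring could
look like; every status variable named below is hypothetical, invented
purely for illustration:

    -- Hypothetical status variables (none of these exist yet):
    SHOW GLOBAL STATUS LIKE 'Slave_parallel_%';
    -- Slave_parallel_exec_time:              wall time executing in parallel
    -- Slave_parallel_cpu_time:               accumulated worker CPU time
    -- Slave_parallel_conflict_rollback_time: rollbacks due to lock conflicts
    -- Slave_parallel_other_rollback_time:    rollbacks from STOP SLAVE etc.
    -- Slave_parallel_wait_time:              waiting on other worker threads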
Pavel