
maria-developers team mailing list archive

Re: Doublewrite block number in XtraDB


Hi,

In MariaDB Server 10.2, is the default storage engine InnoDB instead of
XtraDB?
Regarding the feature MDEV-11659 Move the InnoDB doublewrite buffer to flat
files <https://jira.mariadb.org/browse/MDEV-11659>: I remember that a
similar feature was implemented in XtraDB before MariaDB 10.0, configurable
via the innodb_doublewrite_file parameter in my.cnf, but it was removed in
MariaDB 10.0.
Was it not considered worthwhile in MariaDB? Or is it difficult to manage
two spaces (trx_sys_space and trx_doublewrite_space) in development?
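For reference, the old XtraDB option I mean was configured roughly like this (a sketch from my memory of the old documentation; the exact syntax and default path may differ):

```ini
# my.cnf fragment (old XtraDB, pre-MariaDB 10.0; option no longer exists)
[mysqld]
# place the doublewrite buffer in a dedicated flat file instead of the
# system tablespace (ibdata1)
innodb_doublewrite_file = /var/lib/mysql/ib_doublewrite
```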

Best Regards,

Hank Lyu


2017-02-09 17:28 GMT+08:00 Marko Mäkelä <marko.makela@xxxxxxxxxxx>:

> Hi,
> I think that the doublewrite buffer would become more flexible and perform
> better if it was moved out of the InnoDB system tablespace. Then we could
> use any number of pages and not be limited to 128 pages.
> I filed MDEV-11659 Move the InnoDB doublewrite buffer to flat files
> <https://jira.mariadb.org/browse/MDEV-11659> some time ago. I would be
> glad to see a contributed patch, or maybe I will find time to do it myself
> at some point in the future.
>
> We do not have a working XtraDB in MariaDB Server 10.2. As far as I
> understand, the copy that is in the source tree at storage/xtradb is based
> on MySQL 5.6, not on 5.7. The InnoDB in 10.2 is based on MySQL 5.7.
>
> Best regards,
>
> Marko
>
> On Thu, Feb 9, 2017 at 9:49 AM, Laurynas Biveinis <
> laurynas.biveinis@xxxxxxxxx> wrote:
>
>> Hank -
>>
>> A very similar idea has been implemented in XtraDB of Percona Server
>> 5.7, see "Parallel Doublewrite" at
>> https://www.percona.com/doc/percona-server/5.7/performance/xtradb_performance_improvements_for_io-bound_highly-concurrent_workloads.html
>>
>> AFAIK, this feature is not in XtraDB of MariaDB as of today.
>>
>> 2017-02-09 9:32 GMT+02:00 Hank Lyu <hanklgs9564@xxxxxxxxx>:
>> > Hello:
>> >
>> > In XtraDB, if we adjust the size of the doublewrite buffer and the
>> > related variable (i.e. srv_doublewrite_batch_size), in theory we can get
>> > better throughput when the buffer pool flushes to disk.
>> >
>> > I wonder why the doublewrite buffer size is 2 blocks and each flush is
>> > limited to 120 pages (decided by srv_doublewrite_batch_size), instead of
>> > a doublewrite size of 8 blocks with 500 or more pages per flush.
>> > Is the concern that flushing would occupy too many resources, or is
>> > there another reason?
>> >
>> > Best regards,
>> > Hank Lyu
>> >
>> >
>> > _______________________________________________
>> > Mailing list: https://launchpad.net/~maria-developers
>> > Post to     : maria-developers@xxxxxxxxxxxxxxxxxxx
>> > Unsubscribe : https://launchpad.net/~maria-developers
>> > More help   : https://help.launchpad.net/ListHelp
>> >
>>
>>
>>
>> --
>> Laurynas
>>
>>
>
>
>
> --
> DON’T MISS
> M|17
> April 11 - 12, 2017
> The Conrad Hotel
> New York City
> https://m17.mariadb.com/
>
> Marko Mäkelä, Lead Developer InnoDB
> MariaDB Corporation
>
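The geometry discussed in the quoted messages can be reconstructed with simple arithmetic (a sketch; the constant names mirror the InnoDB source, and the values assume the default 16 KiB page size):

```python
# Default InnoDB/XtraDB doublewrite geometry at 16 KiB page size.
PAGES_PER_EXTENT = 64        # FSP_EXTENT_SIZE for 16 KiB pages
DOUBLEWRITE_BLOCKS = 2       # two extents reserved in the system tablespace

total_slots = DOUBLEWRITE_BLOCKS * PAGES_PER_EXTENT   # 128 pages total
batch_size = 120             # srv_doublewrite_batch_size: pages per batch flush
single_page_slots = total_slots - batch_size          # kept for single-page flushes

print(total_slots, batch_size, single_page_slots)     # 128 120 8
```

Growing the buffer to 8 blocks with 500-page batches, as suggested above, is not possible within the fixed two-extent area of the system tablespace; lifting that 128-page limit is exactly what MDEV-11659 proposes.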
