Re: impact of segmented table_open_cache on sysbench results

Hi Mark,

MARK CALLAGHAN wrote:

> I didn't see a big change in going from toci=1 to toci=64. I don't
> dispute your results, but I am curious about why it made a difference
> for you but not for me. My sysbench test had:
> * 8 tables, partitioning not used
> * 8 copies of the sysbench process (1 per table), running on a different
> host from mysqld
> * mysqld on host with 12 real CPUs and 24 vCPUs after HT was enabled
> * jemalloc
> * my table names were test.sbtestX (for X in 1 .. 8)

I see. It seems you are using sysbench-0.4. I migrated to sysbench-0.5 (the
bzr trunk) a while ago because it has

a) Lua support, which is great for implementing custom workloads, and
b) the ability to report progress periodically, which is useful for spotting
irregularities like write stalls.
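
For reference, a typical read-only run with sysbench-0.5 looks roughly like
this (the path to oltp.lua and the values are illustrative, not the exact
command from the series):

  sysbench --test=sysbench/tests/db/oltp.lua \
           --oltp-tables-count=16 --oltp-read-only=on \
           --mysql-host=<server> --mysql-user=<user> \
           --num-threads=64 --max-time=300 --max-requests=0 \
           --report-interval=10 run

--report-interval=10 prints intermediate throughput every 10 seconds, which
is what makes write stalls visible for point b).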

I pushed the scripts and config to Launchpad:

bazaar.launchpad.net/~ahel/maria/mariadb-benchmarks/sysbench-runs

series25 is the glibc vs. tcmalloc run; series26 is the one that tests the
impact of table_open_cache_instances.

From Dimitri's blog I have learned that the size of per-thread buffers is
critical for high-concurrency benchmarks. I guess this is mitigated by using
tcmalloc, but I have not yet run the tests to confirm that. So reducing
read_buffer_size and sort_buffer_size might already do the trick.
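
If I go down that route, a first attempt would be something like this in
my.cnf (the values are just a starting point to test the hypothesis, not
tuned recommendations):

  [mysqld]
  # shrink per-thread buffers for the high-concurrency read-only runs
  read_buffer_size     = 128K
  read_rnd_buffer_size = 256K
  sort_buffer_size     = 256K
  join_buffer_size     = 128K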

> The common hash function in MySQL is lousy for a small number of buckets
> -- http://bugs.mysql.com/bug.php?id=66473. Part of the problem is that
> it put all of my tables into one bucket given the naming pattern
> above when there were 8 buckets. What were your table names?

sysbench-0.5 uses sbtest/sbtest1 .. sbtest16,

so the naming pattern is very much the same as in your setup.

> I also use
> table-definition-cache=1000 and table-open-cache=2000 to guarantee that
> the table caches are large enough once populated. Did you do the same?

Yes, table_open_cache is huge in this benchmark.
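
In my.cnf terms that corresponds to something like the following (values as
in your mail; mine are in the Launchpad config):

  [mysqld]
  # large enough that nothing is evicted once the caches are populated
  table_open_cache       = 2000
  table_definition_cache = 1000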

> The workaround for my problem with the hash function was to
> use metadata_locks_hash_instances=256 with 5.6.10.

Interesting, I wasn't aware of that variable. It's not yet in MariaDB-10.0
though.
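
For anyone following the thread, the two knobs discussed here would be set
like this on a server that supports them (metadata_locks_hash_instances is
MySQL 5.6 only at this point):

  [mysqld]
  # spread the MDL hash over more buckets (MySQL 5.6)
  metadata_locks_hash_instances = 256
  # the segmented table open cache this thread is about
  table_open_cache_instances    = 64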

> If you are comparing 5.6.10 versus MariaDB 5.5+XtraDB on XFS, then a big
> win should be to compare innodb_flush_method=O_DIRECT. AFAIK, XtraDB &
> the Facebook patch have changes to not fsync after O_DIRECT writes
> unless the ibd file has grown.

This is a pure read-only benchmark, so fsync does not matter. Also, I have
found the InnoDB plugin to be ~5-10% faster than XtraDB for sysbench, hence
the last benchmarks use plain InnoDB.

XtraDB indeed shines for read/write loads, mostly because it has better
adaptive flushing heuristics; stock InnoDB runs into write stalls when it
starts synchronous flushing.
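
For the read/write case, the flush-related settings you mention would look
roughly like this (innodb_adaptive_flushing and innodb_io_capacity added for
the adaptive-flushing point above; illustrative values, innodb_io_capacity
has to be tuned to the actual storage):

  [mysqld]
  innodb_flush_method      = O_DIRECT
  innodb_adaptive_flushing = ON
  # rough IOPS budget for background flushing, depends on the device
  innodb_io_capacity       = 2000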

> I also used HW checksums when possible (innodb_checksum_algorithm=CRC32
> for 5.6.10, not sure how XtraDB enables that). That makes a difference
> with fast storage, maybe it doesn't matter if you are using SAS/SATA disk.

Checksums have no impact on a read-only benchmark with a hot buffer pool.
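
For completeness: in MySQL 5.6 that is innodb_checksum_algorithm=crc32; if I
remember correctly, XtraDB in Percona Server 5.5 has innodb_fast_checksum
for the same purpose:

  # MySQL 5.6
  innodb_checksum_algorithm = crc32
  # XtraDB / Percona Server 5.5 (from memory, please double-check)
  innodb_fast_checksum      = ON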


XL

