
maria-discuss team mailing list archive

Rocks, toku and some performance considerations.

 

Hi Everyone, I recently got interested in RocksDB and have a couple of questions, in case anyone else is doing or has done migrations to this engine:

1. I noticed that, compared to TokuDB, the I/O overhead for range SELECTs with cold caches is much higher; especially on VMware, I/O utilization (wait) goes over 50% of available CPU with the standard cfq scheduler and 4 cores. Changing it to noop helps and utilization drops roughly tenfold, to 5-10%. It doesn't matter whether we're doing a full table scan or a range scan with an index. Basically I thought that with LSM trees there would be almost no IOPS and I/O utilization would be negligible, because all you need to do is read a long, continuous block on disk (we're using SSDs). Any reason for that, or am I doing something incorrectly? (default config + 6 GB RocksDB key cache) Any reason it works so badly with cfq and so much better with noop? (A quick sketch for checking and switching the scheduler is below this list.)

2. Now, with keys 100% cached, RocksDB is still around 2-3 times slower than TokuDB for range scans, and even considerably slower than InnoDB. From what I understand, the key cache for RocksDB stores uncompressed keys, so is there some performance issue, e.g. with copying data from the global key cache to local storage? Is that going to be fixed, or is it related to how RocksDB works internally, with no way of making it work better in the future?

3. In general, it was a huge surprise that TokuDB, which (at least it seems) is so complicated to implement and is based on a very complex variation of a B-tree, has so much better read performance and so much better I/O characteristics than a "simple" (from what I understand) sorted list, which could probably just be read in bulk like a plain log file... (A rough timing sketch for comparing range scans across engines is also below.)

4. I found some info that having RocksDB databases over 100 GB in size is not recommended (and that seems tiny... we were able to work with MyISAM tables that were close to 2 TB). Also, merging data can make the DB end up twice as big for some time. Are there any plans to implement one-file-per-table like InnoDB has?

5. What's the future of TokuDB? I understand that Percona is considering dropping it. Will you take over development if that happens, or are you going to obsolete it and focus on RocksDB? From what I saw, RocksDB and TokuDB have totally different characteristics. RocksDB is decent for point reads (it seems faster than TokuDB when the number of rows read is low), and it seems to require less memory than TokuDB, but for range scans it doesn't come anywhere close. So both engines may have completely different use cases (e.g. TokuDB is great for long-running statistical queries and servers with a lot of memory).

Thanks
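For question 1, here is a minimal sketch of how the cfq/noop switch can be checked and applied through sysfs. The device name (sda) is an assumption; adjust it to whatever block device backs your datadir. Writing requires root, and the change does not persist across reboots (a udev rule or kernel boot parameter is needed for that).

```python
#!/usr/bin/env python3
"""Minimal sketch: inspect and switch the Linux I/O scheduler via sysfs."""

from pathlib import Path

DEVICE = "sda"  # assumption: the SSD backing the MariaDB datadir
SCHED_PATH = Path(f"/sys/block/{DEVICE}/queue/scheduler")


def current_scheduler() -> str:
    # The file lists all available schedulers; the active one is in
    # brackets, e.g. "noop deadline [cfq]".
    text = SCHED_PATH.read_text().strip()
    return text.split("[")[1].split("]")[0]


def set_scheduler(name: str) -> None:
    # Echoing a scheduler name into the file switches it immediately.
    SCHED_PATH.write_text(name)


if __name__ == "__main__":
    print("active scheduler:", current_scheduler())
    set_scheduler("noop")  # or "none" on newer blk-mq kernels
    print("active scheduler:", current_scheduler())
```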
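For questions 2 and 3, a rough timing sketch like the one below is how the warm-cache range-scan comparison can be reproduced. Assumptions: pymysql is installed, and there are three copies of the same data in tables t_rocksdb, t_tokudb and t_innodb with an index on `created_at`; the connection parameters, table names and columns are placeholders, not anything from the original setup.

```python
#!/usr/bin/env python3
"""Rough sketch: time the same range scan against the same data stored in
different engines, keeping the best of a few runs so caches are warm."""

import time
import pymysql

CONN_ARGS = dict(host="127.0.0.1", user="bench", password="secret", database="bench")
TABLES = ["t_rocksdb", "t_tokudb", "t_innodb"]  # hypothetical per-engine copies
QUERY = ("SELECT COUNT(*), SUM(amount) FROM {table} "
         "WHERE created_at BETWEEN '2017-01-01' AND '2017-06-30'")


def time_query(cursor, sql, runs=3):
    # Run a few times and keep the best run, so the key/block caches are warm.
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        cursor.execute(sql)
        cursor.fetchall()
        best = min(best, time.perf_counter() - start)
    return best


def main():
    conn = pymysql.connect(**CONN_ARGS)
    with conn.cursor() as cur:
        for table in TABLES:
            elapsed = time_query(cur, QUERY.format(table=table))
            print(f"{table:12s} {elapsed:8.3f}s")
    conn.close()


if __name__ == "__main__":
    main()
```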
