percona-discussion team mailing list archive
Message #00266
[Bug 329005] [NEW] XtraDB doesn't scale TPC-C performance to 16 CPUs
Public bug reported:
On a 16-CPU server:
16 sessions
10, 7902(0):0.2, 7911(0):0.2, 791(0):0.2, 792(0):0.2, 793(0):0.2
20, 7773(0):0.2, 7777(0):0.2, 777(0):0.2, 776(0):0.2, 778(0):0.2
30, 7825(0):0.2, 7827(0):0.2, 784(0):0.2, 785(0):0.2, 782(0):0.2
12 sessions
10, 7718(0):0.2, 7727(0):0.2, 772(0):0.2, 774(0):0.2, 774(0):0.2
20, 7794(0):0.2, 7791(0):0.2, 780(0):0.2, 779(0):0.2, 778(0):0.2
30, 7771(0):0.2, 7773(0):0.2, 778(0):0.2, 777(0):0.2, 778(0):0.2
8 sessions
10, 6684(0):0.2, 6680(0):0.2, 669(0):0.2, 668(0):0.2, 668(0):0.2
20, 6663(0):0.2, 6664(0):0.2, 666(0):0.2, 667(0):0.2, 667(0):0.2
30, 6768(0):0.2, 6773(0):0.2, 677(0):0.2, 676(0):0.2, 677(0):0.2
4 sessions
10, 4119(0):0.2, 4123(0):0.2, 412(0):0.2, 412(0):0.2, 412(0):0.2
20, 4128(0):0.2, 4129(0):0.2, 413(0):0.2, 414(0):0.2, 414(0):0.2
30, 4161(0):0.2, 4162(0):0.2, 416(0):0.2, 415(0):0.2, 416(0):0.2
2 sessions
10, 2362(0):0.2, 2363(0):0.2, 236(0):0.2, 236(0):0.2, 236(0):0.2
20, 2370(0):0.2, 2367(0):0.2, 238(0):0.2, 236(0):0.2, 237(0):0.2
30, 2365(0):0.2, 2364(0):0.2, 235(0):0.2, 238(0):0.2, 236(0):0.2
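The plateau is easier to see by averaging the per-interval counts for each session level and comparing against ideal scaling. The snippet below is only a rough summary of the numbers above, under the assumption that the first value in each "N, xxxx(0):0.2, ..." line is the transaction count for that 10-second interval.

```python
# Rough scaling summary of the reported intervals.
# Assumption: the first value of each line is the transaction count per 10 s.
throughput = {
    2:  (2362 + 2370 + 2365) / 3.0,
    4:  (4119 + 4128 + 4161) / 3.0,
    8:  (6684 + 6663 + 6768) / 3.0,
    12: (7718 + 7794 + 7771) / 3.0,
    16: (7902 + 7773 + 7825) / 3.0,
}

base_sessions = 2
base = throughput[base_sessions]

for sessions in sorted(throughput):
    speedup = throughput[sessions] / base
    ideal = sessions / base_sessions
    efficiency = 100.0 * speedup / ideal
    print(f"{sessions:>2} sessions: {throughput[sessions]:7.0f} trx/10s, "
          f"speedup {speedup:4.2f}x (ideal {ideal:4.1f}x, {efficiency:3.0f}% efficient)")
```

With these figures, throughput barely moves between 12 and 16 sessions (roughly 7760 vs. 7830 per interval), so adding CPUs/sessions beyond 8-12 gives almost no gain.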
The contention may come from "index->lock" and "kernel_mutex".
If we want better scaling, we should fix them.
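One way to check whether "index->lock" and "kernel_mutex" really dominate is to sample SHOW ENGINE INNODB MUTEX while the benchmark runs and rank the sync objects by accumulated OS waits. The sketch below is only an illustration, assuming mysql-connector-python is available and using placeholder connection parameters; depending on the build, the Name column shows either the mutex/rw-lock variable or the file:line where it was created.

```python
# Hypothetical monitoring sketch: rank InnoDB sync objects by os_waits.
import re
import mysql.connector  # assumption: mysql-connector-python is installed

conn = mysql.connector.connect(host="127.0.0.1", user="root",
                               password="", database="tpcc")
cur = conn.cursor()
cur.execute("SHOW ENGINE INNODB MUTEX")

waits = {}
for mutex_type, name, status in cur.fetchall():
    # Status typically looks like "os_waits=12345".
    m = re.search(r"os_waits=(\d+)", status)
    if m:
        waits[name] = waits.get(name, 0) + int(m.group(1))

# Print the ten most contended sync objects.
for name, count in sorted(waits.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{count:>12}  {name}")

cur.close()
conn.close()
```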
** Affects: percona-xtradb
Importance: Medium
Assignee: Yasufumi Kinoshita (yasufumi-kinoshita)
Status: Confirmed
** Changed in: percona-xtradb
Importance: Undecided => Medium
Assignee: (unassigned) => Yasufumi Kinoshita (yasufumi-kinoshita)
Status: New => Confirmed
--
XtraDB doesn't scale TPC-C performance to 16 CPUs
https://bugs.launchpad.net/bugs/329005
You received this bug notification because you are a member of Percona
developers, which is the registrant for Percona-XtraDB.
Status in Percona XtraDB Storage Engine for MySQL: Confirmed