openstack team mailing list archive
Message #17788
Re: [Openstack-operators] [SWIFT] Proxies Sizing for 90.000 / 200.000 RPM
Well, we've conducted some tests, but I don't know if they simulate the real
use case; actually, it's the opposite, since the objects that are PUT into the
cluster are read no more than twice. The thing is that there are millions of them.
So, the test is as follows.
We are using Swift 1.4.8 with Keystone.
Behind this test, as we said, there are 6 proxy nodes (2 QuadCore CPUs with HT,
96GB of RAM, memcached configured with 32GB, 2 bonded 1Gb NICs in LACP mode 4;
the interfaces are balanced at roughly 60%/40%).
#1 We put 30 objects (50KB each) into a container (private for PUT, public for
GET, so no Keystone on the GET side). Then, from 30 different physical hosts
acting as clients (96GB of RAM, 2 Intel QuadCore CPUs with HT enabled, 2 bonded
1Gb NICs), we run "httperf", each host getting a different image (HTTP, no SSL),
with this command:
httperf --server F5_IP_ADDRESS --port 8080 --uri
/v1/AUTH_1bf1f1b69a864abb84ed8a1bc82cff21/testCONT/objXX.swf --num-conn
7200 --num-call 1 --rate 50 --timeout 20 --hog -v > swift_$(hostname)
So each host is always getting the same object. Doing the math:
30 clients, 50 req/s over 60 seconds = 90,000 RPM; again, each client is
ALWAYS getting the same object.
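The offered-load math above can be checked with a quick sketch (all figures come straight from the test description; nothing else is assumed):

```python
# Offered load from the httperf test described above.
clients = 30          # physical client hosts
rate = 50             # httperf --rate: requests per second per client

req_per_s = clients * rate   # cluster-wide requests per second
rpm = req_per_s * 60         # requests per minute

print(req_per_s)  # 1500 req/s
print(rpm)        # 90000 RPM
```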
We monitored this test and saw throughput peaks of 60,000 RPM at a 600ms
average response time (which we think is a bit high for a LAN test). Even
though it's true that each client is always getting the same object, we CAN'T
make the throughput scale (maybe due to a lack of proxy processing power, but
the thing is, the proxies' CPUs are under 30% usage, and bandwidth utilization
per NIC peaks at 30MB/s).
We started with 20 clients and got peaks of 60,000 RPM; then we went to 30
clients and we can't get through the 60,000 RPM threshold.
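As a per-proxy sanity check on the observed 60,000 RPM plateau (using the 6 proxies and ~50KB objects from the setup; the byte figures are rough and ignore HTTP overhead):

```python
# Rough per-proxy numbers at the observed plateau.
plateau_rpm = 60_000   # observed throughput ceiling
proxies = 6            # proxy nodes behind the F5
obj_kb = 50            # approximate object size in KB

req_per_s_total = plateau_rpm / 60             # cluster-wide req/s
req_per_s_proxy = req_per_s_total / proxies    # req/s per proxy
mb_per_s_proxy = req_per_s_proxy * obj_kb / 1024  # payload MB/s per proxy

print(round(req_per_s_proxy))    # ~167 req/s per proxy
print(round(mb_per_s_proxy, 1))  # ~8.1 MB/s per proxy
```

Those per-proxy numbers sit well below the reported 30% CPU and 30MB/s NIC peaks, which is consistent with the observation that neither CPU nor bandwidth looks saturated at the plateau.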
Is it all about the proxies, or should we consider tuning the datanode side?
----------------
alejandrito
On Thu, Oct 25, 2012 at 9:02 AM, Ywang225 <ywang225@xxxxxxx> wrote:
> How many disks are on each storage node, and what's the model? Normally, small-
> request performance depends on proxy CPU, but the disk model matters,
> especially for writes. If there are no bottlenecks on Keystone, and the disks
> aren't too bad, I assume over 1000 op/s can be achieved with one proxy plus 5
> storage nodes with your pattern.
>
> -ywang
>
> On 2012-10-25, at 1:56, Alejandro Comisario <
> alejandro.comisario@xxxxxxxxxxxxxxxx> wrote:
>
> Guys ??
> Anyone ??
>
> *
> *
> *
> *
> Alejandro Comisario
> #melicloud CloudBuilders
> Arias 3751, Piso 7 (C1430CRG)
> Ciudad de Buenos Aires - Argentina
> Cel: +549(11) 15-3770-1857
> Tel : +54(11) 4640-8443
>
>
> On Mon, Oct 15, 2012 at 11:59 AM, Kiall Mac Innes <kiall@xxxxxxxxxxxx> wrote:
>
>> While I can't answer your question (I've never used Swift) - it's worth
>> mentioning that many of the OpenStack folks are en route to, or at, the design summit.
>>
>> Also - you might have more luck on the openstack-operators list, rather
>> than the general list.
>>
>> Kiall
>> On Oct 15, 2012 2:57 PM, "Alejandro Comisario" <
>> alejandro.comisario@xxxxxxxxxxxxxxxx> wrote:
>>
>>> It's worth knowing that the objects in the cluster are going to range from
>>> 200KB at the biggest to 50KB at the tiniest.
>>> Any considerations regarding this?
>>>
>>> -----
>>> alejandrito
>>>
>>> On Thu, Oct 11, 2012 at 8:28 PM, Alejandro Comisario <
>>> alejandro.comisario@xxxxxxxxxxxxxxxx> wrote:
>>>
>>>> Hi Stackers!
>>>> This is the thing: today we have 24 datanodes (3 copies, 90TB usable);
>>>> each datanode has 2 Intel hexacore CPUs with HT and 96GB of RAM, and 6
>>>> proxies with the same hardware configuration, using Swift 1.4.8 with
>>>> Keystone.
>>>> Regarding the networking, each proxy / datanode has a dual 1Gb NIC,
>>>> bonded in LACP mode 4, and each of the proxies is behind an F5 BigIP load
>>>> balancer (so, no worries over there).
>>>>
>>>> Today we are receiving 5,000 RPM (Requests Per Minute), with 660 RPM per
>>>> proxy. I know it's low, but now, with a new product migration coming
>>>> (really soon), we are expecting to receive a total of about 90,000 RPM on
>>>> average (1,500 req/s), with weekly peaks of 200,000 RPM (~3,500 req/s) to
>>>> the Swift API, which will be 90% public GETs (no Keystone auth) and 10%
>>>> authorized PUTs (Keystone in the middle; worth knowing that we have a
>>>> pool of 10 Keystone VMs connected to a 5-node Galera MySQL cluster, so no
>>>> worries there either).
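For what it's worth, the expected production load spread over the 6 proxies works out roughly as follows (the RPM figures are taken from the message; this sketch ignores the GET/PUT split):

```python
# Expected load per proxy at the average and peak rates quoted above.
proxies = 6
avg_rpm, peak_rpm = 90_000, 200_000

avg_per_proxy = avg_rpm / 60 / proxies    # req/s per proxy, average
peak_per_proxy = peak_rpm / 60 / proxies  # req/s per proxy, weekly peak

print(round(avg_per_proxy))   # 250
print(round(peak_per_proxy))  # 556
```

Note that 200,000 RPM works out to ~3,333 req/s exactly; the ~3,500 req/s in the message is a rounded-up figure.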
>>>>
>>>> So, 3,500 req/s divided by 6 proxy nodes doesn't sound like much, but
>>>> it's a number we can't ignore.
>>>> What do you think about these numbers? Do these 6 proxies sound good, or
>>>> should we double or triple them? Does anyone handle this volume of
>>>> requests and can share their configs?
>>>>
>>>> Thanks a lot, hoping to hear from you guys!
>>>>
>>>> -----
>>>> alejandrito
>>>>
>>>
>>>
>>> _______________________________________________
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@xxxxxxxxxxxxxxxxxxx
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help : https://help.launchpad.net/ListHelp
>>>
>>>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators@xxxxxxxxxxxxxxxxxxx
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>