openstack team mailing list archive
Re: [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift
By stopping, do you mean halt the service (kill the process) or is it a
change in the configuration file?
On Mon, Jan 14, 2013 at 1:20 PM, Robert van Leeuwen wrote:
> On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert <
> leanderbb@xxxxxxxxx> wrote:
>> Hello all,
>> I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients
>> (each hosted on a different machine) with 10 threads each uploading files
>> using the official python-swiftclient. Each thread is uploading to a
>> separate container.
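
A minimal sketch of the upload pattern described above (10 worker threads, each with its own connection and its own container), using python-swiftclient's `Connection`. The auth URL, credentials, and container names are placeholders, not values from the thread:

```python
# Sketch: round-robin the file list into per-thread batches, then have
# each thread upload its batch to its own container.
from concurrent.futures import ThreadPoolExecutor

def split_batches(paths, n):
    """Round-robin the paths into n batches, one per uploader thread."""
    batches = [[] for _ in range(n)]
    for i, path in enumerate(paths):
        batches[i % n].append(path)
    return batches

def upload_batch(container, paths):
    # Each thread gets its own Connection (a Connection is not thread-safe).
    from swiftclient.client import Connection  # pip install python-swiftclient
    conn = Connection(authurl="http://proxy.example:8080/auth/v1.0",  # placeholder
                      user="test:tester", key="testing")              # placeholder
    conn.put_container(container)
    for path in paths:
        with open(path, "rb") as f:
            conn.put_object(container, path, contents=f)

def upload_all(paths, threads=10):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for i, batch in enumerate(split_batches(paths, threads)):
            pool.submit(upload_batch, "container-%d" % i, batch)
```

With 200KB objects, the per-object request overhead dominates, so throughput scales mainly with the number of concurrent uploaders until the storage nodes become I/O-bound, which is what the reply below diagnoses.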
>> I have 5 storage nodes and 1 proxy node. The nodes are all running with
>> a replication factor of 3. Each node has a quad-core i3 processor, 4GB of
>> RAM and a gigabit network interface.
>> Is there any way I can speed up this process? At the moment it takes
>> about 20 seconds per file or more.
> It is very likely the system is starved for I/O.
> As a temporary workaround you can stop the object-replicator and
> object-auditor during the import, so fewer daemons compete for I/O.
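
On a standard Swift install those daemons can be stopped and restarted with swift-init (run on each storage node; service management may differ per distribution):

```shell
# Stop the background daemons for the duration of the import:
swift-init object-replicator stop
swift-init object-auditor stop

# ... run the import ...

# Restart them afterwards so replication and auditing catch up:
swift-init object-replicator start
swift-init object-auditor start
```

Replication will resync any missed partitions once the daemon is restarted, so this is safe as a temporary measure.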
> Some general troubleshooting tips:
> Use iotop to find the processes consuming I/O.
> Assuming you use XFS:
> Make sure the filesystem is created with the appropriate inode size as
> described in the docs.
> (e.g. mkfs.xfs -i size=1024)
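
Spelled out as a full invocation (destructive; the device and mount point below are placeholders). The larger inode size lets Swift's object metadata, stored in xattrs, fit inside the inode instead of requiring extra blocks:

```shell
# WARNING: destroys all data on the device. /dev/sdb1 is a placeholder.
mkfs.xfs -f -i size=1024 /dev/sdb1

# Mount with noatime to avoid a metadata write on every read:
mount -t xfs -o noatime /dev/sdb1 /srv/node/sdb1
```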
> Also, with lots of files you need quite a bit of memory to cache the
> inodes.
> Use the XFS runtime stats for an indication of cache effectiveness:
> xs_dir_lookup and xs_ig_missed give some indication of how many I/Os
> are spent on inode lookups.
> You can look at slabtop to see how much memory is used by the inode cache.
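
Both checks can be run directly on a storage node (Linux with XFS; output format may vary by kernel version):

```shell
# Cumulative XFS counters: the "dir" line carries xs_dir_lookup and the
# "ig" (inode get) line carries xs_ig_missed, i.e. inode lookups that
# missed the in-memory cache and had to hit disk:
grep -E '^(dir|ig) ' /proc/fs/xfs/stat

# Kernel slab caches; look for the xfs_inode and dentry lines to see
# how much memory the inode/dentry caches are using:
slabtop -o | head -n 20
```

If xs_ig_missed keeps climbing during the import, the working set of inodes does not fit in RAM and each upload is paying for extra disk seeks.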
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp