On Tue, Jun 18, 2013 at 10:42 AM, Jonathan Lu <jojokururu@xxxxxxxxx> wrote:
On 2013/6/17 18:59, Robert van Leeuwen wrote:
I'm facing an issue with performance degradation, and I came across
a suggestion that changing the value in
/proc/sys/vm/vfs_cache_pressure would help.
Can anyone explain to me whether and why it is useful?
Hi,
When this is set to a lower value the kernel will try to keep
the inode/dentry cache longer in memory.
Since the swift replicator is scanning the filesystem
continuously, it will eat up a lot of IOPS if those caches are
not in memory.
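For illustration only, a minimal Python sketch of lowering this knob
(assuming root and the standard procfs sysctl path; 20 is just the low
value mentioned later in this thread):

    # Read and lower vm.vfs_cache_pressure (kernel default is 100).
    # Lower values make the kernel keep inode/dentry caches longer.
    PATH = "/proc/sys/vm/vfs_cache_pressure"

    with open(PATH) as f:
        print("current:", f.read().strip())

    with open(PATH, "w") as f:
        f.write("20\n")  # assumed example value; 0 risks the OOM killer

The same thing can be done with sysctl -w vm.vfs_cache_pressure=20,
and it has to go into /etc/sysctl.conf to survive a reboot.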
To see if a lot of cache misses are happening, for xfs, you
can look at xs_dir_lookup and xs_ig_missed.
(look at http://xfs.org/index.php/Runtime_Stats)
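As a rough illustration (not from the original mail), a small Python
sketch that pulls those two counters out of /proc/fs/xfs/stat; the
field positions are my assumption based on the Runtime_Stats page, so
verify them on your kernel:

    # Extract xs_dir_lookup and xs_ig_missed from the XFS runtime stats.
    # "dir" line: lookups are assumed to be the first value after the tag.
    # "ig" line: misses are assumed to be the fourth value after the tag.
    def xfs_lookup_stats(path="/proc/fs/xfs/stat"):
        stats = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                if fields[0] == "dir":
                    stats["xs_dir_lookup"] = int(fields[1])
                elif fields[0] == "ig":
                    stats["xs_ig_missed"] = int(fields[4])
        return stats

    print(xfs_lookup_stats())

Sampling this twice while the replicator runs and watching the
xs_ig_missed delta gives a quick hint whether the inode cache is being
evicted faster than it is reused.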
We benefited greatly from setting this to a low value, but we
have quite a lot of files on a node (30 million).
Note that setting this to zero will result in the OOM killer
killing the machine sooner or later.
(especially if files are moved around due to a cluster change ;)
Cheers,
Robert van Leeuwen
Hi,
We set this to a low value (20) and the performance is better
than before. It seems quite useful.
According to your description, this issue is related to the
number of objects on the storage node. We deleted all the objects on
the node but that didn't help at all. The only way to recover
is to format and re-mount the storage node. We have tried installing
swift in different environments, but this degradation problem seems
to be unavoidable.
It's the inode cache for each file (object) that helps (it reduces
extra disk I/Os). As long as your memory is big enough to hold the
inode information of those frequently accessed objects, you are good,
and there's no need (no point) to limit the number of objects per
storage node IMO. You can manually load the inode information of
objects into the VFS cache if you like (by simply 'ls'-ing the files)
to _restore_ performance. But memory size and object access pattern
remain the key to this kind of performance tuning; if memory is too
small, the inode cache will be invalidated sooner or later.
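As an illustration of the 'ls' trick above, a small Python sketch that
walks an object tree and stat()s every file so its inode lands in the
VFS cache; /srv/node is only an assumed example mount point for a
swift storage device:

    import os

    # Warm the VFS inode/dentry cache by stat()ing every object file.
    def warm_inode_cache(root="/srv/node"):
        count = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    os.stat(os.path.join(dirpath, name))
                    count += 1
                except OSError:
                    # files may be moved or deleted by the replicator meanwhile
                    pass
        return count

    print("stat()ed", warm_inode_cache(), "files")

This only restores performance while the cache stays warm; as noted
above, memory size and access pattern decide how long that lasts.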
Cheers,
Jonathan Lu
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
--
Regards
Huang Zhiteng