Re: Poor guest disk IO after disk usage.

On 03/04/2013 07:54 PM, Pádraig Brady wrote:
> On 03/04/2013 07:19 PM, Samuel Winchenbach wrote:
>> Hi All,
>>
>> I have a cluster of three nodes set up.  Live migration is accomplished via an NFS mount over 10GigE shared by all three nodes.
>>
>> I have noticed that when a VM is brand new I am getting close to 60MB/s disk write when using "dd if=/dev/zero of=file bs=1M count=1k conv=fdatasync". After doing this tens of times the performance of the disk seems to drop to around 4-12 MB/s.
>>
>> I have also noticed that doing a live-migration causes a similar effect immediately.
>>
>> Here is the virsh xml output of one of my VMs:  https://gist.github.com/swinchen/397fbe3bb74305064944
>>
>> I have read several "tuning" guides and most of the suggested settings seem to be configured already (cache='none', virtio for network, disk and memballoon).
>>
>> Do you think qcow2 is causing my issue and if so is there a way to boot an instance and override the disk format?
> 
> qcow may well be the issue.
> 
> You could use raw disk images by setting the
> use_cow_images=False nova config option.
> The tradeoff there is slower instance start and
> increased (but more predictable) disk usage.
> 
> Alternatively you could try to fallocate the disk
> image before running the perf test like:
>   # get virt_size
>   qemu-img info /var/lib/nova/instances/instance-0000002f/disk
>   fallocate -n -l $virt_size /var/lib/nova/instances/instance-0000002f/disk
> 
> There is also the possibility of adjusting nova
> to use preallocation=metadata on the qcow images
> (forfeiting CoW in the process), as discussed in "future work" at:
> https://blueprints.launchpad.net/nova/+spec/preallocated-images
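
To make those two suggestions concrete, something along these lines should work (the image name and the 10G size are just placeholders for illustration):

  # nova.conf on the compute nodes: use flat raw images instead of qcow2 overlays
  use_cow_images=False

  # or pre-create a qcow2 image with its metadata preallocated
  qemu-img create -f qcow2 -o preallocation=metadata disk.qcow2 10G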

Just did some testing here, writing in a VM backed by a local file system, using:
  dd if=/dev/zero of=file bs=1M count=1k conv=notrunc,fdatasync oflag=append

I didn't see any degradation over time, but I did see
quite different performance depending on the format used:

disk performance outside VM = 120MB/s
raw in $instance_dir/ = 105MB/s
qcow copy with preallocation=metadata in $instance_dir/ = 100MB/s
qcow CoW with fallocate full size in $instance_dir/ = 55MB/s
  Note: performance was a bit more stable than without fallocate
  I didn't test with a full host disk, where the improvement would be more noticeable
qcow CoW in $instance_dir/ = 52MB/s
qcow CoW in $instance_dir/ backed by qcow with preallocation=metadata in base = 52MB/s
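
For reference, the CoW overlay variants above can be set up along these lines (a sketch only, with placeholder paths rather than the exact commands used):

  # qcow2 CoW overlay on top of a base image
  qemu-img create -f qcow2 -b /var/lib/nova/instances/_base/$base_image disk

  # optionally allocate the overlay to its full virtual size on the host
  fallocate -n -l $virt_size disk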

thanks,
Pádraig.

