
openstack team mailing list archive

Re: Poor guest disk IO after disk usage.

 

On 03/04/2013 07:19 PM, Samuel Winchenbach wrote:
> Hi All,
> 
> I have a cluster of three nodes set up.  Live migration is accomplished via an NFS mount over 10GigE shared by all three nodes.
> 
> I have noticed that when a VM is brand new I get close to 60 MB/s disk write when using "dd if=/dev/zero of=file bs=1M count=1k conv=fdatasync". After doing this tens of times, the performance of the disk seems to drop to around 4-12 MB/s.
> 
> I have also noticed that doing a live-migration causes a similar effect immediately.
> 
> Here is the virsh xml output of one of my VMs:  https://gist.github.com/swinchen/397fbe3bb74305064944
> 
> I have read several "tuning" guides and most of the suggestions seem to be configured already (cache='none', virtio for network, disk and memballoon).
> 
> Do you think qcow2 is causing my issue and if so is there a way to boot an instance and override the disk format?

qcow2 may well be the issue.

You could use raw disk images by setting the
use_cow_images=False nova config option.
The tradeoff there is slower instance start and
increased (but predictable, fully allocated) disk usage.
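
For reference, a minimal sketch of that setting in
nova.conf (restart nova-compute for it to take effect):
  [DEFAULT]
  use_cow_images=False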

Alternatively you could try to fallocate the disk
image before running the perf test, for example:
  # extract the virtual size in bytes from qemu-img info
  virt_size=$(qemu-img info /var/lib/nova/instances/instance-0000002f/disk |
              sed -n 's/.*(\([0-9]*\) bytes).*/\1/p')
  # preallocate that space without changing the apparent file size
  fallocate -n -l $virt_size /var/lib/nova/instances/instance-0000002f/disk
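
To check that the allocation took effect, you could compare
the apparent file size against the blocks actually allocated:
  # ls reports the apparent size, du the allocated blocks
  ls -l /var/lib/nova/instances/instance-0000002f/disk
  du -h /var/lib/nova/instances/instance-0000002f/disk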

There is also the possibility of adjusting nova
to use preallocation=metadata on the qcow images
(forfeiting CoW in the process), as discussed in "future work" at:
https://blueprints.launchpad.net/nova/+spec/preallocated-images
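
As an illustration of that option (not the nova change
itself; the file name and size here are just examples):
  # create a qcow2 with all metadata preallocated, so guest
  # writes skip the cluster-allocation bookkeeping step
  qemu-img create -f qcow2 -o preallocation=metadata example.qcow2 10G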

thanks,
Pádraig.

