yahoo-eng-team team mailing list archive
Message #75670
[Bug 1801702] Re: Spawn may fail when cache=none on block device with logical block size > 512
Maybe the check should try 8192, 4096 and then 512, and failing all three,
consider it not supported.
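That suggestion might look something like the sketch below. This is a hypothetical illustration of the fallback loop, not nova's current supports_direct_io; the function name, the probe file name and the EINVAL handling are assumptions (Linux-only, since os.O_DIRECT does not exist elsewhere):

```python
import errno
import mmap
import os


def supports_direct_io(dirpath):
    """Probe whether the filesystem under dirpath supports O_DIRECT,
    trying progressively smaller alignment sizes before giving up.

    Hypothetical sketch of the comment above, Linux-only.
    """
    testfile = os.path.join(dirpath, ".directio.test")
    for align in (8192, 4096, 512):
        fd = None
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            # mmap(-1, n) returns a page-aligned anonymous buffer, which
            # satisfies O_DIRECT alignment for logical block sizes up to
            # the page size (typically 4096 bytes).
            buf = mmap.mmap(-1, align)
            os.write(fd, buf)
            return True
        except OSError as e:
            # EINVAL means this alignment (or O_DIRECT itself) is not
            # supported here; try the next smaller size.
            if e.errno != errno.EINVAL:
                raise
        finally:
            if fd is not None:
                os.close(fd)
            try:
                os.unlink(testfile)
            except OSError:
                pass
    return False
```

On a tmpfs mount, for example, the O_DIRECT open itself fails with EINVAL at every alignment, so the probe returns False and the caller can fall back to a buffered cache mode.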
** Changed in: nova
Status: New => Triaged
** Changed in: nova
Importance: Undecided => High
** Also affects: nova/ocata
Importance: Undecided
Status: New
** Also affects: nova/rocky
Importance: Undecided
Status: New
** Also affects: nova/pike
Importance: Undecided
Status: New
** Also affects: nova/queens
Importance: Undecided
Status: New
** Changed in: nova/pike
Status: New => Triaged
** Changed in: nova/ocata
Status: New => Triaged
** Changed in: nova/rocky
Status: New => Triaged
** Changed in: nova/queens
Status: New => Triaged
** Changed in: nova/ocata
Importance: Undecided => Medium
** Changed in: nova
Importance: High => Medium
** Changed in: nova/queens
Importance: Undecided => Medium
** Changed in: nova/pike
Importance: Undecided => Medium
** Changed in: nova/rocky
Importance: Undecided => Medium
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1801702
Title:
Spawn may fail when cache=none on block device with logical block size
> 512
Status in OpenStack Compute (nova):
In Progress
Status in OpenStack Compute (nova) ocata series:
Triaged
Status in OpenStack Compute (nova) pike series:
Triaged
Status in OpenStack Compute (nova) queens series:
Triaged
Status in OpenStack Compute (nova) rocky series:
Triaged
Bug description:
Description
===========
When we spawn instances with caching disabled (cache='none') on a file system,
there is a check in the nova code that tests whether the file system supports direct I/O:
https://github.com/openstack/nova/blob/master/nova/privsep/utils.py#L34
Because this test uses a 512-byte alignment size, it seems to fail on newer block devices
that have a logical block size > 512 bytes, such as NVMe:
parted /dev/nvme0n1 print | grep "Sector size"
Sector size (logical/physical): 4096B/4096B
The reason is that the alignment size for direct I/O must be a multiple
of the logical block size of the underlying device (not of the file system
block size), as explained here:
http://man7.org/linux/man-pages/man2/open.2.html
O_DIRECT
...
Under Linux 2.4, transfer sizes, and the alignment of the user buffer
and the file offset must all be multiples of the logical block size
of the filesystem. Since Linux 2.6.0, alignment to the logical block
size of the underlying storage (typically 512 bytes) suffices.
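On Linux, the logical block size the man page refers to can be read from sysfs. A minimal sketch (the helper name is invented, and "nvme0n1" is just an example device like the one shown above):

```python
def logical_block_size(device):
    """Return the logical block size in bytes of a Linux block device,
    as exposed in sysfs (e.g. device="nvme0n1").

    Hypothetical helper for illustration, Linux-only.
    """
    path = "/sys/block/%s/queue/logical_block_size" % device
    with open(path) as f:
        return int(f.read().strip())
```

For the NVMe device in the parted output above, this would return 4096, so any O_DIRECT transfer aligned to only 512 bytes is rejected with EINVAL.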
Because this test fails, nova falls back to cache="writethrough", which has the following consequences:
1) qemu runs without direct I/O even though the device/fs supports it, just with a larger block size
2) qemu fails to start, because cache=writethrough may conflict with other device parameters like "io=native", with the following message:
2018-08-22 20:50:41.226 80512 ERROR oslo_messaging.rpc.server File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1065, in createWithFlags
2018-08-22 20:50:41.226 80512 ERROR oslo_messaging.rpc.server if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2018-08-22 20:50:41.226 80512 ERROR oslo_messaging.rpc.server libvirtError: unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads
Steps to reproduce
==================
To reproduce the spawn issue:
have instances on a file system backed by a block device with a logical block size > 512 bytes (typically NVMe with a 4096 or 8192 byte sector size)
nova.conf with:
images_type=raw
preallocate_images=space
Solution
========
Can we consider increasing align_size from 512 bytes to 8192 bytes, as that would work in most cases?
Is there any other reason to keep 512 bytes?
Setting it to 4096 or 8192 fixes the issue in my environment.
Environment
===========
I hit the issue on Newton, but the same 512-byte check exists on master.
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1801702/+subscriptions