yahoo-eng-team team mailing list archive

[Bug 1823100] [NEW] Ephemeral disk not mounted when new instance requires reformat of the volume


Public bug reported:

Each Azure VM is provided with an ephemeral disk, and the cloud-init
configuration supplied in the VM image requests that the volume be
mounted under /mnt. Each new ephemeral disk is formatted as NTFS rather
than ext4 or another Linux filesystem. The Azure datasource detects this
(in .activate()) and makes sure the disk_setup and mounts modules run.
The disk_setup module formats the volume; the mounts module sees that
the ephemeral volume is configured to be mounted and adds the
appropriate entry to /etc/fstab. After updating fstab, the mounts module
invokes "mount -a" to mount the volumes listed in fstab. That's how it
all works during the initial provisioning of a new VM.
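
For illustration, the first-boot behaviour described above boils down to
logic along these lines (a simplified Python sketch, not the actual
cc_mounts implementation; the function name is invented for this
example):

    import subprocess

    FSTAB = "/etc/fstab"

    def ensure_mounts(desired_entries):
        """Append any missing fstab entries; run "mount -a" only if fstab changed."""
        with open(FSTAB) as f:
            lines = f.read().splitlines()

        changed = False
        for entry in desired_entries:
            if entry not in lines:
                lines.append(entry)
                changed = True

        if changed:
            with open(FSTAB, "w") as f:
                f.write("\n".join(lines) + "\n")
            # Only this branch ever mounts anything.
            subprocess.check_call(["mount", "-a"])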

When a VM gets rehosted for any reason (service heal, stop/deallocate
and restart), the ephemeral drive provided to the previous instance is
lost. A new ephemeral volume is supplied, also formatted as NTFS. When
the VM boots, systemd's mnt.mount unit runs and complains about the
unmountable NTFS volume that is still referenced in /etc/fstab. The
disk_setup module properly reformats the volume. However, the mounts
module sees that the volume is *already* in fstab, concludes that it
changed nothing, and so never runs "mount -a". The net result: the
volume doesn't get mounted.
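
In terms of the sketch above, the two boots differ only in whether the
fstab entry already exists (again purely illustrative; the device path
is just an example of an Azure resource-disk alias):

    ENTRY = "/dev/disk/cloud/azure_resource-part1 /mnt auto defaults,nofail 0 2"

    # Initial provisioning: the entry is missing, fstab gets updated,
    # and "mount -a" runs, so the freshly formatted volume is mounted.
    ensure_mounts([ENTRY])

    # Boot after rehosting: disk_setup has just reformatted the new
    # ephemeral disk, but the entry is already present in fstab, so
    # "changed" stays False, "mount -a" is never invoked, and the volume
    # stays unmounted.
    ensure_mounts([ENTRY])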

** Affects: cloud-init
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1823100

