yahoo-eng-team team mailing list archive
Message #61957
[Bug 1669503] [NEW] [Libvirt] End of file while reading data: Input/output error in Ocata (VM does not boot)
Public bug reported:
I am setting up OpenStack Ocata as an HA deployment on Ubuntu 16.04 and
have hit a serious blocker. I have set up Keystone, Glance, Nova,
Neutron, and Horizon. I can create and spawn a VM, but it does not
boot.
The block device is created and available in my Ceph cluster. Libvirt gives me only:
Mar 2 17:09:11 os-compute01 virtlogd[3218]: End of file while reading data: Input/output error
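To get more context than this single line, one thing that may help is dumping the surrounding virtlogd and libvirt journal entries. A diagnostic sketch (unit names are an assumption; on Ubuntu 16.04 the libvirt service is typically libvirt-bin):

import subprocess

# Dump recent virtlogd and libvirt journal entries; the single
# "End of file" line above gives little to go on.
for unit in ("virtlogd", "libvirt-bin"):  # the libvirt unit may be "libvirtd" on other distros
    result = subprocess.run(
        ["journalctl", "-u", unit, "--since", "30 minutes ago", "--no-pager"],
        capture_output=True, text=True,
    )
    print("--- %s ---" % unit)
    print(result.stdout)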
The instances directory under "/var/lib/nova/instances" contains a
folder for the VM, but that folder holds only an empty "console.log"
and NO "libvirt.xml". Another strange thing is that "console.log" is
owned by root:root, whereas under Newton the file is owned by
libvirt-qemu:kvm.
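To confirm this is not a one-off, here is a small sketch that lists owner, group, and size for everything under the instances path and flags a missing libvirt.xml (path per the instances_path setting below):

import grp
import os
import pwd

base = "/var/lib/nova/instances"  # instances_path from nova.conf below
for uuid in sorted(os.listdir(base)):
    inst_dir = os.path.join(base, uuid)
    if not os.path.isdir(inst_dir):
        continue
    entries = os.listdir(inst_dir)
    for name in entries:
        st = os.stat(os.path.join(inst_dir, name))
        owner = pwd.getpwuid(st.st_uid).pw_name
        group = grp.getgrgid(st.st_gid).gr_name
        print("%s/%s: %s:%s, %d bytes" % (uuid, name, owner, group, st.st_size))
    if "libvirt.xml" not in entries:
        print("%s: libvirt.xml is missing" % uuid)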
I have tried to correct the ownership manually and copied the VM
definition from "/etc/libvirt/qemu/instance-*.xml" to
"/var/lib/nova/instances/[VM-UUID]/libvirt.xml", roughly as sketched
below. After rebooting the VM, console.log is owned by root:root again
and the VM still does not boot.
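For reproducibility, the manual workaround I attempted amounts to roughly this (a sketch; "[VM-UUID]" is a placeholder for the real instance UUID, and it assumes a single instance on this host):

import glob
import grp
import os
import pwd
import shutil

inst_dir = "/var/lib/nova/instances/[VM-UUID]"  # substitute the real UUID

# Restore the ownership Newton used (libvirt-qemu:kvm) on console.log.
uid = pwd.getpwnam("libvirt-qemu").pw_uid
gid = grp.getgrnam("kvm").gr_gid
os.chown(os.path.join(inst_dir, "console.log"), uid, gid)

# Copy the domain XML libvirt wrote into the place Nova keeps it.
# With a single instance there is exactly one match.
for xml in glob.glob("/etc/libvirt/qemu/instance-*.xml"):
    shutil.copy(xml, os.path.join(inst_dir, "libvirt.xml"))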
There seems to be a problem in the current Ocata release of Nova.
Any hints on how to work around or fix this?
This is my nova.conf on os-compute01:
[DEFAULT]
compute_monitors = cpu.virt_driver
debug = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
host = os-compute01
instance_usage_audit = true
instance_usage_audit_period = hour
notify_on_state_change = vm_and_task_state
instances_path = $state_path/instances
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
log_dir = /var/log/nova
memcached_servers = os-memcache:11211
my_ip = 10.30.200.111
notification_driver = messagingv2
resume_guests_state_on_host_boot = true
state_path = /var/lib/nova
transport_url = rabbit://nova:SECRET@os-rabbit01:5672,nova:SECRET@os-rabbit02:5672/openstack
use_neutron = true
[api]
auth_strategy = keystone
[cache]
enabled = true
backend = oslo_cache.memcache_pool
memcache_servers = os-memcache:11211
[cinder]
catalog_info = volumev2:cinderv2:internalURL
[conductor]
use_local = false
[glance]
api_servers = http://os-image:9292
[keystone_authtoken]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
memcached_servers = os-memcache:11211
password = SECRET
project_domain_name = default
project_name = service
service_token_roles_required = true
user_domain_name = default
username = nova
[neutron]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
password = SECRET
project_domain_name = default
project_name = service
region_name = RegionOne
url = http://os-network:9696
user_domain_name = default
username = neutron
[oslo_concurrency]
lock_path = /var/lock/nova
[oslo_messaging_rabbit]
amqp_durable_queues = true
rabbit_ha_queues = true
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
[placement]
auth_type = password
auth_uri = http://os-identity:5000
auth_url = http://os-identity:35357
username = placement
password = SECRET
user_domain_name = default
project_name = service
project_domain_name = default
os_interface = internal
os_region_name = RegionOne
[vnc]
enabled = true
novncproxy_base_url = https://os-cloud.materna.com:6080/vnc_auto.html
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
And this is my nova-compute.conf:
[DEFAULT]
compute_driver = libvirt.LibvirtDriver
[libvirt]
cpu_mode = custom
cpu_model = SandyBridge
disk_cachemodes = "network=writeback"
hw_disk_discard = unmap
images_rbd_ceph_conf = /etc/ceph/ceph.conf
images_rbd_pool = vms
images_type = rbd
inject_key = false
inject_partition = -2
inject_password = false
rbd_secret_uuid = SECRET
rbd_user = cinder
virt_type = kvm
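Given the rbd settings above, one sanity check is whether the client.cinder key can actually reach the vms pool from the compute node. A sketch using the python-rados and python-rbd bindings (assumes the cinder keyring is installed locally; pool and user names per the config above):

import rados
import rbd

# Connect as the same Ceph user the libvirt driver uses (rbd_user = cinder).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.cinder")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")  # images_rbd_pool = vms
    try:
        print("images in pool 'vms':", rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

In my case this lists the VM's block device, which is why I say the image itself is created and reachable.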
** Affects: nova
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1669503
Title:
[Libvirt] End of file while reading data: Input/output error in Ocata
(VM does not boot)
Status in OpenStack Compute (nova):
New
To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1669503/+subscriptions