Re: Instance termination is not stable
Hi George,
Thanks for the reply. Actually, I'm not too sure which type of image I'm
using. I'm trying to create LXC instances here, so I have this entry in the
/etc/nova/nova.conf file: --libvirt_type=lxc.
And I noted that, once instances are spawned, some of the files get stored
in "/var/lib/nova/instances/_base". After your reply I realized that qcow
images were being used, so to avoid that I added the following entry to the
nova.conf file: "--use_cow_images=false". But I still see files going into
the "/var/lib/nova/instances/_base" location once instances are spawned, and
I still have the issue terminating instances (a quick check of the actual
disk format is sketched after the config below).
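Regarding your suggestion to check who owns that file, something like this
rough sketch (run as the nova user; the path is just the one from the
traceback) should list the files under the leftover instance directory that
cannot be removed:

import os
import pwd

instance_dir = '/var/lib/nova/instances/instance-00000037'

for root, dirs, files in os.walk(instance_dir):
    for name in files:
        path = os.path.join(root, name)
        # A file can only be unlinked if its parent directory is writable
        # and searchable by the caller.
        if not os.access(root, os.W_OK | os.X_OK):
            st = os.lstat(path)
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                # the uid may only exist inside the container rootfs
                owner = str(st.st_uid)
            print('%s (owner: %s): parent dir not writable' % (path, owner))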
What kind of configuration should I have in order to avoid this issue?
For your convenience, I have posted my nova.conf file below.
--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--connection_type=libvirt
--libvirt_type=lxc
--libvirt_use_virtio_for_bridges
--sql_connection=mysql://nova:openstack@172.16.0.1/nova
--s3_host=172.16.0.1
--s3_dmz=172.16.0.1
--rabbit_host=172.16.0.1
--ec2_host=172.16.0.1
--ec2_dmz_host=172.16.0.1
--ec2_url=http://172.16.0.1:8773/services/Cloud
--fixed_range=10.1.0.0/16
--network_size=512
--num_networks=1
--FAKE_subdomain=ec2
--public_interface=eth1
--auto_assign_floating_ip
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.16.0.1:9292
--vlan_start=100
--vlan_interface=eth2
--root_helper=sudo nova-rootwrap
--zone_name=nova
--node_availability_zone=nova
--storage_availability_zone=nova
--allow_admin_api
--enable_zone_routing
--api_paste_config=/etc/nova/api-paste.ini
--vncserver_host=0.0.0.0
--vncproxy_url=http://172.16.0.1:6080
--ajax_console_proxy_url=http://172.16.0.1:8000
--osapi_host=172.16.0.1
--rabbit_host=172.16.0.1
--auth_strategy=keystone
--keystone_ec2_url=http://172.16.0.1:5000/v2.0/ec2tokens
--multi_host
--send_arp_for_ha
--novnc_enabled=true
--novncproxy_base_url=http://172.16.0.1:6080/vnc_auto.html
--vncserver_proxyclient_address=172.16.0.1
--vncserver_listen=172.16.0.1
--use_cow_images=false
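On the cow-images point, to verify what nova actually wrote I'm thinking of
checking the first four bytes of an instance disk file, since qcow2 images
start with the "QFI\xfb" magic (rough sketch only; the path is just an
example and may not exist on a given setup):

with open('/var/lib/nova/instances/instance-00000037/disk', 'rb') as f:
    magic = f.read(4)
print('qcow2' if magic == b'QFI\xfb' else 'not qcow2 (probably raw)')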
Thanks
Sajith
On Wed, Jun 20, 2012 at 6:48 PM, George Mihaiescu
<George.Mihaiescu@xxxxxx> wrote:
> Hi Sajith,
>
> I noticed this error in the logs you sent:
>
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp os.remove(fullname)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp OSError: [Errno 13] Permission
> denied:
> '/var/lib/nova/instances/instance-00000037/rootfs/boot/memtest86+.bin'
>
> Check who owns that file and whether it's somehow shared between instances,
> because there might be an issue deleting it if it's in use by another
> instance.
>
> I'm not sure what type of image you use, but qcow2 baselines stay in
> “/var/lib/nova/instances/_base” and they are shared among similar instances.
>
> This is just a starting point, so you might have to dig some more through
> the logs.
>
> George
>
> ------------------------------
>
> From: openstack-bounces+george.mihaiescu=q9.com@xxxxxxxxxxxxxxxxxxx [mailto:
> openstack-bounces+george.mihaiescu=q9.com@xxxxxxxxxxxxxxxxxxx] On Behalf
> Of Sajith Kariyawasam
> Sent: Tuesday, June 19, 2012 1:22 PM
> To: openstack@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Openstack] Instance termination is not stable
>
> Any clue on this, guys?
>
> On Mon, Jun 18, 2012 at 7:08 PM, Sajith Kariyawasam <sajhak@xxxxxxxxx>
> wrote:
>
> Hi all,
>
> I have the OpenStack Essex version installed, and I have created several
> instances based on an Ubuntu 12.04 UEC image in OpenStack; those are up
> and running.
>
> When I try to terminate an instance I get an exception (log is
> mentioned below), and in the console its status is shown as "Shutoff" with
> the task "Deleting". Even though I tried terminating the instance again and
> again, nothing happens. But after I restart the (nova) machine, those
> instances can be terminated.
>
> This issue does not occur every time, but occasionally; as I noted, it
> occurs when there are more than 2 instances up and running at the same
> time. If I create one instance, terminate it, then create another one and
> terminate that one, and so on, there won't be an issue with
> terminating.
>
> What could be the problem here? Any suggestions are highly appreciated.
>
> Thanks
>
>
> ERROR LOG (/var/log/nova/nova-compute.log)
> ==========
>
> 2012-06-18 18:43:55 DEBUG nova.manager [-] Skipping
> ComputeManager._run_image_cache_manager_pass, 17 ticks left until next run
> from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:43:55 DEBUG nova.compute.manager [-]
> FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151)
> _reclaim_queued_deletes
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
> 2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._report_driver_status from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:43:55 INFO nova.compute.manager [-] Updating host status
> 2012-06-18 18:43:55 DEBUG nova.virt.libvirt.connection [-] Updating host
> stats from (pid=24151) update_status
> /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467
> 2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:17 DEBUG nova.rpc.amqp [-] received {u'_context_roles':
> [u'swiftoperator', u'Member', u'admin'], u'_context_request_id':
> u'req-01ca70c8-2240-407b-92d1-5a59ee497291', u'_context_read_deleted':
> u'no', u'args': {u'instance_uuid':
> u'9999d250-1d8b-4973-8320-e6058a2058b9'}, u'_context_auth_token':
> '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id':
> u'194d6e24ec1843fb8fbd94c3fb519deb', u'_context_timestamp':
> u'2012-06-18T13:14:17.013212', u'_context_user_id':
> u'f8a75778c36241479693ff61a754f67b', u'method': u'terminate_instance',
> u'_context_remote_address': u'172.16.0.254'} from (pid=24151) _safe_log
> /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
> 2012-06-18 18:44:17 DEBUG nova.rpc.amqp
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] unpacked context: {'user_id':
> u'f8a75778c36241479693ff61a754f67b', 'roles': [u'swiftoperator', u'Member',
> u'admin'], 'timestamp': '2012-06-18T13:14:17.013212', 'auth_token':
> '<SANITIZED>', 'remote_address': u'172.16.0.254', 'is_admin': True,
> 'request_id': u'req-01ca70c8-2240-407b-92d1-5a59ee497291', 'project_id':
> u'194d6e24ec1843fb8fbd94c3fb519deb', 'read_deleted': u'no'} from
> (pid=24151) _safe_log
> /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
> 2012-06-18 18:44:17 INFO nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: decorating:
> |<function terminate_instance at 0x2bd3050>|
> 2012-06-18 18:44:17 INFO nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: arguments:
> |<nova.compute.manager.ComputeManager object at 0x20ffb90>|
> |<nova.rpc.amqp.RpcContext object at 0x4d2a450>|
> |9999d250-1d8b-4973-8320-e6058a2058b9|
> 2012-06-18 18:44:17 DEBUG nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] instance
> 9999d250-1d8b-4973-8320-e6058a2058b9: getting locked state from (pid=24151)
> get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1597
> 2012-06-18 18:44:17 INFO nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: locked: |False|
> 2012-06-18 18:44:17 INFO nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: admin: |True|
> 2012-06-18 18:44:17 INFO nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: executing:
> |<function terminate_instance at 0x2bd3050>|
> 2012-06-18 18:44:17 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Attempting to grab semaphore
> "9999d250-1d8b-4973-8320-e6058a2058b9" for method
> "do_terminate_instance"... from (pid=24151) inner
> /usr/lib/python2.7/dist-packages/nova/utils.py:927
> 2012-06-18 18:44:17 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Got semaphore
> "9999d250-1d8b-4973-8320-e6058a2058b9" for method
> "do_terminate_instance"... from (pid=24151) inner
> /usr/lib/python2.7/dist-packages/nova/utils.py:931
> 2012-06-18 18:44:17 AUDIT nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] [instance:
> 9999d250-1d8b-4973-8320-e6058a2058b9] Terminating instance
> 2012-06-18 18:44:17 DEBUG nova.rpc.amqp
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Making asynchronous call on network ...
> from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
> 2012-06-18 18:44:17 DEBUG nova.rpc.amqp
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] MSG_ID is
> 2fba7314616d480fa39d5d4d1d942c46 from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
> 2012-06-18 18:44:17 DEBUG nova.compute.manager
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] [instance:
> 9999d250-1d8b-4973-8320-e6058a2058b9] Deallocating network for instance
> from (pid=24151) _deallocate_network
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
> 2012-06-18 18:44:17 DEBUG nova.rpc.amqp
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Making asynchronous cast on network...
> from (pid=24151) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
> 2012-06-18 18:44:20 WARNING nova.virt.libvirt.connection
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] [instance:
> 9999d250-1d8b-4973-8320-e6058a2058b9] Error from libvirt during saved
> instance removal. Code=3 Error=this function is not supported by the
> connection driver: virDomainHasManagedSaveImage
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Attempting to grab semaphore "iptables"
> for method "apply"... from (pid=24151) inner
> /usr/lib/python2.7/dist-packages/nova/utils.py:927
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Got semaphore "iptables" for method
> "apply"... from (pid=24151) inner
> /usr/lib/python2.7/dist-packages/nova/utils.py:931
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Attempting to grab file lock "iptables"
> for method "apply"... from (pid=24151) inner
> /usr/lib/python2.7/dist-packages/nova/utils.py:935
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Got file lock "iptables" for method
> "apply"... from (pid=24151) inner
> /usr/lib/python2.7/dist-packages/nova/utils.py:942
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo
> nova-rootwrap iptables-save -t filter from (pid=24151) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
> 2012-06-18 18:44:20 INFO nova.virt.libvirt.connection [-] [instance:
> 9999d250-1d8b-4973-8320-e6058a2058b9] Instance destroyed successfully.
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo
> nova-rootwrap iptables-restore from (pid=24151) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
> 2012-06-18 18:44:20 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo
> nova-rootwrap iptables-save -t nat from (pid=24151) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
> 2012-06-18 18:44:21 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo
> nova-rootwrap iptables-restore from (pid=24151) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
> 2012-06-18 18:44:21 DEBUG nova.network.linux_net
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] IPTablesManager.apply completed with
> success from (pid=24151) apply
> /usr/lib/python2.7/dist-packages/nova/network/linux_net.py:335
> 2012-06-18 18:44:21 INFO nova.virt.libvirt.connection
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] [instance:
> 9999d250-1d8b-4973-8320-e6058a2058b9] Deleting instance files
> /var/lib/nova/instances/instance-00000037
> 2012-06-18 18:44:21 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo
> nova-rootwrap umount /dev/nbd11 from (pid=24151) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
> 2012-06-18 18:44:21 DEBUG nova.utils
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo
> nova-rootwrap qemu-nbd -d /dev/nbd11 from (pid=24151) execute
> /usr/lib/python2.7/dist-packages/nova/utils.py:219
> 2012-06-18 18:44:21 ERROR nova.rpc.amqp
> [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b
> 194d6e24ec1843fb8fbd94c3fb519deb] Exception during message handling
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp Traceback (most recent call last):
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in
> _process_data
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp rval = node_func(context=ctxt,
> **node_args)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp return f(*args, **kw)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 153, in
> decorated_function
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp function(self, context,
> instance_uuid, *args, **kwargs)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in
> decorated_function
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp sys.exc_info())
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp self.gen.next()
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in
> decorated_function
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp return function(self, context,
> instance_uuid, *args, **kwargs)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 747, in
> terminate_instance
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp do_terminate_instance()
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp retval = f(*args, **kwargs)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 740, in
> do_terminate_instance
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp self._delete_instance(context,
> instance)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 718, in
> _delete_instance
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp
> self._shutdown_instance(context, instance, 'Terminating')
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 687, in
> _shutdown_instance
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp block_device_info)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
> 484, in destroy
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp cleanup=True)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
> 478, in _destroy
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp self._cleanup(instance)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line
> 493, in _cleanup
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp shutil.rmtree(target)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/shutil.py", line 245, in rmtree
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp rmtree(fullname,
> ignore_errors, onerror)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/shutil.py", line 245, in rmtree
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp rmtree(fullname,
> ignore_errors, onerror)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/shutil.py", line 250, in rmtree
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp onerror(os.remove, fullname,
> sys.exc_info())
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp File
> "/usr/lib/python2.7/shutil.py", line 248, in rmtree
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp os.remove(fullname)
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp OSError: [Errno 13] Permission
> denied:
> '/var/lib/nova/instances/instance-00000037/rootfs/boot/memtest86+.bin'
> 2012-06-18 18:44:21 TRACE nova.rpc.amqp
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._publish_service_capabilities from (pid=24151)
> periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Notifying Schedulers of
> capabilities ... from (pid=24151) _publish_service_capabilities
> /usr/lib/python2.7/dist-packages/nova/manager.py:203
> 2012-06-18 18:44:55 DEBUG nova.rpc.amqp [-] Making asynchronous fanout
> cast... from (pid=24151) fanout_cast
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:354
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_rescued_instances from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._sync_power_states from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 WARNING nova.compute.manager [-] Found 5 in the
> database and 4 on the hypervisor.
> 2012-06-18 18:44:55 WARNING nova.compute.manager [-] [instance:
> 9999d250-1d8b-4973-8320-e6058a2058b9] Instance found in database but not
> known by hypervisor. Setting power state to NOSTATE
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_bandwidth_usage from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager.update_available_resource from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 INFO nova.virt.libvirt.connection [-] Compute_service
> record updated for sajithvb2
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_rebooting_instances from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Skipping
> ComputeManager._cleanup_running_deleted_instances, 27 ticks left until next
> run from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._heal_instance_info_cache from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.rpc.amqp [-] Making asynchronous call on
> network ... from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
> 2012-06-18 18:44:55 DEBUG nova.rpc.amqp [-] MSG_ID is
> e220048302744b3180c048d0e410aa29 from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
> 2012-06-18 18:44:55 DEBUG nova.compute.manager [-] Updated the info_cache
> for instance fa181c09-f78e-441f-afda-8c8eb84f24bd from (pid=24151)
> _heal_instance_info_cache
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Skipping
> ComputeManager._run_image_cache_manager_pass, 16 ticks left until next run
> from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.compute.manager [-]
> FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151)
> _reclaim_queued_deletes
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._report_driver_status from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._publish_service_capabilities from (pid=24151)
> periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Notifying Schedulers of
> capabilities ... from (pid=24151) _publish_service_capabilities
> /usr/lib/python2.7/dist-packages/nova/manager.py:203
> 2012-06-18 18:45:55 DEBUG nova.rpc.amqp [-] Making asynchronous fanout
> cast... from (pid=24151) fanout_cast
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:354
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_rescued_instances from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Skipping
> ComputeManager._sync_power_states, 10 ticks left until next run from
> (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_bandwidth_usage from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager.update_available_resource from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 INFO nova.virt.libvirt.connection [-] Compute_service
> record updated for sajithvb2
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_rebooting_instances from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Skipping
> ComputeManager._cleanup_running_deleted_instances, 26 ticks left until next
> run from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._heal_instance_info_cache from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.rpc.amqp [-] Making asynchronous call on
> network ... from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
> 2012-06-18 18:45:55 DEBUG nova.rpc.amqp [-] MSG_ID is
> 4dfad20f5f9f498997cf6ee7141e563d from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
> 2012-06-18 18:45:55 DEBUG nova.compute.manager [-] Updated the info_cache
> for instance 8ff80284-ff59-4eb7-b5e7-5e3a36fc4144 from (pid=24151)
> _heal_instance_info_cache
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Skipping
> ComputeManager._run_image_cache_manager_pass, 15 ticks left until next run
> from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 DEBUG nova.compute.manager [-]
> FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151)
> _reclaim_queued_deletes
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._report_driver_status from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:45:55 INFO nova.compute.manager [-] Updating host status
> 2012-06-18 18:45:55 DEBUG nova.virt.libvirt.connection [-] Updating host
> stats from (pid=24151) update_status
> /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467
> 2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._publish_service_capabilities from (pid=24151)
> periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Notifying Schedulers of
> capabilities ... from (pid=24151) _publish_service_capabilities
> /usr/lib/python2.7/dist-packages/nova/manager.py:203
> 2012-06-18 18:46:55 DEBUG nova.rpc.amqp [-] Making asynchronous fanout
> cast... from (pid=24151) fanout_cast
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:354
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_rescued_instances from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Skipping
> ComputeManager._sync_power_states, 9 ticks left until next run from
> (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_bandwidth_usage from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager.update_available_resource from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 INFO nova.virt.libvirt.connection [-] Compute_service
> record updated for sajithvb2
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_rebooting_instances from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Skipping
> ComputeManager._cleanup_running_deleted_instances, 25 ticks left until next
> run from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._heal_instance_info_cache from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.rpc.amqp [-] Making asynchronous call on
> network ... from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
> 2012-06-18 18:46:55 DEBUG nova.rpc.amqp [-] MSG_ID is
> fcebfbce0666469dbefdd4b1a2c4df38 from (pid=24151) multicall
> /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
> 2012-06-18 18:46:55 DEBUG nova.compute.manager [-] Updated the info_cache
> for instance dda6d890-72bf-4538-816d-5e19702902a4 from (pid=24151)
> _heal_instance_info_cache
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Skipping
> ComputeManager._run_image_cache_manager_pass, 14 ticks left until next run
> from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:147
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.compute.manager [-]
> FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151)
> _reclaim_queued_deletes
> /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._report_driver_status from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
> 2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task
> ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks
> /usr/lib/python2.7/dist-packages/nova/manager.py:152
>
>
> --
> Best Regards
>
> Sajith
>
>
>
>
> --
> Best Regards
>
> Sajith
>
--
Best Regards
Sajith