openstack team mailing list archive - Message #05869
Re: nova compute node error [diablo]
Hi, my nova.conf is as follows:
#general
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--verbose=True
--use_syslog=False
#nova-objectstore
--use_s3=True
--s3_host=192.168.1.2
--s3_port=3333
#rabbit
--rabbit_host=192.168.1.2
--rabbit_port=5672
--rabbit_password=16bcb168c55b9f4fa881
#ec2
--ec2_host=192.168.1.2
--ec2_port=8773
--ec2_url=http://192.168.1.2:8773/services/Cloud
#osapi
--osapi_host=192.168.1.2
--osapi_port=8774
#db
--sql_connection=mysql://nova:nova@192.168.1.2/nova
--sql_idle_timeout=600
--sql_max_retries=3
--sql_retry_interval=3
#glance
--glance_host=192.168.1.2
--glance_api_servers=192.168.1.2:9292
--image_service=nova.image.glance.GlanceImageService
#libvirt
--connection_type=libvirt
--libvirt_type=kvm
--snapshot_image_format=qcow2
--use_cow_image=True
--libvirt_use_virtio_for_bridges=True
#auth
--use_deprecated_auth=True
#--start_guests_on_host_boot=True
#--resume_guests_state_on_host_boot=True
#nova-network
--dhcpbridge_flagfile=/etc/nova/nova.conf
--public_interface=eth0
--dhcpbridge=/usr/bin/nova-dhcpbridge
--routing_source_ip=x.x.x.x
--network_manager=nova.network.manager.FlatDHCPManager
--linuxnet_interface_driver=nova.network.linux_net.LinuxBridgeInterfaceDriver
--flat_injected=False
--flat_interface=eth1
--multi_host=True
--floating_range=x.x.x.x
--fixed_range=10.0.0.0/23
--fixed_ip_disassociate_timeout=600
--force_dhcp_release=True
--use_ipv6=False
--allow_resize_to_same_host=True
--dhcp_lease_time=62208000
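For reference, the "QueuePool limit of size 10 overflow 10 reached" warning in the compute log below comes from SQLAlchemy's bounded connection pool: up to 10 pooled connections plus 10 overflow, with a 30-second checkout timeout. Here is a rough stdlib-only sketch of that behavior (MiniQueuePool is a toy model written for illustration, not SQLAlchemy's actual class):

```python
import queue


class MiniQueuePool:
    """Toy model of a bounded connection pool: up to pool_size reusable
    connections plus max_overflow extras; a checkout fails once all
    pool_size + max_overflow connections are held and none are closed."""

    def __init__(self, pool_size=10, max_overflow=10, timeout=30):
        self.pool_size = pool_size
        self.max_overflow = max_overflow
        self.timeout = timeout
        self._idle = queue.Queue(maxsize=pool_size)  # connections returned by close()
        self._in_use = 0                             # connections currently handed out

    def connect(self):
        try:
            conn = self._idle.get_nowait()           # reuse an idle connection
        except queue.Empty:
            if self._in_use + self._idle.qsize() >= self.pool_size + self.max_overflow:
                # Mirrors the warning in the compute log below.
                raise TimeoutError(
                    "QueuePool limit of size %d overflow %d reached, "
                    "connection timed out, timeout %d"
                    % (self.pool_size, self.max_overflow, self.timeout))
            conn = object()                          # stand-in for a real DB connection
        self._in_use += 1
        return conn

    def close(self, conn):
        self._in_use -= 1
        try:
            self._idle.put_nowait(conn)              # back into the pool for reuse
        except queue.Full:
            pass                                     # overflow connections are discarded


pool = MiniQueuePool()
held = [pool.connect() for _ in range(20)]  # checked out, never closed
try:
    pool.connect()                          # 21st checkout: pool exhausted
except TimeoutError as exc:
    print(exc)  # QueuePool limit of size 10 overflow 10 reached, connection timed out, timeout 30
for conn in held:
    pool.close(conn)                        # closing frees capacity again
```

The point of the sketch: nothing in the pool itself is broken; the error appears only because 20 connections are held without being closed, which is exactly the failure mode discussed in the reply below.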
2011/12/1 Leandro Reox <leandro.reox@xxxxxxxxx>:
> Hi
>
> That's an SQLAlchemy error. Which database backend are you using? It means
> you are not closing connections, so they are never returned to
> the pool.
>
> Regards
>
> On Thu, Dec 1, 2011 at 7:04 AM, darkfower <atkisc@xxxxxxxxx> wrote:
>>
>> hi ,everybody
>>
>> nova compute node error :
>>
>> 2011-12-01 17:35:52,871 WARNING nova.compute.manager [-] Error during
>> power_state sync: QueuePool limit of size 10 overflow 10 reached,
>> connection timed out, timeout 30
>> 2011-12-01 17:37:27,531 WARNING nova.compute.manager [-] Error during
>> power_state sync: QueuePool limit of size 10 overflow 10 reached,
>> connection timed out, timeout 30
>> 2011-12-01 17:38:27,540 INFO nova.compute.manager [-] Updating host status
>> [snip: the same WARNING/INFO cycle repeats roughly every 90-95 seconds
>> until 18:04:15]
>>
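The fix Leandro points at is to pair every connection checkout with a close, so the connection always goes back to the pool; in Python that is typically done with a context manager. A minimal illustrative sketch of the pattern (the Pool and Connection classes here are hypothetical stand-ins, not nova or SQLAlchemy code):

```python
import contextlib


class Connection:
    """Hypothetical DB connection that hands itself back to its pool on close()."""

    def __init__(self, pool):
        self._pool = pool

    def close(self):
        self._pool.release(self)


class Pool:
    """Hypothetical bounded pool: at most `size` connections out at once."""

    def __init__(self, size):
        self._free = size

    def connect(self):
        if self._free == 0:
            # the situation nova-compute logs as a QueuePool timeout
            raise TimeoutError("pool exhausted")
        self._free -= 1
        return Connection(self)

    def release(self, conn):
        self._free += 1


pool = Pool(size=2)
# contextlib.closing calls close() even if the body raises, so every
# checkout is returned and 100 iterations never exhaust a 2-slot pool.
for _ in range(100):
    with contextlib.closing(pool.connect()):
        pass  # run queries here
```

Without the `closing()` wrapper, the third bare `pool.connect()` on a 2-slot pool would already fail; with it, the pool never runs dry no matter how many iterations run.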