
Re: CRITICAL nova [-] [Errno 98] Address already in use

 

I just realized the problem. Your issue is actually with the metadata API, since you already have something listening on port 8775. If you are running nova-api-metadata separately, then you can remove metadata from your list of enabled APIs:

enabled_apis=ec2,osapi_compute

Alternatively, just kill nova-api-metadata and let the metadata API run as one of the nova-api components.
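
For example, something along these lines (a sketch; the init-script names here are the ones the EPEL/RDO CentOS packages usually ship, so check with "ls /etc/init.d/ | grep nova" before trusting them, and 2157 is the pid your netstat output shows on 8775):

  ps -fp 2157                                 # confirm which nova binary owns 8775
  service openstack-nova-metadata-api stop    # assumed EPEL init-script name
  chkconfig openstack-nova-metadata-api off   # keep it from starting again at boot
  service openstack-nova-api restart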

Just for your reference, nova-api is an easy way to run all of the APIs as one service; in this case it uses the enabled_apis config option. You can also run all of the APIs separately by using the individual binaries (see the sketch after the list):

nova-api-ec2
nova-api-metadata
nova-api-os-compute
nova-api-os-volume
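
Under the hood, nova-api just loops over enabled_apis and starts a WSGIService for each entry; each service binds the <name>_listen / <name>_listen_port pair, so with the defaults that is ec2 on 8773, osapi_compute on 8774, metadata on 8775, and osapi_volume on 8776. Roughly (a simplified sketch of the Folsom-era /usr/bin/nova-api, not the exact source):

  # simplified sketch of /usr/bin/nova-api (Folsom); arg parsing,
  # logging setup and monkey patching omitted
  from nova import flags
  from nova import service

  launcher = service.ProcessLauncher()
  for api in flags.FLAGS.enabled_apis:
      # WSGIService('metadata') binds metadata_listen:metadata_listen_port,
      # i.e. 0.0.0.0:8775 by default, which is the bind failing for you
      server = service.WSGIService(api)
      launcher.launch_server(server, workers=server.workers or 1)
  launcher.wait()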

Vish

On Dec 10, 2012, at 10:42 AM, Andrew Holway <a.holway@xxxxxxxxxxxx> wrote:

> Hi,
> 
> I actually have no idea how to do that, but the service opts look vaguely relevant:
> 
> Does anyone have a working installation on CentOS 6.3?
> 
> Thanks,
> 
> Andrew
> 
> 
> 
> service_opts = [
>    cfg.IntOpt('report_interval',
>               default=10,
>               help='seconds between nodes reporting state to datastore'),
>    cfg.IntOpt('periodic_interval',
>               default=60,
>               help='seconds between running periodic tasks'),
>    cfg.IntOpt('periodic_fuzzy_delay',
>               default=60,
>               help='range of seconds to randomly delay when starting the'
>                    ' periodic task scheduler to reduce stampeding.'
>                    ' (Disable by setting to 0)'),
>    cfg.StrOpt('ec2_listen',
>               default="0.0.0.0",
>               help='IP address for EC2 API to listen'),
>    cfg.IntOpt('ec2_listen_port',
>               default=8773,
>               help='port for ec2 api to listen'),
>    cfg.IntOpt('ec2_workers',
>               default=None,
>               help='Number of workers for EC2 API service'),
>    cfg.StrOpt('osapi_compute_listen',
>               default="0.0.0.0",
>               help='IP address for OpenStack API to listen'),
>    cfg.IntOpt('osapi_compute_listen_port',
>               default=8774,
>               help='listen port for osapi compute'),
>    cfg.IntOpt('osapi_compute_workers',
>               default=None,
>               help='Number of workers for OpenStack API service'),
>    cfg.StrOpt('metadata_manager',
>               default='nova.api.manager.MetadataManager',
>               help='OpenStack metadata service manager'),
>    cfg.StrOpt('metadata_listen',
>               default="0.0.0.0",
>               help='IP address for metadata api to listen'),
>    cfg.IntOpt('metadata_listen_port',
>               default=8775,
>               help='port for metadata api to listen'),
>    cfg.IntOpt('metadata_workers',
>               default=None,
>               help='Number of workers for metadata service'),
>    cfg.StrOpt('osapi_volume_listen',
>               default="0.0.0.0",
>               help='IP address for OpenStack Volume API to listen'),
>    cfg.IntOpt('osapi_volume_listen_port',
>               default=8776,
>               help='port for os volume api to listen'),
>    cfg.IntOpt('osapi_volume_workers',
>               default=None,
>               help='Number of workers for OpenStack Volume API service'),
>    ]
> 
> On Dec 10, 2012, at 7:29 PM, Vishvananda Ishaya wrote:
> 
>> Nope. The best I can think of is to throw some log statements into nova/service.py right before the exception gets thrown, to see which API it is trying to start and what it thinks the value of enabled_apis is.
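>> 
>> Something like this, say (a hypothetical debug patch, not committed code; LOG and FLAGS are already defined at module level in the Folsom nova/service.py, and the spot is WSGIService.__init__ right before the wsgi.Server call your traceback points at):
>> 
>>     # hypothetical debug statement just before the failing bind
>>     LOG.error("WSGIService %(name)s binding %(host)s:%(port)s "
>>               "(enabled_apis=%(apis)s)",
>>               {'name': name, 'host': self.host,
>>                'port': self.port, 'apis': FLAGS.enabled_apis})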
>> 
>> Vish
>> 
>> On Dec 10, 2012, at 10:24 AM, Andrew Holway <a.holway@xxxxxxxxxxxx> wrote:
>> 
>>> Hi,
>>> 
>>> Maybe this will shed some light on it?
>>> 
>>> Thanks,
>>> 
>>> Andrew
>>> 
>>> [root@blade02 init.d]# cat /etc/nova/api-paste.ini 
>>> ############
>>> # Metadata #
>>> ############
>>> [composite:metadata]
>>> use = egg:Paste#urlmap
>>> /: meta
>>> 
>>> [pipeline:meta]
>>> pipeline = ec2faultwrap logrequest metaapp
>>> 
>>> [app:metaapp]
>>> paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory
>>> 
>>> #######
>>> # EC2 #
>>> #######
>>> 
>>> [composite:ec2]
>>> use = egg:Paste#urlmap
>>> /services/Cloud: ec2cloud
>>> 
>>> [composite:ec2cloud]
>>> use = call:nova.api.auth:pipeline_factory
>>> noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
>>> keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor
>>> 
>>> [filter:ec2faultwrap]
>>> paste.filter_factory = nova.api.ec2:FaultWrapper.factory
>>> 
>>> [filter:logrequest]
>>> paste.filter_factory = nova.api.ec2:RequestLogging.factory
>>> 
>>> [filter:ec2lockout]
>>> paste.filter_factory = nova.api.ec2:Lockout.factory
>>> 
>>> [filter:ec2keystoneauth]
>>> paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory
>>> 
>>> [filter:ec2noauth]
>>> paste.filter_factory = nova.api.ec2:NoAuth.factory
>>> 
>>> [filter:cloudrequest]
>>> controller = nova.api.ec2.cloud.CloudController
>>> paste.filter_factory = nova.api.ec2:Requestify.factory
>>> 
>>> [filter:authorizer]
>>> paste.filter_factory = nova.api.ec2:Authorizer.factory
>>> 
>>> [filter:validator]
>>> paste.filter_factory = nova.api.ec2:Validator.factory
>>> 
>>> [app:ec2executor]
>>> paste.app_factory = nova.api.ec2:Executor.factory
>>> 
>>> #############
>>> # Openstack #
>>> #############
>>> 
>>> [composite:osapi_compute]
>>> use = call:nova.api.openstack.urlmap:urlmap_factory
>>> /: oscomputeversions
>>> /v1.1: openstack_compute_api_v2
>>> /v2: openstack_compute_api_v2
>>> 
>>> [composite:osapi_volume]
>>> use = call:nova.api.openstack.urlmap:urlmap_factory
>>> /: osvolumeversions
>>> /v1: openstack_volume_api_v1
>>> 
>>> [composite:openstack_compute_api_v2]
>>> use = call:nova.api.auth:pipeline_factory
>>> noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
>>> keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
>>> keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2
>>> 
>>> [composite:openstack_volume_api_v1]
>>> use = call:nova.api.auth:pipeline_factory
>>> noauth = faultwrap sizelimit noauth ratelimit osapi_volume_app_v1
>>> keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_volume_app_v1
>>> keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_volume_app_v1
>>> 
>>> [filter:faultwrap]
>>> paste.filter_factory = nova.api.openstack:FaultWrapper.factory
>>> 
>>> [filter:noauth]
>>> paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
>>> 
>>> [filter:ratelimit]
>>> paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
>>> 
>>> [filter:sizelimit]
>>> paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory
>>> 
>>> [app:osapi_compute_app_v2]
>>> paste.app_factory = nova.api.openstack.compute:APIRouter.factory
>>> 
>>> [pipeline:oscomputeversions]
>>> pipeline = faultwrap oscomputeversionapp
>>> 
>>> [app:osapi_volume_app_v1]
>>> paste.app_factory = nova.api.openstack.volume:APIRouter.factory
>>> 
>>> [app:oscomputeversionapp]
>>> paste.app_factory = nova.api.openstack.compute.versions:Versions.factory
>>> 
>>> [pipeline:osvolumeversions]
>>> pipeline = faultwrap osvolumeversionapp
>>> 
>>> [app:osvolumeversionapp]
>>> paste.app_factory = nova.api.openstack.volume.versions:Versions.factory
>>> 
>>> ##########
>>> # Shared #
>>> ##########
>>> 
>>> [filter:keystonecontext]
>>> paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory
>>> 
>>> [filter:authtoken]
>>> paste.filter_factory = keystone.middleware.auth_token:filter_factory
>>> admin_tenant_name = service
>>> admin_user = nova
>>> admin_password = x7deix7dei
>>> auth_uri = http://controller:5000/
>>> 
>>> On Dec 10, 2012, at 7:10 PM, Vishvananda Ishaya wrote:
>>> 
>>>> Odd. This looks remarkably like it is trying to start osapi_volume even though you don't have it specified in enabled_apis. Your enabled_apis setting looks correct to me.
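>>>> 
>>>> If you want to rule out a config mixup first (just a guess on my part): check that nothing else defines the option and that only one set of API processes is running, e.g.:
>>>> 
>>>>     grep -rn enabled_apis /etc/nova/
>>>>     ps -ef | grep nova-api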
>>>> 
>>>> Vish
>>>> 
>>>> 
>>>> On Dec 10, 2012, at 9:24 AM, Andrew Holway <a.holway@xxxxxxxxxxxx> wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> I cannot start the nova-api service.
>>>>> 
>>>>> [root@blade02 07-openstack-controller]# nova list
>>>>> ERROR: ConnectionRefused: '[Errno 111] Connection refused'
>>>>> 
>>>>> I followed this guide very carefully:
>>>>> 
>>>>> https://github.com/beloglazov/openstack-centos-kvm-glusterfs/#07-openstack-controller-controller
>>>>> 
>>>>> Here is api.log
>>>>> 
>>>>> 2012-12-10 17:51:31 DEBUG nova.wsgi [-] Loading app metadata from /etc/nova/api-paste.ini from (pid=2536) load_app /usr/lib/python2.6/site-packages/nova/wsgi.py:371
>>>>> 2012-12-10 17:51:31 CRITICAL nova [-] [Errno 98] Address already in use
>>>>> 2012-12-10 17:51:31 TRACE nova Traceback (most recent call last):
>>>>> 2012-12-10 17:51:31 TRACE nova   File "/usr/bin/nova-api", line 50, in <module>
>>>>> 2012-12-10 17:51:31 TRACE nova     server = service.WSGIService(api)
>>>>> 2012-12-10 17:51:31 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/service.py", line 584, in __init__
>>>>> 2012-12-10 17:51:31 TRACE nova     port=self.port)
>>>>> 2012-12-10 17:51:31 TRACE nova   File "/usr/lib/python2.6/site-packages/nova/wsgi.py", line 72, in __init__
>>>>> 2012-12-10 17:51:31 TRACE nova     self._socket = eventlet.listen((host, port), backlog=backlog)
>>>>> 2012-12-10 17:51:31 TRACE nova   File "/usr/lib/python2.6/site-packages/eventlet/convenience.py", line 38, in listen
>>>>> 2012-12-10 17:51:31 TRACE nova     sock.bind(addr)
>>>>> 2012-12-10 17:51:31 TRACE nova   File "<string>", line 1, in bind
>>>>> 2012-12-10 17:51:31 TRACE nova error: [Errno 98] Address already in use
>>>>> 2012-12-10 17:51:31 TRACE nova 
>>>>> 2012-12-10 17:51:31 INFO nova.service [-] Parent process has died unexpectedly, exiting
>>>>> 2012-12-10 17:51:31 INFO nova.service [-] Parent process has died unexpectedly, exiting
>>>>> 2012-12-10 17:51:31 INFO nova.wsgi [-] Stopping WSGI server.
>>>>> 2012-12-10 17:51:31 INFO nova.wsgi [-] Stopping WSGI server.
>>>>> 
>>>>> [root@blade02 07-openstack-controller]# cat /etc/nova/nova.conf 
>>>>> [DEFAULT]
>>>>> logdir = /var/log/nova
>>>>> state_path = /var/lib/nova
>>>>> lock_path = /var/lib/nova/tmp
>>>>> volumes_dir = /etc/nova/volumes
>>>>> dhcpbridge = /usr/bin/nova-dhcpbridge
>>>>> dhcpbridge_flagfile = /etc/nova/nova.conf
>>>>> force_dhcp_release = False
>>>>> injected_network_template = /usr/share/nova/interfaces.template
>>>>> libvirt_nonblocking = True
>>>>> libvirt_inject_partition = -1
>>>>> network_manager = nova.network.manager.FlatDHCPManager
>>>>> iscsi_helper = tgtadm
>>>>> sql_connection = mysql://nova:x7deix7dei@controller/nova
>>>>> compute_driver = libvirt.LibvirtDriver
>>>>> firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
>>>>> rpc_backend = nova.openstack.common.rpc.impl_qpid
>>>>> rootwrap_config = /etc/nova/rootwrap.conf
>>>>> verbose = True
>>>>> auth_strategy = keystone
>>>>> qpid_hostname = controller
>>>>> network_host = compute1
>>>>> fixed_range = 10.0.0.0/24
>>>>> flat_interface = eth1
>>>>> flat_network_bridge = br100
>>>>> public_interface = eth1
>>>>> glance_host = controller
>>>>> vncserver_listen = 0.0.0.0
>>>>> vncserver_proxyclient_address = controller
>>>>> novncproxy_base_url = http://37.123.104.3:6080/vnc_auto.html
>>>>> xvpvncproxy_base_url = http://37.123.104.3:6081/console
>>>>> metadata_host = 10.141.6.2
>>>>> enabled_apis=ec2,osapi_compute,metadata
>>>>> 
>>>>> #[keystone_authtoken]
>>>>> admin_tenant_name = %SERVICE_TENANT_NAME%
>>>>> admin_user = %SERVICE_USER%
>>>>> admin_password = %SERVICE_PASSWORD%
>>>>> auth_host = 127.0.0.1
>>>>> auth_port = 35357
>>>>> auth_protocol = http
>>>>> signing_dirname = /tmp/keystone-signing-nova
>>>>> 
>>>>> There is no process using port 8774.
>>>>> 
>>>>> [root@blade02 07-openstack-controller]# netstat -tunlp | grep 877
>>>>> tcp        0      0 0.0.0.0:8775                0.0.0.0:*                   LISTEN      2157/python      
>>>>> 
>>>>> Maybe it is something similar to:
>>>>> 
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=877606#c3
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Andrew
>>>>> 