fuel-dev team mailing list archive

Re: Additional Slave node is not getting DHCP IP assigned

 

Hi Mike,

I tried to open cobbler-broken.tar.xz and it looks like the archive is broken.
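
For what it's worth, xz can verify the archive directly (a quick check; requires xz-utils):

  xz -t cobbler-broken.tar.xz && echo "archive OK"

Could you run the same test on your copy? If it fails there too, the file was corrupted before the upload.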

Thanks,

On Tue, Sep 16, 2014 at 9:14 AM, Mike Chao <mchao@xxxxxxxxxxx> wrote:

> Hello Matthew/Evgeniy,
>
>
>
> Any update regarding this issue?
>
>
>
> Thanks,
>
> -mc
>
>
>
> *From:* Mike Chao
> *Sent:* Friday, August 29, 2014 3:44 PM
> *To:* 'Evgeniy L'; Matthew Mosesohn
> *Cc:* fuel-dev@xxxxxxxxxxxxxxxxxxx; Chee Thao (CW); Nataraj Mylsamy (CW);
> Raghunath Mallina (CW)
> *Subject:* RE: [Fuel-dev] Additional Slave node is not getting DHCP IP
> assigned
>
>
>
> Hello,
>
>
>
> Thank you for the input, Evgeniy, but I am not able to get the cobbler
> container running to access a shell for cobbler.  I checked the folder you
> pointed out, using the one-off method that Matthew provided to access it,
> and saw no files in there with 0 size.
>
>
>
> Matthew,
>
> The file's size is 124.  Here’s the link:
>
>
>
>
> https://drive.google.com/file/d/0B2coa7x859pmcWZSdkZqdTVOWUU/edit?usp=sharing
>
>
>
> -mc
>
>
>
> *From:* Evgeniy L [mailto:eli@xxxxxxxxxxxx]
> *Sent:* Friday, August 29, 2014 5:04 AM
> *To:* Matthew Mosesohn
> *Cc:* Mike Chao; fuel-dev@xxxxxxxxxxxxxxxxxxx; Chee Thao (CW); Nataraj
> Mylsamy (CW); Raghunath Mallina (CW)
>
> *Subject:* Re: [Fuel-dev] Additional Slave node is not getting DHCP IP
> assigned
>
>
>
> Hi,
>
>
>
> It looks like the same problem we were debugging several months ago;
> we haven't been able to reproduce it since then, but it's easy to tell
> whether this is the same issue.
>
>
>
> You can log in to the container with
>
>
>
>   dockerctl shell cobbler
>
>
>
> and then run
>
>
>
>   ls -l /var/lib/cobbler/config/{distros.d,profiles.d}
>
>
>
> If some of the files have 0 size, just delete them and restart the
> container.
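>
> From that shell, a one-liner will list the empty files (a sketch,
> assuming GNU find is available inside the container):
>
>   find /var/lib/cobbler/config/distros.d /var/lib/cobbler/config/profiles.d -type f -size 0
>
> Once you've reviewed the list, append -delete to the same command, then
> restart the container from the master node, e.g.
>
>   docker restart fuel-core-5.1-cobbler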
>
>
>
> Thanks,
>
>
>
> On Fri, Aug 29, 2014 at 3:25 PM, Matthew Mosesohn <mmosesohn@xxxxxxxxxxxx>
> wrote:
>
> Hi Mike Chao,
>
> Is it possible for you to save that container and send it to me? You
> could, for example, upload it to Google Drive or Dropbox and send me a
> link.
>
> To save it, run the following commands:
> docker commit fuel-core-5.1-cobbler fuel/cobblersave
> docker save fuel/cobblersave | xz -c -T0 > cobbler-broken.tar.xz
> Then copy this file somewhere I can reach it. The file should be ~300 MB.
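>
> On my side I'll restore it with plain docker commands, so any corruption
> in transit will show up as soon as the stream is decompressed:
> xz -dc cobbler-broken.tar.xz | docker load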
>
> Also, there appears to be an issue with loading distros in Cobbler,
> but the cause is unclear. If you want to get your hands really dirty,
> you can run a one-off instance of the cobbler container in docker and
> get a persistent shell:
> docker run --rm -i -t -v /etc/fuel:/etc/fuel:ro \
>   -v /var/log/docker-logs:/var/log -v /var/www/nailgun:/var/www/nailgun:rw \
>   -v /root/.ssh:/root/.ssh:ro fuel/cobbler_5.1 /bin/bash
> Then run start.sh and let it fail.
> Then run cobblerd -F -l DEBUG and look at the errors.
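>
> Given the traceback you pasted, it's also worth looking at the
> serializer settings the error message points to (a quick check; the
> exact section and module names vary between cobbler releases):
> grep -i -B1 -A2 serializer /etc/cobbler/modules.conf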
>
> I expect there was an issue loading one of the distros (centos,
> ubuntu, or bootstrap) and some file did not get configured correctly.
> Could you tell me where you installed from? Was it a custom ISO or one
> of the community ones? I would try installing a newer one and see if
> the problem reproduces.
>
>
> On Fri, Aug 29, 2014 at 2:30 AM, Mike Chao <mchao@xxxxxxxxxxx> wrote:
> > Hello Sergii,
> >
> >
> >
> > After digging in deeper, we noticed that the cobbler container is not
> > running. We attempted to restart docker-cobbler through supervisorctl; the
> > cobbler shell via dockerctl becomes available for a short time, but soon
> > fails, as the cobbler container runs for a few seconds and then stops due
> > to errors.
> >
> >
> >
> > Please let us know if this is recoverable and the best course of action
> > to take.
> >
> >
> >
> > Also, would it be possible to build a Mirantis OpenStack server on a
> > different machine and add the existing nodes to it, so we can create a new
> > environment without having to bring down the slave nodes?
> >
> >
> >
> > Here is a portion of the cobbler log, copied and pasted via dockerctl:
> >
> >
> >
> > Stopping cobbler daemon: [FAILED]
> > Starting dnsmasq: [  OK  ]
> > Traceback (most recent call last):
> >   File "/usr/bin/cobblerd", line 76, in main
> >     api = cobbler_api.BootAPI(is_cobblerd=True)
> >   File "/usr/lib/python2.6/site-packages/cobbler/api.py", line 130, in __init__
> >     self.deserialize()
> >   File "/usr/lib/python2.6/site-packages/cobbler/api.py", line 898, in deserialize
> >     return self._config.deserialize()
> >   File "/usr/lib/python2.6/site-packages/cobbler/config.py", line 266, in deserialize
> >     raise CX("serializer: error loading collection %s. Check /etc/cobbler/modules.conf" % item.collection_type())
> > CX: 'serializer: error loading collection distro. Check /etc/cobbler/modules.conf'
> > Stopping httpd: [FAILED]
> > Stopping xinetd: [FAILED]
> > Info: Loading facts in /etc/puppet/modules/nagios/lib/facter/disks.rb
> > Info: Loading facts in /etc/puppet/modules/nagios/lib/facter/mountpoints.rb
> > Info: Loading facts in /etc/puppet/modules/corosync/lib/facter/pacemaker_hostname.rb
> > Info: Loading facts in /etc/puppet/modules/firewall/lib/facter/iptables_persistent_version.rb
> > Info: Loading facts in /etc/puppet/modules/firewall/lib/facter/ip6tables_version.rb
> > Info: Loading facts in /etc/puppet/modules/firewall/lib/facter/iptables_version.rb
> > Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/ceph_osd.rb
> > Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/cinder_conf.rb
> > Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/glance_api_conf.rb
> > Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/ceph_conf.rb
> > Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/nova_compute.rb
> > Info: Loading facts in /etc/puppet/modules/ceph/lib/facter/keystone_conf.rb
> > Info: Loading facts in /etc/puppet/modules/osnailyfacter/lib/facter/naily.rb
> > Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/root_home.rb
> > Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/pe_version.rb
> > Info: Loading facts in /etc/puppet/modules/stdlib/lib/facter/puppet_vardir.rb
> > Info: Loading facts in /etc/puppet/modules/galera/lib/facter/galera_gcomm_empty.rb
> > Info: Loading facts in /etc/puppet/modules/galera/lib/facter/mysql_log_file_size_real.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/cacrl.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/cacert.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/hostprivkey.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/localacacert.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/certname.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/puppet_semantic_version.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/hostcert.rb
> > Info: Loading facts in /etc/puppet/modules/puppet/lib/facter/cakey.rb
> > Info: Loading facts in /etc/puppet/modules/concat/lib/facter/concat_basedir.rb
> > Info: Loading facts in /etc/puppet/modules/mmm/lib/facter/ipaddresses.rb
> > Info: Loading facts in /etc/puppet/modules/swift/lib/facter/swift_mountpoints.rb
> > Info: Loading facts in /etc/puppet/modules/nailgun/lib/facter/fuel_version.rb
> > Info: Loading facts in /etc/puppet/modules/nailgun/lib/facter/generate_fuel_key.rb
> > Info: Loading facts in /etc/puppet/modules/l23network/lib/facter/ovs_vlan_splinters.rb
> > Info: Loading facts in /etc/puppet/modules/l23network/lib/facter/fqdn_hostname.rb
> > Info: Loading facts in /etc/puppet/modules/l23network/lib/facter/check_kern_module.rb
> > Info: Loading facts in /etc/puppet/modules/l23network/lib/facter/default_route.rb
> > Info: Loading facts in /etc/puppet/modules/l23network/lib/facter/openvswitch.rb
> > Info: Loading facts in /etc/puppet/modules/neutron/lib/facter/defaultroute.rb
> > ls: cannot access /dev/sda?: No such file or directory
> > ls: cannot access /dev/sda?: No such file or directory
> > Warning: Config file /etc/puppet/hiera.yaml not found, using Hiera defaults
> > Notice: Compiled catalog for 1cd07e2e6b90.englab.brocade.com in environment production in 2.91 seconds
> > [the same "Info: Loading facts in ..." lines repeat here, then:]
> > ls: cannot access /dev/sda?: No such file or directory
> > ls: cannot access /dev/sda?: No such file or directory
> > Info: Applying configuration version '1407408243'
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[ssh]/Exec[access_to_cobbler_tcp_port: 22]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[tftp_udp]/Exec[access_to_cobbler_udp_port: 69]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[dns_udp]/Exec[access_to_cobbler_udp_port: 53]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[http]/Exec[access_to_cobbler_tcp_port: 80]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[dhcp_68]/Exec[access_to_cobbler_udp_port: 68]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[xmlrpc_api]/Exec[access_to_cobbler_tcp_port: 25151]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[tftp_tcp]/Exec[access_to_cobbler_tcp_port: 69]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[http_3128]/Exec[access_to_cobbler_tcp_port: 3128]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[pxe_4011]/Exec[access_to_cobbler_udp_port: 4011]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[dhcp_67]/Exec[access_to_cobbler_udp_port: 67]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[ntp_udp]/Exec[access_to_cobbler_udp_port: 123]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[syslog_tcp]/Exec[access_to_cobbler_tcp_port: 25150]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[https]/Exec[access_to_cobbler_tcp_port: 443]/returns: executed successfully
> > Notice: /Stage[main]/Cobbler::Iptables/Cobbler::Iptables::Access_to_cobbler_port[dns_tcp]/Exec[access_to_cobbler_tcp_port: 53]/returns: executed successfully
> > Info: cobbler_digest_user: user cobbler already exists
> > Notice: /Stage[main]/Cobbler::Server/Service[httpd]/ensure: ensure changed 'stopped' to 'running'
> > Info: /Stage[main]/Cobbler::Server/Service[httpd]: Unscheduling refresh on Service[httpd]
> > Notice: /Stage[main]/Cobbler::Server/Service[cobblerd]/ensure: ensure changed 'stopped' to 'running'
> > Info: /Stage[main]/Cobbler::Server/Service[cobblerd]: Scheduling refresh of Exec[cobbler_sync]
> > Info: /Stage[main]/Cobbler::Server/Service[cobblerd]: Unscheduling refresh on Service[cobblerd]
> > Notice: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]/returns: cobblerd does not appear to be running/accessible
> > Error: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]: Failed to call refresh: cobbler sync returned 155 instead of one of [0]
> > Error: /Stage[main]/Cobbler::Server/Exec[cobbler_sync]: cobbler sync returned 155 instead of one of [0]
> > Notice: /Stage[main]/Cobbler::Server/Service[xinetd]/ensure: ensure changed 'stopped' to 'running'
> > Info: /Stage[main]/Cobbler::Server/Service[xinetd]: Unscheduling refresh on Service[xinetd]
> > Info: cobbler_distro: checking if distro exists: bootstrap
> > Error: /Stage[main]/Nailgun::Cobbler/Cobbler_distro[bootstrap]: Could not evaluate: cobblerd does not appear to be running/accessible
> > Notice: /Stage[main]/Nailgun::Cobbler/Exec[cp /root/.ssh/id_rsa.pub /etc/cobbler/authorized_keys]/returns: executed successfully
> > Notice: /Stage[main]/Nailgun::Cobbler/Cobbler_profile[bootstrap]: Dependency Cobbler_distro[bootstrap] has failures: true
> > Warning: /Stage[main]/Nailgun::Cobbler/Cobbler_profile[bootstrap]: Skipping because of failed dependencies
> > Notice: /Stage[main]/Nailgun::Cobbler/Exec[cobbler_system_add_default]: Dependency Cobbler_distro[bootstrap] has failures: true
> > Warning: /Stage[main]/Nailgun::Cobbler/Exec[cobbler_system_add_default]: Skipping because of failed dependencies
> > Notice: /Stage[main]/Nailgun::Cobbler/Exec[cobbler_system_edit_default]: Dependency Cobbler_distro[bootstrap] has failures: true
> > Warning: /Stage[main]/Nailgun::Cobbler/Exec[cobbler_system_edit_default]: Skipping because of failed dependencies
> > Notice: /Stage[main]/Nailgun::Cobbler/Exec[nailgun_cobbler_sync]: Dependency Cobbler_distro[bootstrap] has failures: true
> > Warning: /Stage[main]/Nailgun::Cobbler/Exec[nailgun_cobbler_sync]: Skipping because of failed dependencies
> > Info: cobbler_distro: checking if distro exists: ubuntu_1204_x86_64
> > Error: /Stage[main]/Nailgun::Cobbler/Cobbler_distro[ubuntu_1204_x86_64]: Could not evaluate: cobblerd does not appear to be running/accessible
> > Notice: /Stage[main]/Nailgun::Cobbler/Cobbler_profile[ubuntu_1204_x86_64]: Dependency Cobbler_distro[ubuntu_1204_x86_64] has failures: true
> > Warning: /Stage[main]/Nailgun::Cobbler/Cobbler_profile[ubuntu_1204_x86_64]: Skipping because of failed dependencies
> > Info: cobbler_distro: checking if distro exists: centos-x86_64
> > Error: /Stage[main]/Nailgun::Cobbler/Cobbler_distro[centos-x86_64]: Could not evaluate: cobblerd does not appear to be running/accessible
> > Notice: /Stage[main]/Nailgun::Cobbler/Cobbler_profile[centos-x86_64]: Dependency Cobbler_distro[centos-x86_64] has failures: true
> > Warning: /Stage[main]/Nailgun::Cobbler/Cobbler_profile[centos-x86_64]: Skipping because of failed dependencies
> > Notice: Finished catalog run in 73.14 seconds
> >
> > Stopping cobbler daemon: [FAILED]
> > Starting dnsmasq: [  OK  ]
> > [the same traceback repeats as the container loops, ending in:]
> > CX: 'serializer: error loading collection distro. Check /etc/cobbler/modules.conf'
> > Stopping httpd: [FAILED]
> > Stopping xinetd: [FAILED]
> > Info: Loading facts in /etc/puppet/modules/nagios/lib/facter/disks.rb
> >
> >
> >
> >
> >
> >
> >
> > From: Chee Thao (CW)
> > Sent: Wednesday, August 27, 2014 10:02 AM
> > To: Mike Chao
> > Subject: FW: [Fuel-dev] Additional Slave node is not getting DHCP IP
> > assigned
> >
> >
> >
> >
> >
> >
> >
> > From: Raghunath Mallina (CW)
> > Sent: Tuesday, August 26, 2014 11:20 AM
> > To: Chee Thao (CW)
> > Cc: Gandhirajan Mariappan (CW)
> > Subject: FW: [Fuel-dev] Additional Slave node is not getting DHCP IP
> > assigned
> >
> >
> >
> > Some inputs from the Mirantis team.
> >
> >
> >
> > Thanks
> >
> > Raghunath
> >
> >
> >
> > From: Sergii Golovatiuk [mailto:sgolovatiuk@xxxxxxxxxxxx]
> > Sent: Tuesday, August 26, 2014 11:18 AM
> > To: Gandhirajan Mariappan (CW)
> > Cc: fuel-dev@xxxxxxxxxxxxxxxxxxx; Nataraj Mylsamy (CW); Raghunath Mallina (CW)
> > Subject: Re: [Fuel-dev] Additional Slave node is not getting DHCP IP
> > assigned
> >
> >
> >
> > Hi Gandhi,
> >
> > The DHCP server on the master node definitely runs inside the cobbler
> > container. You can check it with:
> >
> > dockerctl shell cobbler
> > cat /etc/dnsmasq.conf
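> >
> > In particular, check that it contains a dhcp-range line covering your
> > admin (PXE) network, e.g.:
> >
> > grep dhcp-range /etc/dnsmasq.conf
> >
> > (dhcp-range is standard dnsmasq syntax; cobbler regenerates this file
> > from its dnsmasq template when it manages DHCP).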
> >
> > Also, dhcprelay should be running to allow DHCP traffic to pass through
> > the master node to the cobbler container.
> >
> > Additionally, you may use tcpdump to trace traffic from node-3 to the
> > cobbler container.
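> >
> > For example, on the master node (replace eth0 with your admin/PXE
> > interface):
> >
> > tcpdump -n -e -i eth0 port 67 or port 68
> >
> > If the node's DHCP requests show up but no offer goes back out, the
> > problem is on the master node side rather than on the switch.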
> >
> >
> > --
> > Best regards,
> > Sergii Golovatiuk,
> > Skype #golserge
> > IRC #holser
> >
> >
> >
> > On Tue, Aug 26, 2014 at 9:41 AM, Gandhirajan Mariappan (CW)
> > <gmariapp@xxxxxxxxxxx> wrote:
> >
> > Hi Fuel Dev,
> >
> >
> >
> > In addition to the 2 slave nodes and 1 master node attached to the Brocade
> > VDX device, we are attaching 1 more slave node (slave node 3). We have set
> > the PXE boot option on the 3rd slave node and tried the setup below, but
> > the DHCP server on the master node is not assigning an IP to slave node 3.
> > Kindly let us know what could be the problem. Is there any way to make the
> > master node's DHCP server assign an IP to slave node 3?
> >
> >
> >
> > “We have set up a testing DHCP server on a new CentOS server.  This CentOS
> > server was given the same port on the VDX switch as the Mirantis Fuel
> > Master Node.  The DHCP server on the CentOS machine was able to give a
> > DHCP IP to the 3rd node that was having issues getting a DHCP IP from the
> > Fuel Master Node.
> >
> > What this confirms is that the VDX switch is not causing the issue and
> > that the DHCP server on the Fuel Master Node is having issues giving out
> > DHCP leases.  The DHCP server on the Fuel Master Node appears to be
> > running, but because it is not a standard DHCP server, but one that is
> > wrapped in custom software from Mirantis, we currently do not know what we
> > need to adjust to get it working.  It looks like a very tightly coupled
> > system where each component depends on the others.”
> >
> > Thanks and Regards,
> >
> > Gandhi Rajan
> >
> >
> >
> >
> >
> >
>
> --
> Mailing list: https://launchpad.net/~fuel-dev
> Post to     : fuel-dev@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~fuel-dev
> More help   : https://help.launchpad.net/ListHelp
>
>
>
