
fuel-dev team mailing list archive

Re: VMs are not getting IP address

 

Hi Andrew,

Please find our comments inline.

Thanks and Regards,
Gandhi Rajan

From: Andrew Woodward [mailto:xarses@xxxxxxxxx]
Sent: Thursday, July 17, 2014 10:20 PM
To: Gandhirajan Mariappan (CW)
Cc: Miroslav Anashkin; fuel-dev@xxxxxxxxxxxxxxxxxxx; Nataraj Mylsamy (CW); Senthil Thanganadarrosy (CW); Prakash Kaligotla; Karthi Palaniappan (CW); Evgeniya Shumakher; Harsha Ramamurthy
Subject: Re: [Fuel-dev] VMs are not getting IP address

Gandhi,

Looking over the support bundle, I see that node-17's /etc/neutron/plugin.ini and /etc/neutron/plugins/ml2/ml2_conf.ini are not the same file; they should be linked. You should also ensure that /etc/init/neutron-server and /etc/init/neutron-plugins-openvswitch-agent are pointing to /etc/neutron/plugin.ini.
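
For example, a minimal sketch of that step (the upstart job names are taken from the packaging described above and may differ on your nodes):

ln -sf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
grep -l plugin.ini /etc/init/neutron-server.conf /etc/init/neutron-plugins-openvswitch-agent.conf
service neutron-server restart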

[Gandhi] We tried linking the files on node-17 and restarted the services, but it still didn't work (the VM is created successfully but no IP address is assigned).

The ml2_conf.ini has
[ovs]
tenant_network_type = vlan

As far as I'm aware this is not a value for ovs.

[Gandhi] vlan is the value we have been using for Brocade plugin testing. We have provided the same value in other (non-Mirantis) setups and it works fine.
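
For reference, under ML2 the tenant network type is read from the [ml2] section (note the plural key), as in the configuration quoted later in this thread; a minimal sketch:

[ml2]
tenant_network_types = vlan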

You described above that you are attempting to pass the OSTF test which launches an instance and checks access via floating IP; this test requires an L3 agent, and the router plugin listed in neutron.conf service_plugins.
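
For example (a sketch; the plugin path matches the astute.yaml example later in this thread):

# /etc/neutron/neutron.conf
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin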

The repo for the ml2 brocade code https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/brocade shows that the config for the brocade driver should be in the separate plugins/ml2/ml2_brocade_conf.ini, although the code https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py#L47 implies that the heading [ml2_brocade] is usable.
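
Whichever file the driver config lives in, neutron-server must be pointed at it explicitly; the oslo.config --config-file flag can be given several times, e.g. (a sketch using the file name from the reply below):

neutron-server --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugin.ini \
  --config-file /etc/neutron/plugins/ml2/ml2_conf_brocade.ini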

There are also many Traceback messages in neutron-server.log: http://paste.openstack.org/show/86975/ It seems that there are errors in the XML RPC calls being processed by the switch.
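
A quick way to pull those tracebacks out on the controller (a sketch, assuming the default log location):

grep -A 20 'Traceback' /var/log/neutron/server.log | less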

[Gandhi] In /etc/neutron/plugins/ml2/ml2_conf_brocade.ini, the following lines are present.
[ml2_brocade]
username = admin
password = password
address  = 10.25.225.133
ostype   = NOS
physical_networks = physnet1
Also in /etc/neutron/plugins/ml2/ml2_conf.ini, we have the following lines:
[ml2_brocade]
username = admin
password = password
address  = 10.25.225.133
ostype = NOS
physical_networks = physnet1
Note: these two .ini files have the same configuration in the Mirantis environment as we have provided in the other (non-Mirantis) setups.



On Thu, Jul 17, 2014 at 2:32 AM, Gandhirajan Mariappan (CW) <gmariapp@xxxxxxxxxxx> wrote:
Hi Andrew, Miroslav,

Hope you are analyzing the logs/snapshots. Kindly let us know your views on this issue.

Thanks and Regards,
Gandhi Rajan

-----Original Message-----
From: Gandhirajan Mariappan (CW)
Sent: Wednesday, July 16, 2014 12:36 PM
To: 'Andrew Woodward'; Miroslav Anashkin; fuel-dev@xxxxxxxxxxxxxxxxxxx
Cc: Nataraj Mylsamy (CW); Senthil Thanganadarrosy (CW); Prakash Kaligotla; Raghunath Mallina (CW); Karthi Palaniappan (CW); 'Evgeniya Shumakher'
Subject: RE: [Fuel-dev] VMs are not getting IP address

Hi Miroslav, Andrew,

We have configured the core_plugin below in /etc/neutron/neutron.conf. Furthermore, we are not using service_plugins, since they are not required for our current Brocade VCS plugin testing.

core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
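
As a quick sanity check, this plugin class can be verified to be importable on the controller (a sketch):

python -c 'from neutron.plugins.ml2.plugin import Ml2Plugin; print("ok")'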

Among the quantum settings mentioned by Andrew, only the configurations below are applicable for our Brocade plugin testing:

[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch,brocade

Diagnostic snapshot is available at https://www.dropbox.com/s/o8kt0istezimlim/fuel-snapshot-2014-07-16_05-08-37.tgz

The VM (instance) ID of one of the instances is: 314d00c2-8e42-429c-9b4e-a3bcca49f386
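
That ID can be grepped straight out of the controller logs, e.g. (a sketch, assuming the default log location):

grep 314d00c2-8e42-429c-9b4e-a3bcca49f386 /var/log/neutron/server.log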

Error Details:
Message
    500-{u'NeutronError': {u'message': u'create_port_postcommit failed.', u'type': u'MechanismDriverError', u'detail': u''}}
Code
    500
Details
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 296, in decorated_function
        return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2075, in run_instance
        do_run_instance()
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 249, in inner
        return f(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2074, in do_run_instance
        legacy_bdm_in_spec)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1207, in _run_instance
        notify("error", fault=e)  # notify that build failed
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
        six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1191, in _run_instance
        instance, image_meta, legacy_bdm_in_spec)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1311, in _build_instance
        set_access_ip=set_access_ip)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 399, in decorated_function
        return function(self, context, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1723, in _spawn
        LOG.exception(_('Instance failed to spawn'), instance=instance)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
        six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
        block_device_info)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2248, in spawn
        write_to_disk=True)
    File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3420, in to_xml
        network_info_str = str(network_info)
    File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 424, in __str__
        return self._sync_wrapper(fn, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 407, in _sync_wrapper
        self.wait()
    File "/usr/lib/python2.7/dist-packages/nova/network/model.py", line 439, in wait
        self[:] = self._gt.wait()
    File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
        return self._exit_event.wait()
    File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
        return hubs.get_hub().switch()
    File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
        return self.greenlet.switch()
    File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
        result = function(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1510, in _allocate_network_async
        dhcp_options=dhcp_options)
    File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 361, in allocate_for_instance
        LOG.exception(msg, port_id)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
        six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 336, in allocate_for_instance
        security_group_ids, available_macs, dhcp_opts)
    File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 195, in _create_port
        network_id, instance=instance)
    File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, in __exit__
        six.reraise(self.type_, self.value, self.tb)
    File "/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 184, in _create_port
        port_id = port_client.create_port(port_req_body)['port']['id']
    File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 111, in with_params
        ret = self.function(instance, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 316, in create_port
        return self.post(self.ports_path, body=body)
    File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1241, in post
        headers=headers, params=params)
    File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1164, in do_request
        self._handle_fault_response(status_code, replybody)
    File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 1134, in _handle_fault_response
        exception_handler_v20(status_code, des_error_body)
    File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 96, in exception_handler_v20
        message=msg)
Created
    July 10, 2014, 11:45 a.m.


Thanks and Regards,
Gandhi Rajan


-----Original Message-----
From: Andrew Woodward [mailto:xarses@xxxxxxxxx]
Sent: Tuesday, July 15, 2014 9:44 PM
To: Miroslav Anashkin
Cc: Gandhirajan Mariappan (CW); fuel-dev@xxxxxxxxxxxxxxxxxxx; Nataraj Mylsamy (CW); Senthil Thanganadarrosy (CW); Prakash Kaligotla; Raghunath Mallina (CW)
Subject: Re: [Fuel-dev] VMs are not getting IP address

Gandhi,

by hand,
You will need to ensure that you configure core_plugin and service_plugins in /etc/neutron/neutron.conf. You will also need to check your init scripts for neutron-openvswitch-agent and neutron-server to ensure that they are reading the ml2 .ini. You will also need to install the ml2 packages (neutron-plugin-ml2, or openstack-neutron-ml2 if CentOS) and the brocade packages.
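
For example (a sketch; the Brocade mechanism driver package name depends on the distro and is not listed here):

apt-get install neutron-plugin-ml2     # Ubuntu
yum install openstack-neutron-ml2      # CentOS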

You can follow an example ml2 configuration at http://www.revolutionlabs.net/2013/11/part-2-how-to-install-openstack-havana_15.html

If you are testing against current master, we just landed code to configure ml2 last week https://review.openstack.org/#/c/106731/1/deployment/puppet/neutron/lib/puppet/parser/functions/sanitize_neutron_config.rb.
You can now add ml2 data to quantum_settings in astute.yaml

quantum_settings:
  server:
    core_plugin: openvswitch
    service_plugins: 'neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.services.metering.metering_plugin.MeteringPlugin'
  L2:
    mechanism_drivers:
    type_drivers: "local,flat,l2[:segmentation_type]"
    tenant_network_types: "local,flat,l2[:segmentation_type]"
    flat_networks: '*'
    tunnel_types: l2[:segmentation_type]
    tunnel_id_ranges: l2[:tunnel_id_ranges]
    vxlan_group: "None"
    vni_ranges: l2[:tunnel_id_ranges]

Note: tunnel_types, tunnel_id_ranges, vxlan_group, and vni_ranges are only set if l2[enable_tunneling] is true.
Note: there are some l2[item] references; these refer to values already present in the quantum_settings.
Note: only the new items related to the ml2 config are shown here. The values shown are the defaults used if no other value is present.
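
For instance, with l2[segmentation_type] = vlan and tunneling disabled, the resolved L2 section would look like this (a sketch derived from the defaults above):

quantum_settings:
  L2:
    type_drivers: "local,flat,vlan"
    tenant_network_types: "local,flat,vlan"
    flat_networks: '*'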

On Tue, Jul 15, 2014 at 6:24 AM, Miroslav Anashkin <manashkin@xxxxxxxxxxxx> wrote:
> Greetings, Gandhi,
>
>
> Could you please provide us with a diagnostic snapshot containing your new
> OpenStack configuration?
>
> Please also provide us the VM (instance) ID of one of the instances which
> fails to get an IP address; it should make searching through the logs easier.
>
> Kind regards,
> Miroslav
>
>
> On Fri, Jul 11, 2014 at 1:29 PM, Gandhirajan Mariappan (CW)
> <gmariapp@xxxxxxxxxxx> wrote:
>>
>> Hi Fuel Dev,
>>
>>
>>
>> Issue: IP address is not being assigned to the created VMs
>>
>>
>>
>> Steps followed:
>>
>> 1. Connected eth0 (Public), eth1 (Admin PXE) and eth2 (Private,
>> Storage, Management) to the VCS device and deployed the environment
>> through the Fuel UI
>>
>> 2. Since the VCS ports connected to the nodes' eth1 ports are
>> configured as access VLANs and the eth2 ports are configured as trunk
>> VLANs, we added a new connection, i.e., eth3 of each node is connected
>> to the VCS device
>>
>> 3. We then configured the VCS ports connected to eth3 as
>> Port-profile-port, which is the expected configuration for the Brocade
>> VCS plugin to work
>>
>>
>>
>> Expected behavior:
>>
>> After creating VMs, an IP address should be assigned automatically.
>>
>>
>>
>> Suspect:
>>
>> We suspect the configuration mentioned in the section below might be the
>> cause of this issue, but we are not sure. Kindly confirm whether there is
>> any other problem in the configuration or whether this is a genuine issue.
>> Also, please confirm whether br-eth3 will be populated with the lines
>> below if we redeploy the environment.
>>
>>
>>
>> Port "phy-br-eth3"
>>
>>    Interface "phy-br-eth3"
>>
>>
>>
>> Configurations:
>>
>> Below is the Open vSwitch configuration we had in another Icehouse
>> setup. The highlighted lines (the "phy-br-eth1" port and interface under
>> br-eth1) are the ones we expect, in br-eth3 form, to be present in the
>> Mirantis setup as well.
>>
>>
>>
>> Local Icehouse setup – Controller node:
>>
>> [root@rhel7-41-110 ~]# ovs-vsctl show
>>
>> b1ba2ad0-d40a-4193-b3d9-4b5e0196dfbd
>>
>>     Bridge br-ex
>>
>>         Port br-ex
>>
>>             Interface br-ex
>>
>>                 type: internal
>>
>>     Bridge "br-eth1"
>>
>>         Port "eth1"
>>
>>             Interface "eth1"
>>
>>         Port "br-eth1"
>>
>>             Interface "br-eth1"
>>
>>                 type: internal
>>
>>         Port "phy-br-eth1"
>>
>>             Interface "phy-br-eth1"
>>
>>     Bridge br-int
>>
>>         Port br-int
>>
>>             Interface br-int
>>
>>                 type: internal
>>
>>         Port "int-br-eth1"
>>
>>             Interface "int-br-eth1"
>>
>>     ovs_version: "2.0.0"
>>
>>
>>
>> Mirantis Setup – Controller Node:
>>
>> root@node-18:~# ovs-vsctl show
>>
>> 583bdc4f-52d0-493a-8d51-a613a4da6c9a
>>
>>     Bridge "br-eth2"
>>
>>         Port "br-eth2"
>>
>>             Interface "br-eth2"
>>
>>                 type: internal
>>
>>         Port "br-eth2--br-storage"
>>
>>             tag: 102
>>
>>             Interface "br-eth2--br-storage"
>>
>>                 type: patch
>>
>>                 options: {peer="br-storage--br-eth2"}
>>
>>         Port "br-eth2--br-mgmt"
>>
>>             tag: 101
>>
>>             Interface "br-eth2--br-mgmt"
>>
>>                 type: patch
>>
>>                 options: {peer="br-mgmt--br-eth2"}
>>
>>         Port "eth2"
>>
>>             Interface "eth2"
>>
>>         Port "br-eth2--br-prv"
>>
>>             Interface "br-eth2--br-prv"
>>
>>                 type: patch
>>
>>                 options: {peer="br-prv--br-eth2"}
>>
>>     Bridge br-mgmt
>>
>>         Port "br-mgmt--br-eth2"
>>
>>             Interface "br-mgmt--br-eth2"
>>
>>                 type: patch
>>
>>                 options: {peer="br-eth2--br-mgmt"}
>>
>>         Port br-mgmt
>>
>>             Interface br-mgmt
>>
>>                 type: internal
>>
>>     Bridge "br-eth0"
>>
>>         Port "br-eth0"
>>
>>             Interface "br-eth0"
>>
>>                 type: internal
>>
>>         Port "br-eth0--br-ex"
>>
>>             trunks: [0]
>>
>>             Interface "br-eth0--br-ex"
>>
>>                 type: patch
>>
>>                 options: {peer="br-ex--br-eth0"}
>>
>>         Port "eth0"
>>
>>             Interface "eth0"
>>
>>     Bridge "br-eth1"
>>
>>         Port "br-eth1--br-fw-admin"
>>
>>             trunks: [0]
>>
>>             Interface "br-eth1--br-fw-admin"
>>
>>                 type: patch
>>
>>                 options: {peer="br-fw-admin--br-eth1"}
>>
>>         Port "eth1"
>>
>>             Interface "eth1"
>>
>>         Port "br-eth1"
>>
>>             Interface "br-eth1"
>>
>>                 type: internal
>>
>>     Bridge br-ex
>>
>>         Port "br-ex--br-eth0"
>>
>>             trunks: [0]
>>
>>             Interface "br-ex--br-eth0"
>>
>>                 type: patch
>>
>>                 options: {peer="br-eth0--br-ex"}
>>
>>         Port br-ex
>>
>>             Interface br-ex
>>
>>                 type: internal
>>
>>         Port "qg-83437e93-e0"
>>
>>             Interface "qg-83437e93-e0"
>>
>>                 type: internal
>>
>>         Port phy-br-ex
>>
>>             Interface phy-br-ex
>>
>>     Bridge "br-eth5"
>>
>>         Port "br-eth5"
>>
>>             Interface "br-eth5"
>>
>>                 type: internal
>>
>>         Port "eth5"
>>
>>             Interface "eth5"
>>
>>     Bridge "br-eth4"
>>
>>         Port "br-eth4"
>>
>>             Interface "br-eth4"
>>
>>                 type: internal
>>
>>         Port "eth4"
>>
>>             Interface "eth4"
>>
>>     Bridge br-int
>>
>>         Port "tap29cbbeed-16"
>>
>>             tag: 4095
>>
>>             Interface "tap29cbbeed-16"
>>
>>                 type: internal
>>
>>         Port "qr-d80e3634-a4"
>>
>>             tag: 4095
>>
>>             Interface "qr-d80e3634-a4"
>>
>>                 type: internal
>>
>>         Port int-br-ex
>>
>>             Interface int-br-ex
>>
>>         Port br-int
>>
>>             Interface br-int
>>
>>                 type: internal
>>
>>         Port int-br-prv
>>
>>             Interface int-br-prv
>>
>>         Port "tapc8495313-6d"
>>
>>             tag: 1
>>
>>             Interface "tapc8495313-6d"
>>
>>                 type: internal
>>
>>     Bridge br-storage
>>
>>         Port br-storage
>>
>>             Interface br-storage
>>
>>                 type: internal
>>
>>         Port "br-storage--br-eth2"
>>
>>             Interface "br-storage--br-eth2"
>>
>>                 type: patch
>>
>>                 options: {peer="br-eth2--br-storage"}
>>
>>     Bridge br-prv
>>
>>         Port "br-prv--br-eth2"
>>
>>             Interface "br-prv--br-eth2"
>>
>>                 type: patch
>>
>>                 options: {peer="br-eth2--br-prv"}
>>
>>         Port phy-br-prv
>>
>>             Interface phy-br-prv
>>
>>         Port br-prv
>>
>>             Interface br-prv
>>
>>                 type: internal
>>
>>     Bridge br-fw-admin
>>
>>         Port "br-fw-admin--br-eth1"
>>
>>             trunks: [0]
>>
>>             Interface "br-fw-admin--br-eth1"
>>
>>                 type: patch
>>
>>                 options: {peer="br-eth1--br-fw-admin"}
>>
>>         Port br-fw-admin
>>
>>             Interface br-fw-admin
>>
>>                 type: internal
>>
>>     Bridge "br-eth3"
>>
>>         Port "eth3"
>>
>>             Interface "eth3"
>>
>>         Port "br-eth3"
>>
>>             Interface "br-eth3"
>>
>>                 type: internal
>>
>>     ovs_version: "1.10.1"
>>
>>
>>
>> We assume the br-eth3 bridge above (in the Mirantis setup) should have a
>> similar phy-br-eth3 configuration to the phy-br-eth1 entries in our local Icehouse setup. Kindly confirm the behavior.
>>
>>
>>
>> Thanks and Regards,
>>
>> Gandhi Rajan
>>
>>
>>
>> From: Fuel-dev
>> [mailto:fuel-dev-bounces+gmariapp=brocade.com@xxxxxxxxxxxxxxxxxxx] On
>> Behalf Of Karthi Palaniappan (CW)
>> Sent: Thursday, July 10, 2014 10:16 PM
>> To: fuel-dev@xxxxxxxxxxxxxxxxxxx
>> Cc: Prakash Kaligotla; Nataraj Mylsamy (CW); Raghunath Mallina (CW);
>> Senthil Thanganadarrosy (CW)
>> Subject: [Fuel-dev] VMs are not getting IP address
>>
>>
>>
>> Hi Fuel-Dev,
>>
>>
>>
>> We are done with the MOS Icehouse deployment, and we are also done
>> installing the Brocade ML2 VCS plugin. When we ran the health check, the 3
>> checks below failed.
>>
>>
>>
>> Failed checks:
>>
>> 1. Check DNS resolution on compute node
>>
>> 2. Check network connectivity from instance via floating IP
>>
>> 3. Check stack autoscaling
>>
>>
>>
>> The controller and compute nodes couldn't ping google.com since the
>> nameserver couldn't resolve the hostname. Andrey suggested we change the
>> nameserver in /etc/resolv.conf; after changing the nameserver, check #1
>> passed. Andrey mentioned the other 2 failed checks will not impact our plugin testing.
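>>
>> For example (a sketch; 8.8.8.8 is only a placeholder resolver):
>>
>> # /etc/resolv.conf
>> nameserver 8.8.8.8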
>>
>>
>>
>> We then tried to create a network and a VM; the network and the virtual
>> machine were created successfully, but the virtual machines couldn't get an IP address.
>>
>>
>>
>> Topology and configuration details:
>>
>>
>>
>> Both the controller's and the compute node's eth3 are connected to the VDX
>> device, so I have configured the bridge mapping as "bridge_mappings = physnet1:br-eth3".
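>>
>> A quick way to confirm that bridge exists and see which ports are attached
>> to it (a sketch):
>>
>> ovs-vsctl br-exists br-eth3 && ovs-vsctl list-ports br-eth3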
>>
>>
>>
>> Configuration in /etc/neutron/plugins/ml2/ml2_conf.ini
>>
>> [ml2]
>>
>> tenant_network_types = vlan
>>
>> type_drivers = vlan
>>
>> mechanism_drivers = openvswitch,brocade
>>
>>
>>
>> [ml2_type_vlan]
>>
>> network_vlan_ranges = physnet1:400:500
>>
>>
>>
>> [securitygroup]
>>
>> enable_security_group = True
>>
>> firewall_driver =
>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>
>>
>>
>> [database]
>>
>> connection =
>> mysql://neutron:password@192.168.0.3:3306/neutron_ml2?read_timeout=60
>>
>>
>>
>> [ovs]
>>
>> tenant_network_type = vlan
>>
>> network_vlan_ranges = physnet1:400:500
>>
>> bridge_mappings = physnet1:br-eth3
>>
>> #bridge_mappings = physnet1:br-eth1
>>
>>
>>
>> [ml2_brocade]
>>
>> username = admin
>>
>> password = password
>>
>> address  = 10.25.225.133
>>
>> ostype = NOS
>>
>> physical_networks = physnet1
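>>
>> After editing these files, the services need to be restarted to pick up the
>> changes; a sketch (assuming the Ubuntu service names used by MOS):
>>
>> service neutron-server restart
>> service neutron-plugin-openvswitch-agent restart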
>>
>>
>>
>> Regards,
>>
>> Karthi
>>
>>
>>
>
>
>
> --
>
> Kind Regards
> Miroslav Anashkin
> L2 support engineer,
> Mirantis Inc.
> +7(495)640-4944 (office receptionist)
> +1(650)587-5200 (office receptionist, call from US)
> 35b, Bld. 3, Vorontsovskaya St.
> Moscow, Russia, 109147.
>
> www.mirantis.com
>
> manashkin@xxxxxxxxxxxx
>
>
>



--
Andrew
Mirantis
Ceph community


