fuel-dev team mailing list archive, Message #01241
Re: By default, vgdisplay does not list cinder-volumes
On 06/25/2014 11:56 AM, Gandhirajan Mariappan (CW) wrote:
> Yes Bogdan. While deploying, the Cinder node was not selected, and hence only the controller node configuration was deployed, not the cinder node.
> Our setup is 1 master, 1 controller (which also has cinder), and 1 compute node.
>
> Now we have to add Cinder to the existing Controller node. Is there any other way, or do we need to do a Reset Environment or Delete Environment?
Yes, I believe you could either add another node with the cinder role, or
reset the environment and redeploy it with an adjusted roles layout (or
delete it and create a new one).
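
For the first option, a minimal sketch with the Fuel CLI on the master node.
The node id 9 and env id 2 below are hypothetical; check 'fuel node list'
first, and verify the exact flags with 'fuel --help', as the client syntax
may differ between releases:

[root@nailgun ~]# fuel --env-id=2 node set --node-id=9 --role=cinder  # assign the cinder role as a pending role
[root@nailgun ~]# fuel --env-id=2 deploy-changes                      # deploy the pending changes

After deployment, 'vgdisplay' on the new node should list the cinder VG.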
>
> Thanks and Regards,
> Gandhi Rajan
>
> -----Original Message-----
> From: Bogdan Dobrelya [mailto:bdobrelia@xxxxxxxxxxxx]
> Sent: Wednesday, June 25, 2014 1:45 PM
> To: Gandhirajan Mariappan (CW); fuel-dev@xxxxxxxxxxxxxxxxxxx
> Cc: Prakash Kaligotla; Nataraj Mylsamy (CW)
> Subject: Re: [Fuel-dev] By default, vgdisplay does not list cinder-volumes
>
> On 06/25/2014 10:21 AM, Gandhirajan Mariappan (CW) wrote:
>> Hi Fuel-Dev,
>>
>>
>>
>> We have installed MOS 5.0 and the deployment was successful. We also verified
>> that the controller and compute node configurations are correct, except for vgdisplay.
>>
>>
>>
>> Our query is that "vgdisplay" does not list "cinder-volumes", and we are not
>> sure why. Kindly confirm whether vgdisplay should list cinder-volumes by
>> default, or whether we can add it manually.
>>
>
> Hi.
> The 'vgdisplay' command, if issued on a node with the storage (cinder) role
> assigned, should show a 'VG Name cinder' line in its output. If it doesn't,
> that is probably because the node has no storage role assigned to it...
> E.g.:
>
> [root@nailgun ~]# fuel node list
> id | status | name             | cluster | ip         | mac               | roles              | pending_roles | online
> ---|--------|------------------|---------|------------|-------------------|--------------------|---------------|-------
> 8  | ready  | Untitled (cb:c3) | 2       | 10.108.0.6 | ba:64:12:e5:cc:4a | compute            |               | True
> 7  | ready  | Untitled (2e:98) | 2       | 10.108.0.5 | fa:35:24:9d:91:4d | cinder, controller |               | True
> 5  | ready  | Untitled (4e:65) | 2       | 10.108.0.3 | 6e:da:27:3e:a7:4b | cinder, controller |               | False
> 6  | ready  | Untitled (80:36) | 2       | 10.108.0.4 | da:6c:69:3c:29:49 | cinder, controller |               | True
>
> [root@nailgun ~]# ssh node-8 vgdisplay | grep 'VG Name'
> Warning: Permanently added 'node-8' (RSA) to the list of known hosts.
> VG Name vm
> VG Name os
>
> [root@nailgun ~]# ssh node-7 vgdisplay | grep 'VG Name'
> Warning: Permanently added 'node-7' (RSA) to the list of known hosts.
> VG Name image
> VG Name cinder
> VG Name os
>
> [root@nailgun ~]# ssh node-6 vgdisplay | grep 'VG Name'
> Warning: Permanently added 'node-6' (RSA) to the list of known hosts.
> VG Name image
> VG Name cinder
> VG Name os
>
> Here I have a 'cinder' VG on the controller nodes that have the cinder role
> assigned, and no such VG on the compute node.
>
>> All related snippets are listed below:
>>
>>
>>
>> *_A few configurations we cross-verified:_*
>>
>> *root@node-10:~# nova service-list*
>>
>> +------------------+---------+----------+---------+-------+----------------------------+-----------------+
>> | Binary           | Host    | Zone     | Status  | State | Updated_at                 | Disabled Reason |
>> +------------------+---------+----------+---------+-------+----------------------------+-----------------+
>> | nova-conductor   | node-10 | internal | enabled | up    | 2014-06-25T06:52:39.000000 | -               |
>> | nova-consoleauth | node-10 | internal | enabled | up    | 2014-06-25T06:52:38.000000 | -               |
>> | nova-cert        | node-10 | internal | enabled | up    | 2014-06-25T06:52:38.000000 | -               |
>> | nova-scheduler   | node-10 | internal | enabled | up    | 2014-06-25T06:52:39.000000 | -               |
>> | nova-compute     | node-9  | nova     | enabled | up    | 2014-06-25T06:52:42.000000 | -               |
>> +------------------+---------+----------+---------+-------+----------------------------+-----------------+
>>
>> *root@node-10:~# neutron agent-list*
>>
>> +--------------------------------------+--------------------+---------+-------+----------------+
>> | id                                   | agent_type         | host    | alive | admin_state_up |
>> +--------------------------------------+--------------------+---------+-------+----------------+
>> | 0b0f007a-7d05-4156-9917-92b12b45454b | Open vSwitch agent | node-10 | :-)   | True           |
>> | 1657aad3-7c03-4ed3-b395-40c4a65a610f | Metadata agent     | node-10 | :-)   | True           |
>> | 519da0ca-e05f-4230-a20d-168288bf9aee | DHCP agent         | node-10 | :-)   | True           |
>> | bb58f6d3-2fa0-444c-98de-40b0be65db42 | L3 agent           | node-10 | :-)   | True           |
>> | cbd1c60c-71e1-42c0-b55e-72063fe37ca9 | Open vSwitch agent | node-9  | :-)   | True           |
>> +--------------------------------------+--------------------+---------+-------+----------------+
>>
>> *root@node-10:~# nova hypervisor-list*
>>
>> +----+---------------------+
>> | ID | Hypervisor hostname |
>> +----+---------------------+
>> | 1  | node-9              |
>> +----+---------------------+
>>
>>
>>
>> *_Query:_* "vgdisplay" does not list "cinder-volumes"
>>
>> *root@node-10:~# vgdisplay*
>>
>>   --- Volume group ---
>>   VG Name               image
>>   System ID
>>   Format                lvm2
>>   Metadata Areas        1
>>   Metadata Sequence No  2
>>   VG Access             read/write
>>   VG Status             resizable
>>   MAX LV                0
>>   Cur LV                1
>>   Open LV               1
>>   Max PV                0
>>   Cur PV                1
>>   Act PV                1
>>   VG Size               220.84 GiB
>>   PE Size               32.00 MiB
>>   Total PE              7067
>>   Alloc PE / Size       7066 / 220.81 GiB
>>   Free PE / Size        1 / 32.00 MiB
>>   VG UUID               iN0efe-33Od-zbKb-2Wpg-l1mD-qW6f-lXuMJd
>>
>> root@node-10:~#
>>
>>
>>
>>
>>
>> *_Configuration File:_*
>>
>> In /etc/cinder/cinder.conf, we have the line below:
>>
>> volume_group = cinder-volumes
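
Note that in the environment I showed above, the VG created by Fuel is named
'cinder', not 'cinder-volumes', so whatever name volume_group points to must
actually exist on the node. If you do want to create the group by hand, here
is a minimal LVM sketch, assuming a spare unpartitioned disk /dev/sdb (a
hypothetical device name, adjust to your hardware) on the node running
cinder-volume:

root@node-10:~# pvcreate /dev/sdb                 # initialize the disk as an LVM physical volume
root@node-10:~# vgcreate cinder-volumes /dev/sdb  # create the VG named in cinder.conf
root@node-10:~# vgdisplay | grep 'VG Name'        # 'cinder-volumes' should now be listed
root@node-10:~# service cinder-volume restart     # restart so cinder picks it up (service name may vary by distro)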
>>
>>
>>
>>
>>
>> Thanks and Regards,
>>
>> Gandhi Rajan
>>
>>
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Skype #bogdando_at_yahoo.com
> Irc #bogdando
>
--
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando