
[Cinder] FilterScheduler not handling multi-backend

 

Hello Folks,

I have the cinder volume service set up with both FC and iSCSI drivers
(multi-backend).

Here's my cinder.conf:

scheduler_host_manager=cinder.scheduler.host_manager.HostManager
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter
scheduler_default_weighers=CapacityWeigher
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler

*enabled_backends=3PAR-ISCSI,3PAR-FC*

*[3PAR-ISCSI]*
<backend creds>

*[3PAR-FC]*
<backend creds>
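
For reference, each <backend creds> block above follows the usual multi-backend layout, roughly as in the sketch below (not my literal config; the driver path is only illustrative and may differ between releases, the iSCSI section is analogous, and volume_backend_name matches the extra specs shown further down):

[3PAR-FC]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
volume_backend_name=HP3PARFCDriver
# plus the 3PAR API URL, credentials, CPG, SAN IP, etc.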

*$cinder-manage  host list*
host                            zone
ask27                           nova
ask27@3PAR-FC                   nova
ask27@3PAR-ISCSI                nova

$cinder extra-specs-list
+--------------------------------------+-------+----------------------------------------------------------------------------+
|                  ID                  |  Name |                                 extra_specs                                |
+--------------------------------------+-------+----------------------------------------------------------------------------+
| 08109e24-79e0-4d24-bb8b-f26c39c6f0e2 |   FC  |  {u'persona': u'11 - VMware', u'volume_backend_name': u'HP3PARFCDriver'}   |
| b94d25b3-c022-4fb1-ba00-53ff9b6901e7 | ISCSI | {u'persona': u'11 - VMware', u'volume_backend_name': u'HP3PARISCSIDriver'} |
+--------------------------------------+-------+----------------------------------------------------------------------------+
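
The volume types carrying these extra specs were created in the usual way, something along these lines (a sketch of the commands, not a verbatim shell history):

$cinder type-create FC
$cinder type-key FC set volume_backend_name=HP3PARFCDriver 'persona=11 - VMware'
$cinder type-create ISCSI
$cinder type-key ISCSI set volume_backend_name=HP3PARISCSIDriver 'persona=11 - VMware'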

Now, when I create a volume:

*$cinder create --display-name vol99 --volume-type ISCSI 1*

*$cinder show 15820d30-44ff-47e3-9dec-63921800e1b9*

attachments                   : []
availability_zone             : nova
bootable                      : false
created_at                    : 2013-07-08T06:33:02.186351
display_description           : None
display_name                  : vol99
id                            : 15820d30-44ff-47e3-9dec-63921800e1b9
metadata                      : {u'CPG': u'ESX_CLUSTERS_RAID5_250GB_FC', u'3ParName': u'osv-FYINMET-R.Od7GOSGADhuQ', u'snapCPG': u'ESX_CLUSTERS_RAID5_250GB_FC'}
*os-vol-host-attr:host        : ask27@3PAR-FC*
os-vol-tenant-attr:tenant_id  : 9e27e1aded67424d895bf83a4026484d
size                          : 1
snapshot_id                   : None
source_volid                  : None
status                        : available
volume_type                   : FC


Inside cinder's scheduler.log:

Filtered *[host 'ask27@3PAR-ISCSI': free_capacity_gb: 245, host 'ask27@3PAR-FC': free_capacity_gb: 246]* _schedule /usr/lib/python2.6/site-packages/cinder/scheduler/filter_scheduler.py:208
*Choosing WeighedHost [host: ask27@3PAR-FC, weight: 246.0]* _schedule /usr/lib/python2.6/site-packages/cinder/scheduler/filter_scheduler.py:214
*Making asynchronous cast on cinder-volume.ask27@3PAR-FC...*

For volume type ISCSI, the host ask27@3PAR-FC gets selected, which is wrong.

Because of this, nova volume-attach loads the wrong volume_driver to attach
the volume.

I have already looked through the bug list; there is nothing I could relate this to.

Has anybody seen this issue before?


Thanks and Regards,
Yatin Kumbhare
