
openstack team mailing list archive

Re: Configuring More-than-One Cinder Node

 

Logan,

Thank you for the response!

The high-availability configuration documentation will be useful. The immediate problem, though, is less about high availability than about improving performance.

Following the nova controller-and-compute model, I have one cinder node running the api, scheduler, and volume services, and the others running only the volume service. The error in my configuration was that the iscsi_ip_address value was set incorrectly: it pointed to the IP address of the "controller" node. Changing it to the IP address of the host itself solved the problem.
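
Concretely, the one line that had to change in /etc/cinder/cinder.conf on each additional volume node was (10.0.0.12 standing in for that node's own address):

  [DEFAULT]
  # Must be the address of this host, not the "controller":
  iscsi_ip_address = 10.0.0.12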

As a recap, this is what I think it all means:

To run cinder services for a cluster on multiple nodes, pick one node to be the "controller" and run:
  openstack-cinder-api
  openstack-cinder-scheduler
  openstack-cinder-volume
  tgtd

On the rest of the cinder nodes, run:
  openstack-cinder-volume
  tgtd
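
On a Red Hat-style install (matching the packaged service names above), that amounts to something like:

  # "controller" node
  for svc in openstack-cinder-api openstack-cinder-scheduler \
             openstack-cinder-volume tgtd; do
      chkconfig $svc on && service $svc start
  done

  # every other cinder node
  for svc in openstack-cinder-volume tgtd; do
      chkconfig $svc on && service $svc start
  done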

I expect this to change for a high availability configuration. Is there, however, anything particularly wrong with the configuration I described?

Thanks,

Craig

On 03/11/2013 06:15 PM, Logan McNaughton wrote:
You'll want to look here:
http://docs.openstack.org/trunk/openstack-ha/content/s-cinder-api.html

You'll need to basically create a virtual IP and load balance between the
nodes running cinder-api and cinder-scheduler. If you want multiple nodes
running cinder-volume, you can add them regularly, like you would with a
nova-compute node.
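
Something along the lines of this haproxy fragment, for example (addresses are only placeholders; 8776 is the default cinder-api port):

  listen cinder-api 192.168.42.100:8776
      balance source
      option tcpka
      server cinder-1 192.168.42.11:8776 check
      server cinder-2 192.168.42.12:8776 check
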
On Mar 11, 2013 6:51 PM, "Debashis Kundu (dkundu)" <dkundu@xxxxxxxxx> wrote:

----- Original Message -----
From: Craig E. Ward [mailto:cward@xxxxxxx]
Sent: Monday, March 11, 2013 05:23 PM
To: openstack@xxxxxxxxxxxxxxxxxxx <openstack@xxxxxxxxxxxxxxxxxxx>
Subject: [Openstack] Configuring More-than-One Cinder Node

I have an installation that wants to deploy two or more cinder nodes within an OpenStack (Folsom) cluster.
All of the hits I find on Google for configuring cinder only describe how to configure the software for a single node.
Is it even possible to have more than one node running the cinder services in a cluster?

The setup I have has one of the cinder nodes identified as "the" cinder node to the compute and other node types.
A second node was installed and the cinder services started.

On the nova controller node, however, while a new volume could be created that was listed in the MySQL database as on the second node, all attempts to attach that volume to an instance "silently" failed.
The "nova volume-attach" command would come back with an id and mapping of instance to volume, but the very next "nova volume-list" command continued to show the volume in question as "available."

If the second cinder node had the cinder-volume service running, volumes on that node could be deleted.
If cinder-volume was not running, the "delete" would go on forever.

Everything works as expected with only the cinder node configured in nova.conf running, i.e. as a single cinder node installation.
Volumes can be created, attached, used, detached, and deleted.

Are there some extra parameters that should be set in either nova.conf or cinder.conf to indicate that the cinder services are available on more than one node?
Or is what we're trying to do something unexpected and not supported?

Thanks,

Craig

--
Craig E. Ward
USC Information Sciences Institute
cward@xxxxxxx


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp




--
Craig E. Ward
USC Information Sciences Institute
310-448-8271
cward@xxxxxxx



