
fuel-dev team mailing list archive

Re: How to configure physical server as compute node

 

Greetings Gandhi,

Thank you for your answers, and sorry for the delay; this turned into quite a
long configuration manual.
Now I can describe in detail which configurations are possible with this
hardware.

Since your servers have only 2 NICs each, it is impossible to install both
the master node and the controller as virtual machines on one of the servers
and deploy 2 bare-metal compute nodes on the other 2 servers.
That configuration requires at least 3 NICs on the server hosting the virtual
machines.
Also, with Mirantis Openstack it is impossible to deploy an all-in-one
Openstack (Controller+Compute) on a single node; the controller always has to
be a separate node from the compute nodes.

I see 2 possible setup scenarios with the available hardware.

Please note: the existing OS and all disk drive partitions will be wiped out
on all 3 servers in both scenarios!

Scenario 1. Full bare-metal deployment: 1 master node, 1 non-HA Openstack
controller, 1 Openstack compute node.

In this case the Cinder role may be combined with either the controller or
the compute node; Fuel allows such combinations.
It is also the simplest setup.

Preparation steps (based on default Mirantis Openstack settings):

1. Isolate the group of ports on the Brocade VDX to which the eth0 device of
each server is connected.
This port group is intended to carry the Admin network and the VLAN-tagged
Storage, Management and Private networks.
It should allow both untagged and VLAN-tagged traffic.
It should allow promiscuous traffic; this is a Neutron requirement.
No ARP or broadcast traffic should be able to leave this port group.
No DHCP server should answer DHCP requests on this port group in the Admin
network segment (a quick way to check this is sketched at the end of this
step).

By default the Mirantis Openstack master node assigns the 10.20.0.0/24
network segment to the Admin network.
It is enough for a 126-node Openstack cluster.
You may choose another network segment for the Admin network during the
Mirantis Openstack master node deployment and change your DHCP settings so
that the chosen segment is not served.
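
If you want to double-check that no other DHCP server answers inside this
port group, one optional way is nmap's DHCP discovery script, run from any
Linux host plugged into one of these ports. This is only a verification
sketch; the interface name eth0 is an assumption and may differ on your host:

  # send a DHCP DISCOVER on eth0 and list every DHCP server that replies
  sudo nmap --script broadcast-dhcp-discover -e eth0

Once the master node is installed, it should be the only responder on this
segment.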

2. The port group on the Brocade VDX device connected to the eth1 interface
of all servers is by default intended to connect Openstack to the external
network.
The best option is to leave all these ports passing untagged traffic,
although you may move one or all of the Storage, Private or Management
networks to eth1 and allow the corresponding VLAN numbers on this port group.
This port group may have an external gateway, which you should specify on the
Networks tab of your environment in the Fuel UI before deployment.
As with the first port group, promiscuous traffic should be allowed on these
ports.
To simplify the settings, we do not recommend enabling a DHCP server for the
network segment used as the External (Public) network.
However, your hardware router/gateway should be configured to route all
traffic from the External network CIDR to the appropriate external network
and back.
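
For illustration only, in case your external gateway is a Linux box: the
"route traffic back" part usually boils down to a single static route for the
External network CIDR toward the interface facing the Brocade VDX port group.
The CIDR and interface name below are placeholders, not required values:

  # on the external Linux router/gateway
  ip route add 172.16.0.0/24 dev eth2

A hardware router needs the equivalent static route configured via its own
CLI or web UI.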


3. Master node installation.
Insert the burned CD, mount the ISO image via the server management
interface, or plug in a prepared USB key with the Mirantis Openstack image.
Boot the server from the image and deploy the master node.
You may leave all settings at their defaults.
If you are going to use a network CIDR other than 10.20.0.0/24 for the
Mirantis Openstack Admin/PXE network, this is your only chance to configure
it: the setting propagates to almost all master node services and cannot be
reconfigured on the fly. To change the Admin network segment later you must
reinstall the master node completely.

Please find how to get into the network configuration menu here:
http://docs.mirantis.com/fuel/fuel-4.1/install-guide.html#changing-network-parameters-during-installation

Please do not forget to remove the installation media from the master node
server after the master node is completely installed!
The server may accidentally boot from the installation media on restart, and
in that case the master node will be completely re-deployed with default
settings!

After your master node is deployed, the Fuel UI (Fuel is the Mirantis
OpenStack deployment manager) should be accessible at 10.20.0.2:8000 or at
the address/port you configured during the master node setup.
You may have to add a route to this network segment on your personal machine.
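
For example, on a Linux workstation that reaches the lab through an
intermediate router (the router address below is just a placeholder):

  # make the Fuel Admin/PXE segment reachable from the workstation
  sudo ip route add 10.20.0.0/24 via 192.168.0.1

On Windows the equivalent is: route add 10.20.0.0 mask 255.255.255.0 192.168.0.1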

4. Nodes preparation.
Set the remaining servers to boot via PXE.
Start (or reboot) them both after the master node installation has finished
and you are able to access the Fuel UI.
Both servers should automatically download and boot into the bootstrap image.
Please do not select any boot options from the bootstrap menu; the necessary
mode is always selected automatically by the master node.
A couple of minutes after the remaining servers finish booting, the master
node should report that it has 2 unassigned nodes.
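
You can also confirm this from the master node console with the Fuel CLI
(assuming it is available in your Mirantis Openstack version; the exact
output format may vary):

  # list all nodes discovered by the master node
  fuel node list

Both servers should appear with the status "discover" and no roles assigned
yet.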


5. Create a new environment in the Fuel UI.
5a. Click New OpenStack Environment in the Fuel UI. Give it a name and choose
the operating system (CentOS or Ubuntu) for the OpenStack nodes.
5b. Choose the Multi-Node deployment mode.
5c. Choose KVM as the hypervisor type in the Compute section.
5d. Choose Neutron with VLAN segmentation (or whatever network provider you
are going to test).
5e. Leave the default storage backends for both Cinder and Glance.
5f. It is not necessary to install any additional services. So, if you don't
need them, leave all 3 (Savanna, Murano, Ceilometer) unchecked.

6. Additional environment configuration.
Click the icon of the just-created environment in the Fuel UI to expand its
settings.
Click the Add Nodes button.
Set the Controller check box ON.
Mark the check box of one of the servers in the list below the Roles section;
this server will be the controller.
Click Apply Changes.

Click the Add Nodes button one more time.
Set both the Compute and the Storage - Cinder LVM check boxes ON.
Mark the check box of the remaining server in the list below the Roles
section.
Click Apply Changes.

7. At this point you should have 2 servers with assigned OpenStack roles.
Mark the controller node check box and click the Configure Interfaces button.

Here you may drag and drop the Private, Storage and Management networks to
assign each network to a particular interface.
I recommend leaving the interface configuration as is.
The main idea: each network should be assigned to the NIC for which you
configured the proper VLAN numbers on the Brocade VDX device.

The default VLANs are 101 for Management and 102 for Storage; we will change
these numbers in the next step if required.
The Private network may require several VLAN IDs; these IDs may also be
configured in the next step.
The Admin and Public networks are untagged by default.

Check the Compute + Cinder node network interface settings the same way we
just did for the controller.
The network settings for the compute node should match the controller.

8. Click the Networks tab.
Here you may set the necessary network CIDRs for each OpenStack network.
For tests it is recommended to leave these settings at their defaults.

Please set the VLAN IDs for the Management and Storage networks to match the
VLAN IDs allowed on the Brocade VDX device.

Please set a proper VLAN ID range for the Neutron L2 configuration and enable
these VLAN IDs on the Brocade VDX device as well (an illustrative switch-side
sketch follows below).
One VLAN ID is required per tenant, so 5-10 VLAN IDs in the range should be
enough. The default is 30 VLANs.
Too many VLANs greatly increases the network connectivity testing time.
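
Purely as an illustration of what "enable these VLAN IDs" means on the switch
side, the Brocade VDX (NOS) port configuration usually looks similar to the
sketch below. The interface name, rbridge/slot/port numbers and the exact
command syntax depend on your NOS version, so please treat this as an
assumption to verify against the Brocade documentation rather than as
copy-paste configuration:

  interface TenGigabitEthernet 1/0/5
    switchport
    switchport mode trunk
    switchport trunk allowed vlan add 101,102
    switchport trunk allowed vlan add 1000-1010

Here 101/102 stand for the default Management/Storage VLANs and 1000-1010 is
an example Neutron VLAN range; untagged Admin traffic rides the port's native
VLAN.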

Please note: the Floating IP range should be inside the same network segment
(CIDR) as the Public IP range, but the two ranges must not intersect!
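
To make the relationship concrete, here is an illustrative layout (the
addresses are only an example, not required values):

  Public CIDR       : 172.16.0.0/24
  Public IP range   : 172.16.0.2   - 172.16.0.126
  Floating IP range : 172.16.0.130 - 172.16.0.254

Both ranges sit inside the same /24, yet they do not overlap.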

Finally, click the Verify Networks button.
After 1-2 minutes (depending on the number of VLANs and the number of nodes)
Fuel will inform you that everything is OK, or show where network
connectivity issues were found with the current settings.
Change the network settings accordingly, click Verify Networks again, and
repeat until the check turns green.
You may have to revisit the NIC assignment settings in #7 or even the current
Brocade VDX settings in #1 - #2 to correct the network settings.
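
If Verify Networks keeps failing for a particular VLAN and you want to rule
the switch in or out, a manual spot check with standard Linux VLAN
sub-interfaces sometimes helps. This is optional; the interface name, VLAN ID
and test addresses below are arbitrary examples:

  # on two hosts attached to the same Brocade VDX port group, as root:
  ip link add link eth0 name eth0.101 type vlan id 101  # tagged sub-interface
  ip addr add 192.168.101.10/24 dev eth0.101            # use .11 on the peer
  ip link set eth0.101 up
  ping -c 3 192.168.101.11                              # tagged traffic test
  ip link del eth0.101                                  # clean up afterwards

If untagged traffic passes but the tagged ping fails, the VLAN is most likely
not allowed on the switch ports.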

9. Settings tab.
Actually, you configured all the necessary settings when you created the new
OpenStack environment.
The only setting you may want to change here is enabling debug logging.
Please note: it will enable the debug log level for all OpenStack services!
You can always enable the debug level for a particular OpenStack component
manually after OpenStack is deployed.
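
As an example of the per-component approach (based on standard OpenStack
configuration options of that release, not a Fuel-specific switch), you could
enable debug logging for Nova only on the controller:

  # /etc/nova/nova.conf, [DEFAULT] section
  debug = True
  verbose = True

and then restart the Nova services on that node, leaving all other OpenStack
components at the normal log level.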

10. Now you are ready to click Deploy Changes.
Please click it.

After Fuel finishes deploying your OpenStack environment, you should get a
working OpenStack with the standard Neutron network driver.

You may change this driver manually to the Brocade one (if a dedicated
Neutron driver exists for Brocade VDX).


Kind regards,
Miroslav





On Tue, May 6, 2014 at 9:50 AM, Gandhirajan Mariappan (CW) <
gmariapp@xxxxxxxxxxx> wrote:

> Hi Miroslav,
>
>
>
> Kindly find our setup details below and let us know your suggestion to
> form accurate setup for certification –
>
>
>
> a) The number of physical servers you have for certification (I assume 2)
>
> 3
>
>
>
> b) The number of network interfaces, available for tests on each physical
> server.
>
> Physical Server 1 : 2 NICs
>
> Physical Server 2 : 2 NICs
>
> Physical Server 3 : 2 NICs
>
>
>
> All 3 Physical Servers are attached to the Brocade VDX device.
>
>
>
> Thanks and Regards,
>
> Gandhi Rajan
>
>
>
> From: Miroslav Anashkin [mailto:manashkin@xxxxxxxxxxxx]
> Sent: Tuesday, May 06, 2014 12:47 AM
> To: Gandhirajan Mariappan (CW)
> Cc: fuel-dev@xxxxxxxxxxxxxxxxxxx; Nataraj Mylsamy (CW);
> DL-GRP-ENG-SQA-Open Stack; Prakash Kaligotla
>
> Subject: Re: [Fuel-dev] How to configure physical server as compute node
>
>
>
> Greetings Gandhi,
>
> Virtualbox scripts do the following:
>
> 1. These scripts create everything  - virtual nodes and virtual networks -
> inside the same single physical host machine.
>
> It is impossible to share the same virtual host-only network between
> different physical hosts, so part of the network would be virtual and part
> physical.
>
>
> 2. Virtualbox does not support virtual networks, distributed between
> several physical servers.
>
>
> 3. Virtualbox removes all VLAN tags from all traffic coming out of the
> Virtualbox virtual networks to the physical networks.
>
> These 3 Virtualbox limitations make it completely impossible to use the
> scenario where the master node and controller are deployed as virtual
> machines on one physical server and the compute node is deployed bare-metal
> on the second physical server.
>
> Such a setup is possible with QEMU/KVM because of its better VLAN support.
> But it requires a sufficient number of physical network interfaces on the
> server intended to host the virtual machines with the master node and
> controller.
>
>
>
>
>
> You mentioned you have only 2 physical servers at hand.
>
> Mirantis Openstack requires at least 3 servers: one for the master node, one
> for the Openstack controller and one for Openstack compute.
>
> However, for testing purposes it is possible to install the master node on
> any cheap laptop, or as a virtual machine on a laptop connected directly to
> the same network switch.
>
> So, before I can be more specific, I would like to kindly ask you to
> describe all the hardware you have got for the certification in the
> following details:
>
> a) The number of physical servers you have for certification (I assume 2)
>
> b) The number of network interfaces, available for tests on each physical
> server.
>
> Having this information, I will be able to suggest a more accurate
> test/certification setup.
>
> Kind regards,
>
> Miroslav
>
>
>
> On Mon, May 5, 2014 at 4:12 PM, Gandhirajan Mariappan (CW) <
> gmariapp@xxxxxxxxxxx> wrote:
>
> Hi,
>
>
>
> We have two physical servers attached to Brocade VDX device. One is Fuel
> Master Node(already configured) and the second one is Compute Node.
>
> We need guidance in configuring/setting up Compute node. Kindly let us
> know the configuration changes (config.sh), so that we will use second
> physical server as a compute node.
>
>
>
> Our understanding is with the existing config.sh, on running launch.sh,
> Master node, controller node and compute nodes will again be created in
> second physical server as well whereas we required only Compute node to be
> installed in second server.
>
>
>
> Thanks and Regards,
>
> Gandhi Rajan
>
>
> --
> Mailing list: https://launchpad.net/~fuel-dev
> Post to     : fuel-dev@xxxxxxxxxxxxxxxxxxx
> Unsubscribe : https://launchpad.net/~fuel-dev
> More help   : https://help.launchpad.net/ListHelp
>
>
>
>
> --
>
>
>
> Kind Regards,
> Miroslav Anashkin
> L2 support engineer,
> Mirantis Inc.
> +7(495)640-4944 (office receptionist)
> +1(650)587-5200 (office receptionist, call from US)
> 35b, Bld. 3, Vorontsovskaya St.
> Moscow, Russia, 109147.
>
> www.mirantis.com
>
> manashkin@xxxxxxxxxxxx
>



-- 

Kind Regards,

Miroslav Anashkin
L2 support engineer,
Mirantis Inc.
+7(495)640-4944 (office receptionist)
+1(650)587-5200 (office receptionist, call from US)
35b, Bld. 3, Vorontsovskaya St.
Moscow, Russia, 109147.

www.mirantis.com

manashkin@xxxxxxxxxxxx
