savanna-all team mailing list archive
How to construct the savanna-0.1-hdp-img
Thanks Alexander and Sergey.
I'm going to try out these instructions on a Fedora / RHEL image and see
if I'm able to get a working Savanna system.
In the meantime, will you check in the scripts to the savanna repository?
On 05/14/2013 01:32 PM, Alexander Ignatov wrote:
We didn't check image creation against the latest commit in the trunk of
Here is the commit ID with which the created images were known to work with Savanna:
From: Alexander Ignatov [mailto:aignatov@xxxxxxxxxxxx]
Sent: Tuesday, May 14, 2013 7:28 PM
To: 'Sergey Lukjanov'; 'Matthew Farrellee'
Subject: RE: 0.1.1 cluster remains Starting
Actually, savanna-0.1-hdp-img was created manually in our OpenStack Lab. Now
we are working on automating the Savanna image preparation using the
diskimage-builder project: https://github.com/stackforge/diskimage-builder
Here are both the manual and the automated descriptions. Please note that for
now Savanna works with the Ubuntu cloud image only.
Manual steps (for Ubuntu):
1. Launch a VM instance from a cloud image in the OpenStack Lab.
2. Install an SSH server on this instance.
3. Install Oracle JDK. We are using jdk-6u43-linux-x64 in our image.
4. Create a group and user "hadoop" with a disabled password.
5. Install a Hadoop 1.1.X version (we are using version 1.1.1) from
http://archive.apache.org/dist/hadoop/core/ . We install Hadoop from
deb packages only. The Hadoop configuration files will be placed in the
/etc/hadoop directory after installation.
6. Change the .bashrc file in /home/hadoop. Add the following variables:
HADOOP_HOME=/usr/share/hadoop/ and PATH=$PATH:/usr/sbin/
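The step-6 .bashrc additions can be sketched as a small shell fragment. HOME_DIR is my stand-in for /home/hadoop so the fragment can be dry-run outside the image:

```shell
# Sketch of step 6: append the Hadoop environment variables to the
# hadoop user's .bashrc. HOME_DIR stands in for /home/hadoop so the
# fragment can be exercised outside the image.
HOME_DIR=${HOME_DIR:-$(mktemp -d)}
cat >> "$HOME_DIR/.bashrc" <<'EOF'
export HADOOP_HOME=/usr/share/hadoop/
export PATH=$PATH:/usr/sbin/
EOF
echo "updated $HOME_DIR/.bashrc"
```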
7. Generate an SSH key for the hadoop user using ssh-keygen and copy the
contents of .ssh/id_rsa.pub into authorized_keys.
8. chmod 600 /home/hadoop/.ssh/authorized_keys
chown hadoop:hadoop /run/
chown hadoop:hadoop /mnt/
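Steps 7 and 8 can be scripted roughly as below. SSH_DIR is my stand-in for /home/hadoop/.ssh so the snippet can be dry-run outside the instance; the chown lines above still need to run as root on the image itself:

```shell
# Sketch of steps 7-8: generate a passwordless RSA key for the hadoop
# user, authorize it, and lock down the file permissions.
# SSH_DIR stands in for /home/hadoop/.ssh.
SSH_DIR=${SSH_DIR:-$(mktemp -d)}
ssh-keygen -t rsa -N "" -q -f "$SSH_DIR/id_rsa"
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
echo "authorized_keys ready"
```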
9. Set the password 'swordfish' for root. :)
10. Change /etc/ssh/sshd_config:
Set the following params:
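The parameter list did not survive in the archive. Given the requirement elsewhere in this thread that root be reachable with a login/password, the settings were presumably along these lines (an assumption, not the original list):

```
# /etc/ssh/sshd_config -- assumed values, matching the "root accessible
# via login/password" requirement stated later in this thread
PermitRootLogin yes
PasswordAuthentication yes
```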
11. Change /etc/ssh/ssh_config:
12. Now you can take a snapshot of the running instance and use its ID in
cluster creation requests in Savanna.
I'm not sure whether I missed something in the manual steps. In any case, the
attached script "80-setup-hadoop" contains all of the steps above.
Here are the steps to create an Ubuntu-based cloud image with Apache Hadoop
installed, using the diskimage-builder project:
1. Clone the repository "https://github.com/stackforge/diskimage-builder".
2. Add the ~/diskimage-builder/bin/ directory to your PATH (for example,
3. Export the following variable
ELEMENTS_PATH=/home/$USER/diskimage-builder/elements/ in your .bashrc. Then
4. Copy the file "img-build-sudoers" from ~/diskimage-builder/sudoers.d/ to
/etc/sudoers.d/. Then "chmod 440 /etc/sudoers.d/img-build-sudoers".
5. Move the hadoop/ directory from the attached hadoop.zip archive to the
diskimage-builder elements directory:
mv hadoop/ /path_to_disk_image_builder/diskimage-builder/elements/
6. Run the following command to create a cloud image able to run on
OpenStack:
hadoop_version=1.1.2 disk-image-create base vm hadoop ubuntu -o hadoop_1_1_2
Here the 'hadoop_version' parameter is the version of Hadoop to be
installed.
After the Hadoop installation completes, take the hadoop_1_1_2.qcow2 image
and upload it into OpenStack Glance. The diskimage-builder instructions are
not well tested yet, but they should work with Savanna.
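Put together, the automated steps above amount to roughly the following sequence. This is a sketch, not the tested procedure: the output name hadoop_1_1_2 is inferred from the .qcow2 file mentioned above, and the clone location in the home directory is my assumption.

```shell
# Sketch of steps 1-6 (assumed paths; requires the diskimage-builder
# tool and network access, so not meant to be run verbatim).
git clone https://github.com/stackforge/diskimage-builder
export PATH=$PATH:$HOME/diskimage-builder/bin
export ELEMENTS_PATH=/home/$USER/diskimage-builder/elements/
sudo cp ~/diskimage-builder/sudoers.d/img-build-sudoers /etc/sudoers.d/
sudo chmod 440 /etc/sudoers.d/img-build-sudoers
mv hadoop/ ~/diskimage-builder/elements/
hadoop_version=1.1.2 disk-image-create base vm hadoop ubuntu -o hadoop_1_1_2
```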
From: Sergey Lukjanov [mailto:slukjanov@xxxxxxxxxxxx]
Sent: Tuesday, May 14, 2013 11:17 AM
To: Matthew Farrellee
Subject: Re: 0.1.1 cluster remains Starting
Briefly, there are several requirements for the image:
* Hadoop should be installed into the specific folders;
* root should be accessible using login/password;
* some Savanna-specific files and settings should be applied to the image.
Alex Ignatov can describe the Savanna image preparation process in detail.
Currently we have no public instructions on how to do it, but we are
working on scripts that will automatically build images using
diskimage-builder.