[Merge] lp:~admcleod/charms/trusty/apache-hadoop-compute-slave/hadoop-upgrade into lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk
Andrew McLeod has proposed merging lp:~admcleod/charms/trusty/apache-hadoop-compute-slave/hadoop-upgrade into lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk.
Requested reviews:
Juju Big Data Development (bigdata-dev)
For more details, see:
https://code.launchpad.net/~admcleod/charms/trusty/apache-hadoop-compute-slave/hadoop-upgrade/+merge/275387
Added hadoop-upgrade action to upgrade/roll back the Hadoop software.
Also updated resources.yaml.
--
Your team Juju Big Data Development is requested to review the proposed merge of lp:~admcleod/charms/trusty/apache-hadoop-compute-slave/hadoop-upgrade into lp:~bigdata-dev/charms/trusty/apache-hadoop-compute-slave/trunk.
=== modified file 'README.md'
--- README.md 2015-10-06 18:20:36 +0000
+++ README.md 2015-10-22 16:15:48 +0000
@@ -57,6 +57,24 @@
    juju set yarn-master ganglia_metrics=true
+## Upgrading
+
+This charm includes the hadoop-upgrade action, which downloads, unpacks, and
+upgrades the Hadoop software to the specified version. It should be used in
+conjunction with the hadoop-pre-upgrade and hadoop-post-upgrade actions on the
+namenode (apache-hadoop-hdfs-master), which stop any Hadoop-related processes
+in the cluster before allowing the upgrade to proceed.
+
+Syntax for this action is:
+
+    juju action do datanode/0 hadoop-upgrade version=X.X.X
+
+This action updates the unit's extended status as it progresses.
+You can also get action results with:
+
+    juju action fetch --wait 0 <action-id>
+
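+To roll back to a previously installed version, run the same action with
+rollback=true (the version number here is illustrative):
+
+    juju action do datanode/0 hadoop-upgrade version=2.4.1 rollback=true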
+
## Deploying in Network-Restricted Environments
The Apache Hadoop charms can be deployed in environments with limited network
=== added directory 'actions'
=== added file 'actions.yaml'
--- actions.yaml 1970-01-01 00:00:00 +0000
+++ actions.yaml 2015-10-22 16:15:48 +0000
@@ -0,0 +1,10 @@
+hadoop-upgrade:
+  description: Upgrade (or roll back) Hadoop to the specified version.
+  params:
+    version:
+      type: string
+      description: Destination Hadoop version (X.X.X).
+    rollback:
+      type: boolean
+      description: Roll back to the given previously installed version. Defaults to false.
+      default: false
+  required: [version]
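+# Example invocation (unit name illustrative):
+#   juju action do datanode/0 hadoop-upgrade version=2.7.1 rollback=false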
=== added file 'actions/hadoop-upgrade'
--- actions/hadoop-upgrade 1970-01-01 00:00:00 +0000
+++ actions/hadoop-upgrade 2015-10-22 16:15:48 +0000
@@ -0,0 +1,82 @@
+#!/bin/bash
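+# Source /etc/environment for JAVA_HOME etc., preserving the existing PATH.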
+export SAVEPATH=$PATH
+. /etc/environment
+export PATH=$PATH:$SAVEPATH
+export JAVA_HOME
+
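+# Current version from the installed hadoop binary; target version and
+# rollback flag from the action parameters.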
+current_hadoop_ver=$(/usr/lib/hadoop/bin/hadoop version | head -n1 | awk '{print $2}')
+new_hadoop_ver=$(action-get version)
+cpu_arch=$(lscpu | grep -i arch | awk '{print $2}')
+rollback=$(action-get rollback)
+
+
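+# Refuse to upgrade while Hadoop processes are running; the namenode's
+# hadoop-pre-upgrade action is expected to stop them first.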
+if pgrep -f Dproc_datanode > /dev/null ; then
+    action-set result="datanode process detected, upgrade aborted"
+    action-fail "datanode process detected, upgrade aborted"
+    exit 1
+fi
+
+if pgrep -f Dproc_nodemanager > /dev/null ; then
+    action-set result="nodemanager process detected, upgrade aborted"
+    action-fail "nodemanager process detected, upgrade aborted"
+    exit 1
+fi
+
+if [ "$new_hadoop_ver" == "$current_hadoop_ver" ] ; then
+ action-set result="Same version already installed, aborting"
+ action-fail "Same version already installed"
+ exit 1
+fi
+
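+# Rollback: repoint /usr/lib/hadoop at the previously installed version
+# and carry the logs across.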
+if [ "${rollback}" == "True" ] ; then
+ if [ -d /usr/lib/hadoop-${new_hadoop_ver} ] ; then
+ rm /usr/lib/hadoop
+ ln -s /usr/lib/hadoop-${new_hadoop_ver} /usr/lib/hadoop
+ if [ -d /usr/lib/hadoop-${current_hadoop_ver}/logs ] ; then
+ mv /usr/lib/hadoop-${current_hadoop_ver}/logs /usr/lib/hadoop/
+ fi
+ fi
+ action-set newhadoop.rollback="successfully rolled back"
+ status-set active "Ready - rollback to ${new_hadoop_ver} complete"
+ exit 0
+fi
+
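+# Fetch and checksum-verify the target tarball defined in resources.yaml.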
+status-set maintenance "Fetching hadoop-${new_hadoop_ver}-${cpu_arch}"
+if ! juju-resources fetch hadoop-${new_hadoop_ver}-${cpu_arch} ; then
+    action-set newhadoop.fetch="fail"
+    action-fail "failed to fetch hadoop-${new_hadoop_ver}-${cpu_arch}"
+    exit 1
+fi
+action-set newhadoop.fetch="success"
+
+status-set maintenance "Verifying hadoop-${new_hadoop_ver}-${cpu_arch}"
+if ! juju-resources verify hadoop-${new_hadoop_ver}-${cpu_arch} ; then
+    action-set newhadoop.verify="fail"
+    action-fail "failed to verify hadoop-${new_hadoop_ver}-${cpu_arch}"
+    exit 1
+fi
+action-set newhadoop.verify="success"
+
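+# Preserve the current install under a versioned directory so it can be
+# rolled back later, and keep /usr/lib/hadoop pointing at a working tree
+# until the new version is extracted.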
+new_hadoop_path=$(juju-resources resource_path hadoop-${new_hadoop_ver}-${cpu_arch})
+if [ -h /usr/lib/hadoop ] ; then
+    rm /usr/lib/hadoop
+else
+    mv /usr/lib/hadoop /usr/lib/hadoop-${current_hadoop_ver}
+fi
+ln -s /usr/lib/hadoop-${current_hadoop_ver} /usr/lib/hadoop
+current_hadoop_path=/usr/lib/hadoop-${current_hadoop_ver}
+
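+# Extract the new version and switch the symlink only once extraction succeeds.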
+status-set maintenance "Extracting hadoop-${new_hadoop_ver}-${cpu_arch}"
+if ! tar -zxf ${new_hadoop_path} -C /usr/lib/ ; then
+    action-fail "failed to extract ${new_hadoop_path}"
+    exit 1
+fi
+rm /usr/lib/hadoop
+ln -s /usr/lib/hadoop-${new_hadoop_ver} /usr/lib/hadoop
+if [ -d ${current_hadoop_path}/logs ] ; then
+    mv ${current_hadoop_path}/logs /usr/lib/hadoop-${new_hadoop_ver}/
+fi
+action-set result="complete"
+status-set active "Ready - hadoop version ${new_hadoop_ver} installed"
=== modified file 'resources.yaml'
--- resources.yaml 2015-10-06 18:21:31 +0000
+++ resources.yaml 2015-10-22 16:15:48 +0000
@@ -26,3 +26,11 @@
    url: https://s3.amazonaws.com/jujubigdata/apache/x86_64/hadoop-2.4.1-a790d39.tar.gz
    hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672
    hash_type: sha256
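+  # The hadoop-upgrade action fetches resources named hadoop-<version>-<arch>;
+  # new versions added here must follow that naming scheme.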
+  hadoop-2.4.1-x86_64:
+    url: https://s3.amazonaws.com/jujubigdata/apache/x86_64/hadoop-2.4.1-a790d39.tar.gz
+    hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672
+    hash_type: sha256
+  hadoop-2.7.1-x86_64:
+    url: http://mirrors.ukfast.co.uk/sites/ftp.apache.org/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
+    hash: 991dc34ea42a80b236ca46ff5d207107bcc844174df0441777248fdb6d8c9aa0
+    hash_type: sha256