
bigdata-dev team mailing list archive

[Merge] lp:~admcleod/charms/trusty/apache-hadoop-plugin/hadoop-upgrade into lp:~bigdata-dev/charms/trusty/apache-hadoop-plugin/trunk


Andrew McLeod has proposed merging lp:~admcleod/charms/trusty/apache-hadoop-plugin/hadoop-upgrade into lp:~bigdata-dev/charms/trusty/apache-hadoop-plugin/trunk.

Requested reviews:
  Juju Big Data Development (bigdata-dev)

For more details, see:
https://code.launchpad.net/~admcleod/charms/trusty/apache-hadoop-plugin/hadoop-upgrade/+merge/275418

Added hadoop-upgrade action, modified resources.yaml
-- 
Your team Juju Big Data Development is requested to review the proposed merge of lp:~admcleod/charms/trusty/apache-hadoop-plugin/hadoop-upgrade into lp:~bigdata-dev/charms/trusty/apache-hadoop-plugin/trunk.
=== modified file 'README.md'
--- README.md	2015-10-06 18:29:02 +0000
+++ README.md	2015-10-22 16:48:34 +0000
@@ -25,9 +25,35 @@
     juju deploy apache-pig pig
     juju add-relation plugin pig
 
+
+## Upgrading
+
+This charm includes the hadoop-upgrade action, which will download, untar and
+upgrade the hadoop software to the specified version. It should be used in
+conjunction with the hadoop-pre-upgrade and hadoop-post-upgrade actions on the
+namenode (apache-hadoop-hdfs-master), which stop any hadoop-related processes
+on the cluster before allowing the upgrade to proceed.
+
+If different versions of hadoop are running on related services, the cluster
+will not function correctly.
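+
+For illustration only - unit names and the pre/post action parameters depend
+on your deployment (check the apache-hadoop-hdfs-master charm's actions.yaml) -
+a full cluster upgrade might look like:
+
+    juju action do hdfs-master/0 hadoop-pre-upgrade
+    juju action do plugin/0 hadoop-upgrade version=2.7.1 rollback=false
+    juju action do hdfs-master/0 hadoop-post-upgrade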
+
+The rollback parameter specifies whether to download and install the given
+version (rollback=false) or simply re-point the /usr/lib/hadoop symlink at a
+previously installed version (rollback=true).
+
+Syntax for this action is:
+
+    juju action do plugin/0 hadoop-upgrade version=X.X.X rollback=false
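+
+To roll back to a version that was previously installed by this action, for
+example:
+
+    juju action do plugin/0 hadoop-upgrade version=2.4.1 rollback=true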
+
+This action updates the unit's extended status as it progresses.
+You can also get action results with:
+
+    juju action fetch --wait 0 <action-id>
+
+
 ## Benchmarking
 
-    You can perform a terasort benchmark, in order to gauge performance of your environment:
+You can perform a terasort benchmark, in order to gauge performance of your environment:
 
         $ juju action do plugin/0 terasort
         Action queued with id: cbd981e8-3400-4c8f-8df1-c39c55a7eae6

=== modified file 'actions.yaml'
--- actions.yaml	2015-06-08 20:05:37 +0000
+++ actions.yaml	2015-10-22 16:48:34 +0000
@@ -36,3 +36,13 @@
             description: How many tasks to run per jvm. If set to -1, there is no limit.
             type: integer
             default: 1
+hadoop-upgrade:
+  description: Upgrade (or roll back) hadoop to the specified version.
+  params:
+    version:
+      type: string
+      description: Destination hadoop version (X.X.X).
+    rollback:
+      type: boolean
+      description: Roll back to a previously installed version. Defaults to false.
+      default: false
+  required: [version]

=== added file 'actions/hadoop-upgrade'
--- actions/hadoop-upgrade	1970-01-01 00:00:00 +0000
+++ actions/hadoop-upgrade	2015-10-22 16:48:34 +0000
@@ -0,0 +1,70 @@
+#!/bin/bash
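+# sourcing /etc/environment replaces PATH, so save it and splice it back in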
+export SAVEPATH=$PATH
+. /etc/environment
+export PATH=$PATH:$SAVEPATH
+export JAVA_HOME
+
+current_hadoop_ver=$(/usr/lib/hadoop/bin/hadoop version | head -n1 | awk '{print $2}')
+new_hadoop_ver=$(action-get version)
+cpu_arch=$(lscpu | grep -i arch | awk '{print $2}')
+rollback=$(action-get rollback)
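+
+# activate the charm's Python venv, which is assumed to provide the
+# juju-resources tool used below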
+source .venv/bin/activate
+
+
+if [ "$new_hadoop_ver" == "$current_hadoop_ver" ] ; then 
+        action-set result="Same version already installed, aborting"
+        action-fail "Same version already installed"
+        exit 1
+fi
+
+if [ "${rollback}" == "True" ] ; then
+        if [ -d /usr/lib/hadoop-${new_hadoop_ver} ] ; then
+                rm /usr/lib/hadoop
+                ln -s /usr/lib/hadoop-${new_hadoop_ver} /usr/lib/hadoop
+                if [ -d /usr/lib/hadoop-${current_hadoop_ver}/logs ] ; then
+                        mv /usr/lib/hadoop-${current_hadoop_ver}/logs /usr/lib/hadoop/
+                fi
+        fi
+                action-set newhadoop.rollback="successfully rolled back"
+                status-set active "Ready - rollback to ${new_hadoop_ver} complete"
+                exit 0
+fi
+                
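+# the action expects resources.yaml to define a hadoop-<version>-<arch>
+# resource for every version it can upgrade to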
+status-set maintenance "Fetching hadoop-${new_hadoop_ver}-${cpu_arch}"
+juju-resources fetch hadoop-${new_hadoop_ver}-${cpu_arch}
+if [ ! $? -eq 0 ] ; then
+        action-set newhadoop.fetch="fail"
+        exit 1
+fi
+action-set newhadoop.fetch="success"
+
+status-set maintenance "Verifying hadoop-${new_hadoop_ver}-${cpu_arch}"
+juju-resources verify hadoop-${new_hadoop_ver}-${cpu_arch}
+if [ ! $? -eq 0 ] ; then
+        action-set newhadoop.verify="fail"
+        exit 1
+fi
+action-set newhadoop.verify="success"
+
+new_hadoop_path=$(juju-resources resource_path hadoop-${new_hadoop_ver}-${cpu_arch})
+
+# preserve the current install under a versioned directory; on first upgrade
+# /usr/lib/hadoop is a real directory, on subsequent upgrades it is a symlink
+if [ -h /usr/lib/hadoop ] ; then
+        rm /usr/lib/hadoop
+elif [ -d /usr/lib/hadoop ] ; then
+        mv /usr/lib/hadoop /usr/lib/hadoop-${current_hadoop_ver}
+fi
+ln -s /usr/lib/hadoop-${current_hadoop_ver} /usr/lib/hadoop
+
+status-set maintenance "Extracting hadoop-${new_hadoop_ver}-${cpu_arch}"
+tar -zxvf ${new_hadoop_path} -C /usr/lib/
+if [ $? -eq 0 ] ; then
+        if [ -h /usr/lib/hadoop ] ; then
+                rm /usr/lib/hadoop
+        fi
+        ln -s /usr/lib/hadoop-${new_hadoop_ver} /usr/lib/hadoop
+fi
+if [ -d ${current_hadoop_path}/logs ] ; then
+        mv ${current_hadoop_path}/logs ${new_hadoop_path}/
+fi
+action-set result="complete"
+status-set active "Ready - hadoop version ${new_hadoop_ver} installed"

=== modified file 'resources.yaml'
--- resources.yaml	2015-10-06 18:29:28 +0000
+++ resources.yaml	2015-10-22 16:48:34 +0000
@@ -28,3 +28,11 @@
     url: https://s3.amazonaws.com/jujubigdata/apache/x86_64/hadoop-2.4.1-a790d39.tar.gz
     hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672
     hash_type: sha256
+  hadoop-2.4.1-x86_64:
+    url: https://s3.amazonaws.com/jujubigdata/apache/x86_64/hadoop-2.4.1-a790d39.tar.gz
+    hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672
+    hash_type: sha256
+  hadoop-2.7.1-x86_64:
+    url: http://mirrors.ukfast.co.uk/sites/ftp.apache.org/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz
+    hash: 991dc34ea42a80b236ca46ff5d207107bcc844174df0441777248fdb6d8c9aa0
+    hash_type: sha256
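+  # to make another version available to the hadoop-upgrade action, add an
+  # entry following the same hadoop-<version>-<arch> naming scheme, e.g.
+  # (placeholder values, not a real release):
+  #   hadoop-X.X.X-x86_64:
+  #     url: <tarball url>
+  #     hash: <sha256 of the tarball>
+  #     hash_type: sha256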