
bigdata-dev team mailing list archive

[Merge] lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk into lp:charms/trusty/apache-hadoop-hdfs-master


Kevin W Monroe has proposed merging lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk into lp:charms/trusty/apache-hadoop-hdfs-master.

Requested reviews:
  Kevin W Monroe (kwmonroe)

For more details, see:
https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk/+merge/268668
-- 
Your team Juju Big Data Development is subscribed to branch lp:~bigdata-dev/charms/trusty/apache-hadoop-hdfs-master/trunk.
=== modified file 'DEV-README.md'
--- DEV-README.md	2015-06-18 17:07:45 +0000
+++ DEV-README.md	2015-08-20 23:11:34 +0000
@@ -61,10 +61,10 @@
 
 ## Manual Deployment
 
-The easiest way to deploy the core Apache Hadoop platform is to use one of
+The easiest way to deploy an Apache Hadoop platform is to use one of
 the [apache bundles](https://jujucharms.com/u/bigdata-charmers/#bundles).
 However, to manually deploy the base Apache Hadoop platform without using one
 of the bundles, you can use the following:
 
     juju deploy apache-hadoop-hdfs-master hdfs-master
     juju deploy apache-hadoop-hdfs-secondary secondary-namenode

=== modified file 'README.md'
--- README.md	2015-06-29 14:19:09 +0000
+++ README.md	2015-08-20 23:11:34 +0000
@@ -27,6 +27,35 @@
     hadoop jar my-job.jar
 
 
+## Status and Smoke Test
+
+The services provide extended status reporting to indicate when they are ready:
+
+    juju status --format=tabular
+
+This is particularly useful when combined with `watch` to track the ongoing
+progress of the deployment:
+
+    watch -n 0.5 juju status --format=tabular
+
+The message for each unit will provide information about that unit's state.
+Once they all indicate that they are ready, you can perform a "smoke test"
+to verify that HDFS is working as expected using the built-in `smoke-test`
+action:
+
+    juju action do hdfs-master/0 smoke-test
+
+After a few seconds, you can check the results of the smoke test:
+
+    juju action status
+
+You will see `status: completed` if the smoke test was successful, or
+`status: failed` if it was not.  You can get more information on why it failed
+via:
+
+    juju action fetch <action-id>
+
+
 ## Deploying in Network-Restricted Environments
 
 The Apache Hadoop charms can be deployed in environments with limited network
@@ -49,17 +78,19 @@
 of these resources:
 
     sudo pip install jujuresources
-    juju resources fetch --all apache-hadoop-compute-slave/resources.yaml -d /tmp/resources
-    juju resources serve -d /tmp/resources
+    juju-resources fetch --all /path/to/resources.yaml -d /tmp/resources
+    juju-resources serve -d /tmp/resources
 
 This will fetch all of the resources needed by this charm and serve them via a
-simple HTTP server. You can then set the `resources_mirror` config option to
-have the charm use this server for retrieving resources.
+simple HTTP server. The output from `juju-resources serve` will give you a
+URL that you can set as the `resources_mirror` config option for this charm.
+Setting this option will cause all resources required by this charm to be
+downloaded from the configured URL.
 
 You can fetch the resources for all of the Apache Hadoop charms
 (`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
 `apache-hadoop-hdfs-secondary`, `apache-hadoop-plugin`, etc) into a single
-directory and serve them all with a single `juju resources serve` instance.
+directory and serve them all with a single `juju-resources serve` instance.
 
 
 ## Contact Information
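The mirror workflow above boils down to: fetch the resources into a directory, then serve that directory over plain HTTP and point `resources_mirror` at it. As a mental model, `juju-resources serve` is essentially a static HTTP file server. A minimal Python sketch of that idea (the directory, filename, and payload below are illustrative, not the tool's defaults):

```python
# Sketch: a static HTTP server over a fetched-resources directory,
# roughly what "juju-resources serve -d /tmp/resources" provides.
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Stand-in for the directory populated by "juju-resources fetch".
mirror_dir = tempfile.mkdtemp()
with open(os.path.join(mirror_dir, 'example-resource.txt'), 'w') as f:
    f.write('resource payload')

# Serve that directory on an ephemeral localhost port.
handler = functools.partial(http.server.SimpleHTTPRequestHandler,
                            directory=mirror_dir)
server = http.server.ThreadingHTTPServer(('127.0.0.1', 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A charm configured with this URL as resources_mirror would fetch from it.
url = 'http://127.0.0.1:%d/example-resource.txt' % server.server_address[1]
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
print(body)
```

The served base URL (here `http://127.0.0.1:<port>/`) is the kind of value you would set as the `resources_mirror` config option.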

=== modified file 'actions.yaml'
--- actions.yaml	2015-06-23 17:15:23 +0000
+++ actions.yaml	2015-08-20 23:11:34 +0000
@@ -4,3 +4,5 @@
     description: All of the HDFS processes can be stopped with this Juju action.
 restart-hdfs:
     description: All of the HDFS processes can be restarted with this Juju action.
+smoke-test:
+    description: Verify that HDFS is working by creating and removing a small file.

=== added file 'actions/smoke-test'
--- actions/smoke-test	1970-01-01 00:00:00 +0000
+++ actions/smoke-test	2015-08-20 23:11:34 +0000
@@ -0,0 +1,54 @@
+#!/usr/bin/env python
+
+import sys
+
+try:
+    from charmhelpers.core import hookenv
+    from charmhelpers.core import unitdata
+    from jujubigdata.utils import run_as
+    charm_ready = unitdata.kv().get('charm.active', False)
+except ImportError:
+    charm_ready = False
+
+if not charm_ready:
+    # might not have hookenv.action_fail available yet
+    from subprocess import call
+    call(['action-fail', 'HDFS service not yet ready'])
+    sys.exit()
+
+
+# verify the hdfs-test directory does not already exist
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+    output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+    if 'hdfs-test' in output:
+        hookenv.action_fail('Unable to remove existing hdfs-test directory')
+        sys.exit()
+
+# create the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-mkdir', '-p', '/tmp/hdfs-test')
+run_as('ubuntu', 'hdfs', 'dfs', '-chmod', '-R', '777', '/tmp/hdfs-test')
+
+# verify the newly created hdfs-test subdirectory exists
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+for line in output.split('\n'):
+    if '/tmp/hdfs-test' in line:
+        if 'ubuntu' not in line or 'drwxrwxrwx' not in line:
+            hookenv.action_fail('Permissions incorrect for hdfs-test directory')
+            sys.exit()
+        break
+else:
+    hookenv.action_fail('Unable to create hdfs-test directory')
+    sys.exit()
+
+# remove the directory
+run_as('ubuntu', 'hdfs', 'dfs', '-rm', '-R', '/tmp/hdfs-test')
+
+# verify the hdfs-test subdirectory has been removed
+output = run_as('ubuntu', 'hdfs', 'dfs', '-ls', '/tmp', capture_output=True)
+if '/tmp/hdfs-test' in output:
+    hookenv.action_fail('Unable to remove hdfs-test directory')
+    sys.exit()
+
+hookenv.action_set({'outcome': 'success'})
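One subtlety in the script above is the `for`/`else` that scans the `-ls` output: the `else` branch runs only when the loop finishes without hitting `break`, i.e. when `/tmp/hdfs-test` never appeared in the listing. A standalone illustration of that check, using a fabricated listing (the paths, owners, and permissions are sample data, not real HDFS output):

```python
# Fabricated sample of "hdfs dfs -ls /tmp" output, for illustration only.
listing = """Found 2 items
drwxrwxrwx   - ubuntu supergroup          0 2015-08-20 23:11 /tmp/hdfs-test
drwxr-xr-x   - hdfs   supergroup          0 2015-08-20 23:11 /tmp/other"""

def check_test_dir(output):
    """Mirror the script's logic: find /tmp/hdfs-test and verify its mode."""
    for line in output.split('\n'):
        if '/tmp/hdfs-test' in line:
            if 'ubuntu' not in line or 'drwxrwxrwx' not in line:
                return 'bad-permissions'
            break
    else:
        # Runs only when the loop completed without 'break',
        # i.e. the directory was not listed at all.
        return 'missing'
    return 'ok'

print(check_test_dir(listing))          # the directory exists with mode 777
print(check_test_dir('Found 0 items'))  # no match -> the else branch fires
```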

=== modified file 'dist.yaml'
--- dist.yaml	2015-04-16 15:46:35 +0000
+++ dist.yaml	2015-08-20 23:11:34 +0000
@@ -73,44 +73,12 @@
     # Only expose ports serving a UI or external API (i.e., namenode and
     # resourcemanager).  Communication among units within the cluster does
     # not need ports to be explicitly opened.
-    # If adding a port here, you will need to update
-    # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
-    # to ensure that it is supported.
     namenode:
         port: 8020
-        exposed_on: 'hdfs-master'
     nn_webapp_http:
         port: 50070
         exposed_on: 'hdfs-master'
-    dn_webapp_http:
-        port: 50075
-        exposed_on: 'compute-slave'
-    resourcemanager:
-        port: 8032
-        exposed_on: 'yarn-master'
-    rm_webapp_http:
-        port: 8088
-        exposed_on: 'yarn-master'
-    rm_log:
-        port: 19888
-    nm_webapp_http:
-        port: 8042
-        exposed_on: 'compute-slave'
-    jobhistory:
-        port: 10020
-    jh_webapp_http:
-        port: 19888
-        exposed_on: 'yarn-master'
     # TODO: support SSL
     #nn_webapp_https:
     #    port: 50470
     #    exposed_on: 'hdfs-master'
-    #dn_webapp_https:
-    #    port: 50475
-    #    exposed_on: 'compute-slave'
-    #rm_webapp_https:
-    #    port: 8090
-    #    exposed_on: 'yarn-master'
-    #nm_webapp_https:
-    #    port: 8044
-    #    exposed_on: 'compute-slave'

=== modified file 'hooks/callbacks.py'
--- hooks/callbacks.py	2015-06-25 15:38:21 +0000
+++ hooks/callbacks.py	2015-08-20 23:11:34 +0000
@@ -37,3 +37,7 @@
         hookenv.status_set('waiting', 'Waiting for compute slaves to provide DataNodes')
     else:
         hookenv.status_set('blocked', 'Waiting for relation to compute slaves')
+
+
+def clear_active_flag():
+    unitdata.kv().set('charm.active', False)

=== modified file 'hooks/common.py'
--- hooks/common.py	2015-06-25 15:39:10 +0000
+++ hooks/common.py	2015-08-20 23:11:34 +0000
@@ -100,8 +100,10 @@
                 callbacks.update_active_status,
             ],
             'cleanup': [
+                callbacks.clear_active_flag,
                 charmframework.helpers.close_ports(dist_config.exposed_ports('hdfs-master')),
                 hdfs.stop_namenode,
+                callbacks.update_active_status,
             ],
         },
     ])
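The ordering in the `cleanup` list above is deliberate: the active flag is cleared first (so the smoke-test action refuses to run mid-teardown), then ports close and the NameNode stops, and `update_active_status` runs last so the unit's reported status reflects the new state. A toy sketch of that run-callbacks-in-order pattern (the function bodies here are invented stand-ins, not the Charm Framework API):

```python
# Invented stand-in for the framework's 'cleanup' callback list; it only
# demonstrates that callbacks execute in declaration order.
events = []

def clear_active_flag():
    events.append('clear-flag')      # stand-in for unitdata.kv().set(...)

def close_ports():
    events.append('close-ports')

def stop_namenode():
    events.append('stop-namenode')

def update_active_status():
    events.append('update-status')   # stand-in for hookenv.status_set(...)

cleanup = [clear_active_flag, close_ports, stop_namenode, update_active_status]
for callback in cleanup:
    callback()

print(events)
```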

=== added file 'hooks/datanode-relation-departed'
--- hooks/datanode-relation-departed	1970-01-01 00:00:00 +0000
+++ hooks/datanode-relation-departed	2015-08-20 23:11:34 +0000
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+All hooks in this charm are managed by the Charm Framework.
+The framework helps manage dependencies and preconditions to ensure that
+steps are only executed when they can be successful.  As such, no additional
+code should be added to this hook; instead, please integrate new functionality
+into the 'callbacks' list in hooks/common.py.  New callbacks can be placed
+in hooks/callbacks.py, if necessary.
+
+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
+for more information.
+"""
+import common
+common.manage()

=== added file 'hooks/namenode-relation-departed'
--- hooks/namenode-relation-departed	1970-01-01 00:00:00 +0000
+++ hooks/namenode-relation-departed	2015-08-20 23:11:34 +0000
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+All hooks in this charm are managed by the Charm Framework.
+The framework helps manage dependencies and preconditions to ensure that
+steps are only executed when they can be successful.  As such, no additional
+code should be added to this hook; instead, please integrate new functionality
+into the 'callbacks' list in hooks/common.py.  New callbacks can be placed
+in hooks/callbacks.py, if necessary.
+
+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
+for more information.
+"""
+import common
+common.manage()

=== added file 'hooks/secondary-relation-departed'
--- hooks/secondary-relation-departed	1970-01-01 00:00:00 +0000
+++ hooks/secondary-relation-departed	2015-08-20 23:11:34 +0000
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+All hooks in this charm are managed by the Charm Framework.
+The framework helps manage dependencies and preconditions to ensure that
+steps are only executed when they can be successful.  As such, no additional
+code should be added to this hook; instead, please integrate new functionality
+into the 'callbacks' list in hooks/common.py.  New callbacks can be placed
+in hooks/callbacks.py, if necessary.
+
+See http://big-data-charm-helpers.readthedocs.org/en/latest/examples/framework.html
+for more information.
+"""
+import common
+common.manage()

=== modified file 'resources.yaml'
--- resources.yaml	2015-07-24 15:26:06 +0000
+++ resources.yaml	2015-08-20 23:11:34 +0000
@@ -4,7 +4,7 @@
   pathlib:
     pypi: path.py>=7.0
   jujubigdata:
-    pypi: jujubigdata>=2.0.2,<3.0.0
+    pypi: jujubigdata>=4.0.0,<5.0.0
   java-installer:
     # This points to a script which manages installing Java.
     # If replaced with an alternate implementation, it must output *only* two

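The dependency bump above pins `jujubigdata` to the 4.x series with a compound specifier: `>=4.0.0,<5.0.0` accepts any 4.x release and nothing older or newer. A rough pure-Python rendering of that logic (pip's real specifier parsing handles far more than this naive `X.Y.Z` comparison):

```python
def version_tuple(v):
    # Naive parse: handles plain 'X.Y.Z' version strings only.
    return tuple(int(part) for part in v.split('.'))

def satisfies(v):
    # Equivalent in spirit to: jujubigdata>=4.0.0,<5.0.0
    return version_tuple('4.0.0') <= version_tuple(v) < version_tuple('5.0.0')

accepted = [v for v in ['2.0.2', '4.0.0', '4.7.1', '5.0.0'] if satisfies(v)]
print(accepted)  # only the 4.x releases pass
```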
=== added file 'resources/python/jujuresources-0.2.9.tar.gz'
Binary files resources/python/jujuresources-0.2.9.tar.gz	1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.9.tar.gz	2015-08-20 23:11:34 +0000 differ
=== renamed file 'resources/python/jujuresources-0.2.9.tar.gz' => 'resources/python/jujuresources-0.2.9.tar.gz.moved'
