
bigdata-dev team mailing list archive

[Merge] lp:~bigdata-dev/charms/trusty/apache-zeppelin/trunk into lp:charms/trusty/apache-zeppelin


Kevin W Monroe has proposed merging lp:~bigdata-dev/charms/trusty/apache-zeppelin/trunk into lp:charms/trusty/apache-zeppelin.

Requested reviews:
  Kevin W Monroe (kwmonroe)

For more details, see:
https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-zeppelin/trunk/+merge/268675
-- 
Your team Juju Big Data Development is subscribed to branch lp:~bigdata-dev/charms/trusty/apache-zeppelin/trunk.
=== modified file 'README.md'
--- README.md	2015-07-24 16:38:17 +0000
+++ README.md	2015-08-20 23:15:14 +0000
@@ -14,15 +14,15 @@
 
 ## Usage
 
-This charm leverages our pluggable Hadoop model with the `hadoop-plugin`
-interface. This means that you will need to deploy a base Apache Hadoop cluster
-to run Spark. The suggested deployment method is to use the
-[apache-hadoop-spark-zeppelin](https://jujucharms.com/u/bigdata-dev/apache-hadoop-spark-zeppelin/)
-bundle. This will deploy the Apache Hadoop platform with a single Apache Spark
-unit that communicates with the cluster by relating to the
+This is a subordinate charm that requires the `apache-spark` interface. This
+means that you will need to deploy a base Apache Spark cluster to use
+Zeppelin. An easy way to deploy the recommended environment is to use the
+[apache-hadoop-spark-zeppelin](https://jujucharms.com/apache-hadoop-spark-zeppelin)
+bundle. This will deploy the Apache Hadoop platform with an Apache Spark +
+Zeppelin unit that communicates with the cluster by relating to the
 `apache-hadoop-plugin` subordinate charm:
 
-    juju-quickstart u/bigdata-dev/apache-hadoop-spark-zeppelin
+    juju-quickstart apache-hadoop-spark-zeppelin
 
 Alternatively, you may manually deploy the recommended environment as follows:
 

=== modified file 'hooks/callbacks.py'
--- hooks/callbacks.py	2015-07-24 16:38:17 +0000
+++ hooks/callbacks.py	2015-08-20 23:15:14 +0000
@@ -64,17 +64,18 @@
 
     def setup_zeppelin_config(self):
         '''
-        copy Zeppelin's default configuration files to zeppelin_conf property defined
-        in dist.yaml
+        Copy the default configuration files to the zeppelin_conf property
+        defined in dist.yaml.
         '''
-        conf_dir = self.dist_config.path('zeppelin') / 'conf'
-        self.dist_config.path('zeppelin_conf').rmtree_p()
-        conf_dir.copytree(self.dist_config.path('zeppelin_conf'))
+        default_conf = self.dist_config.path('zeppelin') / 'conf'
+        zeppelin_conf = self.dist_config.path('zeppelin_conf')
+        zeppelin_conf.rmtree_p()
+        default_conf.copytree(zeppelin_conf)
         # Now remove the conf included in the tarball and symlink our real conf
         # dir. we've seen issues where zepp doesn't honor ZEPPELIN_CONF_DIR
         # and instead looks for config in ZEPPELIN_HOME/conf.
-        conf_dir.rmtree_p()
-        self.dist_config.path('zeppelin_conf').symlink(conf_dir)
+        default_conf.rmtree_p()
+        zeppelin_conf.symlink(default_conf)
 
         zeppelin_env = self.dist_config.path('zeppelin_conf') / 'zeppelin-env.sh'
         if not zeppelin_env.exists():

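Note for reviewers: the conf-dir swap above reduces to the standalone sketch below. It assumes path.py's Path API (rmtree_p, copytree, symlink), which resources.yaml already pulls in; the directory names are illustrative only, since the real paths come from dist.yaml via self.dist_config:

    # Minimal sketch of the directory swap in setup_zeppelin_config, assuming
    # path.py's Path API; the paths below are illustrative, not the charm's
    # actual dist.yaml values.
    from path import Path

    default_conf = Path('/usr/lib/zeppelin/conf')   # conf shipped in the tarball
    zeppelin_conf = Path('/etc/zeppelin/conf')      # managed conf dir

    zeppelin_conf.rmtree_p()               # drop any stale managed conf
    default_conf.copytree(zeppelin_conf)   # seed it with the tarball defaults
    default_conf.rmtree_p()                # remove the tarball conf dir...
    zeppelin_conf.symlink(default_conf)    # ...and recreate it as a symlink, so
                                           # Zeppelin finds our conf even if it
                                           # ignores ZEPPELIN_CONF_DIR and reads
                                           # ZEPPELIN_HOME/conf directly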
=== modified file 'resources.yaml'
--- resources.yaml	2015-07-24 16:38:17 +0000
+++ resources.yaml	2015-08-20 23:15:14 +0000
@@ -4,7 +4,7 @@
   pathlib:
     pypi: path.py>=7.0
   jujubigdata:
-    pypi: jujubigdata>=2.1.0,<3.0.0
+    pypi: jujubigdata>=4.0.0,<5.0.0
 optional_resources:
   zeppelin-ppc64le:
     url: https://git.launchpad.net/bigdata-data/plain/apache/ppc64le/zeppelin-0.5.0-SNAPSHOT.tar.gz?id=c34a21c939f5fce9ab89b95d65fe2df50e7bbab0

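The jujubigdata bump moves the charm from the 2.x to the 4.x API line. As a quick sanity check of what the new pip-style specifier accepts, here is a sketch using setuptools' pkg_resources (illustrative only, not part of the charm):

    # Illustrative only: checking versions against the new specifier.
    from pkg_resources import Requirement

    req = Requirement.parse('jujubigdata>=4.0.0,<5.0.0')
    print('4.2.0' in req)  # True:  anything on the 4.x line is accepted
    print('2.1.0' in req)  # False: the previous minimum is now excluded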
