
bigdata-dev team mailing list archive

[Merge] lp:~bigdata-dev/charms/trusty/apache-spark/trunk into lp:charms/trusty/apache-spark


Kevin W Monroe has proposed merging lp:~bigdata-dev/charms/trusty/apache-spark/trunk into lp:charms/trusty/apache-spark.

Requested reviews:
  Kevin W Monroe (kwmonroe)

For more details, see:
https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-spark/trunk/+merge/268673
-- 
Your team Juju Big Data Development is subscribed to branch lp:~bigdata-dev/charms/trusty/apache-spark/trunk.
=== modified file 'README.md'
--- README.md	2015-06-25 16:47:56 +0000
+++ README.md	2015-08-20 23:14:26 +0000
@@ -28,12 +28,12 @@
 This charm leverages our pluggable Hadoop model with the `hadoop-plugin`
 interface. This means that you will need to deploy a base Apache Hadoop cluster
 to run Spark. The suggested deployment method is to use the
-[apache-hadoop-spark](https://jujucharms.com/u/bigdata-dev/apache-hadoop-spark/)
+[apache-hadoop-spark](https://jujucharms.com/apache-hadoop-spark/)
 bundle. This will deploy the Apache Hadoop platform with a single Apache Spark
 unit that communicates with the cluster by relating to the
 `apache-hadoop-plugin` subordinate charm:
 
-    juju-quickstart u/bigdata-dev/apache-hadoop-spark
+    juju-quickstart apache-hadoop-spark
 
 Alternatively, you may manually deploy the recommended environment as follows:
 
@@ -75,7 +75,7 @@
 
  Deploy Apache Zeppelin and relate it to the Spark unit:
 
-       juju deploy cs:~bigdata-dev/trusty/apache-zeppelin zeppelin
+       juju deploy apache-zeppelin zeppelin
        juju add-relation spark zeppelin
 
  Once the relation has been made, access the web interface at
@@ -87,7 +87,7 @@
  can combine code execution, rich text, mathematics, plots and rich media.
  Deploy IPython Notebook for Spark and relate it to the Spark unit:
 
-       juju deploy cs:~bigdata-dev/trusty/apache-spark-notebook notebook
+       juju deploy apache-spark-notebook notebook
        juju add-relation spark notebook
 
  Once the relation has been made, access the web interface at

=== modified file 'hooks/callbacks.py'
--- hooks/callbacks.py	2015-07-24 16:28:49 +0000
+++ hooks/callbacks.py	2015-08-20 23:14:26 +0000
@@ -80,13 +80,17 @@
 
     def setup_spark_config(self):
         '''
-        copy Spark's default configuration files to spark_conf property defined
-        in dist.yaml
+        Copy the default configuration files to the spark_conf path
+        defined in dist.yaml, then symlink the tarball conf dir to it.
         '''
-        conf_dir = self.dist_config.path('spark') / 'conf'
-        self.dist_config.path('spark_conf').rmtree_p()
-        conf_dir.copytree(self.dist_config.path('spark_conf'))
-        conf_dir.rmtree_p()
+        default_conf = self.dist_config.path('spark') / 'conf'
+        spark_conf = self.dist_config.path('spark_conf')
+        spark_conf.rmtree_p()
+        default_conf.copytree(spark_conf)
+        # Now remove the conf included in the tarball and symlink our real conf
+        default_conf.rmtree_p()
+        spark_conf.symlink(default_conf)
+
         spark_env = self.dist_config.path('spark_conf') / 'spark-env.sh'
         if not spark_env.exists():
             (self.dist_config.path('spark_conf') / 'spark-env.sh.template').copy(spark_env)
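
The net effect of this hunk: Spark's config now lives at the charm-managed
spark_conf path, and the tarball's conf dir becomes a symlink to it, so
anything resolving the stock location still finds the managed files. A minimal
sketch of the same copy-then-symlink pattern using path.py (the pathlib
resource listed in resources.yaml), with placeholder /tmp paths standing in
for the charm's real dist.yaml values:

    from path import Path

    # Placeholder paths; the charm resolves the real ones from dist.yaml.
    default_conf = Path('/tmp/demo/spark/conf')    # conf shipped in the tarball
    spark_conf = Path('/tmp/demo/etc/spark/conf')  # charm-managed conf location

    default_conf.makedirs_p()          # stand-in for the unpacked tarball
    spark_conf.rmtree_p()              # clear any stale managed conf
    default_conf.copytree(spark_conf)  # seed it from the tarball defaults
    default_conf.rmtree_p()            # drop the tarball copy...
    spark_conf.symlink(default_conf)   # ...and leave a symlink in its place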

=== modified file 'resources.yaml'
--- resources.yaml	2015-07-24 16:28:49 +0000
+++ resources.yaml	2015-08-20 23:14:26 +0000
@@ -4,7 +4,7 @@
   pathlib:
     pypi: path.py>=7.0
   jujubigdata:
-    pypi: jujubigdata>=2.1.0,<3.0.0
+    pypi: jujubigdata>=4.0.0,<5.0.0
 optional_resources:
   spark-ppc64le:
     url: https://git.launchpad.net/bigdata-data/plain/apache/ppc64le/spark-1.3.1-bin-2.4.0.tgz?id=45f439740a08b93ae72bc48a7103ebf58dbfa60b
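
The jujubigdata pin jumps two major version ranges (2.x to 4.x), so the charm
will pull in the 4.x API at deploy time. A quick hedged check that an
installed copy satisfies the new constraint, via setuptools' pkg_resources
(illustration only, not part of the charm):

    import pkg_resources

    # Raises VersionConflict or DistributionNotFound if the pin is not met.
    pkg_resources.require('jujubigdata>=4.0.0,<5.0.0')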

