bigdata-dev team mailing list archive
Message #00292
[Merge] lp:~bigdata-dev/charms/trusty/apache-spark/trunk into lp:charms/trusty/apache-spark
Kevin W Monroe has proposed merging lp:~bigdata-dev/charms/trusty/apache-spark/trunk into lp:charms/trusty/apache-spark.
Requested reviews:
Juju Big Data Development (bigdata-dev)
For more details, see:
https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-spark/trunk/+merge/271382
--
Your team Juju Big Data Development is requested to review the proposed merge of lp:~bigdata-dev/charms/trusty/apache-spark/trunk into lp:charms/trusty/apache-spark.
=== modified file 'README.md'
--- README.md 2015-08-20 22:04:25 +0000
+++ README.md 2015-09-16 21:18:54 +0000
@@ -1,6 +1,6 @@
 ## Overview
 
-### Spark 1.3.x cluster for YARN & HDFS
+### Spark 1.4.x cluster for YARN & HDFS
 
 Apache Spark™ is a fast and general purpose engine for large-scale data
 processing. Key features:
=== modified file 'metadata.yaml'
--- metadata.yaml 2015-06-04 22:00:39 +0000
+++ metadata.yaml 2015-09-16 21:18:54 +0000
@@ -1,5 +1,5 @@
 name: apache-spark
-summary: Data warehouse infrastructure built on top of Hadoop
+summary: Apache Spark is a fast engine for large-scale data processing
 maintainer: Amir Sanjar <amir.sanjar@xxxxxxxxxxxxx>
 description: |
   Apache Spark is a fast and general engine for large-scale data processing.
=== modified file 'resources.yaml'
--- resources.yaml 2015-08-24 23:28:33 +0000
+++ resources.yaml 2015-09-16 21:18:54 +0000
@@ -11,6 +11,6 @@
     hash: 204fc101f3c336692feeb1401225407c575a90d42af5d7ab83937b59c42a6af9
     hash_type: sha256
   spark-x86_64:
-    url: http://www.apache.org/dist/spark/spark-1.3.0/spark-1.3.0-bin-hadoop2.4.tgz
-    hash: 094b5116231b6fec8d3991492c06683126ce66a17f910d82780a1a9106b41547
+    url: http://www.apache.org/dist/spark/spark-1.4.1/spark-1.4.1-bin-hadoop2.4.tgz
+    hash: bc8c79188db9a2b6104da21b3b380838edf1e40acbc79c84ef2ed2ad82ecdbc3
     hash_type: sha256
=== added file 'resources/python/jujuresources-0.2.11.tar.gz'
Binary files resources/python/jujuresources-0.2.11.tar.gz 1970-01-01 00:00:00 +0000 and resources/python/jujuresources-0.2.11.tar.gz 2015-09-16 21:18:54 +0000 differ
=== removed file 'resources/python/jujuresources-0.2.9.tar.gz'
Binary files resources/python/jujuresources-0.2.9.tar.gz 2015-06-29 20:49:03 +0000 and resources/python/jujuresources-0.2.9.tar.gz 1970-01-01 00:00:00 +0000 differ
=== modified file 'tests/00-setup'
--- tests/00-setup 2015-04-22 15:27:27 +0000
+++ tests/00-setup 2015-09-16 21:18:54 +0000
@@ -1,5 +1,8 @@
 #!/bin/bash
 
-sudo add-apt-repository ppa:juju/stable -y
-sudo apt-get update
-sudo apt-get install python3 amulet -y
+if ! dpkg -s amulet &> /dev/null; then
+    echo Installing Amulet...
+    sudo add-apt-repository -y ppa:juju/stable
+    sudo apt-get update
+    sudo apt-get -y install amulet
+fi
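The rewritten setup script is idempotent: it probes dpkg for the package before touching apt, so repeated test runs skip the PPA and install steps once amulet is present. The same guard pattern can be sketched in Python (a hypothetical `ensure_installed` helper; `shutil.which` stands in for the `dpkg -s` status query):

```python
import shutil
import subprocess

def ensure_installed(command, install_cmd):
    """Run install_cmd only when `command` is not already on PATH.

    Mirrors the `if ! dpkg -s amulet` guard added to tests/00-setup:
    after the first run, the install branch becomes a cheap no-op.
    """
    if shutil.which(command) is None:
        subprocess.check_call(install_cmd)  # e.g. an apt-get invocation
        return 'installed'
    return 'already present'
```

The key design point is checking installed *state* rather than unconditionally re-running the installer, which keeps `00-setup` fast and safe to invoke before every test pass.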
=== modified file 'tests/100-deploy-spark-hdfs-yarn'
--- tests/100-deploy-spark-hdfs-yarn 2015-08-24 23:28:33 +0000
+++ tests/100-deploy-spark-hdfs-yarn 2015-09-16 21:18:54 +0000
@@ -1,4 +1,4 @@
-#!/usr/bin/python3
+#!/usr/bin/env python3
 
 import unittest
 import amulet
@@ -14,10 +14,10 @@
     def setUpClass(cls):
         cls.d = amulet.Deployment(series='trusty')
         # Deploy a hadoop cluster
-        cls.d.add('yarn-master', charm='cs:~bigdata-dev/trusty/apache-hadoop-yarn-master')
-        cls.d.add('hdfs-master', charm='cs:~bigdata-dev/trusty/apache-hadoop-hdfs-master')
-        cls.d.add('compute-slave', charm='cs:~bigdata-dev/trusty/apache-hadoop-compute-slave', units=3)
-        cls.d.add('plugin', charm='cs:~bigdata-dev/trusty/apache-hadoop-plugin')
+        cls.d.add('yarn-master', charm='cs:trusty/apache-hadoop-yarn-master')
+        cls.d.add('hdfs-master', charm='cs:trusty/apache-hadoop-hdfs-master')
+        cls.d.add('compute-slave', charm='cs:trusty/apache-hadoop-compute-slave', units=3)
+        cls.d.add('plugin', charm='cs:trusty/apache-hadoop-plugin')
         cls.d.relate('yarn-master:namenode', 'hdfs-master:namenode')
         cls.d.relate('compute-slave:nodemanager', 'yarn-master:nodemanager')
         cls.d.relate('compute-slave:datanode', 'hdfs-master:datanode')
@@ -25,11 +25,11 @@
         cls.d.relate('plugin:namenode', 'hdfs-master:namenode')
 
         # Add Spark Service
-        cls.d.add('spark', charm='cs:~bigdata-dev/trusty/apache-spark')
+        cls.d.add('spark', charm='cs:trusty/apache-spark')
         cls.d.relate('spark:hadoop-plugin', 'plugin:hadoop-plugin')
 
         cls.d.setup(timeout=3600)
-        cls.d.sentry.wait()
+        cls.d.sentry.wait(timeout=3600)
         cls.unit = cls.d.sentry.unit['spark/0']
 
     ###########################################################################
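These hunks switch the test from the personal-namespace charms (`cs:~bigdata-dev/trusty/...`) to the promulgated charm-store names (`cs:trusty/...`) and give `sentry.wait()` an explicit timeout. For reference, the topology the test builds, restricted to the services and relations visible in the diff, can be captured as plain data; `check_relations` below is only an illustrative sanity check, not part of the charm or test suite:

```python
# Services and relations wired up by tests/100-deploy-spark-hdfs-yarn
# (as visible in the hunks above), expressed as plain data.
services = {
    'yarn-master': 'cs:trusty/apache-hadoop-yarn-master',
    'hdfs-master': 'cs:trusty/apache-hadoop-hdfs-master',
    'compute-slave': 'cs:trusty/apache-hadoop-compute-slave',
    'plugin': 'cs:trusty/apache-hadoop-plugin',
    'spark': 'cs:trusty/apache-spark',
}
relations = [
    ('yarn-master:namenode', 'hdfs-master:namenode'),
    ('compute-slave:nodemanager', 'yarn-master:nodemanager'),
    ('compute-slave:datanode', 'hdfs-master:datanode'),
    ('plugin:namenode', 'hdfs-master:namenode'),
    ('spark:hadoop-plugin', 'plugin:hadoop-plugin'),
]

def check_relations(services, relations):
    """Every relation endpoint must name a deployed service."""
    for a, b in relations:
        for endpoint in (a, b):
            service = endpoint.split(':', 1)[0]
            assert service in services, 'unknown service in ' + endpoint
    return True
```

Keeping the topology as data like this makes the namespace migration easy to audit: every charm URL now lives in one mapping, and none should still point at `cs:~bigdata-dev`.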