
bigdata-dev team mailing list archive

[Merge] lp:~bigdata-dev/charms/trusty/apache-hue/rewrite into lp:~bigdata-dev/charms/trusty/apache-hue/trunk

 

Kevin W Monroe has proposed merging lp:~bigdata-dev/charms/trusty/apache-hue/rewrite into lp:~bigdata-dev/charms/trusty/apache-hue/trunk.

Requested reviews:
  Juju Big Data Development (bigdata-dev)

For more details, see:
https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-hue/rewrite/+merge/255130

Rewritten with charmhelpers, resources.yaml, and dist.yaml. Also plays nicely with the Hive relation.
-- 
Your team Juju Big Data Development is requested to review the proposed merge of lp:~bigdata-dev/charms/trusty/apache-hue/rewrite into lp:~bigdata-dev/charms/trusty/apache-hue/trunk.
=== added file 'LICENSE'
--- LICENSE	1970-01-01 00:00:00 +0000
+++ LICENSE	2015-04-02 19:14:51 +0000
@@ -0,0 +1,177 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS

=== removed file 'README.example'
--- README.example	2014-12-12 04:41:42 +0000
+++ README.example	1970-01-01 00:00:00 +0000
@@ -1,41 +0,0 @@
-# Overview
-
-Describe the intended usage of this charm and anything unique about how this charm relates to others here.
-
-This README will be displayed in the Charm Store, it should be either Markdown or RST. Ideal READMEs include instructions on how to use the charm, expected usage, and charm features that your audience might be interested in. For an example of a well written README check out Hadoop: http://jujucharms.com/charms/precise/hadoop
-
-Use this as a Markdown reference if you need help with the formatting of this README: http://askubuntu.com/editing-help
-
-This charm provides [service](http://example.com). Add a description here of what the service itself actually does.
-
-Also remember to check the [icon guidelines](https://juju.ubuntu.com/docs/authors-charm-icon.html) so that your charm looks good in the Juju GUI.
-
-# Usage
-
-Step by step instructions on using the charm:
-
-    juju deploy servicename
-
-and so on. If you're providing a web service or something that the end user needs to go to, tell them here, especially if you're deploying a service that might listen to a non-default port.
-
-You can then browse to http://ip-address to configure the service.
-
-
-If the charm has any recommendations for running at scale, outline them in examples here. For example if you have a memcached relation that improves performance, mention it here.
-
-
-This not only helps users but gives people a place to start if they want to help you add features to your charm.
-
-# Configuration
-
-The configuration options will be listed on the charm store, however If you're making assumptions or opinionated decisions in the charm (like setting a default administrator password), you should detail that here so the user knows how to change it immediately, etc.
-
-# Contact Information
-
-Though this will be listed in the charm store itself don't assume a user will know that, so include that information here:
-
-
-- Upstream website
-- Upstream bug tracker
-- Upstream mailing list or contact information
-- Feel free to add things if it's useful for users

=== added file 'README.md'
--- README.md	1970-01-01 00:00:00 +0000
+++ README.md	2015-04-02 19:14:51 +0000
@@ -0,0 +1,32 @@
+## Overview
+
+Hue aggregates the most common Apache Hadoop components into a single interface
+and targets the user experience. Its main goal is to have the users "just use"
+Hadoop without worrying about underlying complexity or using a command line.
+Learn more at [gethue.com](http://gethue.com/).
+
+## Usage
+
+You may manually deploy the recommended environment as follows:
+
+    juju deploy apache-hadoop-hdfs-master hdfs-master
+    juju deploy apache-hadoop-yarn-master yarn-master
+    juju deploy apache-hadoop-compute-slave compute-slave
+    juju deploy apache-hue hue
+
+    juju add-relation yarn-master hdfs-master
+    juju add-relation compute-slave yarn-master
+    juju add-relation compute-slave hdfs-master
+    juju add-relation hue hdfs-master
+    juju add-relation hue yarn-master
+
+## Contact Information
+
+[bigdata-dev@xxxxxxxxxxxxx](mailto:bigdata-dev@xxxxxxxxxxxxx)
+
+## Help
+
+- [Hue homepage](http://gethue.com)
+- [Juju big data team](https://launchpad.net/~bigdata-dev)
+- [Juju community](https://jujucharms.com/community)
+- [Juju mailing list](https://lists.ubuntu.com/mailman/listinfo/juju)

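Once the relations above are in place, the Hue web UI listens on port 8888 (see the hue_http entry in dist.yaml later in this diff), so after a `juju expose hue` it should be reachable at http://<hue-unit-address>:8888; the exact address depends on your environment.
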
=== modified file 'config.yaml'
--- config.yaml	2015-03-05 05:07:52 +0000
+++ config.yaml	2015-04-02 19:14:51 +0000
@@ -1,5 +1,7 @@
 options:
-  hue-version:
-    type: string
-    default: "3.7.1"
-    description: "Apache HUE version - for IBM POWER set to 3.6.0"
+    resources_mirror:
+        type: string
+        default: ''
+        description: |
+            URL from which to fetch resources (e.g., Hadoop binaries) instead
+            of Launchpad.

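The old hue-version option is dropped in favour of a single resources_mirror setting: deployments without Launchpad access can point the charm at an internal mirror, e.g. something like `juju set hue resources_mirror=http://<mirror-host>/resources` (assuming the service was deployed as `hue`).
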
=== added file 'copyright'
--- copyright	1970-01-01 00:00:00 +0000
+++ copyright	2015-04-02 19:14:51 +0000
@@ -0,0 +1,16 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2015, Canonical Ltd., All Rights Reserved.
+License: Apache License 2.0
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+ .
+     http://www.apache.org/licenses/LICENSE-2.0
+ .
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.

=== added file 'dist.yaml'
--- dist.yaml	1970-01-01 00:00:00 +0000
+++ dist.yaml	2015-04-02 19:14:51 +0000
@@ -0,0 +1,39 @@
+# This file contains values that are likely to change per distribution.
+# The aim is to make it easier to update / extend the charms with
+# minimal changes to the shared code in charmhelpers.
+packages:
+    - 'libkrb5-dev'
+    - 'libmysqlclient-dev'
+    - 'libldap2-dev'
+    - 'libsasl2-dev'
+    - 'libsqlite3-dev'
+    - 'libssl-dev'
+    - 'libxml2-dev'
+    - 'libxslt-dev'
+    - 'make'
+    - 'python2.7-dev'
+groups:
+    - 'hadoop'
+users:
+    hue:
+        groups: ['hadoop']
+dirs:
+    hue:
+        path: '/usr/lib/hue'
+        owner: 'hue'
+        group: 'hadoop'
+    hue_conf:
+        path: '/etc/hue/conf'
+    hue_tmp:
+        path: '/tmp/hue_build'
+ports:
+    # Ports that need to be exposed, overridden, or manually specified.
+    # Only expose ports serving a UI or external API (i.e., namenode and
+    # resourcemanager).  Communication among units within the cluster does
+    # not need ports to be explicitly opened.
+    # If adding a port here, you will need to update
+    # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
+    # to ensure that it is supported.
+    hue_http:
+        port: 8888
+        exposed_on: 'hue'

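The dist.yaml keys above are what the dist_config object in hooks/callbacks.py (below) is built from: add_packages() installs the packages list, add_users()/add_dirs() create the users and dirs entries, and path('hue_conf')-style lookups resolve the dirs paths. As a rough, hypothetical sketch of that mapping (the real DistConfig helper lives in the shared charmhelpers.contrib.bigdata code and also handles ownership, permissions, and port exposure):

    # Hypothetical reader for dist.yaml; illustrative only, not the charm's
    # actual DistConfig implementation.
    import yaml

    class DistConfigSketch(object):
        def __init__(self, filename='dist.yaml'):
            with open(filename) as f:
                self.dist = yaml.safe_load(f)

        def path(self, key):
            # path('hue_conf') -> '/etc/hue/conf'
            return self.dist['dirs'][key]['path']

        def packages(self):
            # -> ['libkrb5-dev', 'libmysqlclient-dev', ...]
            return self.dist['packages']

        def exposed_ports(self, service):
            # exposed_ports('hue') -> [8888]
            return [p['port'] for p in self.dist['ports'].values()
                    if p.get('exposed_on') == service]

Against the file above, path('hue') resolves to /usr/lib/hue and exposed_ports('hue') to [8888]; the real helper returns path objects (hence the rmtree_p()/copytree() calls in callbacks.py) rather than plain strings.
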
=== removed directory 'files'
=== removed directory 'files/archives'
=== removed directory 'files/archives/ppc64le'
=== removed directory 'files/archives/x86_64'
=== removed directory 'files/scripts'
=== removed file 'files/scripts/build.sh'
--- files/scripts/build.sh	2014-12-12 04:41:42 +0000
+++ files/scripts/build.sh	1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
-#!/bin/bash
-ln -s /usr/lib/python2.7/plat-*/_sysconfigdata_nd.py /usr/lib/python2.7/
-PREFIX=/usr/share make install

=== removed file 'files/scripts/stop_hue.sh'
--- files/scripts/stop_hue.sh	2014-12-14 21:08:45 +0000
+++ files/scripts/stop_hue.sh	1970-01-01 00:00:00 +0000
@@ -1,2 +0,0 @@
-#!/bin/bash
-sudo ps -x | grep runserver | awk '{print $1}' | xargs sudo kill
\ No newline at end of file

=== removed file 'hooks/actions.py'
--- hooks/actions.py	2014-12-12 04:41:42 +0000
+++ hooks/actions.py	1970-01-01 00:00:00 +0000
@@ -1,5 +0,0 @@
-from charmhelpers.core import hookenv
-
-
-def log_start(service_name):
-    hookenv.log('apache-hue starting')

=== removed file 'hooks/bdutils.py'
--- hooks/bdutils.py	2014-12-14 21:08:45 +0000
+++ hooks/bdutils.py	1970-01-01 00:00:00 +0000
@@ -1,164 +0,0 @@
-#!/usr/bin/python
-import os
-import pwd
-import grp
-import subprocess
-import signal
-import fileinput
-from shutil import rmtree
-from charmhelpers.core.hookenv import log
-
-def createPropertyElement(name, value):
-    import xml.etree.ElementTree as ET
-    propertyE = ET.Element("property")
-    eName = ET.Element("name")
-    eName.text = name
-    propertyE.append(eName)
-    eValue = ET.Element("value")
-    eValue.text = value
-    propertyE.append(eValue)
-    return propertyE
-    
-def setHadoopConfigXML (xmlfileNamePath, name, value):
-    import xml.dom.minidom as minidom
-    log("==> setHadoopConfigXML ","INFO")
-    import xml.etree.ElementTree as ET
-    found = False
-    with open(xmlfileNamePath,'rb+') as f:    
-        root = ET.parse(f).getroot()
-        proList = root.findall("property")
-        for p in proList:
-            if found:
-                break
-            cList = p.getchildren()
-            for c in cList:
-                if c.text == name:
-                    p.find("value").text = value
-                    found = True
-                    break
-
-        f.seek(0)
-        if not found:            
-            root.append(createPropertyElement(name, value))
-            reparsed = minidom.parseString(ET.tostring(root, encoding='UTF-8'))
-            f.write('\n'.join([line for line in reparsed.toprettyxml(newl='\r\n', indent="\t").split('\n') if line.strip()]))
-        else:
-            f.write(ET.tostring(root, encoding='UTF-8'))
-        f.truncate()
-
-def setDirPermission(path, owner, group, access):
-    log("==> setDirPermission")
-    if os.path.isdir(path):
-        rmtree(path)
-    os.makedirs( path)
-    os.chmod(path, access)
-    chownRecursive(path, owner, group)
-
-def chownRecursive(path, owner, group):
-    print ("==> chownRecursive ")
-    uid = pwd.getpwnam(owner).pw_uid
-    gid = grp.getgrnam(group).gr_gid
-    os.chown(path, uid, gid)
-    for root, dirs, files in os.walk(path):
-        for momo in dirs:
-            os.chown(os.path.join(root, momo), uid, gid)
-        for momo in files:
-            os.chown(os.path.join(root, momo), uid, gid)
-            
-def chmodRecursive(path, mode):
-    for r,d,f in os.walk(path):
-        os.chmod( r , mode)
-            
-def wgetPkg(pkgName, crypType):
-    log("==> wgetPkg ")
-    crypFileName= pkgName+'.'+crypType    
-    cmd = 'wget '+pkgName
-    subprocess.call(cmd.split())
-    if crypType:
-        cmd = ['wget', crypFileName]
-        subprocess.call(cmd)
-    #TODO -- cryption validation
-     
-def append_bashrc(line):
-    log("==> append_bashrc","INFO")
-    with open(os.path.join(os.path.sep, 'home','ubuntu','.bashrc'),'a') as bf:
-        bf.writelines(line)
-        
-def fileSetKV(filePath, key, value):
-    log ("===> fileSetKV ({}, {}, {})".format(filePath, key, value))
-    found = False
-    with open(filePath) as f:
-        contents = f.readlines()
-        for l in range(0, len(contents)):
-            if contents[l].startswith(key):
-                contents[l] = key+value+"\n"
-                found = True
-    if not found:
-        log ("*** Key={} not found, adding key+value= {}".format(key,value))
-        contents.append(key+value+"\n")         
-    with open(filePath, 'wb') as f:
-        f.writelines(contents)
-
-def fileRemoveKey(filePath, key):
-    log ("===> fileRemoveK ({}, {})".format(filePath, key))
-    found = False
-    with open(filePath) as f:
-        contents = f.readlines()
-        for l in range(0, len(contents)):
-            if contents[l].startswith(key):
-                contents.pop(l)
-                found = True
-    if found:
-        with open(filePath, 'wb') as f:
-            f.writelines(contents)
-       
-def setHadoopEnvVarFromFile(scriptName):
-    log("==> setHadoopEnvVarFromFile","INFO")
-    with open(scriptName) as f:
-        lines = f.readlines()
-        for l in lines:
-            if l.startswith("#") or l == '\n':
-                continue
-            else:
-                ll = l.split("=")
-                m = ll[0]+" = "+ll[1].strip().strip(';').strip("\"").strip()
-                #log ("==> {} ".format("\""+m+"\""))
-                os.environ[ll[0]] = ll[1].strip().strip(';').strip("\"").strip()
-                
-def is_jvm_service_active(processname):
-    cmd=["jps"]
-    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-    out, err = p.communicate()
-    if err == None and str(out).find(processname) != -1:
-        return True
-    else:
-        return False
-
-def kill_java_process(process):
-    cmd=["jps"]
-    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
-    out, err = p.communicate()
-    cmd = out.split()
-    for i in range(0, len(cmd)):
-        if cmd[i] == process:
-            pid = int(cmd[i-1])
-            os.kill(pid, signal.SIGTERM)
-    return 0
-
-
-def fileLineSearchReplace(filepath, oldline, newline):
-    log(" fileLineSearchReplace ({}, {}, {})". format(filepath, oldline, newline))
-    for line in fileinput.input(filepath, inplace=True):
-        print line.replace(oldline, newline), 
-        
-def fconfigured(filename):
-    fpath = os.path.join(os.path.sep, 'home', 'ubuntu', filename)
-    if os.path.isfile(fpath):
-        return True
-    else:
-        touch(fpath)
-        return False      
-    
-def touch(fname, times=None):
-    with open(fname, 'a'):
-        os.utime(fname, times)

=== added file 'hooks/callbacks.py'
--- hooks/callbacks.py	1970-01-01 00:00:00 +0000
+++ hooks/callbacks.py	2015-04-02 19:14:51 +0000
@@ -0,0 +1,98 @@
+import os
+from subprocess import check_call, Popen
+
+import jujuresources
+from charmhelpers.core import host
+from charmhelpers.core import charmdata
+from charmhelpers.contrib.bigdata import utils
+from glob import glob
+
+
+class Hue(object):
+    def __init__(self, dist_config):
+        self.dist_config = dist_config
+        self.resources = {
+            'hue': 'hue-%s' % host.cpu_arch(),
+        }
+        self.verify_resources = utils.verify_resources(*self.resources.values())
+
+    def is_installed(self):
+        return charmdata.kv.get('hue.installed')
+
+    def install(self, force=False):
+        if not force and self.is_installed():
+            return
+        self.dist_config.add_users()
+        self.dist_config.add_dirs()
+        self.dist_config.add_packages()
+        jujuresources.install(self.resources['hue'],
+                              destination=self.dist_config.path('hue_tmp'),
+                              skip_top_level=True)
+        # pkg bug in python; workaround with symlink:
+        # https://bugs.launchpad.net/ubuntu/+source/python2.7/+bug/1115466
+        source_file = glob("/usr/lib/python2.7/plat-*/_sysconfigdata_nd.py")[0]
+        check_call(['ln', '-sf',
+                   source_file,
+                   "/usr/lib/python2.7/"])
+        with host.chdir(self.dist_config.path('hue_tmp')):
+            check_call(['make', 'install'],
+                       env=dict(os.environ,
+                       CONF_DIR=self.dist_config.path('hue_conf'),
+                       PREFIX=self.dist_config.path('hue').dirname()))
+        self.dist_config.path('hue_tmp').rmtree_p()
+        host.chownr(self.dist_config.path('hue'), 'hue', 'hadoop')
+        self.setup_hue_config()
+        charmdata.kv.set('hue.installed', True)
+
+    def setup_hue_config(self):
+        # copy default config into our conf dir
+        conf_dir = self.dist_config.path('hue') / 'desktop/conf'
+        self.dist_config.path('hue_conf').rmtree_p()
+        conf_dir.copytree(self.dist_config.path('hue_conf'))
+
+    def configure_hue(self):
+        hue_bin = self.dist_config.path('hue') / 'build/env/bin'
+        with utils.environment_edit_in_place('/etc/environment') as env:
+            if hue_bin not in env['PATH']:
+                env['PATH'] = ':'.join([env['PATH'], hue_bin])
+            env['HUE_CONF_DIR'] = self.dist_config.path('hue_conf')
+
+        hue_ini = self.dist_config.path('hue_conf') / 'hue.ini'
+        nn = charmdata.kv.get('relations.ready')['namenode'].values()[0]
+        rm = charmdata.kv.get('relations.ready')['resourcemanager'].values()[0]
+
+        utils.re_edit_in_place(hue_ini, {
+            r'(\s*)#*\s*fs_defaultfs=.*':
+                r'\1fs_defaultfs=hdfs://%s:%s' % (nn['private-address'], nn['port']),
+            r'(\s*)#*\s*webhdfs_url=.*':
+                r'\1webhdfs_url=http://%s:50070/webhdfs/v1' % nn['private-address'],
+            r'(\s*)#*\s*resourcemanager_host=.*':
+                r'\1resourcemanager_host=%s' % rm['private-address'],
+            r'(\s*)#*\s*resourcemanager_api_url=.*':
+                r'\1resourcemanager_api_url=http://%s:8088' % rm['private-address'],
+            r'(\s*)#*\s*history_server_api_url=.*':
+                r'\1history_server_api_url=http://%s:19888' % rm['private-address'],
+        })
+
+    def configure_for_hive(self):
+        hue_ini = self.dist_config.path('hue_conf') / 'hue.ini'
+        hive = charmdata.kv.get('relations.ready')['hive'].values()[0]
+
+        utils.re_edit_in_place(hue_ini, {
+            r'(\s*)#*\s*hive_server_host=.*': r'\1hive_server_host=%s' % hive['private-address'],
+            r'(\s*)#*\s*hive_server_port=.*': r'\1hive_server_port=%s' % hive['port'],
+        })
+
+    def start(self):
+        Popen(['su', 'hue', '-c',
+              self.dist_config.path('hue') / 'build/env/bin/supervisor -d'],
+              env=dict(os.environ,
+                       CONF_DIR=self.dist_config.path('hue_conf')))
+
+    def stop(self):
+        check_call(['killall', '-9', '-u', 'hue'])
+
+    def cleanup(self):
+        self.dist_config.remove_users()
+        self.dist_config.path('hue').rmtree_p()
+        self.dist_config.path('hue_conf').rmtree_p()

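callbacks.py only defines the Hue callback layer; the hook entry points that call it are not part of this excerpt. A minimal, hypothetical sketch of how a hook script might drive it (the dist_config construction and the hook layout are assumptions based on the other rewritten big data charms, not code from this branch):

    # Hypothetical hook wiring for the Hue callbacks above; illustrative only.
    from callbacks import Hue

    def install_hue(dist_config):
        """Install hook: dist_config is the parsed dist.yaml helper."""
        hue = Hue(dist_config)
        hue.install()             # users/dirs/packages, unpack resource, make install
        return hue

    def start_hue(hue):
        """Run once namenode and resourcemanager data is stored under
        'relations.ready' (see configure_hue above)."""
        hue.configure_hue()       # rewrite hue.ini for the HDFS and YARN endpoints
        hue.start()               # launch build/env/bin/supervisor as the hue user

    def hive_ready(hue):
        """Hive relation hook: point hue.ini at HiveServer2."""
        hue.configure_for_hive()

The hue_http port exposure from dist.yaml and the teardown path (stop()/cleanup()) would be wired up in the same way from the corresponding hooks.
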
=== removed directory 'hooks/charmhelpers'
=== removed file 'hooks/charmhelpers/__init__.py'
=== removed directory 'hooks/charmhelpers/core'
=== removed file 'hooks/charmhelpers/core/__init__.py'
=== removed file 'hooks/charmhelpers/core/fstab.py'
--- hooks/charmhelpers/core/fstab.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/core/fstab.py	1970-01-01 00:00:00 +0000
@@ -1,114 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@xxxxxxxxxxxxx>'
-
-import os
-
-
-class Fstab(file):
-    """This class extends file in order to implement a file reader/writer
-    for file `/etc/fstab`
-    """
-
-    class Entry(object):
-        """Entry class represents a non-comment line on the `/etc/fstab` file
-        """
-        def __init__(self, device, mountpoint, filesystem,
-                     options, d=0, p=0):
-            self.device = device
-            self.mountpoint = mountpoint
-            self.filesystem = filesystem
-
-            if not options:
-                options = "defaults"
-
-            self.options = options
-            self.d = d
-            self.p = p
-
-        def __eq__(self, o):
-            return str(self) == str(o)
-
-        def __str__(self):
-            return "{} {} {} {} {} {}".format(self.device,
-                                              self.mountpoint,
-                                              self.filesystem,
-                                              self.options,
-                                              self.d,
-                                              self.p)
-
-    DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
-
-    def __init__(self, path=None):
-        if path:
-            self._path = path
-        else:
-            self._path = self.DEFAULT_PATH
-        file.__init__(self, self._path, 'r+')
-
-    def _hydrate_entry(self, line):
-        return Fstab.Entry(*filter(
-            lambda x: x not in ('', None),
-            line.strip("\n").split(" ")))
-
-    @property
-    def entries(self):
-        self.seek(0)
-        for line in self.readlines():
-            try:
-                if not line.startswith("#"):
-                    yield self._hydrate_entry(line)
-            except ValueError:
-                pass
-
-    def get_entry_by_attr(self, attr, value):
-        for entry in self.entries:
-            e_attr = getattr(entry, attr)
-            if e_attr == value:
-                return entry
-        return None
-
-    def add_entry(self, entry):
-        if self.get_entry_by_attr('device', entry.device):
-            return False
-
-        self.write(str(entry) + '\n')
-        self.truncate()
-        return entry
-
-    def remove_entry(self, entry):
-        self.seek(0)
-
-        lines = self.readlines()
-
-        found = False
-        for index, line in enumerate(lines):
-            if not line.startswith("#"):
-                if self._hydrate_entry(line) == entry:
-                    found = True
-                    break
-
-        if not found:
-            return False
-
-        lines.remove(line)
-
-        self.seek(0)
-        self.write(''.join(lines))
-        self.truncate()
-        return True
-
-    @classmethod
-    def remove_by_mountpoint(cls, mountpoint, path=None):
-        fstab = cls(path=path)
-        entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
-        if entry:
-            return fstab.remove_entry(entry)
-        return False
-
-    @classmethod
-    def add(cls, device, mountpoint, filesystem, options=None, path=None):
-        return cls(path=path).add_entry(Fstab.Entry(device,
-                                                    mountpoint, filesystem,
-                                                    options=options))

=== removed file 'hooks/charmhelpers/core/hookenv.py'
--- hooks/charmhelpers/core/hookenv.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/core/hookenv.py	1970-01-01 00:00:00 +0000
@@ -1,498 +0,0 @@
-"Interactions with the Juju environment"
-# Copyright 2013 Canonical Ltd.
-#
-# Authors:
-#  Charm Helpers Developers <juju@xxxxxxxxxxxxxxxx>
-
-import os
-import json
-import yaml
-import subprocess
-import sys
-import UserDict
-from subprocess import CalledProcessError
-
-CRITICAL = "CRITICAL"
-ERROR = "ERROR"
-WARNING = "WARNING"
-INFO = "INFO"
-DEBUG = "DEBUG"
-MARKER = object()
-
-cache = {}
-
-
-def cached(func):
-    """Cache return values for multiple executions of func + args
-
-    For example:
-
-        @cached
-        def unit_get(attribute):
-            pass
-
-        unit_get('test')
-
-    will cache the result of unit_get + 'test' for future calls.
-    """
-    def wrapper(*args, **kwargs):
-        global cache
-        key = str((func, args, kwargs))
-        try:
-            return cache[key]
-        except KeyError:
-            res = func(*args, **kwargs)
-            cache[key] = res
-            return res
-    return wrapper
-
-
-def flush(key):
-    """Flushes any entries from function cache where the
-    key is found in the function+args """
-    flush_list = []
-    for item in cache:
-        if key in item:
-            flush_list.append(item)
-    for item in flush_list:
-        del cache[item]
-
-
-def log(message, level=None):
-    """Write a message to the juju log"""
-    command = ['juju-log']
-    if level:
-        command += ['-l', level]
-    command += [message]
-    subprocess.call(command)
-
-
-class Serializable(UserDict.IterableUserDict):
-    """Wrapper, an object that can be serialized to yaml or json"""
-
-    def __init__(self, obj):
-        # wrap the object
-        UserDict.IterableUserDict.__init__(self)
-        self.data = obj
-
-    def __getattr__(self, attr):
-        # See if this object has attribute.
-        if attr in ("json", "yaml", "data"):
-            return self.__dict__[attr]
-        # Check for attribute in wrapped object.
-        got = getattr(self.data, attr, MARKER)
-        if got is not MARKER:
-            return got
-        # Proxy to the wrapped object via dict interface.
-        try:
-            return self.data[attr]
-        except KeyError:
-            raise AttributeError(attr)
-
-    def __getstate__(self):
-        # Pickle as a standard dictionary.
-        return self.data
-
-    def __setstate__(self, state):
-        # Unpickle into our wrapper.
-        self.data = state
-
-    def json(self):
-        """Serialize the object to json"""
-        return json.dumps(self.data)
-
-    def yaml(self):
-        """Serialize the object to yaml"""
-        return yaml.dump(self.data)
-
-
-def execution_environment():
-    """A convenient bundling of the current execution context"""
-    context = {}
-    context['conf'] = config()
-    if relation_id():
-        context['reltype'] = relation_type()
-        context['relid'] = relation_id()
-        context['rel'] = relation_get()
-    context['unit'] = local_unit()
-    context['rels'] = relations()
-    context['env'] = os.environ
-    return context
-
-
-def in_relation_hook():
-    """Determine whether we're running in a relation hook"""
-    return 'JUJU_RELATION' in os.environ
-
-
-def relation_type():
-    """The scope for the current relation hook"""
-    return os.environ.get('JUJU_RELATION', None)
-
-
-def relation_id():
-    """The relation ID for the current relation hook"""
-    return os.environ.get('JUJU_RELATION_ID', None)
-
-
-def local_unit():
-    """Local unit ID"""
-    return os.environ['JUJU_UNIT_NAME']
-
-
-def remote_unit():
-    """The remote unit for the current relation hook"""
-    return os.environ['JUJU_REMOTE_UNIT']
-
-
-def service_name():
-    """The name service group this unit belongs to"""
-    return local_unit().split('/')[0]
-
-
-def hook_name():
-    """The name of the currently executing hook"""
-    return os.path.basename(sys.argv[0])
-
-
-class Config(dict):
-    """A Juju charm config dictionary that can write itself to
-    disk (as json) and track which values have changed since
-    the previous hook invocation.
-
-    Do not instantiate this object directly - instead call
-    ``hookenv.config()``
-
-    Example usage::
-
-        >>> # inside a hook
-        >>> from charmhelpers.core import hookenv
-        >>> config = hookenv.config()
-        >>> config['foo']
-        'bar'
-        >>> config['mykey'] = 'myval'
-        >>> config.save()
-
-
-        >>> # user runs `juju set mycharm foo=baz`
-        >>> # now we're inside subsequent config-changed hook
-        >>> config = hookenv.config()
-        >>> config['foo']
-        'baz'
-        >>> # test to see if this val has changed since last hook
-        >>> config.changed('foo')
-        True
-        >>> # what was the previous value?
-        >>> config.previous('foo')
-        'bar'
-        >>> # keys/values that we add are preserved across hooks
-        >>> config['mykey']
-        'myval'
-        >>> # don't forget to save at the end of hook!
-        >>> config.save()
-
-    """
-    CONFIG_FILE_NAME = '.juju-persistent-config'
-
-    def __init__(self, *args, **kw):
-        super(Config, self).__init__(*args, **kw)
-        self._prev_dict = None
-        self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
-        if os.path.exists(self.path):
-            self.load_previous()
-
-    def load_previous(self, path=None):
-        """Load previous copy of config from disk so that current values
-        can be compared to previous values.
-
-        :param path:
-
-            File path from which to load the previous config. If `None`,
-            config is loaded from the default location. If `path` is
-            specified, subsequent `save()` calls will write to the same
-            path.
-
-        """
-        self.path = path or self.path
-        with open(self.path) as f:
-            self._prev_dict = json.load(f)
-
-    def changed(self, key):
-        """Return true if the value for this key has changed since
-        the last save.
-
-        """
-        if self._prev_dict is None:
-            return True
-        return self.previous(key) != self.get(key)
-
-    def previous(self, key):
-        """Return previous value for this key, or None if there
-        is no "previous" value.
-
-        """
-        if self._prev_dict:
-            return self._prev_dict.get(key)
-        return None
-
-    def save(self):
-        """Save this config to disk.
-
-        Preserves items in _prev_dict that do not exist in self.
-
-        """
-        if self._prev_dict:
-            for k, v in self._prev_dict.iteritems():
-                if k not in self:
-                    self[k] = v
-        with open(self.path, 'w') as f:
-            json.dump(self, f)
-
-
-@cached
-def config(scope=None):
-    """Juju charm configuration"""
-    config_cmd_line = ['config-get']
-    if scope is not None:
-        config_cmd_line.append(scope)
-    config_cmd_line.append('--format=json')
-    try:
-        config_data = json.loads(subprocess.check_output(config_cmd_line))
-        if scope is not None:
-            return config_data
-        return Config(config_data)
-    except ValueError:
-        return None
-
-
-@cached
-def relation_get(attribute=None, unit=None, rid=None):
-    """Get relation information"""
-    _args = ['relation-get', '--format=json']
-    if rid:
-        _args.append('-r')
-        _args.append(rid)
-    _args.append(attribute or '-')
-    if unit:
-        _args.append(unit)
-    try:
-        return json.loads(subprocess.check_output(_args))
-    except ValueError:
-        return None
-    except CalledProcessError, e:
-        if e.returncode == 2:
-            return None
-        raise
-
-
-def relation_set(relation_id=None, relation_settings={}, **kwargs):
-    """Set relation information for the current unit"""
-    relation_cmd_line = ['relation-set']
-    if relation_id is not None:
-        relation_cmd_line.extend(('-r', relation_id))
-    for k, v in (relation_settings.items() + kwargs.items()):
-        if v is None:
-            relation_cmd_line.append('{}='.format(k))
-        else:
-            relation_cmd_line.append('{}={}'.format(k, v))
-    subprocess.check_call(relation_cmd_line)
-    # Flush cache of any relation-gets for local unit
-    flush(local_unit())
-
-
-@cached
-def relation_ids(reltype=None):
-    """A list of relation_ids"""
-    reltype = reltype or relation_type()
-    relid_cmd_line = ['relation-ids', '--format=json']
-    if reltype is not None:
-        relid_cmd_line.append(reltype)
-        return json.loads(subprocess.check_output(relid_cmd_line)) or []
-    return []
-
-
-@cached
-def related_units(relid=None):
-    """A list of related units"""
-    relid = relid or relation_id()
-    units_cmd_line = ['relation-list', '--format=json']
-    if relid is not None:
-        units_cmd_line.extend(('-r', relid))
-    return json.loads(subprocess.check_output(units_cmd_line)) or []
-
-
-@cached
-def relation_for_unit(unit=None, rid=None):
-    """Get the json represenation of a unit's relation"""
-    unit = unit or remote_unit()
-    relation = relation_get(unit=unit, rid=rid)
-    for key in relation:
-        if key.endswith('-list'):
-            relation[key] = relation[key].split()
-    relation['__unit__'] = unit
-    return relation
-
-
-@cached
-def relations_for_id(relid=None):
-    """Get relations of a specific relation ID"""
-    relation_data = []
-    relid = relid or relation_ids()
-    for unit in related_units(relid):
-        unit_data = relation_for_unit(unit, relid)
-        unit_data['__relid__'] = relid
-        relation_data.append(unit_data)
-    return relation_data
-
-
-@cached
-def relations_of_type(reltype=None):
-    """Get relations of a specific type"""
-    relation_data = []
-    reltype = reltype or relation_type()
-    for relid in relation_ids(reltype):
-        for relation in relations_for_id(relid):
-            relation['__relid__'] = relid
-            relation_data.append(relation)
-    return relation_data
-
-
-@cached
-def relation_types():
-    """Get a list of relation types supported by this charm"""
-    charmdir = os.environ.get('CHARM_DIR', '')
-    mdf = open(os.path.join(charmdir, 'metadata.yaml'))
-    md = yaml.safe_load(mdf)
-    rel_types = []
-    for key in ('provides', 'requires', 'peers'):
-        section = md.get(key)
-        if section:
-            rel_types.extend(section.keys())
-    mdf.close()
-    return rel_types
-
-
-@cached
-def relations():
-    """Get a nested dictionary of relation data for all related units"""
-    rels = {}
-    for reltype in relation_types():
-        relids = {}
-        for relid in relation_ids(reltype):
-            units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
-            for unit in related_units(relid):
-                reldata = relation_get(unit=unit, rid=relid)
-                units[unit] = reldata
-            relids[relid] = units
-        rels[reltype] = relids
-    return rels
-
-
-@cached
-def is_relation_made(relation, keys='private-address'):
-    '''
-    Determine whether a relation is established by checking for
-    presence of key(s).  If a list of keys is provided, they
-    must all be present for the relation to be identified as made
-    '''
-    if isinstance(keys, str):
-        keys = [keys]
-    for r_id in relation_ids(relation):
-        for unit in related_units(r_id):
-            context = {}
-            for k in keys:
-                context[k] = relation_get(k, rid=r_id,
-                                          unit=unit)
-            if None not in context.values():
-                return True
-    return False
-
-
-def open_port(port, protocol="TCP"):
-    """Open a service network port"""
-    _args = ['open-port']
-    _args.append('{}/{}'.format(port, protocol))
-    subprocess.check_call(_args)
-
-
-def close_port(port, protocol="TCP"):
-    """Close a service network port"""
-    _args = ['close-port']
-    _args.append('{}/{}'.format(port, protocol))
-    subprocess.check_call(_args)
-
-
-@cached
-def unit_get(attribute):
-    """Get the unit ID for the remote unit"""
-    _args = ['unit-get', '--format=json', attribute]
-    try:
-        return json.loads(subprocess.check_output(_args))
-    except ValueError:
-        return None
-
-
-def unit_private_ip():
-    """Get this unit's private IP address"""
-    return unit_get('private-address')
-
-
-class UnregisteredHookError(Exception):
-    """Raised when an undefined hook is called"""
-    pass
-
-
-class Hooks(object):
-    """A convenient handler for hook functions.
-
-    Example:
-        hooks = Hooks()
-
-        # register a hook, taking its name from the function name
-        @hooks.hook()
-        def install():
-            ...
-
-        # register a hook, providing a custom hook name
-        @hooks.hook("config-changed")
-        def config_changed():
-            ...
-
-        if __name__ == "__main__":
-            # execute a hook based on the name the program is called by
-            hooks.execute(sys.argv)
-    """
-
-    def __init__(self):
-        super(Hooks, self).__init__()
-        self._hooks = {}
-
-    def register(self, name, function):
-        """Register a hook"""
-        self._hooks[name] = function
-
-    def execute(self, args):
-        """Execute a registered hook based on args[0]"""
-        hook_name = os.path.basename(args[0])
-        if hook_name in self._hooks:
-            self._hooks[hook_name]()
-        else:
-            raise UnregisteredHookError(hook_name)
-
-    def hook(self, *hook_names):
-        """Decorator, registering them as hooks"""
-        def wrapper(decorated):
-            for hook_name in hook_names:
-                self.register(hook_name, decorated)
-            else:
-                self.register(decorated.__name__, decorated)
-                if '_' in decorated.__name__:
-                    self.register(
-                        decorated.__name__.replace('_', '-'), decorated)
-            return decorated
-        return wrapper
-
-
-def charm_dir():
-    """Return the root directory of the current charm"""
-    return os.environ.get('CHARM_DIR')

=== removed file 'hooks/charmhelpers/core/host.py'
--- hooks/charmhelpers/core/host.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/core/host.py	1970-01-01 00:00:00 +0000
@@ -1,325 +0,0 @@
-"""Tools for working with the host system"""
-# Copyright 2012 Canonical Ltd.
-#
-# Authors:
-#  Nick Moffitt <nick.moffitt@xxxxxxxxxxxxx>
-#  Matthew Wedgwood <matthew.wedgwood@xxxxxxxxxxxxx>
-
-import os
-import pwd
-import grp
-import random
-import string
-import subprocess
-import hashlib
-import apt_pkg
-
-from collections import OrderedDict
-
-from hookenv import log
-from fstab import Fstab
-
-
-def service_start(service_name):
-    """Start a system service"""
-    return service('start', service_name)
-
-
-def service_stop(service_name):
-    """Stop a system service"""
-    return service('stop', service_name)
-
-
-def service_restart(service_name):
-    """Restart a system service"""
-    return service('restart', service_name)
-
-
-def service_reload(service_name, restart_on_failure=False):
-    """Reload a system service, optionally falling back to restart if
-    reload fails"""
-    service_result = service('reload', service_name)
-    if not service_result and restart_on_failure:
-        service_result = service('restart', service_name)
-    return service_result
-
-
-def service(action, service_name):
-    """Control a system service"""
-    cmd = ['service', service_name, action]
-    return subprocess.call(cmd) == 0
-
-
-def service_running(service):
-    """Determine whether a system service is running"""
-    try:
-        output = subprocess.check_output(['service', service, 'status'])
-    except subprocess.CalledProcessError:
-        return False
-    else:
-        if ("start/running" in output or "is running" in output):
-            return True
-        else:
-            return False
-
-
-def adduser(username, password=None, shell='/bin/bash', system_user=False):
-    """Add a user to the system"""
-    try:
-        user_info = pwd.getpwnam(username)
-        log('user {0} already exists!'.format(username))
-    except KeyError:
-        log('creating user {0}'.format(username))
-        cmd = ['useradd']
-        if system_user or password is None:
-            cmd.append('--system')
-        else:
-            cmd.extend([
-                '--create-home',
-                '--shell', shell,
-                '--password', password,
-            ])
-        cmd.append(username)
-        subprocess.check_call(cmd)
-        user_info = pwd.getpwnam(username)
-    return user_info
-
-
-def add_user_to_group(username, group):
-    """Add a user to a group"""
-    cmd = [
-        'gpasswd', '-a',
-        username,
-        group
-    ]
-    log("Adding user {} to group {}".format(username, group))
-    subprocess.check_call(cmd)
-
-
-def rsync(from_path, to_path, flags='-r', options=None):
-    """Replicate the contents of a path"""
-    options = options or ['--delete', '--executability']
-    cmd = ['/usr/bin/rsync', flags]
-    cmd.extend(options)
-    cmd.append(from_path)
-    cmd.append(to_path)
-    log(" ".join(cmd))
-    return subprocess.check_output(cmd).strip()
-
-
-def symlink(source, destination):
-    """Create a symbolic link"""
-    log("Symlinking {} as {}".format(source, destination))
-    cmd = [
-        'ln',
-        '-sf',
-        source,
-        destination,
-    ]
-    subprocess.check_call(cmd)
-
-
-def mkdir(path, owner='root', group='root', perms=0555, force=False):
-    """Create a directory"""
-    log("Making dir {} {}:{} {:o}".format(path, owner, group,
-                                          perms))
-    uid = pwd.getpwnam(owner).pw_uid
-    gid = grp.getgrnam(group).gr_gid
-    realpath = os.path.abspath(path)
-    if os.path.exists(realpath):
-        if force and not os.path.isdir(realpath):
-            log("Removing non-directory file {} prior to mkdir()".format(path))
-            os.unlink(realpath)
-    else:
-        os.makedirs(realpath, perms)
-    os.chown(realpath, uid, gid)
-
-
-def write_file(path, content, owner='root', group='root', perms=0444):
-    """Create or overwrite a file with the contents of a string"""
-    log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
-    uid = pwd.getpwnam(owner).pw_uid
-    gid = grp.getgrnam(group).gr_gid
-    with open(path, 'w') as target:
-        os.fchown(target.fileno(), uid, gid)
-        os.fchmod(target.fileno(), perms)
-        target.write(content)
-
-
-def fstab_remove(mp):
-    """Remove the given mountpoint entry from /etc/fstab
-    """
-    return Fstab.remove_by_mountpoint(mp)
-
-
-def fstab_add(dev, mp, fs, options=None):
-    """Adds the given device entry to the /etc/fstab file
-    """
-    return Fstab.add(dev, mp, fs, options=options)
-
-
-def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
-    """Mount a filesystem at a particular mountpoint"""
-    cmd_args = ['mount']
-    if options is not None:
-        cmd_args.extend(['-o', options])
-    cmd_args.extend([device, mountpoint])
-    try:
-        subprocess.check_output(cmd_args)
-    except subprocess.CalledProcessError, e:
-        log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
-        return False
-
-    if persist:
-        return fstab_add(device, mountpoint, filesystem, options=options)
-    return True
-
-
-def umount(mountpoint, persist=False):
-    """Unmount a filesystem"""
-    cmd_args = ['umount', mountpoint]
-    try:
-        subprocess.check_output(cmd_args)
-    except subprocess.CalledProcessError, e:
-        log('Error unmounting {}\n{}'.format(mountpoint, e.output))
-        return False
-
-    if persist:
-        return fstab_remove(mountpoint)
-    return True
-
-
-def mounts():
-    """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
-    with open('/proc/mounts') as f:
-        # [['/mount/point','/dev/path'],[...]]
-        system_mounts = [m[1::-1] for m in [l.strip().split()
-                                            for l in f.readlines()]]
-    return system_mounts
-
-
-def file_hash(path):
-    """Generate a md5 hash of the contents of 'path' or None if not found """
-    if os.path.exists(path):
-        h = hashlib.md5()
-        with open(path, 'r') as source:
-            h.update(source.read())  # IGNORE:E1101 - it does have update
-        return h.hexdigest()
-    else:
-        return None
-
-
-def restart_on_change(restart_map, stopstart=False):
-    """Restart services based on configuration files changing
-
-    This function is used a decorator, for example
-
-        @restart_on_change({
-            '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
-            })
-        def ceph_client_changed():
-            ...
-
-    In this example, the cinder-api and cinder-volume services
-    would be restarted if /etc/ceph/ceph.conf is changed by the
-    ceph_client_changed function.
-    """
-    def wrap(f):
-        def wrapped_f(*args):
-            checksums = {}
-            for path in restart_map:
-                checksums[path] = file_hash(path)
-            f(*args)
-            restarts = []
-            for path in restart_map:
-                if checksums[path] != file_hash(path):
-                    restarts += restart_map[path]
-            services_list = list(OrderedDict.fromkeys(restarts))
-            if not stopstart:
-                for service_name in services_list:
-                    service('restart', service_name)
-            else:
-                for action in ['stop', 'start']:
-                    for service_name in services_list:
-                        service(action, service_name)
-        return wrapped_f
-    return wrap
-
-
-def lsb_release():
-    """Return /etc/lsb-release in a dict"""
-    d = {}
-    with open('/etc/lsb-release', 'r') as lsb:
-        for l in lsb:
-            k, v = l.split('=')
-            d[k.strip()] = v.strip()
-    return d
-
-
-def pwgen(length=None):
-    """Generate a random pasword."""
-    if length is None:
-        length = random.choice(range(35, 45))
-    alphanumeric_chars = [
-        l for l in (string.letters + string.digits)
-        if l not in 'l0QD1vAEIOUaeiou']
-    random_chars = [
-        random.choice(alphanumeric_chars) for _ in range(length)]
-    return(''.join(random_chars))
-
-
-def list_nics(nic_type):
-    '''Return a list of nics of given type(s)'''
-    if isinstance(nic_type, basestring):
-        int_types = [nic_type]
-    else:
-        int_types = nic_type
-    interfaces = []
-    for int_type in int_types:
-        cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
-        ip_output = subprocess.check_output(cmd).split('\n')
-        ip_output = (line for line in ip_output if line)
-        for line in ip_output:
-            if line.split()[1].startswith(int_type):
-                interfaces.append(line.split()[1].replace(":", ""))
-    return interfaces
-
-
-def set_nic_mtu(nic, mtu):
-    '''Set MTU on a network interface'''
-    cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
-    subprocess.check_call(cmd)
-
-
-def get_nic_mtu(nic):
-    cmd = ['ip', 'addr', 'show', nic]
-    ip_output = subprocess.check_output(cmd).split('\n')
-    mtu = ""
-    for line in ip_output:
-        words = line.split()
-        if 'mtu' in words:
-            mtu = words[words.index("mtu") + 1]
-    return mtu
-
-
-def get_nic_hwaddr(nic):
-    cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
-    ip_output = subprocess.check_output(cmd)
-    hwaddr = ""
-    words = ip_output.split()
-    if 'link/ether' in words:
-        hwaddr = words[words.index('link/ether') + 1]
-    return hwaddr
-
-
-def cmp_pkgrevno(package, revno, pkgcache=None):
-    '''Compare supplied revno with the revno of the installed package
-       1 => Installed revno is greater than supplied arg
-       0 => Installed revno is the same as supplied arg
-      -1 => Installed revno is less than supplied arg
-    '''
-    if not pkgcache:
-        apt_pkg.init()
-        pkgcache = apt_pkg.Cache()
-    pkg = pkgcache[package]
-    return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)

=== removed directory 'hooks/charmhelpers/fetch'
=== removed file 'hooks/charmhelpers/fetch/__init__.py'
--- hooks/charmhelpers/fetch/__init__.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/fetch/__init__.py	1970-01-01 00:00:00 +0000
@@ -1,349 +0,0 @@
-import importlib
-import time
-from yaml import safe_load
-from charmhelpers.core.host import (
-    lsb_release
-)
-from urlparse import (
-    urlparse,
-    urlunparse,
-)
-import subprocess
-from charmhelpers.core.hookenv import (
-    config,
-    log,
-)
-import apt_pkg
-import os
-
-
-CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
-deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
-"""
-PROPOSED_POCKET = """# Proposed
-deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
-"""
-CLOUD_ARCHIVE_POCKETS = {
-    # Folsom
-    'folsom': 'precise-updates/folsom',
-    'precise-folsom': 'precise-updates/folsom',
-    'precise-folsom/updates': 'precise-updates/folsom',
-    'precise-updates/folsom': 'precise-updates/folsom',
-    'folsom/proposed': 'precise-proposed/folsom',
-    'precise-folsom/proposed': 'precise-proposed/folsom',
-    'precise-proposed/folsom': 'precise-proposed/folsom',
-    # Grizzly
-    'grizzly': 'precise-updates/grizzly',
-    'precise-grizzly': 'precise-updates/grizzly',
-    'precise-grizzly/updates': 'precise-updates/grizzly',
-    'precise-updates/grizzly': 'precise-updates/grizzly',
-    'grizzly/proposed': 'precise-proposed/grizzly',
-    'precise-grizzly/proposed': 'precise-proposed/grizzly',
-    'precise-proposed/grizzly': 'precise-proposed/grizzly',
-    # Havana
-    'havana': 'precise-updates/havana',
-    'precise-havana': 'precise-updates/havana',
-    'precise-havana/updates': 'precise-updates/havana',
-    'precise-updates/havana': 'precise-updates/havana',
-    'havana/proposed': 'precise-proposed/havana',
-    'precise-havana/proposed': 'precise-proposed/havana',
-    'precise-proposed/havana': 'precise-proposed/havana',
-    # Icehouse
-    'icehouse': 'precise-updates/icehouse',
-    'precise-icehouse': 'precise-updates/icehouse',
-    'precise-icehouse/updates': 'precise-updates/icehouse',
-    'precise-updates/icehouse': 'precise-updates/icehouse',
-    'icehouse/proposed': 'precise-proposed/icehouse',
-    'precise-icehouse/proposed': 'precise-proposed/icehouse',
-    'precise-proposed/icehouse': 'precise-proposed/icehouse',
-    # Juno
-    'juno': 'trusty-updates/juno',
-    'trusty-juno': 'trusty-updates/juno',
-    'trusty-juno/updates': 'trusty-updates/juno',
-    'trusty-updates/juno': 'trusty-updates/juno',
-    'juno/proposed': 'trusty-proposed/juno',
-    'juno/proposed': 'trusty-proposed/juno',
-    'trusty-juno/proposed': 'trusty-proposed/juno',
-    'trusty-proposed/juno': 'trusty-proposed/juno',
-}
-
-# The order of this list is very important. Handlers should be listed in order
-# from least- to most-specific URL matching.
-FETCH_HANDLERS = (
-    'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
-    'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
-)
-
-APT_NO_LOCK = 100  # The return code for "couldn't acquire lock" in APT.
-APT_NO_LOCK_RETRY_DELAY = 10  # Wait 10 seconds between apt lock checks.
-APT_NO_LOCK_RETRY_COUNT = 30  # Retry to acquire the lock X times.
-
-
-class SourceConfigError(Exception):
-    pass
-
-
-class UnhandledSource(Exception):
-    pass
-
-
-class AptLockError(Exception):
-    pass
-
-
-class BaseFetchHandler(object):
-
-    """Base class for FetchHandler implementations in fetch plugins"""
-
-    def can_handle(self, source):
-        """Returns True if the source can be handled. Otherwise returns
-        a string explaining why it cannot"""
-        return "Wrong source type"
-
-    def install(self, source):
-        """Try to download and unpack the source. Return the path to the
-        unpacked files or raise UnhandledSource."""
-        raise UnhandledSource("Wrong source type {}".format(source))
-
-    def parse_url(self, url):
-        return urlparse(url)
-
-    def base_url(self, url):
-        """Return url without querystring or fragment"""
-        parts = list(self.parse_url(url))
-        parts[4:] = ['' for i in parts[4:]]
-        return urlunparse(parts)
-
-
-def filter_installed_packages(packages):
-    """Returns a list of packages that require installation"""
-    apt_pkg.init()
-
-    # Tell apt to build an in-memory cache to prevent race conditions (if
-    # another process is already building the cache).
-    apt_pkg.config.set("Dir::Cache::pkgcache", "")
-
-    cache = apt_pkg.Cache()
-    _pkgs = []
-    for package in packages:
-        try:
-            p = cache[package]
-            p.current_ver or _pkgs.append(package)
-        except KeyError:
-            log('Package {} has no installation candidate.'.format(package),
-                level='WARNING')
-            _pkgs.append(package)
-    return _pkgs
-
-
-def apt_install(packages, options=None, fatal=False):
-    """Install one or more packages"""
-    if options is None:
-        options = ['--option=Dpkg::Options::=--force-confold']
-
-    cmd = ['apt-get', '--assume-yes']
-    cmd.extend(options)
-    cmd.append('install')
-    if isinstance(packages, basestring):
-        cmd.append(packages)
-    else:
-        cmd.extend(packages)
-    log("Installing {} with options: {}".format(packages,
-                                                options))
-    _run_apt_command(cmd, fatal)
-
-
-def apt_upgrade(options=None, fatal=False, dist=False):
-    """Upgrade all packages"""
-    if options is None:
-        options = ['--option=Dpkg::Options::=--force-confold']
-
-    cmd = ['apt-get', '--assume-yes']
-    cmd.extend(options)
-    if dist:
-        cmd.append('dist-upgrade')
-    else:
-        cmd.append('upgrade')
-    log("Upgrading with options: {}".format(options))
-    _run_apt_command(cmd, fatal)
-
-
-def apt_update(fatal=False):
-    """Update local apt cache"""
-    cmd = ['apt-get', 'update']
-    _run_apt_command(cmd, fatal)
-
-
-def apt_purge(packages, fatal=False):
-    """Purge one or more packages"""
-    cmd = ['apt-get', '--assume-yes', 'purge']
-    if isinstance(packages, basestring):
-        cmd.append(packages)
-    else:
-        cmd.extend(packages)
-    log("Purging {}".format(packages))
-    _run_apt_command(cmd, fatal)
-
-
-def apt_hold(packages, fatal=False):
-    """Hold one or more packages"""
-    cmd = ['apt-mark', 'hold']
-    if isinstance(packages, basestring):
-        cmd.append(packages)
-    else:
-        cmd.extend(packages)
-    log("Holding {}".format(packages))
-
-    if fatal:
-        subprocess.check_call(cmd)
-    else:
-        subprocess.call(cmd)
-
-
-def add_source(source, key=None):
-    if source is None:
-        log('Source is not present. Skipping')
-        return
-
-    if (source.startswith('ppa:') or
-        source.startswith('http') or
-        source.startswith('deb ') or
-            source.startswith('cloud-archive:')):
-        subprocess.check_call(['add-apt-repository', '--yes', source])
-    elif source.startswith('cloud:'):
-        apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
-                    fatal=True)
-        pocket = source.split(':')[-1]
-        if pocket not in CLOUD_ARCHIVE_POCKETS:
-            raise SourceConfigError(
-                'Unsupported cloud: source option %s' %
-                pocket)
-        actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
-        with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
-            apt.write(CLOUD_ARCHIVE.format(actual_pocket))
-    elif source == 'proposed':
-        release = lsb_release()['DISTRIB_CODENAME']
-        with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
-            apt.write(PROPOSED_POCKET.format(release))
-    if key:
-        subprocess.check_call(['apt-key', 'adv', '--keyserver',
-                               'hkp://keyserver.ubuntu.com:80', '--recv',
-                               key])
-
-
-def configure_sources(update=False,
-                      sources_var='install_sources',
-                      keys_var='install_keys'):
-    """
-    Configure multiple sources from charm configuration
-
-    Example config:
-        install_sources:
-          - "ppa:foo"
-          - "http://example.com/repo precise main"
-        install_keys:
-          - null
-          - "a1b2c3d4"
-
-    Note that 'null' (a.k.a. None) should not be quoted.
-    """
-    sources = safe_load(config(sources_var))
-    keys = config(keys_var)
-    if keys is not None:
-        keys = safe_load(keys)
-    if isinstance(sources, basestring) and (
-            keys is None or isinstance(keys, basestring)):
-        add_source(sources, keys)
-    else:
-        if not len(sources) == len(keys):
-            msg = 'Install sources and keys lists are different lengths'
-            raise SourceConfigError(msg)
-        for src_num in range(len(sources)):
-            add_source(sources[src_num], keys[src_num])
-    if update:
-        apt_update(fatal=True)
-
-
-def install_remote(source):
-    """
-    Install a file tree from a remote source
-
-    The specified source should be a url of the form:
-        scheme://[host]/path[#[option=value][&...]]
-
-    Schemes supported are based on this module's submodules
-    Options supported are submodule-specific"""
-    # We ONLY check for True here because can_handle may return a string
-    # explaining why it can't handle a given source.
-    handlers = [h for h in plugins() if h.can_handle(source) is True]
-    installed_to = None
-    for handler in handlers:
-        try:
-            installed_to = handler.install(source)
-        except UnhandledSource:
-            pass
-    if not installed_to:
-        raise UnhandledSource("No handler found for source {}".format(source))
-    return installed_to
-
-
-def install_from_config(config_var_name):
-    charm_config = config()
-    source = charm_config[config_var_name]
-    return install_remote(source)
-
-
-def plugins(fetch_handlers=None):
-    if not fetch_handlers:
-        fetch_handlers = FETCH_HANDLERS
-    plugin_list = []
-    for handler_name in fetch_handlers:
-        package, classname = handler_name.rsplit('.', 1)
-        try:
-            handler_class = getattr(
-                importlib.import_module(package),
-                classname)
-            plugin_list.append(handler_class())
-        except (ImportError, AttributeError):
-            # Skip missing plugins so that they can be omitted from
-            # installation if desired
-            log("FetchHandler {} not found, skipping plugin".format(
-                handler_name))
-    return plugin_list
-
-
-def _run_apt_command(cmd, fatal=False):
-    """
-    Run an APT command, checking the exit status and retrying while the apt
-    lock is held if the fatal flag is set to True.
-
-    :param cmd: list: The apt command to run.
-    :param fatal: bool: Whether the command's exit status should be checked
-        and the command retried while the apt lock is unavailable.
-    """
-    env = os.environ.copy()
-
-    if 'DEBIAN_FRONTEND' not in env:
-        env['DEBIAN_FRONTEND'] = 'noninteractive'
-
-    if fatal:
-        retry_count = 0
-        result = None
-
-        # If the command is considered "fatal", we need to retry if the apt
-        # lock was not acquired.
-
-        while result is None or result == APT_NO_LOCK:
-            try:
-                result = subprocess.check_call(cmd, env=env)
-            except subprocess.CalledProcessError, e:
-                retry_count = retry_count + 1
-                if retry_count > APT_NO_LOCK_RETRY_COUNT:
-                    raise
-                result = e.returncode
-                log("Couldn't acquire DPKG lock. Will retry in {} seconds."
-                    "".format(APT_NO_LOCK_RETRY_DELAY))
-                time.sleep(APT_NO_LOCK_RETRY_DELAY)
-
-    else:
-        subprocess.call(cmd, env=env)

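For context, the fetch helpers removed above were typically combined in an install hook roughly as follows; this is an illustrative sketch that uses only functions defined in the deleted module, with placeholder PPA and package names.

    # Illustrative sketch; the PPA and package names are placeholders.
    from charmhelpers.fetch import (
        add_source,
        apt_update,
        apt_install,
        filter_installed_packages,
    )

    add_source('ppa:juju/stable')        # register an extra APT source
    apt_update(fatal=True)               # refresh the package cache
    apt_install(filter_installed_packages(['python-pip', 'bzr']),
                fatal=True)              # install only the missing packages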
=== removed file 'hooks/charmhelpers/fetch/archiveurl.py'
--- hooks/charmhelpers/fetch/archiveurl.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/fetch/archiveurl.py	1970-01-01 00:00:00 +0000
@@ -1,63 +0,0 @@
-import os
-import urllib2
-import urlparse
-
-from charmhelpers.fetch import (
-    BaseFetchHandler,
-    UnhandledSource
-)
-from charmhelpers.payload.archive import (
-    get_archive_handler,
-    extract,
-)
-from charmhelpers.core.host import mkdir
-
-
-class ArchiveUrlFetchHandler(BaseFetchHandler):
-    """Handler for archives via generic URLs"""
-    def can_handle(self, source):
-        url_parts = self.parse_url(source)
-        if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
-            return "Wrong source type"
-        if get_archive_handler(self.base_url(source)):
-            return True
-        return False
-
-    def download(self, source, dest):
-        # propagate all exceptions
-        # URLError, OSError, etc
-        proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
-        if proto in ('http', 'https'):
-            auth, barehost = urllib2.splituser(netloc)
-            if auth is not None:
-                source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
-                username, password = urllib2.splitpasswd(auth)
-                passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
-                # Realm is set to None in add_password to force the username and password
-                # to be used whatever the realm
-                passman.add_password(None, source, username, password)
-                authhandler = urllib2.HTTPBasicAuthHandler(passman)
-                opener = urllib2.build_opener(authhandler)
-                urllib2.install_opener(opener)
-        response = urllib2.urlopen(source)
-        try:
-            with open(dest, 'w') as dest_file:
-                dest_file.write(response.read())
-        except Exception as e:
-            if os.path.isfile(dest):
-                os.unlink(dest)
-            raise e
-
-    def install(self, source):
-        url_parts = self.parse_url(source)
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
-        if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0755)
-        dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
-        try:
-            self.download(source, dld_file)
-        except urllib2.URLError as e:
-            raise UnhandledSource(e.reason)
-        except OSError as e:
-            raise UnhandledSource(e.strerror)
-        return extract(dld_file)

=== removed file 'hooks/charmhelpers/fetch/bzrurl.py'
--- hooks/charmhelpers/fetch/bzrurl.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/fetch/bzrurl.py	1970-01-01 00:00:00 +0000
@@ -1,50 +0,0 @@
-import os
-from charmhelpers.fetch import (
-    BaseFetchHandler,
-    UnhandledSource
-)
-from charmhelpers.core.host import mkdir
-
-try:
-    from bzrlib.branch import Branch
-except ImportError:
-    from charmhelpers.fetch import apt_install
-    apt_install("python-bzrlib")
-    from bzrlib.branch import Branch
-
-
-class BzrUrlFetchHandler(BaseFetchHandler):
-    """Handler for bazaar branches via generic and lp URLs"""
-    def can_handle(self, source):
-        url_parts = self.parse_url(source)
-        if url_parts.scheme not in ('bzr+ssh', 'lp'):
-            return False
-        else:
-            return True
-
-    def branch(self, source, dest):
-        url_parts = self.parse_url(source)
-        # If we use lp:branchname scheme we need to load plugins
-        if not self.can_handle(source):
-            raise UnhandledSource("Cannot handle {}".format(source))
-        if url_parts.scheme == "lp":
-            from bzrlib.plugin import load_plugins
-            load_plugins()
-        try:
-            remote_branch = Branch.open(source)
-            remote_branch.bzrdir.sprout(dest).open_branch()
-        except Exception as e:
-            raise e
-
-    def install(self, source):
-        url_parts = self.parse_url(source)
-        branch_name = url_parts.path.strip("/").split("/")[-1]
-        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
-                                branch_name)
-        if not os.path.exists(dest_dir):
-            mkdir(dest_dir, perms=0755)
-        try:
-            self.branch(source, dest_dir)
-        except OSError as e:
-            raise UnhandledSource(e.strerror)
-        return dest_dir

=== removed directory 'hooks/charmhelpers/lib'
=== removed file 'hooks/charmhelpers/lib/__init__.py'
=== removed file 'hooks/charmhelpers/lib/ceph_utils.py'
--- hooks/charmhelpers/lib/ceph_utils.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/lib/ceph_utils.py	1970-01-01 00:00:00 +0000
@@ -1,315 +0,0 @@
-#
-# Copyright 2012 Canonical Ltd.
-#
-# This file is sourced from lp:openstack-charm-helpers
-#
-# Authors:
-#  James Page <james.page@xxxxxxxxxx>
-#  Adam Gandelman <adamg@xxxxxxxxxx>
-#
-
-import commands
-import json
-import subprocess
-import os
-import shutil
-import time
-import lib.utils as utils
-
-KEYRING = '/etc/ceph/ceph.client.%s.keyring'
-KEYFILE = '/etc/ceph/ceph.client.%s.key'
-
-CEPH_CONF = """[global]
- auth supported = %(auth)s
- keyring = %(keyring)s
- mon host = %(mon_hosts)s
- log to syslog = %(use_syslog)s
- err to syslog = %(use_syslog)s
- clog to syslog = %(use_syslog)s
-"""
-
-
-def execute(cmd):
-    subprocess.check_call(cmd)
-
-
-def execute_shell(cmd):
-    subprocess.check_call(cmd, shell=True)
-
-
-def install():
-    ceph_dir = "/etc/ceph"
-    if not os.path.isdir(ceph_dir):
-        os.mkdir(ceph_dir)
-    utils.install('ceph-common')
-
-
-def rbd_exists(service, pool, rbd_img):
-    (rc, out) = commands.getstatusoutput('rbd list --id %s --pool %s' %
-                                         (service, pool))
-    return rbd_img in out
-
-
-def create_rbd_image(service, pool, image, sizemb):
-    cmd = [
-        'rbd',
-        'create',
-        image,
-        '--size',
-        str(sizemb),
-        '--id',
-        service,
-        '--pool',
-        pool]
-    execute(cmd)
-
-
-def pool_exists(service, name):
-    (rc, out) = commands.getstatusoutput("rados --id %s lspools" % service)
-    return name in out
-
-
-def ceph_version():
-    ''' Retrieve the local version of ceph '''
-    if os.path.exists('/usr/bin/ceph'):
-        cmd = ['ceph', '-v']
-        output = subprocess.check_output(cmd)
-        output = output.split()
-        if len(output) > 3:
-            return output[2]
-        else:
-            return None
-    else:
-        return None
-
-
-def get_osds(service):
-    '''
-    Return a list of all Ceph Object Storage Daemons
-    currently in the cluster
-    '''
-    version = ceph_version()
-    if version and version >= '0.56':
-        cmd = ['ceph', '--id', service, 'osd', 'ls', '--format=json']
-        return json.loads(subprocess.check_output(cmd))
-    else:
-        return None
-
-
-def create_pool(service, name, replicas=2):
-    ''' Create a new RADOS pool '''
-    if pool_exists(service, name):
-        utils.juju_log('WARNING',
-                       "Ceph pool {} already exists, "
-                       "skipping creation".format(name))
-        return
-
-    osds = get_osds(service)
-    if osds:
-        pgnum = (len(osds) * 100 / replicas)
-    else:
-        # NOTE(james-page): Default to 200 for older ceph versions
-        # which don't support OSD query from cli
-        pgnum = 200
-
-    cmd = [
-        'ceph', '--id', service,
-        'osd', 'pool', 'create',
-        name, str(pgnum)
-    ]
-    subprocess.check_call(cmd)
-    cmd = [
-        'ceph', '--id', service,
-        'osd', 'pool', 'set', name,
-        'size', str(replicas)
-    ]
-    subprocess.check_call(cmd)
-
-
-def keyfile_path(service):
-    return KEYFILE % service
-
-
-def keyring_path(service):
-    return KEYRING % service
-
-
-def create_keyring(service, key):
-    keyring = keyring_path(service)
-    if os.path.exists(keyring):
-        utils.juju_log('INFO', 'ceph: Keyring exists at %s.' % keyring)
-    cmd = [
-        'ceph-authtool',
-        keyring,
-        '--create-keyring',
-        '--name=client.%s' % service,
-        '--add-key=%s' % key]
-    execute(cmd)
-    utils.juju_log('INFO', 'ceph: Created new ring at %s.' % keyring)
-
-
-def create_key_file(service, key):
-    # create a file containing the key
-    keyfile = keyfile_path(service)
-    if os.path.exists(keyfile):
-        utils.juju_log('INFO', 'ceph: Keyfile exists at %s.' % keyfile)
-    fd = open(keyfile, 'w')
-    fd.write(key)
-    fd.close()
-    utils.juju_log('INFO', 'ceph: Created new keyfile at %s.' % keyfile)
-
-
-def get_ceph_nodes():
-    hosts = []
-    for r_id in utils.relation_ids('ceph'):
-        for unit in utils.relation_list(r_id):
-            hosts.append(utils.relation_get('private-address',
-                                            unit=unit, rid=r_id))
-    return hosts
-
-
-def configure(service, key, auth, use_syslog):
-    create_keyring(service, key)
-    create_key_file(service, key)
-    hosts = get_ceph_nodes()
-    mon_hosts = ",".join(map(str, hosts))
-    keyring = keyring_path(service)
-    with open('/etc/ceph/ceph.conf', 'w') as ceph_conf:
-        ceph_conf.write(CEPH_CONF % locals())
-    modprobe_kernel_module('rbd')
-
-
-def image_mapped(image_name):
-    (rc, out) = commands.getstatusoutput('rbd showmapped')
-    return image_name in out
-
-
-def map_block_storage(service, pool, image):
-    cmd = [
-        'rbd',
-        'map',
-        '%s/%s' % (pool, image),
-        '--user',
-        service,
-        '--secret',
-        keyfile_path(service)]
-    execute(cmd)
-
-
-def filesystem_mounted(fs):
-    return subprocess.call(['grep', '-wqs', fs, '/proc/mounts']) == 0
-
-
-def make_filesystem(blk_device, fstype='ext4'):
-    count = 0
-    e_noent = os.errno.ENOENT
-    while not os.path.exists(blk_device):
-        if count >= 10:
-            utils.juju_log('ERROR', 'ceph: gave up waiting on block '
-                                    'device %s' % blk_device)
-            raise IOError(e_noent, os.strerror(e_noent), blk_device)
-        utils.juju_log('INFO', 'ceph: waiting for block device %s '
-                               'to appear' % blk_device)
-        count += 1
-        time.sleep(1)
-    else:
-        utils.juju_log('INFO', 'ceph: Formatting block device %s '
-                               'as filesystem %s.' % (blk_device, fstype))
-        execute(['mkfs', '-t', fstype, blk_device])
-
-
-def place_data_on_ceph(service, blk_device, data_src_dst, fstype='ext4'):
-    # mount block device into /mnt
-    cmd = ['mount', '-t', fstype, blk_device, '/mnt']
-    execute(cmd)
-
-    # copy data to /mnt
-    try:
-        copy_files(data_src_dst, '/mnt')
-    except:
-        pass
-
-    # umount block device
-    cmd = ['umount', '/mnt']
-    execute(cmd)
-
-    _dir = os.stat(data_src_dst)
-    uid = _dir.st_uid
-    gid = _dir.st_gid
-
-    # re-mount where the data should originally be
-    cmd = ['mount', '-t', fstype, blk_device, data_src_dst]
-    execute(cmd)
-
-    # ensure original ownership of new mount.
-    cmd = ['chown', '-R', '%s:%s' % (uid, gid), data_src_dst]
-    execute(cmd)
-
-
-# TODO: re-use
-def modprobe_kernel_module(module):
-    utils.juju_log('INFO', 'Loading kernel module')
-    cmd = ['modprobe', module]
-    execute(cmd)
-    cmd = 'echo %s >> /etc/modules' % module
-    execute_shell(cmd)
-
-
-def copy_files(src, dst, symlinks=False, ignore=None):
-    for item in os.listdir(src):
-        s = os.path.join(src, item)
-        d = os.path.join(dst, item)
-        if os.path.isdir(s):
-            shutil.copytree(s, d, symlinks, ignore)
-        else:
-            shutil.copy2(s, d)
-
-
-def ensure_ceph_storage(service, pool, rbd_img, sizemb, mount_point,
-                        blk_device, fstype, system_services=[],
-                        rbd_pool_replicas=2):
-    """
-    To be called from the current cluster leader.
-    Ensures given pool and RBD image exists, is mapped to a block device,
-    and the device is formatted and mounted at the given mount_point.
-
-    If formatting a device for the first time, data existing at mount_point
-    will be migrated to the RBD device before being remounted.
-
-    All services listed in system_services will be stopped prior to data
-    migration and restarted when complete.
-    """
-    # Ensure pool, RBD image, RBD mappings are in place.
-    if not pool_exists(service, pool):
-        utils.juju_log('INFO', 'ceph: Creating new pool %s.' % pool)
-        create_pool(service, pool, replicas=rbd_pool_replicas)
-
-    if not rbd_exists(service, pool, rbd_img):
-        utils.juju_log('INFO', 'ceph: Creating RBD image (%s).' % rbd_img)
-        create_rbd_image(service, pool, rbd_img, sizemb)
-
-    if not image_mapped(rbd_img):
-        utils.juju_log('INFO', 'ceph: Mapping RBD Image as a Block Device.')
-        map_block_storage(service, pool, rbd_img)
-
-    # make file system
-    # TODO: What happens if for whatever reason this is run again and
-    # the data is already in the rbd device and/or is mounted??
-    # When it is mounted already, it will fail to make the fs
-    # XXX: This is really sketchy!  Need to at least add an fstab entry
-    #      otherwise this hook will blow away existing data if its executed
-    #      after a reboot.
-    if not filesystem_mounted(mount_point):
-        make_filesystem(blk_device, fstype)
-
-        for svc in system_services:
-            if utils.running(svc):
-                utils.juju_log('INFO',
-                               'Stopping services %s prior to migrating '
-                               'data' % svc)
-                utils.stop(svc)
-
-        place_data_on_ceph(service, blk_device, mount_point, fstype)
-
-        for svc in system_services:
-            utils.start(svc)

=== removed file 'hooks/charmhelpers/lib/cluster_utils.py'
--- hooks/charmhelpers/lib/cluster_utils.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/lib/cluster_utils.py	1970-01-01 00:00:00 +0000
@@ -1,128 +0,0 @@
-#
-# Copyright 2012 Canonical Ltd.
-#
-# This file is sourced from lp:openstack-charm-helpers
-#
-# Authors:
-#  James Page <james.page@xxxxxxxxxx>
-#  Adam Gandelman <adamg@xxxxxxxxxx>
-#
-
-from utils import (
-    juju_log,
-    relation_ids,
-    relation_list,
-    relation_get,
-    get_unit_hostname,
-    config_get)
-import subprocess
-import os
-
-
-def is_clustered():
-    for r_id in (relation_ids('ha') or []):
-        for unit in (relation_list(r_id) or []):
-            clustered = relation_get('clustered',
-                                     rid=r_id,
-                                     unit=unit)
-            if clustered:
-                return True
-    return False
-
-
-def is_leader(resource):
-    cmd = [
-        "crm", "resource",
-        "show", resource]
-    try:
-        status = subprocess.check_output(cmd)
-    except subprocess.CalledProcessError:
-        return False
-    else:
-        if get_unit_hostname() in status:
-            return True
-        else:
-            return False
-
-
-def peer_units():
-    peers = []
-    for r_id in (relation_ids('cluster') or []):
-        for unit in (relation_list(r_id) or []):
-            peers.append(unit)
-    return peers
-
-
-def oldest_peer(peers):
-    local_unit_no = os.getenv('JUJU_UNIT_NAME').split('/')[1]
-    for peer in peers:
-        remote_unit_no = peer.split('/')[1]
-        if remote_unit_no < local_unit_no:
-            return False
-    return True
-
-
-def eligible_leader(resource):
-    if is_clustered():
-        if not is_leader(resource):
-            juju_log('INFO', 'Deferring action to CRM leader.')
-            return False
-    else:
-        peers = peer_units()
-        if peers and not oldest_peer(peers):
-            juju_log('INFO', 'Deferring action to oldest service unit.')
-            return False
-    return True
-
-
-def https():
-    '''
-    Determines whether enough data has been provided in configuration
-    or relation data to configure HTTPS.
-
-    returns: boolean
-    '''
-    if config_get('use-https') == "yes":
-        return True
-    if config_get('ssl_cert') and config_get('ssl_key'):
-        return True
-    for r_id in relation_ids('identity-service'):
-        for unit in relation_list(r_id):
-            if (relation_get('https_keystone', rid=r_id, unit=unit) and
-                    relation_get('ssl_cert', rid=r_id, unit=unit) and
-                    relation_get('ssl_key', rid=r_id, unit=unit) and
-                    relation_get('ca_cert', rid=r_id, unit=unit)):
-                return True
-    return False
-
-
-def determine_api_port(public_port):
-    '''
-    Determine correct API server listening port based on
-    existence of HTTPS reverse proxy and/or haproxy.
-
-    public_port: int: standard public port for given service
-
-    returns: int: the correct listening port for the API service
-    '''
-    i = 0
-    if len(peer_units()) > 0 or is_clustered():
-        i += 1
-    if https():
-        i += 1
-    return public_port - (i * 10)
-
-
-def determine_haproxy_port(public_port):
-    '''
-    Determine the correct HAProxy listening port based on the public port
-    and the existence of an HTTPS reverse proxy.
-
-    public_port: int: standard public port for given service
-
-    returns: int: the correct listening port for the HAProxy service
-    '''
-    i = 0
-    if https():
-        i += 1
-    return public_port - (i * 10)

=== removed file 'hooks/charmhelpers/lib/utils.py'
--- hooks/charmhelpers/lib/utils.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/lib/utils.py	1970-01-01 00:00:00 +0000
@@ -1,221 +0,0 @@
-#
-# Copyright 2012 Canonical Ltd.
-#
-# This file is sourced from lp:openstack-charm-helpers
-#
-# Authors:
-#  James Page <james.page@xxxxxxxxxx>
-#  Paul Collins <paul.collins@xxxxxxxxxxxxx>
-#  Adam Gandelman <adamg@xxxxxxxxxx>
-#
-
-import json
-import os
-import subprocess
-import socket
-import sys
-
-
-def do_hooks(hooks):
-    hook = os.path.basename(sys.argv[0])
-
-    try:
-        hook_func = hooks[hook]
-    except KeyError:
-        juju_log('INFO',
-                 "This charm doesn't know how to handle '{}'.".format(hook))
-    else:
-        hook_func()
-
-
-def install(*pkgs):
-    cmd = [
-        'apt-get',
-        '-y',
-        'install']
-    for pkg in pkgs:
-        cmd.append(pkg)
-    subprocess.check_call(cmd)
-
-TEMPLATES_DIR = 'templates'
-
-try:
-    import jinja2
-except ImportError:
-    install('python-jinja2')
-    import jinja2
-
-try:
-    import dns.resolver
-except ImportError:
-    install('python-dnspython')
-    import dns.resolver
-
-
-def render_template(template_name, context, template_dir=TEMPLATES_DIR):
-    templates = jinja2.Environment(loader=jinja2.FileSystemLoader(
-                                   template_dir))
-    template = templates.get_template(template_name)
-    return template.render(context)
-
-# Protocols
-TCP = 'TCP'
-UDP = 'UDP'
-
-
-def expose(port, protocol='TCP'):
-    cmd = [
-        'open-port',
-        '{}/{}'.format(port, protocol)]
-    subprocess.check_call(cmd)
-
-
-def juju_log(severity, message):
-    cmd = [
-        'juju-log',
-        '--log-level', severity,
-        message]
-    subprocess.check_call(cmd)
-
-
-def relation_ids(relation):
-    cmd = [
-        'relation-ids',
-        relation]
-    result = str(subprocess.check_output(cmd)).split()
-    if result == "":
-        return None
-    else:
-        return result
-
-
-def relation_list(rid):
-    cmd = [
-        'relation-list',
-        '-r', rid]
-    result = str(subprocess.check_output(cmd)).split()
-    if result == "":
-        return None
-    else:
-        return result
-
-
-def relation_get(attribute, unit=None, rid=None):
-    cmd = [
-        'relation-get']
-    if rid:
-        cmd.append('-r')
-        cmd.append(rid)
-    cmd.append(attribute)
-    if unit:
-        cmd.append(unit)
-    value = subprocess.check_output(cmd).strip()  # IGNORE:E1103
-    if value == "":
-        return None
-    else:
-        return value
-
-
-def relation_set(**kwargs):
-    cmd = [
-        'relation-set']
-    args = []
-    for k, v in kwargs.items():
-        if k == 'rid':
-            if v:
-                cmd.append('-r')
-                cmd.append(v)
-        else:
-            args.append('{}={}'.format(k, v))
-    cmd += args
-    subprocess.check_call(cmd)
-
-
-def unit_get(attribute):
-    cmd = [
-        'unit-get',
-        attribute]
-    value = subprocess.check_output(cmd).strip()  # IGNORE:E1103
-    if value == "":
-        return None
-    else:
-        return value
-
-
-def config_get(attribute):
-    cmd = [
-        'config-get',
-        '--format',
-        'json']
-    out = subprocess.check_output(cmd).strip()  # IGNORE:E1103
-    cfg = json.loads(out)
-
-    try:
-        return cfg[attribute]
-    except KeyError:
-        return None
-
-
-def get_unit_hostname():
-    return socket.gethostname()
-
-
-def get_host_ip(hostname=unit_get('private-address')):
-    try:
-        # Test to see if already an IPv4 address
-        socket.inet_aton(hostname)
-        return hostname
-    except socket.error:
-        answers = dns.resolver.query(hostname, 'A')
-        if answers:
-            return answers[0].address
-    return None
-
-
-def _svc_control(service, action):
-    subprocess.check_call(['service', service, action])
-
-
-def restart(*services):
-    for service in services:
-        _svc_control(service, 'restart')
-
-
-def stop(*services):
-    for service in services:
-        _svc_control(service, 'stop')
-
-
-def start(*services):
-    for service in services:
-        _svc_control(service, 'start')
-
-
-def reload(*services):
-    for service in services:
-        try:
-            _svc_control(service, 'reload')
-        except subprocess.CalledProcessError:
-            # Reload failed - either service does not support reload
-            # or it was not running - restart will fixup most things
-            _svc_control(service, 'restart')
-
-
-def running(service):
-    try:
-        output = subprocess.check_output(['service', service, 'status'])
-    except subprocess.CalledProcessError:
-        return False
-    else:
-        if ("start/running" in output or "is running" in output):
-            return True
-        else:
-            return False
-
-
-def is_relation_made(relation, key='private-address'):
-    for r_id in (relation_ids(relation) or []):
-        for unit in (relation_list(r_id) or []):
-            if relation_get(key, rid=r_id, unit=unit):
-                return True
-    return False

=== removed file 'hooks/charmhelpers/setup.py'
--- hooks/charmhelpers/setup.py	2014-12-14 21:08:45 +0000
+++ hooks/charmhelpers/setup.py	1970-01-01 00:00:00 +0000
@@ -1,12 +0,0 @@
-#!/usr/bin/env python
-
-from distutils.core import setup
-
-setup(name='charmhelpers',
-     version='1.0',
-     description='this is dumb',
-     author='nobody',
-     author_email='dummy@amulet',
-     url='http://google.com',
-     packages=[],
-)

=== modified file 'hooks/common.py' (properties changed: -x to +x)
--- hooks/common.py	2014-12-14 21:08:45 +0000
+++ hooks/common.py	2015-04-02 19:14:51 +0000
@@ -1,18 +1,86 @@
-#!/usr/bin/python
-import os
-import subprocess
-from charmhelpers.core.hookenv import log, unit_get
-import shlex
-
-hue_home = os.path.join(os.path.sep, "usr", "share", "hue")
-hue_ini = os.path.join(os.path.sep, hue_home, "desktop", "conf","hue.ini")
-stop_hue_script = os.path.join(os.path.sep, os.environ['CHARM_DIR'], "files", "scripts", "stop_hue.sh")
-
-def restart_hue():
-    # stop_hue_script is a workaround to shutdown "hue runserver" - needs a better solution
-    # from hue community - sent an email 12/13/2014 - amir sanjar
-    subprocess.call(stop_hue_script)
-    ip = unit_get('private-address')
-    hue_exe = os.path.join(os.path.sep, hue_home, "build", "env", "bin", "hue")
-    cmd = hue_exe+" runserver "+ip+":8000" 
-    subprocess.Popen(shlex.split(cmd),stdout=subprocess.PIPE,stderr=subprocess.PIPE)
\ No newline at end of file
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""
+Common implementation for all hooks.
+"""
+
+import jujuresources
+
+
+def bootstrap_resources():
+    """
+    Install required resources defined in resources.yaml
+    """
+    mirror_url = jujuresources.config_get('resources_mirror')
+    if not jujuresources.fetch(mirror_url=mirror_url):
+        jujuresources.juju_log('Resources unavailable; manual intervention required', 'ERROR')
+        return False
+    jujuresources.install(['pathlib', 'pyaml', 'six', 'charmhelpers'])
+    return True
+
+
+def manage():
+    if not bootstrap_resources():
+        # defer until resources are available, since charmhelpers, and thus
+        # the framework, are required (will require manual intervention)
+        return
+
+    from charmhelpers.core import ch_framework
+    from charmhelpers.contrib import bigdata
+    import callbacks
+
+    hue_reqs = ['groups', 'users', 'dirs', 'packages', 'ports']
+    dist_config = bigdata.utils.DistConfig(filename='dist.yaml',
+                                           required_keys=hue_reqs)
+    hue = callbacks.Hue(dist_config)
+    manager = ch_framework.Manager([
+        {
+            'name': 'hue',
+            'provides': [],
+            'requires': [
+                bigdata.relations.NameNode(),
+                bigdata.relations.ResourceManager(),
+                hue.verify_resources,
+            ],
+            'callbacks': [
+                hue.install,
+                hue.configure_hue,
+                hue.start,
+                ch_framework.helpers.open_ports(dist_config.exposed_ports('hue')),
+            ],
+            'cleanup': [
+                ch_framework.helpers.close_ports(dist_config.exposed_ports('hue')),
+                hue.stop,
+                hue.cleanup,
+            ],
+        },
+        {
+            'name': 'hue-hive',
+            'provides': [],
+            'requires': [
+                hue.is_installed,
+                bigdata.relations.Hive,
+            ],
+            'callbacks': [
+                hue.configure_for_hive,
+                hue.stop,
+                hue.start,
+            ],
+            'cleanup': [],
+        },
+    ])
+    manager.manage()
+
+
+if __name__ == '__main__':
+    manage()

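The Manager blocks in the new hooks/common.py gate their callbacks on a list of requirements and fall back to their cleanup callbacks when those requirements are not met. The class below is a hypothetical stand-in written purely to illustrate that flow; it is not the ch_framework or charmhelpers API.

    # Hypothetical stand-in for illustration; not the ch_framework API.
    class MiniManager(object):
        def __init__(self, services):
            self.services = services

        def manage(self):
            for svc in self.services:
                # 'requires' entries are callables that report readiness.
                if all(req() for req in svc.get('requires', [])):
                    for callback in svc.get('callbacks', []):
                        callback()
                else:
                    for cleanup in svc.get('cleanup', []):
                        cleanup()

    def namenode_ready():
        return True   # placeholder; the charm uses relation classes here

    def start_hue():
        print('install, configure, and start hue')

    def stop_hue():
        print('stop hue and close its ports')

    MiniManager([{
        'name': 'hue',
        'requires': [namenode_ready],
        'callbacks': [start_hue],
        'cleanup': [stop_hue],
    }]).manage()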
=== modified file 'hooks/config-changed'
--- hooks/config-changed	2014-12-14 21:08:45 +0000
+++ hooks/config-changed	2015-04-02 19:14:51 +0000
@@ -1,3 +1,15 @@
-#!/usr/bin/python
-#import services
-#services.manage()
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+common.manage()

=== modified file 'hooks/hive-relation-changed'
--- hooks/hive-relation-changed	2015-03-05 05:07:52 +0000
+++ hooks/hive-relation-changed	2015-04-02 19:14:51 +0000
@@ -1,29 +1,15 @@
-#!/usr/bin/python
-
-import os
-import sys
-from charmhelpers.core.hookenv import relation_get
-from bdutils import fileLineSearchReplace, fconfigured
-from common import hue_ini, restart_hue
-sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
-from charmhelpers.core import (
-    hookenv,
-    host,
-)
-
-hooks = hookenv.Hooks()
-log = hookenv.log
-@hooks.hook('hive-relation-changed')
-def hive_relation_changed():
-    log("hue ==> hive-relation-changed")
-    hive_ip = relation_get('private-address')
-    log("hue ==> hive IP={}".format(hive_ip),"INFO")
-    fileLineSearchReplace(hue_ini, "## hive_server_host=localhost" , \
-                          "hive_server_host="+hive_ip)
-    
-    restart_hue()
-    
-    
-if __name__ == "__main__":
-    # execute a hook based on the name the program is called by
-    hooks.execute(sys.argv)
\ No newline at end of file
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+common.manage()

=== modified file 'hooks/install'
--- hooks/install	2015-03-05 05:07:52 +0000
+++ hooks/install	2015-04-02 19:14:51 +0000
@@ -1,71 +1,17 @@
 #!/usr/bin/python
-import platform
-import os
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 import setup
-import subprocess
-import shutil
-
 setup.pre_install()
-from charmhelpers import fetch
-#from charmhelpers.fetch import apt_install, apt_update,  apt_purge
-import tarfile
-import sys
-from common import stop_hue_script
-
-from charmhelpers.core.hookenv import log, open_port
-from charmhelpers.lib.utils import config_get
-from bdutils import wgetPkg
-
-
-def install():
-    log('Installing apache-hue')
-    cpu = platform.machine()
-    hue_version = config_get("hue-version")
-    # for development only - add the packages to lxc ubuntu image template
-    home = os.path.join(os.path.sep, "home","ubuntu")
-    img_src = os.path.join(os.path.sep, home,"images")
-    hue_filename = "hue-{}.tgz".format(hue_version)
-    hue_img_src = os.path.join(os.path.sep, img_src, hue_filename )
-    if not os.path.isdir(img_src):
-        hue_url = "https://dl.dropboxusercontent.com/u/730827/hue/releases/{v}/{n}".format(v=hue_version, n=hue_filename)
-        os.mkdir(img_src)
-        os.chdir(img_src)
-        wgetPkg(hue_url, hue_img_src)
-
-    
-    build_script = os.path.join(os.path.sep, os.environ['CHARM_DIR'], "files", "scripts", "build.sh")
-    shutil.copy2( stop_hue_script, home)
-    fetch.apt_update()
-    if cpu == "ppc64le":
-        cmd = img_src+"/ibm-java-ppc64le-sdk-7.1-2.0.bin -i silent"
-        subprocess.call(cmd.split())
-    else: 
-        fetch.apt_install(fetch.filter_installed_packages(['openjdk-7-jdk']))
-    packages = [
-                "python2.7-dev",
-                "make",
-                "libsasl2-dev",
-                "libmysqlclient-dev",
-                "libkrb5-dev",
-                "libxml2-dev",
-                "libxslt1-dev",
-                "libsqlite3-dev",
-                "libssl-dev",
-                "libldap2-dev",
-                "python-pip"
-                ]
-    fetch.apt_install(packages)
-    ##### build and install Hue
-    if not tarfile.is_tarfile(hue_img_src):
-        log('Unable to extract Hue Tarball')
-        sys.exit(1)
-    with tarfile.open(hue_img_src) as tball:
-        tball.extractall(home)
-        
-    os.chdir(os.path.join(os.path.sep, home, "hue-{}".format(hue_version)))
-    subprocess.call(build_script)
-    
-    open_port(8000)
-    open_port(8888)
-if __name__ == "__main__":
-    install()
+
+import common
+common.manage()

=== added file 'hooks/namenode-relation-changed'
--- hooks/namenode-relation-changed	1970-01-01 00:00:00 +0000
+++ hooks/namenode-relation-changed	2015-04-02 19:14:51 +0000
@@ -0,0 +1,15 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+common.manage()

=== removed file 'hooks/namenode-relation-changed'
--- hooks/namenode-relation-changed	2014-12-14 21:08:45 +0000
+++ hooks/namenode-relation-changed	1970-01-01 00:00:00 +0000
@@ -1,44 +0,0 @@
-#!/usr/bin/python
-import os
-import sys
-import subprocess
-from charmhelpers.core.hookenv import relation_get
-from bdutils import fileLineSearchReplace, fconfigured
-from common import hue_ini, restart_hue
-
-sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
-from charmhelpers.core import (
-    hookenv,
-    host,
-)
-
-hooks = hookenv.Hooks()
-log = hookenv.log
-
-SERVICE = 'apache-hue'
-@hooks.hook('namenode-relation-changed')
-def namenode_relation_changed():
-    import time
-    ready  = relation_get("ready")
-    if ready != "true":
-        log ("Hue  ==> namenide not ready","INFO")
-        sys.exit(0)
-    # do nothing if namenode connection is configured 
-    if fconfigured("hadoop_nn"):
-        return    
-    log('hue ==> namenode-relation-changed')
-    namenodeIP = relation_get('private-address')
-    log("hue ==> namenode_IP={}".format(namenodeIP),"INFO")
-    
-    namenode = namenodeIP+":8020"
-    fileLineSearchReplace(hue_ini, "fs_defaultfs=hdfs://localhost:8020", \
-                          "fs_defaultfs=hdfs://"+namenode)
-    fileLineSearchReplace(hue_ini, "## webhdfs_url=http://localhost:50070/webhdfs/v1",\
-                          "webhdfs_url=http://"+namenodeIP+":50070/webhdfs/v1")
-    restart_hue()
-    
-
-
-if __name__ == "__main__":
-    # execute a hook based on the name the program is called by
-    hooks.execute(sys.argv)

=== added file 'hooks/resourcemanager-relation-changed'
--- hooks/resourcemanager-relation-changed	1970-01-01 00:00:00 +0000
+++ hooks/resourcemanager-relation-changed	2015-04-02 19:14:51 +0000
@@ -0,0 +1,15 @@
+#!/usr/bin/env python
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import common
+common.manage()

=== removed file 'hooks/resourcemanager-relation-changed'
--- hooks/resourcemanager-relation-changed	2014-12-14 21:08:45 +0000
+++ hooks/resourcemanager-relation-changed	1970-01-01 00:00:00 +0000
@@ -1,42 +0,0 @@
-#!/usr/bin/python
-
-import os
-import sys
-from charmhelpers.core.hookenv import relation_get
-from bdutils import fileLineSearchReplace, fconfigured
-from common import hue_ini, restart_hue
-sys.path.insert(0, os.path.join(os.environ['CHARM_DIR'], 'lib'))
-from charmhelpers.core import (
-    hookenv,
-    host,
-)
-
-hooks = hookenv.Hooks()
-log = hookenv.log
-@hooks.hook('resourcemanager-relation-changed')
-def resourcemanager_relation_changed():
-    log("hue ==> resourcemanager-relation-changed")
-    ready  = relation_get("ready")
-    if ready != "true":
-        log ("Hue  ==> resourcemanager not ready","INFO")
-        sys.exit(0)
-    # do nothing if resourcemanager connection is configured 
-    if fconfigured("hadoop_rm"):
-        return 
-    resourcemanager_ip = relation_get('private-address')
-    log("hue ==> resourcemanager IP={}".format(resourcemanager_ip),"INFO")
-    fileLineSearchReplace(hue_ini, "## resourcemanager_host=localhost" , \
-                          "resourcemanager_host="+resourcemanager_ip)
-    fileLineSearchReplace(hue_ini, "## resourcemanager_port=8032","resourcemanager_port=8032")
-    fileLineSearchReplace(hue_ini, "## resourcemanager_api_url=http://localhost:8088",\
-                          "resourcemanager_api_url=http://"+resourcemanager_ip+":8088")
-    fileLineSearchReplace(hue_ini, " ## proxy_api_url=http://localhost:8088",\
-                          "proxy_api_url=http://"+resourcemanager_ip+":8088")
-    fileLineSearchReplace(hue_ini, "# history_server_api_url=http://localhost:19888",\
-                          "# history_server_api_url=http://"+resourcemanager_ip+":19888")
-    restart_hue()
-    
-    
-if __name__ == "__main__":
-    # execute a hook based on the name the program is called by
-    hooks.execute(sys.argv)

=== removed file 'hooks/services.py'
--- hooks/services.py	2014-12-12 04:41:42 +0000
+++ hooks/services.py	1970-01-01 00:00:00 +0000
@@ -1,31 +0,0 @@
-#!/usr/bin/python
-
-from charmhelpers.core.services.base import ServiceManager
-from charmhelpers.core.services import helpers
-
-import actions
-
-
-def manage():
-    manager = ServiceManager([
-        {
-            'service': 'apache-hue',
-            'ports': [],  # ports to after start
-            'provided_data': [
-                # context managers for provided relations
-                # e.g.: helpers.HttpRelation()
-            ],
-            'required_data': [
-                # data (contexts) required to start the service
-                # e.g.: helpers.RequiredConfig('domain', 'auth_key'),
-                #       helpers.MysqlRelation(),
-            ],
-            'data_ready': [
-                helpers.render_template(
-                    source='upstart.conf',
-                    target='/etc/init/apache-hue'),
-                actions.log_start,
-            ],
-        },
-    ])
-    manager.manage()

=== modified file 'hooks/setup.py'
--- hooks/setup.py	2014-12-12 04:41:42 +0000
+++ hooks/setup.py	2015-04-02 19:14:51 +0000
@@ -1,17 +1,36 @@
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+import subprocess
+from glob import glob
+
+
 def pre_install():
     """
     Do any setup required before the install hook.
     """
-    install_charmhelpers()
-
-
-def install_charmhelpers():
+    install_pip()
+    install_jujuresources()
+
+
+def install_pip():
+    subprocess.check_call(['apt-get', 'install', '-yq', 'python-pip', 'bzr'])
+
+
+def install_jujuresources():
     """
-    Install the charmhelpers library, if not present.
+    Install the bundled jujuresources library, if not present.
     """
     try:
-        import charmhelpers  # noqa
+        import jujuresources  # noqa
     except ImportError:
-        import subprocess
-        subprocess.check_call(['apt-get', 'install', '-y', 'python-pip'])
-        subprocess.check_call(['pip', 'install', 'charmhelpers'])
+        jr_archive = glob('resources/jujuresources-*.tar.gz')[0]
+        subprocess.check_call(['pip', 'install', jr_archive])

=== removed file 'hooks/start'
--- hooks/start	2014-12-14 21:08:45 +0000
+++ hooks/start	1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
-#!/usr/bin/python
-#import services
-#services.manage()

=== removed file 'hooks/stop'
--- hooks/stop	2014-12-14 21:08:45 +0000
+++ hooks/stop	1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
-#!/usr/bin/python
-#import services
-#services.manage()

=== removed file 'hooks/upgrade-charm'
--- hooks/upgrade-charm	2014-12-12 04:41:42 +0000
+++ hooks/upgrade-charm	1970-01-01 00:00:00 +0000
@@ -1,3 +0,0 @@
-#!/usr/bin/python
-import services
-services.manage()

=== modified file 'metadata.yaml'
--- metadata.yaml	2015-03-05 05:07:52 +0000
+++ metadata.yaml	2015-04-02 19:14:51 +0000
@@ -1,15 +1,14 @@
 name: apache-hue
 summary: Hue is a Web interface for analyzing data with Apache Hadoop
-maintainer: sanjar <amir.sanjar@xxxxxxxxxxxxx>
+maintainer: Amir Sanjar <amir.sanjar@xxxxxxxxxxxxx>
 description: |
- Hue aggregates the most common Apache Hadoop components into a single interface
- and targets the user  experience. Its main goal is to have the users "just use"
- Hadoop without worrying about the underlying complexity or using a command line.
-tags:
-  # Replace "misc" with one or more whitelisted tags from this list:
-  # https://juju.ubuntu.com/docs/authors-charm-metadata.html#charm-metadata
-  - big_data
-subordinate: false
+  Hue aggregates the most common Apache Hadoop components into a single
+  interface and targets the user experience. Its main goal is to have the users
+  "just use" Hadoop without worrying about underlying complexity or using a
+  command line.
+
+  Learn more at http://gethue.com
+tags: ["bigdata", "hadoop", "apache"]
 requires:
   resourcemanager:
     interface: mapred

=== added directory 'resources'
=== added file 'resources.yaml'
--- resources.yaml	1970-01-01 00:00:00 +0000
+++ resources.yaml	2015-04-02 19:14:51 +0000
@@ -0,0 +1,18 @@
+options:
+  output_dir: /home/ubuntu/resources
+resources:
+  pathlib:
+    pypi: path.py>=7.0
+  pyaml:
+    pypi: pyaml
+  six:
+    pypi: six
+  charmhelpers:
+    pypi: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150402180035-8mjz2eh620t1jfox/charmhelpers0.2.2.ta-20150304033309-4fa7ewnosqavnwms-1/charmhelpers-0.2.2.tar.gz
+    hash: 150c86c30056b0d61c3c5be43a92d95fc32d8356b19b6203fdda25c975d56375
+    hash_type: sha256
+optional_resources:
+  hue-x86_64:
+    url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150319213217-x8oypzc2b91kzofg/hue3.7.1.tgz-20150319213159-85pa61gu4qx772i3-1/hue-3.7.1.tgz
+    hash: a921b1baa6598e6e5a8b878043793a06bab2c05a4a637b4895d56301f10d2290
+    hash_type: sha256

=== added file 'resources/jujuresources-0.2.5.tar.gz'
Binary files resources/jujuresources-0.2.5.tar.gz	1970-01-01 00:00:00 +0000 and resources/jujuresources-0.2.5.tar.gz	2015-04-02 19:14:51 +0000 differ
=== removed directory 'templates'
=== removed file 'templates/upstart.conf'
--- templates/upstart.conf	2014-12-12 04:41:42 +0000
+++ templates/upstart.conf	1970-01-01 00:00:00 +0000
@@ -1,13 +0,0 @@
-description "apache-hue"
-author "sanjar <sanjar@sanjar-acer>"
-
-start on runlevel [2345]
-stop on runlevel [016]
-
-respawn
-
-console log
-script
-    echo Fake service; sleeping for an hour...
-    sleep 3600
-end script

=== modified file 'tests/00-setup'
--- tests/00-setup	2014-12-12 04:41:42 +0000
+++ tests/00-setup	2015-04-02 19:14:51 +0000
@@ -1,5 +1,8 @@
 #!/bin/bash
 
-sudo add-apt-repository ppa:juju/stable -y
-sudo apt-get update
-sudo apt-get install amulet python-requests -y
+if ! dpkg -s amulet &> /dev/null; then
+    echo Installing Amulet...
+    sudo add-apt-repository -y ppa:juju/stable
+    sudo apt-get update
+    sudo apt-get -y install amulet
+fi

=== added file 'tests/01-basic-deployment.py'
--- tests/01-basic-deployment.py	1970-01-01 00:00:00 +0000
+++ tests/01-basic-deployment.py	2015-04-02 19:14:51 +0000
@@ -0,0 +1,40 @@
+#!/usr/bin/env python3
+
+import unittest
+import amulet
+
+
+class TestDeploy(unittest.TestCase):
+    """
+    Basic deployment test for Hue.
+
+    This charm cannot do anything useful by itself, so integration testing
+    is done in the bundle.
+    """
+
+    @classmethod
+    def setUpClass(cls):
+        cls.d = amulet.Deployment(series='trusty')
+        cls.d.add('apache-hue')
+        cls.d.setup(timeout=9000)
+        cls.d.sentry.wait()
+        cls.unit = cls.d.sentry.unit['apache-hue/0']
+
+    def test_deploy(self):
+        output, retcode = self.unit.run("pgrep -a java")
+        assert 'ResourceManager' not in output, "ResourceManager should not be started"
+        assert 'JobHistoryServer' not in output, "JobHistoryServer should not be started"
+        assert 'NodeManager' not in output, "NodeManager should not be started"
+        assert 'NameNode' not in output, "NameNode should not be started"
+        assert 'SecondaryNameNode' not in output, "SecondaryNameNode should not be started"
+        assert 'DataNode' not in output, "DataNode should not be started"
+
+    def test_dist_config(self):
+        # test_dist_config.py is run on the deployed unit because it
+        # requires the Juju context to properly validate dist.yaml
+        output, retcode = self.unit.run("tests/remote/test_dist_config.py")
+        self.assertEqual(retcode, 0, 'Remote dist config test failed:\n{}'.format(output))
+
+
+if __name__ == '__main__':
+    unittest.main()

=== removed file 'tests/10-deploy'
--- tests/10-deploy	2014-12-12 04:41:42 +0000
+++ tests/10-deploy	1970-01-01 00:00:00 +0000
@@ -1,51 +0,0 @@
-#!/usr/bin/env python3
-
-import amulet
-import requests
-import unittest
-
-
-class TestDeployment(unittest.TestCase):
-    @classmethod
-    def setUpClass(cls):
-        cls.deployment = amulet.Deployment()
-
-        cls.deployment.add('apache-hue')
-        cls.deployment.expose('apache-hue')
-
-        try:
-            cls.deployment.setup(timeout=900)
-            cls.deployment.sentry.wait()
-        except amulet.helpers.TimeoutError:
-            amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
-        except:
-            raise
-        cls.unit = cls.deployment.sentry.unit['apache-hue/0']
-
-    def test_case(self):
-        # Now you can use self.deployment.sentry.unit[UNIT] to address each of
-        # the units and perform more in-depth steps.  You can also reference
-        # the first unit as self.unit.
-        # There are three test statuses that can be triggered with
-        # amulet.raise_status():
-        #   - amulet.PASS
-        #   - amulet.FAIL
-        #   - amulet.SKIP
-        # Each unit has the following methods:
-        #   - .info - An array of the information of that unit from Juju
-        #   - .file(PATH) - Get the details of a file on that unit
-        #   - .file_contents(PATH) - Get plain text output of PATH file from that unit
-        #   - .directory(PATH) - Get details of directory
-        #   - .directory_contents(PATH) - List files and folders in PATH on that unit
-        #   - .relation(relation, service:rel) - Get relation data from return service
-        #          add tests here to confirm service is up and working properly
-        # For example, to confirm that it has a functioning HTTP server:
-        #     page = requests.get('http://{}'.format(self.unit.info['public-address']))
-        #     page.raise_for_status()
-        # More information on writing Amulet tests can be found at:
-        #     https://juju.ubuntu.com/docs/tools-amulet.html
-        pass
-
-
-if __name__ == '__main__':
-    unittest.main()

=== added directory 'tests/remote'
=== added file 'tests/remote/test_dist_config.py'
--- tests/remote/test_dist_config.py	1970-01-01 00:00:00 +0000
+++ tests/remote/test_dist_config.py	2015-04-02 19:14:51 +0000
@@ -0,0 +1,72 @@
+#!/usr/bin/env python
+
+import grp
+import os
+import pwd
+import unittest
+
+from charmhelpers.contrib import bigdata
+
+
+class TestDistConfig(unittest.TestCase):
+    """
+    Test that the ``dist.yaml`` settings were applied properly, such as users, groups, and dirs.
+
+    This is done as a remote test on the deployed unit rather than a regular
+    test under ``tests/`` because filling in the ``dist.yaml`` requires Juju
+    context (e.g., config).
+    """
+    @classmethod
+    def setUpClass(cls):
+        config = None
+        config_dir = os.environ['JUJU_CHARM_DIR']
+        config_file = 'dist.yaml'
+        if os.path.isfile(os.path.join(config_dir, config_file)):
+            config = os.path.join(config_dir, config_file)
+        if not config:
+            raise IOError('Could not find {} in {}'.format(config_file, config_dir))
+        reqs = ['vendor', 'hadoop_version', 'packages', 'groups', 'users',
+                'dirs', 'ports']
+        cls.dist_config = bigdata.utils.DistConfig(config, reqs)
+
+    def test_groups(self):
+        for name in self.dist_config.groups:
+            try:
+                grp.getgrnam(name)
+            except KeyError:
+                self.fail('Group {} is missing'.format(name))
+
+    def test_users(self):
+        for username, details in self.dist_config.users.items():
+            try:
+                user = pwd.getpwnam(username)
+            except KeyError:
+                self.fail('User {} is missing'.format(username))
+            for groupname in details['groups']:
+                try:
+                    group = grp.getgrnam(groupname)
+                except KeyError:
+                    self.fail('Group {} referenced by user {} does not exist'.format(
+                        groupname, username))
+                if group.gr_gid != user.pw_gid:
+                    self.assertIn(username, group.gr_mem, 'User {} not in group {}'.format(
+                        username, groupname))
+
+    def test_dirs(self):
+        for name, details in self.dist_config.dirs.items():
+            dirpath = self.dist_config.path(name)
+            self.assertTrue(dirpath.isdir(), 'Dir {} is missing'.format(name))
+            stat = dirpath.stat()
+            owner = pwd.getpwuid(stat.st_uid).pw_name
+            group = grp.getgrgid(stat.st_gid).gr_name
+            perms = stat.st_mode & ~0o40000
+            self.assertEqual(owner, details.get('owner', 'root'),
+                             'Dir {} ({}) has wrong owner: {}'.format(name, dirpath, owner))
+            self.assertEqual(group, details.get('group', 'root'),
+                             'Dir {} ({}) has wrong group: {}'.format(name, dirpath, group))
+            self.assertEqual(perms, details.get('perms', 0o755),
+                             'Dir {} ({}) has wrong perms: 0o{:o}'.format(name, dirpath, perms))
+
+
+if __name__ == '__main__':
+    unittest.main()
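
The remote test above only validates what dist.yaml declares; the install-time
counterpart (not shown in this diff) is what actually creates the users,
groups, and directories. A hedged sketch of that step, where the add_* method
names are assumptions about the bundled charmhelpers.contrib.bigdata fork:

    # Sketch of the install-time counterpart to the remote test above.
    # DistConfig(config, required_keys) matches the constructor the test uses;
    # the add_users/add_dirs/add_packages method names are assumptions.
    from charmhelpers.contrib import bigdata

    def apply_dist_config():
        reqs = ['vendor', 'hadoop_version', 'packages', 'groups', 'users',
                'dirs', 'ports']
        dist = bigdata.utils.DistConfig('dist.yaml', reqs)
        dist.add_users()     # groups first, then users with their group lists
        dist.add_dirs()      # each dir with its declared owner/group/perms
        dist.add_packages()  # apt-install the declared package list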

=== removed directory 'unit_tests'
=== removed file 'unit_tests/test_actions.py'
--- unit_tests/test_actions.py	2014-12-12 04:41:42 +0000
+++ unit_tests/test_actions.py	1970-01-01 00:00:00 +0000
@@ -1,21 +0,0 @@
-#!/usr/bin/env python
-
-import sys
-import mock
-import unittest
-from pkg_resources import resource_filename
-
-# allow importing actions from the hooks directory
-sys.path.append(resource_filename(__name__, '../hooks'))
-import actions
-
-
-class TestActions(unittest.TestCase):
-    @mock.patch('charmhelpers.core.hookenv.log')
-    def test_log_start(self, log):
-        actions.log_start('test-service')
-        log.assert_called_once_with('apache-hue starting')
-
-
-if __name__ == '__main__':
-    unittest.main()