bigdata-dev team mailing list archive
Message #00076
[Merge] lp:~bigdata-dev/charms/trusty/apache-hadoop-client/minimal into lp:~bigdata-dev/charms/trusty/apache-hadoop-client/trunk
Cory Johns has proposed merging lp:~bigdata-dev/charms/trusty/apache-hadoop-client/minimal into lp:~bigdata-dev/charms/trusty/apache-hadoop-client/trunk.
Requested reviews:
Big Data Charmers (bigdata-charmers)
For more details, see:
https://code.launchpad.net/~bigdata-dev/charms/trusty/apache-hadoop-client/minimal/+merge/258939
Refactored to use apache-hadoop-plugin instead of duplicating all the code.
--
Your team Juju Big Data Development is subscribed to branch lp:~bigdata-dev/charms/trusty/apache-hadoop-client/trunk.
=== modified file 'README.md'
--- README.md 2015-05-06 17:04:07 +0000
+++ README.md 2015-05-12 21:54:26 +0000
@@ -10,11 +10,12 @@
## Usage
-This charm is intended to be deployed via one of the
-[bundles](https://jujucharms.com/q/bigdata-dev/apache?type=bundle).
-For example:
+This charm is intended to be connected to the
+[core bundle](https://jujucharms.com/u/bigdata-dev/apache-core-batch-processing/):
- juju quickstart u/bigdata-dev/apache-core-batch-processing
+ juju quickstart apache-core-batch-processing
+ juju deploy apache-hadoop-client client
+ juju add-relation client plugin
This will deploy the Apache Hadoop platform with a single client unit.
From there, you can manually load and run map-reduce jobs:
@@ -24,46 +25,9 @@
hadoop jar my-job.jar
-## Deploying in Network-Restricted Environments
-
-The Apache Hadoop charms can be deployed in environments with limited network
-access. To deploy in this environment, you will need a local mirror to serve
-the packages and resources required by these charms.
-
-
-### Mirroring Packages
-
-You can setup a local mirror for apt packages using squid-deb-proxy.
-For instructions on configuring juju to use this, see the
-[Juju Proxy Documentation](https://juju.ubuntu.com/docs/howto-proxies.html).
-
-
-### Mirroring Resources
-
-In addition to apt packages, the Apache Hadoop charms require a few binary
-resources, which are normally hosted on Launchpad. If access to Launchpad
-is not available, the `jujuresources` library makes it easy to create a mirror
-of these resources:
-
- sudo pip install jujuresources
- juju resources fetch --all apache-hadoop-client/resources.yaml -d /tmp/resources
- juju resources serve -d /tmp/resources
-
-This will fetch all of the resources needed by this charm and serve them via a
-simple HTTP server. You can then set the `resources_mirror` config option to
-have the charm use this server for retrieving resources.
-
-You can fetch the resources for all of the Apache Hadoop charms
-(`apache-hadoop-hdfs-master`, `apache-hadoop-yarn-master`,
-`apache-hadoop-compute-slave`, `apache-hadoop-client`, etc) into a single
-directory and serve them all with a single `juju resources serve` instance.
-
-
## Contact Information
-* Amir Sanjar <amir.sanjar@xxxxxxxxxxxxx>
-* Cory Johns <cory.johns@xxxxxxxxxxxxx>
-* Kevin Monroe <kevin.monroe@xxxxxxxxxxxxx>
+[bigdata-dev@xxxxxxxxxxxxx](mailto:bigdata-dev@xxxxxxxxxxxxx)
## Hadoop
=== modified file 'config.yaml'
--- config.yaml 2015-04-03 16:49:17 +0000
+++ config.yaml 2015-05-12 21:54:26 +0000
@@ -1,6 +1,1 @@
-options:
- resources_mirror:
- type: string
- default: ''
- description: |
- URL from which to fetch resources (e.g., Hadoop binaries) instead of Launchpad.
+options: {}
=== removed file 'dist.yaml'
--- dist.yaml 2015-04-16 15:45:57 +0000
+++ dist.yaml 1970-01-01 00:00:00 +0000
@@ -1,116 +0,0 @@
-# This file contains values that are likely to change per distribution.
-# The aim is to make it easier to update / extend the charms with
-# minimal changes to the shared code in charmhelpers.
-vendor: 'apache'
-hadoop_version: '2.4.1'
-packages:
- - 'libsnappy1'
- - 'libsnappy-dev'
- - 'openssl'
- - 'liblzo2-2'
-groups:
- - 'hadoop'
- - 'mapred'
- - 'supergroup'
-users:
- ubuntu:
- groups: ['hadoop', 'mapred', 'supergroup']
- hdfs:
- groups: ['hadoop']
- mapred:
- groups: ['hadoop', 'mapred']
- yarn:
- groups: ['hadoop']
-dirs:
- hadoop:
- path: '/usr/lib/hadoop'
- perms: 0777
- hadoop_conf:
- path: '/etc/hadoop/conf'
- hadoop_tmp:
- path: '/tmp/hadoop'
- perms: 0777
- mapred_log:
- path: '/var/log/hadoop/mapred'
- owner: 'mapred'
- group: 'hadoop'
- perms: 0755
- mapred_run:
- path: '/var/run/hadoop/mapred'
- owner: 'mapred'
- group: 'hadoop'
- perms: 0755
- yarn_tmp:
- path: '/tmp/hadoop-yarn'
- perms: 0777
- yarn_log_dir:
- path: '/var/log/hadoop/yarn'
- owner: 'yarn'
- group: 'hadoop'
- perms: 0755
- hdfs_log_dir:
- path: '/var/log/hadoop/hdfs'
- owner: 'hdfs'
- group: 'hadoop'
- perms: 0755
- hdfs_dir_base:
- path: '/usr/local/hadoop/data'
- owner: 'hdfs'
- group: 'hadoop'
- perms: 0755
- cache_base:
- path: '{dirs[hdfs_dir_base]}/cache'
- owner: 'hdfs'
- group: 'hadoop'
- perms: 01775
- cache_dir:
- path: '{dirs[hdfs_dir_base]}/cache/hadoop'
- owner: 'hdfs'
- group: 'hadoop'
- perms: 0775
-ports:
- # Ports that need to be exposed, overridden, or manually specified.
- # Only expose ports serving a UI or external API (i.e., namenode and
- # resourcemanager). Communication among units within the cluster does
- # not need ports to be explicitly opened.
- # If adding a port here, you will need to update
- # charmhelpers.contrib.bigdata.handlers.apache or hooks/callbacks.py
- # to ensure that it is supported.
- namenode:
- port: 8020
- exposed_on: 'hdfs-master'
- nn_webapp_http:
- port: 50070
- exposed_on: 'hdfs-master'
- dn_webapp_http:
- port: 50075
- exposed_on: 'compute-slave'
- resourcemanager:
- port: 8032
- exposed_on: 'yarn-master'
- rm_webapp_http:
- port: 8088
- exposed_on: 'yarn-master'
- rm_log:
- port: 19888
- nm_webapp_http:
- port: 8042
- exposed_on: 'compute-slave'
- jobhistory:
- port: 10020
- jh_webapp_http:
- port: 19888
- exposed_on: 'yarn-master'
- # TODO: support SSL
- #nn_webapp_https:
- # port: 50470
- # exposed_on: 'hdfs-master'
- #dn_webapp_https:
- # port: 50475
- # exposed_on: 'compute-slave'
- #rm_webapp_https:
- # port: 8090
- # exposed_on: 'yarn-master'
- #nm_webapp_https:
- # port: 8044
- # exposed_on: 'compute-slave'
=== removed file 'hooks/callbacks.py'
--- hooks/callbacks.py 2015-05-08 17:26:18 +0000
+++ hooks/callbacks.py 1970-01-01 00:00:00 +0000
@@ -1,36 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Callbacks for additional setup tasks.
-
-Add any additional tasks / setup here. If a callback is used by multiple
-charms, consider refactoring it up to the charmhelpers library.
-"""
-from charmhelpers.contrib.bigdata import utils
-from charmhelpers.core import hookenv
-
-
-def update_etc_hosts():
- if hookenv.in_relation_hook():
- # send our hostname on the relation
- local_host = hookenv.local_unit().replace('/', '-')
- hookenv.relation_set(hostname=local_host)
-
- # get /etc/hosts entries from the master
- master_hosts = hookenv.relation_get('etc_hosts')
-
- # update /etc/hosts on the local unit if we have master hostname data
- if master_hosts:
- hookenv.log('Updating /etc/hosts from %s' % hookenv.remote_unit())
- utils.update_etc_hosts(master_hosts)
- else:
- hookenv.log('No /etc/hosts updates from %s' % hookenv.remote_unit())
=== removed file 'hooks/common.py'
--- hooks/common.py 2015-05-07 15:08:59 +0000
+++ hooks/common.py 1970-01-01 00:00:00 +0000
@@ -1,93 +0,0 @@
-#!/usr/bin/env python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Common implementation for all hooks.
-"""
-
-import jujuresources
-
-
-def bootstrap_resources():
- """
- Install required resources defined in resources.yaml
- """
- mirror_url = jujuresources.config_get('resources_mirror')
- if not jujuresources.fetch(mirror_url=mirror_url):
- jujuresources.juju_log('Resources unavailable; manual intervention required', 'ERROR')
- return False
- jujuresources.install(['pathlib', 'pyaml', 'six', 'charmhelpers'])
- return True
-
-
-def manage():
- if not bootstrap_resources():
- # defer until resources are available, since charmhelpers, and thus
- # the framework, are required (will require manual intervention)
- return
-
- from charmhelpers.core import charmframework
- from charmhelpers.contrib import bigdata
- import callbacks # noqa (ignore when linting)
-
- # list of keys required to be in the dist.yaml
- client_reqs = ['vendor', 'hadoop_version', 'packages', 'groups', 'users',
- 'dirs', 'ports']
- dist_config = bigdata.utils.DistConfig(filename='dist.yaml',
- required_keys=client_reqs)
- hadoop = bigdata.handlers.apache.HadoopBase(dist_config)
- hdfs = bigdata.handlers.apache.HDFS(hadoop)
- yarn = bigdata.handlers.apache.YARN(hadoop)
- manager = charmframework.Manager([
- {
- 'name': 'hadoop-base',
- 'requires': [
- hadoop.verify_conditional_resources,
- ],
- 'callbacks': [
- hadoop.install,
- ],
- },
- {
- 'name': 'client-hdfs',
- 'provides': [
- ],
- 'requires': [
- hadoop.is_installed,
- bigdata.relations.NameNode(spec=hadoop.client_spec),
- ],
- 'callbacks': [
- callbacks.update_etc_hosts,
- hdfs.configure_client,
- ],
- },
- {
- 'name': 'client-yarn',
- 'provides': [
- ],
- 'requires': [
- hadoop.is_installed,
- bigdata.relations.ResourceManager(spec=hadoop.client_spec),
- ],
- 'callbacks': [
- callbacks.update_etc_hosts,
- yarn.install_demo,
- yarn.configure_client,
- ],
- 'cleanup': [],
- },
- ])
- manager.manage()
-
-
-if __name__ == '__main__':
- manage()
=== removed file 'hooks/config-changed'
--- hooks/config-changed 2015-02-09 18:13:28 +0000
+++ hooks/config-changed 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
-#!/usr/bin/env python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import common
-common.manage()
=== added file 'hooks/hadoop-plugin-relation-changed'
--- hooks/hadoop-plugin-relation-changed 1970-01-01 00:00:00 +0000
+++ hooks/hadoop-plugin-relation-changed 2015-05-12 21:54:26 +0000
@@ -0,0 +1,4 @@
+#!/bin/bash
+if [[ "$(relation-get hdfs-ready)" == "True" ]]; then
+ hooks/status-set active "Ready to run mapreduce jobs"
+fi
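The new relation-changed hook keys off an `hdfs-ready` flag published over the relation by the plugin charm. Below is a minimal, self-contained sketch of that handshake with the Juju hook tools stubbed out as bash functions — the stubs and the sample value are assumptions for illustration, not part of this merge:

```shell
#!/bin/bash
# Stub of the Juju hook tool: in a real hook, relation-get reads data
# that the remote unit (apache-hadoop-plugin) published with relation-set.
relation-get() { echo "True"; }

# Stub of the status-set wrapper shipped with this charm.
status-set() { echo "status: $1 ($2)"; }

# Same logic as hooks/hadoop-plugin-relation-changed: only report the
# unit as active once the plugin says HDFS is ready.
if [[ "$(relation-get hdfs-ready)" == "True" ]]; then
  status-set active "Ready to run mapreduce jobs"
fi
```

Until the flag flips to `True`, the hook exits without changing status, leaving the unit in the "waiting" state set by the relation-joined hook.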
=== added file 'hooks/hadoop-plugin-relation-joined'
--- hooks/hadoop-plugin-relation-joined 1970-01-01 00:00:00 +0000
+++ hooks/hadoop-plugin-relation-joined 2015-05-12 21:54:26 +0000
@@ -0,0 +1,2 @@
+#!/bin/bash
+hooks/status-set waiting "Waiting for Hadoop to be ready"
=== modified file 'hooks/install'
--- hooks/install 2015-02-09 18:13:28 +0000
+++ hooks/install 2015-05-12 21:54:26 +0000
@@ -1,17 +1,2 @@
-#!/usr/bin/python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import setup
-setup.pre_install()
-
-import common
-common.manage()
+#!/bin/bash
+hooks/status-set blocked "Please add relation to apache-hadoop-plugin"
=== removed file 'hooks/namenode-relation-changed'
--- hooks/namenode-relation-changed 2015-05-07 15:08:59 +0000
+++ hooks/namenode-relation-changed 1970-01-01 00:00:00 +0000
@@ -1,16 +0,0 @@
-#!/usr/bin/env python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import common
-
-common.manage()
=== removed file 'hooks/resourcemanager-relation-changed'
--- hooks/resourcemanager-relation-changed 2015-05-07 15:08:59 +0000
+++ hooks/resourcemanager-relation-changed 1970-01-01 00:00:00 +0000
@@ -1,16 +0,0 @@
-#!/usr/bin/env python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import common
-
-common.manage()
=== removed file 'hooks/setup.py'
--- hooks/setup.py 2015-03-03 21:29:00 +0000
+++ hooks/setup.py 1970-01-01 00:00:00 +0000
@@ -1,35 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import subprocess
-from glob import glob
-
-def pre_install():
- """
- Do any setup required before the install hook.
- """
- install_pip()
- install_jujuresources()
-
-
-def install_pip():
- subprocess.check_call(['apt-get', 'install', '-yq', 'python-pip', 'bzr'])
-
-
-def install_jujuresources():
- """
- Install the bundled jujuresources library, if not present.
- """
- try:
- import jujuresources # noqa
- except ImportError:
- jr_archive = glob('resources/jujuresources-*.tar.gz')[0]
- subprocess.check_call(['pip', 'install', jr_archive])
=== removed file 'hooks/start'
--- hooks/start 2015-02-09 18:13:28 +0000
+++ hooks/start 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
-#!/usr/bin/env python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import common
-common.manage()
=== added file 'hooks/status-set'
--- hooks/status-set 1970-01-01 00:00:00 +0000
+++ hooks/status-set 2015-05-12 21:54:26 +0000
@@ -0,0 +1,5 @@
+#!/bin/bash
+# Wrapper around status-set for use with older versions of Juju
+if which status-set > /dev/null; then
+ status-set "$@"
+fi
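The wrapper above guards against Juju versions that predate the `status-set` hook tool: if the tool is not on `PATH`, the call silently becomes a no-op instead of failing the hook. A sketch of the same guard pattern as a reusable function (the function name is an assumption for illustration):

```shell
#!/bin/bash
# Guard pattern from hooks/status-set: only invoke a hook tool if it
# actually exists, so hooks still succeed on older Juju versions.
safe_status_set() {
  if which status-set > /dev/null 2>&1; then
    status-set "$@"
  fi
}

# On a machine without the status-set tool this is a silent no-op and
# the function returns 0, so calling hooks are unaffected.
safe_status_set blocked "Please add relation to apache-hadoop-plugin"
```

Because the hooks call the wrapper as `hooks/status-set`, they get this behavior without each hook repeating the `which` check.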
=== removed file 'hooks/stop'
--- hooks/stop 2015-02-09 18:13:28 +0000
+++ hooks/stop 1970-01-01 00:00:00 +0000
@@ -1,15 +0,0 @@
-#!/usr/bin/env python
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import common
-common.manage()
=== modified file 'metadata.yaml'
--- metadata.yaml 2015-05-06 17:04:07 +0000
+++ metadata.yaml 2015-05-12 21:54:26 +0000
@@ -8,8 +8,6 @@
This charm manages a dedicated client node as a place to
run mapreduce jobs.
tags: ["applications", "bigdata", "hadoop", "apache"]
-requires:
- namenode:
- interface: dfs
- resourcemanager:
- interface: mapred
+provides:
+ hadoop-plugin:
+ interface: hadoop-plugin
=== removed directory 'resources'
=== removed file 'resources.yaml'
--- resources.yaml 2015-05-11 17:36:44 +0000
+++ resources.yaml 1970-01-01 00:00:00 +0000
@@ -1,34 +0,0 @@
-options:
- output_dir: /home/ubuntu/resources
-resources:
- pathlib:
- pypi: path.py>=7.0
- pyaml:
- pypi: pyaml
- six:
- pypi: six
- charmhelpers:
- pypi: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150511173636-5rblzf5r2o1zcv2p/charmhelpers0.2.3.ta-20150417221203-zg62z8c220egc3ch-1/charmhelpers-0.2.3.tar.gz
- hash: 44340a6fd6f192bcc9d390c0d9c3901d4fc190166485b107047bc1c6ba102a2f
- hash_type: sha256
- java-installer:
- # This points to a script which manages installing Java.
- # If replaced with an alternate implementation, it must output *only* two
- # lines containing the JAVA_HOME path, and the Java version, respectively,
- # on stdout. Upon error, it must exit with a non-zero exit code.
- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/cory.johns%40canonical.com-20150312205309-2ji1etk44gep01w1/javainstaller.sh-20150311213053-4vq7369jhlvc6qy8-1/java-installer.sh
- hash: 130984f1dc3bc624d4245234d0fca22f529d234d0eaa1241c5e9f701319bdea9
- hash_type: sha256
-optional_resources:
- hadoop-aarch64:
- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150303192631-swhrf8f7q82si75t/hadoop2.4.1.tar.gz-20150303192554-7gqslr4m8ahkwiax-2/hadoop-2.4.1.tar.gz
- hash: 03ad135835bfe413f85fe176259237a8
- hash_type: md5
- hadoop-ppc64le:
- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/kevin.monroe%40canonical.com-20150130165209-nuz1myezjpdx7eus/hadoop2.4.1ppc64le.t-20150130165148-s8i19s002ht88gio-2/hadoop-2.4.1-ppc64le.tar.gz
- hash: 09942b168a3db0d183b281477d3dae9deb7b7bc4b5783ba5cda3965b62e71bd5
- hash_type: sha256
- hadoop-x86_64:
- url: http://bazaar.launchpad.net/~bigdata-dev/bigdata-data/trunk/download/cory.johns%40canonical.com-20150116154822-x5osw3zfhw6e03b1/hadoop2.4.1.tar.gz-20150116154748-yfa2j12rr5m53xd3-1/hadoop-2.4.1.tar.gz
- hash: a790d39baba3a597bd226042496764e0520c2336eedb28a1a3d5c48572d3b672
- hash_type: sha256
=== removed file 'resources/jujuresources-0.2.5.tar.gz'
Binary files resources/jujuresources-0.2.5.tar.gz 2015-03-03 19:56:49 +0000 and resources/jujuresources-0.2.5.tar.gz 1970-01-01 00:00:00 +0000 differ
=== modified file 'tests/01-basic-deployment.py'
--- tests/01-basic-deployment.py 2015-03-04 00:07:45 +0000
+++ tests/01-basic-deployment.py 2015-05-12 21:54:26 +0000
@@ -29,12 +29,6 @@
assert 'SecondaryNameNode' not in output, "SecondaryNameNode should not be started"
assert 'DataNode' not in output, "DataNode should not be started"
- def test_dist_config(self):
- # test_dist_config.py is run on the deployed unit because it
- # requires the Juju context to properly validate dist.yaml
- output, retcode = self.unit.run("tests/remote/test_dist_config.py")
- self.assertEqual(retcode, 0, 'Remote dist config test failed:\n{}'.format(output))
-
if __name__ == '__main__':
unittest.main()
=== removed directory 'tests/remote'
=== removed file 'tests/remote/test_dist_config.py'
--- tests/remote/test_dist_config.py 2015-03-19 20:09:17 +0000
+++ tests/remote/test_dist_config.py 1970-01-01 00:00:00 +0000
@@ -1,72 +0,0 @@
-#!/usr/bin/env python
-
-import grp
-import os
-import pwd
-import unittest
-
-from charmhelpers.contrib import bigdata
-
-
-class TestDistConfig(unittest.TestCase):
- """
- Test that the ``dist.yaml`` settings were applied properly, such as users, groups, and dirs.
-
- This is done as a remote test on the deployed unit rather than a regular
- test under ``tests/`` because filling in the ``dist.yaml`` requires Juju
- context (e.g., config).
- """
- @classmethod
- def setUpClass(cls):
- config = None
- config_dir = os.environ['JUJU_CHARM_DIR']
- config_file = 'dist.yaml'
- if os.path.isfile(os.path.join(config_dir, config_file)):
- config = os.path.join(config_dir, config_file)
- if not config:
- raise IOError('Could not find {} in {}'.format(config_file, config_dir))
- reqs = ['vendor', 'hadoop_version', 'packages', 'groups', 'users',
- 'dirs']
- cls.dist_config = bigdata.utils.DistConfig(config, reqs)
-
- def test_groups(self):
- for name in self.dist_config.groups:
- try:
- grp.getgrnam(name)
- except KeyError:
- self.fail('Group {} is missing'.format(name))
-
- def test_users(self):
- for username, details in self.dist_config.users.items():
- try:
- user = pwd.getpwnam(username)
- except KeyError:
- self.fail('User {} is missing'.format(username))
- for groupname in details['groups']:
- try:
- group = grp.getgrnam(groupname)
- except KeyError:
- self.fail('Group {} referenced by user {} does not exist'.format(
- groupname, username))
- if group.gr_gid != user.pw_gid:
- self.assertIn(username, group.gr_mem, 'User {} not in group {}'.format(
- username, groupname))
-
- def test_dirs(self):
- for name, details in self.dist_config.dirs.items():
- dirpath = self.dist_config.path(name)
- self.assertTrue(dirpath.isdir(), 'Dir {} is missing'.format(name))
- stat = dirpath.stat()
- owner = pwd.getpwuid(stat.st_uid).pw_name
- group = grp.getgrgid(stat.st_gid).gr_name
- perms = stat.st_mode & ~0o40000
- self.assertEqual(owner, details.get('owner', 'root'),
- 'Dir {} ({}) has wrong owner: {}'.format(name, dirpath, owner))
- self.assertEqual(group, details.get('group', 'root'),
- 'Dir {} ({}) has wrong group: {}'.format(name, dirpath, group))
- self.assertEqual(perms, details.get('perms', 0o755),
- 'Dir {} ({}) has wrong perms: 0o{:o}'.format(name, dirpath, perms))
-
-
-if __name__ == '__main__':
- unittest.main()