Re: [Merge] ~aluria/hw-health-charm/+git/hw-health-charm:unittests-engrot6 into hw-health-charm:master
Review: Needs Fixing
You got a new reviewer, sorry ;) Looks good, but it's worth another quick pass as I think a few of my comments need fixes before landing. Please keep this MP in play to make it easier to see just the changes.
Discussion here or IRC is fine.
Diff comments:
> diff --git a/Makefile b/Makefile
> index b2378ab..583944f 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -20,23 +19,15 @@ help:
> @echo " the following targets are meant to be used by the Makefile"
> @echo " make requirements - installs the requirements"
>
> -submodules:
> - @echo "Cloning submodules"
> - @git submodule update --init --recursive
> -
> lint:
> - @-if [ -x `which pre-commit` ] ; then echo "Running pre-commit as linter"; pre-commit run --all || exit 0 ; fi
Should this be removed from the template as well? Or is your rationale for removal a personal preference, or one that only applies to this charm? I ask because the template is still being polished.
> @echo "Running flake8"
> @flake8 ./src
>
> -test: unittest functionaltest lint
> +test: unittest lint
>
> unittest:
> @cd src && tox -e unit
>
> -functionaltest:
> - @cd src && JUJU_REPOSITORY=$(JUJU_REPOSITORY) tox -e functional
> -
> build:
> @echo "Building charm to base directory $(JUJU_REPOSITORY)"
> @git describe --tags > ./src/repo-info
> @@ -49,12 +40,12 @@ release: lint clean build
> @echo "Charm is build, it can now be pushed to the store"
> @echo "With the command:"
> @echo " charm push ./src"
> + charm push ./src cs:~nagios-charmers/hw-health
This is incorrect (as is the echo on the previous line). This is pushing the source charm in ./src, not the built charm in $JUJU_REPOSITORY/builds/hw-health.
>
> clean:
> @echo "Cleaning files"
> - @if [ -d src/.tox ] ; then rm -r src/slave/.tox ; fi
> - @if [ -d src/.pytest_cache ] ; then rm -r src/slave/.pytest_cache ; fi
> + @if [ -d src/.tox ] ; then rm -r src/.tox ; fi
> + @if [ -d src/.pytest_cache ] ; then rm -r src/.pytest_cache ; fi
>
> # The targets below don't depend on a file
> -.PHONY: lint test unittest functionaltest build release clean help \
> - submodules
> +.PHONY: lint test unittest build release clean help
> diff --git a/src/README b/src/README
> index d2302e6..e39e088 100644
> --- a/src/README
> +++ b/src/README
> @@ -1,49 +1,67 @@
> -This charm installs Vendor supplied system monitoring tools and configures a
> -Nagios NRPE check. It will only work for bare-metal installations on specific
> +# Overview
> +
> +This charm installs Vendor supplied system monitoring tools and configures
> +Nagios NRPE checks. It will only work for bare-metal installations on specific
> hardware.
>
> Currently supported hardware is:
> - Hewlett-Packard
> - * HP Smart Array Controllers and MSA Controllers with hpacucli, hpssacli,
> - ssacli
> -
> - Dell
> + * Dell
> * LSI Logic MegaRAID SAS (Broadcoam MegaCLI utility)
>
> - Supermicro
> - * LSI SAS3008 RAID card with sas3ircu (Broadcoam's SAS3IRCU_P16)
> - mp3sas linux driver
> + * Supermicro
> + * LSI SAS3008 RAID card with sas3ircu (Broadcoam's SAS3IRCU_P16) mp3sas linux driver
>
> - Huawei
> - * ES3000 V2 PCIe SSD Card with hio_info (Huawei ES3000 V2 Driver)
> + * Huawei
> * LSI SAS2308 RAID card with sas2ircu (Huawei FusionServer Tools InfoCollect)
>
> - Linux software RAID (mdadm)
> + * Linux software RAID (mdadm)
> +
> +In the backlog, hp-health logic still needs to be backported to support
> +Hewlett-Packard equipment (HP Smart Array Controllers and MSA Controllers with
> +hpacucli, hpssacli, ssacli).
> +
> +Furthermore, other hardware in the roadmap is:
> +* Huawei's ES3000 V2 PCIe SSD Card with hio_info (Huawei ES3000 V2 Driver)
> +* S.M.A.R.T. Monitoring tool (smartctl)
> +
> +# Usage
> +
> +Step by step instructions on using the charm:
> +
> +zip /tmp/zipfile.zip binarytool1 scripttool2 ...
I don't know what binarytool1 and scripttool2 are, so this isn't step-by-step enough for me to deploy. I assume they are particular files downloaded from the earlier URLs? I don't know which ones, except that they are probably not the HP ones, since it was mentioned earlier that those are pulled from a deb archive. An example including real file names would be better, and should be enough for the user to know whether they can leave some out or adjust for newer or older releases.
> +
> +juju deploy ubuntu
> +juju deploy hw-health --resource tools=/tmp/zipfile.zip
> +juju deploy nrpe
> +juju add-relation ubuntu nrpe
> +juju add-relation ubuntu hw-health
> +juju add-relation hw-health nrpe
> +
> +Alternatively, resource can be attached at a later stage:
> +juju attach-resource hw-health tools=/tmp/zipfile.zip
If the HW manufacturers give us a redistribution license, then some or all of the resources can be attached at charm publication time. The user would only need to stuff around specifying resources if they need different versions of the tools.
>
> - S.M.A.R.T. Monitoring tool (smartctl)
>
> -HP
> -==
> +## Known Limitations and Issues
>
> -HP monitoring tools are provided by HP and the repository can be
> -added in config.yaml. See:
> +Charm only install method is via Juju resources. There are plans to support
> +snaps but snapstore only supports strictly confined snaps. Hardware monitoring
> +tools need special permissions that are under development.
Hmm, if we can publish snaps, we can publish the charm with resources pre-attached.
>
> - http://downloads.linux.hp.com/SDR/
> - https://wiki.debian.org/HP/ProLiant
> +See https://forum.snapcraft.io/t/request-for-classic-confinement-sas2ircu/9023
>
> -The Ubuntu repository at the time of writing is:
> +"tools" resource needs to be attached in ZIP format, and hardware monitoring
> +tool(s) need to be on the first level of the archive tree.
>
> -deb http://downloads.linux.hp.com/SDR/downloads/MCP/ubuntu precise non-free
> +# Configuration
>
> +For now, only Nagios context and service groups are configurable. Manufacturer
> +option needs to be left in auto mode.
>
> -Huawei
> -======
> +# Contact Information
>
> -https://support.huawei.com/enterprise/en/servers/rh2288h-v3-pid-9901881/software
> -https://support.huawei.com/enterprise/en/servers/fusionserver-tools-pid-21015513/software
> -https://support.huawei.com/enterprise/en/software/23397495-SW2000057822
> +Please contact the Nagios charmers via the "Submit a bug" link.
>
> -Supermicro
> -==========
> +## Upstream Project Name
>
> -https://www.broadcom.com/site-search?q=sas3008
> + - Website: https://launchpad.net/hw-health-charm
> + - Bug tracker: https://bugs.launchpad.net/hw-health-charm
> diff --git a/src/files/check_mdadm.py b/src/files/check_mdadm.py
> index 741865e..c85046a 100644
> --- a/src/files/check_mdadm.py
> +++ b/src/files/check_mdadm.py
> @@ -23,28 +26,25 @@ def get_devices():
> except Exception as e:
> # log error
> pass
> - return
> + return set()
>
>
> -def main():
> +def parse_output():
> devices = get_devices()
> if len(devices) == 0:
> - print('WARNING: unexpectedly checked no devices')
> - return 1
> + raise WarnError('WARNING: unexpectedly checked no devices')
I was going to complain about the redundant WARNING: prefix in the message, but I see this exception is imported from a dependency.
>
> mdadm_detail = ['/sbin/mdadm', '--detail']
> mdadm_detail.extend(sorted(devices))
> try:
> - devices_details_raw = subprocess.Popen(mdadm_detail,
> - stdout=subprocess.PIPE,
> - stderr=subprocess.PIPE)
> + devices_details_raw = subprocess.Popen(
> + mdadm_detail, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
> except Exception as error:
> - print('WARNING: error executing mdadm: {}'.format(error))
> - return 1
> + raise WarnError('WARNING: error executing mdadm: {}'.format(error))
>
> - devices_re = '^(/[^ ]+):$'
> - state_re = '^\s*State :\s+(\S+)\s*'
> - status_re = '^\s*(Active|Working|Failed|Spare) Devices :\s+(\d+)'
> + devices_re = '^(/\S+):$'
> + state_re = '^\s*State\s+:\s+(\S+)$'
> + status_re = '^\s*(Active|Working|Failed|Spare) Devices\s+:\s+(\d+)$'
The above 3 strings should be raw strings (r'blah' instead of 'blah'). This is preferable to the alternative of using \\ instead of \ to get the regexp codes passed through cleanly. What you have will emit deprecation warnings with Python 3.6 and later, which is stricter about unknown string escape codes. flake8 doesn't (yet) pick this up (!)
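For instance, a minimal sketch of those three patterns as raw strings (using the compiled names from the diff below):

    import re

    # Raw strings keep \s, \S and \d intact in the pattern, so no
    # DeprecationWarning on Python 3.6+ for unknown escape codes.
    devices_cre = re.compile(r'^(/\S+):$')
    state_cre = re.compile(r'^\s*State\s+:\s+(\S+)$')
    status_cre = re.compile(r'^\s*(Active|Working|Failed|Spare) Devices\s+:\s+(\d+)$')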
>
> devices_cre = re.compile(devices_re)
> state_cre = re.compile(state_re)
> @@ -75,35 +80,40 @@ def main():
>
> rc = devices_details_raw.wait()
> if rc:
> - print('WARNING: mdadm returned exit status {}'.format(int(rc)))
> - return 1
> + raise WarnError('WARNING: mdadm returned exit'
> + ' status {}'.format(int(rc)))
>
> parts = []
You should remove the above line (parts = []), as the loop now initializes it each iteration. It makes the scope of the variable clearer.
> msg = []
> critical = False
> for device in devices_stats:
> + # Is device degraded?
> if devices_stats[device]['degraded']:
> critical = True
> - parts.append('{} degraded'.format(device))
> + parts = ['{} degraded'.format(device)]
> else:
> - parts.append('{} ok'.format(device))
> -
> - for status in sorted(devices_stats[device]['stats']):
> - parts.append('{}[{}]'
> - .format(status,
> - devices_stats[device]['stats'][status]))
> - if status == 'Failed' \
> - and devices_stats[device]['stats'][status] > 0:
> - critical = True
> + parts = ['{} ok'.format(device)]
> +
> + # If Failed drives are found, list counters (how many?)
> + failed_cnt = devices_stats[device]['stats'].get('Failed', 0)
> + if failed_cnt > 0:
> + critical = True
> + dev_stats = [
> + '{}[{}]'.format(status, devices_stats[device]['stats'][status])
> + for status in sorted(devices_stats[device]['stats'])
> + ]
> + parts.extend(dev_stats)
> msg.append(', '.join(parts))
>
> msg = '; '.join(msg)
> if critical:
> - print('CRITICAL: {}'.format(msg))
> - return 2
> + raise CriticalError('CRITICAL: {}'.format(msg))
> else:
> print('OK: {}'.format(msg))
> - return 0
> +
> +
> +def main():
> + try_check(parse_output)
>
>
> if __name__ == '__main__':
> diff --git a/src/files/megaraid/check_megacli.py b/src/files/megaraid/check_megacli.py
> index 57e4219..9653ea3 100755
> --- a/src/files/megaraid/check_megacli.py
> +++ b/src/files/megaraid/check_megacli.py
> @@ -54,10 +55,9 @@ def parse_output(policy=False):
> num_ldrive += 1
> state = m.group(1)
> if state != 'Optimal':
> - failed_ldrive += 1
> - msg = 'adp({}):ld({}):state({})'.format(adapter_id,
> - ldrive_id,
> - state)
> + failed_ld += 1
> + msg = 'adp({}):ld({}):state({})'.format(
> + adapter_id, ldrive_id, state)
I'll assume that means something to people using the tool :)
> errors.append(msg)
> critical = True
> continue
> @@ -72,23 +72,30 @@ def parse_output(policy=False):
> if m:
> w_policy = m.group(1)
> if w_policy != policy:
> - wrong_policy_ldrive += 1
> - msg = 'adp({}):ld({}):policy({})'.format(adapter_id,
> - ldrive_id,
> - w_policy)
> + wrg_policy_ld += 1
> + msg = 'adp({}):ld({}):policy({})'.format(
> + adapter_id, ldrive_id, w_policy)
> errors.append(msg)
> critical = True
> continue
>
> - data = {'critical': critical}
> - if len(errors) > 0 and (failed_ldrive > 0 or wrong_policy_ldrive > 0):
> - data['errors'] = {'msgs': errors,
> - 'failed_ld': failed_ldrive,
> - 'wrg_policy_ld': wrong_policy_ldrive}
> - elif nlines == 0:
> - data['errors'] = {'msgs': 'WARNING: controller not found'}
> + if nlines == 0:
> + raise WarnError('WARNING: controller not found')
> elif not match:
> - data['errors'] = {'msgs': 'WARNING: megacli output, parsing error'}
> + raise WarnError('WARNING: megacli output, parsing error')
The message should be 'error parsing megacli output'.
> + elif critical:
> + if len(errors) > 0:
> + msg = ', '.join([
> + '{}({})'.format(cnt, eval(cnt))
> + for cnt in ('failed_ld', 'wrg_policy_ld') if eval(cnt) > 0
> + ])
No need for eval here (which is generally a red flag and best avoided, because you can't feed it untrusted input).
    msg = ', '.join(['{}({})'.format(name, count)
                     for name, count in (('failed_ld', failed_ld), ('wrg_policy_ld', wrg_policy_ld))
                     if count > 0])
(eval is safe in this particular case, since the inputs are hard coded. It's probably (imperceptibly) slower, since eval needs to parse the string into Python bytecode.)
> + msg += '; '.join(errors)
> + else:
> + msg = 'failure caught but no output available'
> + raise CriticalError('CRITICAL: {}'.format(msg))
> + elif len(errors) > 0:
> + raise WarnError('WARNING: {}'.format('; '.join(errors)))
> +
> else:
> if num_ldrive == 0:
> msg = 'OK: no disks configured for RAID'
> diff --git a/src/files/mpt/check_sas2ircu.py b/src/files/mpt/check_sas2ircu.py
> index eb7d081..5881c92 100755
> --- a/src/files/mpt/check_sas2ircu.py
> +++ b/src/files/mpt/check_sas2ircu.py
> @@ -12,9 +13,11 @@ def parse_output():
> slot_re = '^\s+Slot #\s+:\s+(\d+)'
> state_re = '^\s+State\s+:\s+(\S+)'
>
> - encl_slot_state_cre = [re.compile(enclosure_re),
> - re.compile(slot_re),
> - re.compile(state_re)]
> + encl_slot_state_cre = [
> + re.compile(enclosure_re),
> + re.compile(slot_re),
> + re.compile(state_re)
If you are using this multi-line list formatting, it's nice to have a trailing comma on the last item.
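For example, self-contained with the patterns from this hunk (and as raw strings, per the earlier comment):

    import re

    encl_slot_state_cre = [
        re.compile(r'^\s+Enclosure #\s+:\s+(\d+)'),
        re.compile(r'^\s+Slot #\s+:\s+(\d+)'),
        re.compile(r'^\s+State\s+:\s+(\S+)'),  # trailing comma: adding an entry later is a one-line diff
    ]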
> + ]
>
> devices = {}
> device = []
> diff --git a/src/files/mpt/check_sas3ircu.py b/src/files/mpt/check_sas3ircu.py
> index ff9b79a..b430476 100755
> --- a/src/files/mpt/check_sas3ircu.py
> +++ b/src/files/mpt/check_sas3ircu.py
> @@ -13,21 +14,24 @@ def parse_output():
> vol_status_re = '^\s+Status of volume\s+:\s+(\S+)'
> vol_phy_re = '^\s+PHY\[\d+\] Enclosure#/Slot#\s+:\s+(\S+)'
>
> - volid_status_phy_cre = [re.compile(volid_re),
> - re.compile(vol_status_re),
> - re.compile(vol_phy_re)]
> + volid_status_phy_cre = [
> + re.compile(volid_re),
> + re.compile(vol_status_re),
> + re.compile(vol_phy_re)
trailing comma if you agree (flake8 doesn't care; I've been writing Go)
> + ]
>
> disk_re = '^Device is a Hard (disk)'
> enclosure_re = '^\s+Enclosure #\s+:\s+(\d+)'
> slot_re = '^\s+Slot #\s+:\s+(\d+)'
> state_re = '^\s+State\s+:\s+(\S+)'
>
> - vol_disk_cre = [re.compile(volume_re),
> - re.compile(disk_re)]
> + vol_disk_cre = [re.compile(volume_re), re.compile(disk_re)]
>
> - encl_slot_state_cre = [re.compile(enclosure_re),
> - re.compile(slot_re),
> - re.compile(state_re)]
> + encl_slot_state_cre = [
> + re.compile(enclosure_re),
> + re.compile(slot_re),
> + re.compile(state_re)
And here
> + ]
>
> devices = {}
> device = []
> diff --git a/src/lib/utils/tooling.py b/src/lib/utils/tooling.py
> index 387324f..0f0d149 100644
> --- a/src/lib/utils/tooling.py
> +++ b/src/lib/utils/tooling.py
> @@ -1,30 +1,80 @@
> # -*- coding: us-ascii -*-
> +import glob
> import os
> -from charmhelpers.contrib.charmsupport import nrpe
> -from charmhelpers.core.hookenv import (
> - log,
> - DEBUG,
> - ERROR,
> - INFO
> -)
> -# from charms.layer import snap
> -from shutil import copy2
> +import shutil
> import subprocess
> +import tempfile
> +import zipfile
> +
> +from charmhelpers.contrib.charmsupport import nrpe
> +from charmhelpers.core.hookenv import DEBUG, ERROR, INFO, log
>
> TOOLING = {
> - 'megacli': {'snap': 'megacli',
> - 'filename': 'megaraid/check_megacli.py',
> - 'cronjob': 'megaraid/check_megacli.sh'},
> - 'sas2ircu': {'snap': 'sas2ircu',
> - 'filename': 'mpt/check_sas2ircu.py',
> - 'cronjob': 'mpt/check_sas2ircu.sh'},
> - 'sas3ircu': {'snap': 'sas3ircu',
> - 'filename': 'mpt/check_sas3ircu.py',
> - 'cronjob': 'mpt/check_sas3ircu.sh'},
> - 'mdadm': {'filename': 'check_mdadm.py'},
> + 'megacli': {
> + 'snap': 'megacli',
> + 'filename': 'megaraid/check_megacli.py',
> + 'cronjob': 'megaraid/check_megacli.sh'
> + },
> + 'sas2ircu': {
> + 'snap': 'sas2ircu',
> + 'filename': 'mpt/check_sas2ircu.py',
> + 'cronjob': 'mpt/check_sas2ircu.sh'
> + },
> + 'sas3ircu': {
> + 'snap': 'sas3ircu',
> + 'filename': 'mpt/check_sas3ircu.py',
> + 'cronjob': 'mpt/check_sas3ircu.sh'
> + },
> + 'mdadm': {
> + 'filename': 'check_mdadm.py'
> + },
> }
>
> PLUGINS_DIR = '/usr/local/lib/nagios/plugins/'
> +TOOLS_DIR = '/usr/local/bin/'
> +TEMP_DIR = tempfile.mkdtemp()
If you create the TEMP_DIR at module scope, you can't reliably clean it up (and it might be rather large, since you extract the zip file resource into it shortly).
> +
> +
> +def install_resource(tools, resource):
> + ntools = len(tools)
> + if ntools == 0:
> + return False
> + elif 'mdadm' in tools:
> + tools = [tool for tool in tools if tool != 'mdadm']
> + ntools -= 1
> + if ntools == 0:
> + return True
> +
> + if type(resource) != str \
> + or not resource.endswith('.zip') \
> + or not os.path.exists(resource):
> + return False
> +
> + try:
> + with zipfile.ZipFile(resource, 'r') as zf:
> + zf.extractall(TEMP_DIR)
Use tempfile.TemporaryDirectory as a context manager, rather than the TEMP_DIR module scope variable (which is difficult to clean up reliably):
    with tempfile.TemporaryDirectory() as tmpdir:
        try:
            with zipfile.ZipFile(resource, 'r') as zf:
                zf.extractall(tmpdir)  # tmpdir is already the path string, no .name needed
        [ ... everything else that needs to happen with the tmpdir in scope ... ]
This way everything gets cleaned up in all cases except a segfault or similar.
> + except zipfile.BadZipFile as error:
> + log('BadZipFile: {}'.format(error))
> + return False
> + except PermissionError as error:
> + log('Unable to unzip the resource: {}'.format(error), ERROR)
> + return False
How can you get a PermissionError? The zipfile containing absolute paths or something weird like that?
> +
> + count = 0
> + for filepath in glob.glob(os.path.join(TEMP_DIR, '*')):
> + filebasename = os.path.basename(filepath)
> + if filebasename in tools \
> + and filebasename in TOOLING \
> + and os.path.isfile(filepath):
> + os.chmod(filepath, 0o755)
> + shutil.chown(filepath, user=0, group=0)
> + shutil.copy2(filepath, TOOLS_DIR)
> + count += 1
> +
> + if os.path.isdir(TEMP_DIR):
> + shutil.rmtree(TEMP_DIR)
To get this called reliably, it would need to be in a try/finally block. The context manager approach above does the cleanup for you.
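i.e. without the context manager, the cleanup would need something along these lines (a sketch only, reusing the module's existing tempfile/zipfile/shutil imports and the resource argument):

    tmpdir = tempfile.mkdtemp()
    try:
        with zipfile.ZipFile(resource, 'r') as zf:
            zf.extractall(tmpdir)
        # ... chmod/chown/copy the matched tools out of tmpdir ...
    finally:
        shutil.rmtree(tmpdir)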
> +
> + return ntools == count and count > 0
>
>
> def install_tools(tools, method=None):
> @@ -39,16 +89,16 @@ def install_tools(tools, method=None):
> # snap.install(TOOLING[tool][method])
> count += 1
> except Exception as error:
> - log('Snap install error: {}'.format(error),
> - ERROR)
> + log('Snap install error: {}'.format(error), ERROR)
> + elif tool == 'mdadm':
> + # Note(aluria): mdadm is assumed to be installed by default
> + count += 1
>
> return len(tools) >= count > 0
>
>
> def _get_filepath(filename):
> - filepath = os.path.join(os.environ['CHARM_DIR'],
> - 'files',
> - filename)
> + filepath = os.path.join(os.environ['CHARM_DIR'], 'files', filename)
hookenv.charm_dir() is arguably better than os.environ['CHARM_DIR'], if for no other reason than it is likely easier to mock in tests.
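Something like this, assuming charm_dir is added to the existing charmhelpers.core.hookenv import in this module:

    from charmhelpers.core.hookenv import charm_dir

    def _get_filepath(filename):
        filepath = os.path.join(charm_dir(), 'files', filename)
        if os.path.exists(filepath):
            return filepath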
> if os.path.exists(filepath):
> return filepath
>
> diff --git a/src/reactive/hw_health.py b/src/reactive/hw_health.py
> index 730d405..761c7f9 100644
> --- a/src/reactive/hw_health.py
> +++ b/src/reactive/hw_health.py
> @@ -28,14 +20,32 @@ def install():
> status.blocked('manufacturer needs to be set to auto')
> return
>
> - status.maintenance("Autodiscovering hardware")
> + status.maintenance('Autodiscovering hardware')
Someone else using the status layer. I'm finding it works well.
> tools = get_tools()
> if not tools:
> set_flag('hw-health.unsupported')
> status.blocked('Hardware not supported')
> else:
> - # install vendor utilities
> - installed = install_tools(tools, method='snap')
> + resource = hookenv.resource_get('tools')
> + if resource:
> + hookenv.log('Installing from resource')
> + status.waiting('Installing from attached resource')
> + installed = install_resource(tools, resource)
> + else:
> + hookenv.log('Missing Juju resource: tools - alternative method'
> + ' is not available yet')
Probably want this to be hookenv.log('blah blah', hookenv.ERROR)
> + status.blocked('Missing Juju resource: tools')
> + return
> + # TODO(aluria): install vendor utilities via snaps
> + # https://forum.snapcraft.io/t/request-for-classic-confinement-sas2ircu/9023
> + # Snap interface for hw tooling is not supported
> + # in strict mode (confined). Classic mode
> + # (unconfined snaps) need approval
It's normally faster to get approval. Hopefully it is just the time of year.
> + #
> + # hookenv.log('Installing from snap')
> + # status.waiting('Installing from snap')
> + # installed = install_tools(tools, method='snap')
> +
> if installed:
> kv = unitdata.kv()
> kv.set('tools', list(tools))
> diff --git a/src/tests/download_nagios_plugin3.sh b/src/tests/download_nagios_plugin3.sh
> new file mode 100755
> index 0000000..78ae784
> --- /dev/null
> +++ b/src/tests/download_nagios_plugin3.sh
> @@ -0,0 +1,19 @@
> +#!/bin/bash
> +FILE=$(mktemp -p /tmp)
> +MODULENAME=nagios_plugin3.py
> +
> +env > /tmp/env.tmp
The above line looks like debugging cruft and can be removed.
> +
> +for i in .tox/py3/lib/python3*
> + do
> + if [[ -d $i && ! -e $i/$MODULENAME ]];then
> + SIZE=$(stat --printf="%s" $FILE)
> + if [[ $SIZE -eq 0 ]];then
> + curl -k -s https://git.launchpad.net/nrpe-charm/plain/files/nagios_plugin3.py > $FILE
> + SIZE=$(stat --printf="%s" $FILE)
> + fi
> + test $SIZE -gt 0 && cp $FILE ${i}/$MODULENAME || exit 1
> + fi
> + done
> +test -e $FILE && rm -f $FILE 2>/dev/null
If you are going to write this in bash, the temporary file removal should be in a trap so it runs when the script terminates.
Per policy, this actually should be a Python script and linted:
    from glob import glob
    import os.path
    import urllib.request

    MODULE_NAME = 'nagios_plugin3.py'
    MODULE_URL = 'https://git.launchpad.net/nrpe-charm/plain/files/' + MODULE_NAME

    _cache = None

    def content():
        global _cache
        if _cache is None:
            _cache = urllib.request.urlopen(MODULE_URL).read()
        assert len(_cache) > 0
        return _cache

    for i in glob('.tox/py3/lib/python3*'):
        mod_path = os.path.join(i, MODULE_NAME)
        if os.path.isdir(i) and not os.path.exists(mod_path):
            open(mod_path, 'wb').write(content())
> +exit 0
> diff --git a/src/tests/unit/test_check_mdadm.py b/src/tests/unit/test_check_mdadm.py
> new file mode 100644
> index 0000000..f667ece
> --- /dev/null
> +++ b/src/tests/unit/test_check_mdadm.py
> @@ -0,0 +1,119 @@
> +import io
> +import os
> +import sys
> +import unittest
> +
> +import mock
Don't import mock, which is an old 3rd party library. Import the standard unittest.mock
> +import nagios_plugin3
> +
> +sys.path.append('files')
> +import check_mdadm # noqa: E402
> +
> +
> +class TestCheckMdadm(unittest.TestCase):
> + @mock.patch('os.path.exists')
> + @mock.patch('subprocess.Popen')
> + def test_get_devices_mdadm_exists(self, mdadm_details, path_exists):
> + class Test_Popen(object):
> + def __init__(cls):
> + data = (b'ARRAY /dev/md0 metadata=1.2 name=node00:0'
> + b' UUID=dfaf7413:7551b000:56dd7442:5b020adb\n'
> + b'ARRAY /dev/md1 metadata=1.2 name=node01:0'
> + b' UUID=dfaf7413:7551b000:56dd7442:5b020adc\n')
> + cls.stdout = io.BufferedReader(io.BytesIO(data))
> +
> + mdadm_details.return_value = Test_Popen()
> + path_exists.return_value = True
> +
> + actual = check_mdadm.get_devices()
> + expected = set(['/dev/md0', '/dev/md1'])
> + self.assertEqual(actual, expected)
This seems fine, and is testing existing code. Personally, I would have split out the file exists check and subprocess.Popen call from get_devices so they can be tested separately, and the test that get_devices parses the output correctly would be much clearer without the Test_Popen mock. Overall I think the tests would be less fragile, e.g. if you switched from subprocess.Popen to subprocess.check_output.
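Roughly the split I have in mind (a sketch only; the helper name is mine, and I'm assuming get_devices is driving 'mdadm --detail --scan'):

    import subprocess

    def _mdadm_scan():
        # The only piece that touches the system; trivial to mock in tests.
        return subprocess.check_output(['/sbin/mdadm', '--detail', '--scan']).decode()

    def get_devices(scan_output=None):
        # Pure parsing, testable against a plain string fixture.
        if scan_output is None:
            scan_output = _mdadm_scan()
        return {line.split()[1] for line in scan_output.splitlines()
                if line.startswith('ARRAY ')}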
> +
> + @mock.patch('os.path.exists')
> + def test_get_devices_mdadm_notfound(self, path_exists):
> + path_exists.return_value = False
> + actual = check_mdadm.get_devices()
> + expected = set()
> + self.assertEqual(actual, expected)
> +
> + @mock.patch('os.path.exists')
> + @mock.patch('subprocess.Popen')
> + def test_get_devices_mdadm_exception(self, mdadm_details, path_exists):
> + path_exists.return_value = True
> + mdadm_details.side_effect = Exception
> + actual = check_mdadm.get_devices()
> + expected = set()
> + self.assertEqual(actual, expected)
> +
> + @mock.patch('check_mdadm.get_devices')
> + @mock.patch('subprocess.Popen')
> + @mock.patch('sys.stdout', new_callable=io.StringIO)
> + def test_parse_output_ok(self, mock_print, mdadm_details, devices):
> + class Test_Popen(object):
> + def __init__(cls):
> + test_output = os.path.join(os.getcwd(), 'tests',
> + 'hw-health-samples', 'mdadm.output')
> + cls.stdout = io.FileIO(test_output)
> + cls.wait = lambda: 0
> +
> + devices.return_value = set(['/dev/md0', '/dev/md1', '/dev/md2'])
> + mdadm_details.return_value = Test_Popen()
> + check_mdadm.parse_output()
> + actual = mock_print.getvalue()
> + expected = 'OK: /dev/md0 ok; /dev/md1 ok; /dev/md3 ok; /dev/md2 ok\n'
> + self.assertEqual(actual, expected)
> +
> + @mock.patch('check_mdadm.get_devices')
> + @mock.patch('subprocess.Popen')
> + def test_parse_output_wait_warning(self, mdadm_details, devices):
> + class Test_Popen(object):
> + def __init__(cls):
> + test_output = os.path.join(os.getcwd(), 'tests',
> + 'hw-health-samples', 'mdadm.output')
> + cls.stdout = io.FileIO(test_output)
> + cls.wait = lambda: 2
> +
> + devices.return_value = set(['/dev/md0', '/dev/md1', '/dev/md2'])
> + mdadm_details.return_value = Test_Popen()
> + expected = 'WARNING: mdadm returned exit status 2'
> + with self.assertRaises(nagios_plugin3.WarnError) as context:
> + check_mdadm.parse_output()
> + self.assertTrue(expected in str(context.exception))
> +
> + @mock.patch('check_mdadm.get_devices')
> + def test_parse_output_nodevices(self, devices):
> + devices.return_value = set()
> + expected = 'WARNING: unexpectedly checked no devices'
> + with self.assertRaises(nagios_plugin3.WarnError) as context:
> + check_mdadm.parse_output()
> + self.assertTrue(expected in str(context.exception))
> +
> + @mock.patch('check_mdadm.get_devices')
> + @mock.patch('subprocess.Popen')
> + def test_parse_output_mdadm_warning(self, mdadm_details, devices):
> + devices.return_value = set(['/dev/md0', '/dev/md1', '/dev/md2'])
> + mdadm_details.side_effect = Exception('ugly error')
> + expected = 'WARNING: error executing mdadm: ugly error'
> + with self.assertRaises(nagios_plugin3.WarnError) as context:
> + check_mdadm.parse_output()
> + self.assertTrue(expected in str(context.exception))
> +
> + @mock.patch('check_mdadm.get_devices')
> + @mock.patch('subprocess.Popen')
> + def test_parse_output_degraded(self, mdadm_details, devices):
> + class Test_Popen(object):
> + def __init__(cls):
> + test_output = os.path.join(os.getcwd(), 'tests',
> + 'hw-health-samples',
> + 'mdadm.output.critical')
> + cls.stdout = io.FileIO(test_output)
> + cls.wait = lambda: 0
> +
> + devices.return_value = set(['/dev/md0', '/dev/md1', '/dev/md2'])
> + mdadm_details.return_value = Test_Popen()
> + expected = ('CRITICAL: /dev/md0 ok; /dev/md1 degraded, Active[1],'
> + ' Failed[1], Spare[0], Working[1]; /dev/md3 ok;'
> + ' /dev/md2 ok')
> + with self.assertRaises(nagios_plugin3.CriticalError) as context:
> + check_mdadm.parse_output()
> + self.assertTrue(expected in str(context.exception))
> diff --git a/src/tests/unit/test_check_megacli.py b/src/tests/unit/test_check_megacli.py
> new file mode 100644
> index 0000000..24d5c7f
> --- /dev/null
> +++ b/src/tests/unit/test_check_megacli.py
> @@ -0,0 +1,20 @@
> +import io
> +import os
> +import sys
> +import unittest
> +
> +import mock
'from unittest import mock', not 'import mock'.
> +
> +sys.path.append('files/megaraid')
> +import check_megacli # noqa: E402
> +
> +
> +class TestCheckMegaCLI(unittest.TestCase):
> + @mock.patch('sys.stdout', new_callable=io.StringIO)
> + def test_parse_output(self, mock_print):
> + check_megacli.INPUT_FILE = os.path.join(
> + os.getcwd(), 'tests', 'hw-health-samples', 'megacli.output.1')
> + check_megacli.parse_output()
> + actual = mock_print.getvalue()
> + expected = 'OK: Optimal, ldrives[1], pdrives[4]\n'
> + self.assertEqual(actual, expected)
> diff --git a/src/tests/unit/test_check_sas2ircu.py b/src/tests/unit/test_check_sas2ircu.py
> new file mode 100644
> index 0000000..4e70ff6
> --- /dev/null
> +++ b/src/tests/unit/test_check_sas2ircu.py
> @@ -0,0 +1,21 @@
> +import io
> +import os
> +import sys
> +import unittest
> +
> +import mock
unittest.mock, not 'mock'
> +
> +sys.path.append('files/mpt')
> +import check_sas2ircu # noqa: E402
> +
> +
> +class TestCheckMegaCLI(unittest.TestCase):
> + @mock.patch('sys.stdout', new_callable=io.StringIO)
> + def test_parse_output(self, mock_print):
> + check_sas2ircu.INPUT_FILE = os.path.join(os.getcwd(), 'tests',
> + 'hw-health-samples',
> + 'sas2ircu.huawei.output.1')
> + check_sas2ircu.parse_output()
> + actual = mock_print.getvalue()
> + expected = 'OK: Ready[1:0,1:1,1:2,1:3,1:4,1:5,1:6,1:7]\n'
> + self.assertEqual(actual, expected)
> diff --git a/src/tests/unit/test_tooling.py b/src/tests/unit/test_tooling.py
> index 78a6790..8b93661 100644
> --- a/src/tests/unit/test_tooling.py
> +++ b/src/tests/unit/test_tooling.py
> @@ -1,17 +1,128 @@
> -# import mock
> -import unittest
> +import os # noqa: F401
> import sys
> -# import os
> -# import io
> -# import subprocess
> -# import charmhelpers.core.host
> +import unittest
> +import zipfile
> +
> +import mock
> +import shutil # noqa: F401
>
> sys.path.append('lib/utils')
> -import tooling
> +import tooling # noqa: E402
>
>
> class TestTooling(unittest.TestCase):
> - def test_install_tools(self):
> - real = tooling.install_tools([])
> + def test_install_resource_notools(self):
> + self.assertEqual(tooling.install_resource([], False), False)
> + self.assertEqual(
> + tooling.install_resource([], '/path/to/resource'), False)
> +
> + def test_install_resource_noresource_nomdadm(self):
> + self.assertEqual(tooling.install_resource(['unittest'], False), False)
> +
> + def test_install_resource_noresource_yesmdadm(self):
> + self.assertEqual(tooling.install_resource(['mdadm'], False), True)
> +
> + @mock.patch('glob.glob')
> + @mock.patch('os.chmod')
> + @mock.patch('os.path.exists')
> + @mock.patch('os.path.isfile')
> + @mock.patch('shutil.chown')
> + @mock.patch('shutil.copy2')
> + @mock.patch('zipfile.ZipFile')
This many mocks generally means you need to split things up into smaller chunks that can be tested separately. For now this is ok, as the code being tested itself isn't bad. But even just writing and using a helper like 'extract(zipfile_path, dest_dir)' would mean you could replace this test with two simpler and more readable tests.
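e.g. a rough sketch of the helper (not the charm's actual code):

    import zipfile

    def extract(zipfile_path, dest_dir):
        # Isolated here so install_resource() tests can work from a plain
        # directory of files instead of mocking zipfile.
        with zipfile.ZipFile(zipfile_path, 'r') as zf:
            zf.extractall(dest_dir)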
> + def test_install_resource_ok(self, zfile, shutil_copy2, shutil_chown,
> + path_isfile, path_exists, os_chmod,
> + glob_glob):
> + class _WrapCls(object):
> + def __init__(cls):
> + cls.extractall = lambda x: None
> + cls.close = lambda: None
> +
> + class Test_resource_zipfile(object):
> + def __enter__(cls):
> + return _WrapCls()
> +
> + def __exit__(cls, type, value, traceback):
> + pass
> +
> + path_exists.return_value = True
> + zfile.return_value = Test_resource_zipfile()
> + glob_glob.return_value = ['/tmp/test/megacli']
> + os_chmod.return_value = None
> + path_isfile.return_value = True
> + shutil_chown.return_value = None
> + shutil_copy2.return_value = None
> + actual = tooling.install_resource(['megacli'], '/tmp/test.zip')
> + expected = True
> + self.assertEqual(actual, expected)
> +
> + @mock.patch('os.path.exists')
> + @mock.patch('zipfile.ZipFile')
> + def test_install_resource_zipfile_error(self, zfile, path_exists):
> + path_exists.return_value = True
> + zfile.side_effect = zipfile.BadZipFile('ugly error')
> + actual = tooling.install_resource(['megacli'], '/tmp/test.zip')
> expected = False
> - self.assertEqual(real, expected)
> + self.assertEqual(actual, expected)
> +
> + @mock.patch('glob.glob')
> + @mock.patch('os.path.exists')
> + @mock.patch('os.path.isfile')
> + @mock.patch('zipfile.ZipFile')
> + def test_install_resource_nomatchedresource(self, zfile, path_isfile,
> + path_exists, glob_glob):
> + class _WrapCls(object):
> + def __init__(cls):
> + cls.extractall = lambda x: None
> + cls.close = lambda: None
> +
> + class Test_resource_zipfile(object):
> + def __enter__(cls):
> + return _WrapCls()
> +
> + def __exit__(cls, type, value, traceback):
> + pass
> +
> + path_exists.return_value = True
> + zfile.return_value = Test_resource_zipfile()
> + glob_glob.return_value = ['/tmp/test/megacli']
> + path_isfile.return_value = True
> + self.assertEqual(
> + tooling.install_resource(['unittest'], '/tmp/test.zip'), False)
> +
> + def test_install_tools_notools(self):
> + self.assertEqual(tooling.install_tools([]), False)
> +
> + def test_install_tools_unsupported_tools(self):
> + actual = tooling.install_tools(['unittest1', 'unittest2'])
> + expected = False
> + self.assertEqual(actual, expected)
> +
> + def test_install_tools_supported_tools(self):
> + self.assertEqual(tooling.install_tools(['megacli', 'sas2ircu']), True)
> +
> + def test_install_tools_mdadm(self):
> + self.assertEqual(tooling.install_tools(['mdadm']), True)
> +
> + @mock.patch('os.environ')
> + @mock.patch('os.path.exists')
> + @mock.patch('os.path.join')
> + def test_get_filepath(self, path_join, path_exists, environ):
> + expected = 'unittest1'
> + environ.return_value = {'CHARM_DIR': 'xxx'}
> + path_join.return_value = expected
> + path_exists.return_value = True
> + self.assertEqual(tooling._get_filepath('asdfasdf'), expected)
> +
> + def test_configure_tools_notools(self):
> + self.assertEqual(tooling.configure_tools([]), False)
> +
> + def test_configure_tools_unsupported_tools(self):
> + self.assertEqual(
> + tooling.configure_tools(['unittest1', 'unittest2']), False)
> +
> + @mock.patch('shutil.copy2')
> + @mock.patch('tooling._get_filepath')
> + def test_configure_tools_mdadm(self, get_filepath, copy_file):
> + get_filepath.return_value = 'unittest'
> + copy_file.return_value = None
> + self.assertEqual(tooling.configure_tools(['mdadm']), True)
--
https://code.launchpad.net/~aluria/hw-health-charm/+git/hw-health-charm/+merge/361785
Your team Nagios Charm developers is subscribed to branch hw-health-charm:master.