Re: [Merge] ~uralt/autopkgtest-cloud:allowed-user-cache into autopkgtest-cloud:master

I've commented with a question. I'm not 100% sure about the behavior, but I have the feeling there is something unsound here. What do you think?

Diff comments:

> diff --git a/charms/focal/autopkgtest-web/webcontrol/helpers/cache.py b/charms/focal/autopkgtest-web/webcontrol/helpers/cache.py
> new file mode 100644
> index 0000000..ba2970e
> --- /dev/null
> +++ b/charms/focal/autopkgtest-web/webcontrol/helpers/cache.py
> @@ -0,0 +1,63 @@
> +import fcntl
> +import json
> +import os
> +from pathlib import Path
> +
> +
> +class JSONCache:
> +    def __init__(self, cache_path):
> +        self.path = Path(cache_path)
> +        if not self.path.exists():
> +            with open(self.path, "w") as f:
> +                json.dump({}, f)
> +
> +    def _write(self, data):
> +        temp_path = self.path.with_suffix(".tmp")
> +        with open(temp_path, "w") as f:
> +            json.dump(data, f, default=str)
> +        os.replace(temp_path, self.path)

question: Why do you need that tmp file, since you already implemented the locks?

I might be mistaken, but I have the impression that the following could happen:
| Process1 | Process2    | Comment                                                                                       |
|----------|-------------|-----------------------------------------------------------------------------------------------|
| set()    |             | takes the lock on 'file'                                                                        |
|          | set()       | blocks waiting for the lock on 'file'                                                           |
| _write() |             | writes 'file.tmp'; after the `os.replace`, the name 'file' points to the new content            |
| unlock   |             | this unlocks the old 'file', which P2 still holds open as a file descriptor                     |
|          | lock taken  | the lock is actually granted on the old version of 'file' that P2 already had open              |
|          | json.load() | reads the old version of 'file', no longer reachable from the filesystem but still open in P2   |
|          | _write()    | writes that old version with its new key added, losing P1's changes when `os.replace` runs      |
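
If it helps to see it locally, here is a minimal sketch (hypothetical file names, not part of the branch) showing that `flock` attaches to the inode behind the open descriptor rather than to the path, so `os.replace` moves the name out from under the lock:

```python
import fcntl
import os

# set up a "cache" file and a replacement, the way _write() would
with open("cache.json", "w") as f:
    f.write('{"writer": "old"}')
with open("cache.json.tmp", "w") as f:
    f.write('{"writer": "new"}')

fd = open("cache.json", "r+")    # P2 already holds a descriptor on the old inode
fcntl.flock(fd, fcntl.LOCK_EX)   # the lock attaches to that inode, not to the name

os.replace("cache.json.tmp", "cache.json")  # P1's rename: the name now points elsewhere

# P2's locked descriptor no longer matches what 'cache.json' resolves to
print(os.fstat(fd.fileno()).st_ino == os.stat("cache.json").st_ino)  # prints False
fd.close()
```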

The same lost update could happen even without the lock, so I would suggest one of two solutions:
1. Don't bother with the locks, and accept that a cache entry may occasionally be dropped on a race condition.
2. Keep the locks, but remove the rename, so that every process always works on the same file, which is never rotated and never leaves a process holding an obsolete file descriptor (a sketch of this follows the list).
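
For option 2, a minimal sketch (the class name and shape are just for illustration, not a drop-in patch): keep the exclusive flock and rewrite the locked file in place with seek/truncate, so the inode never changes and a waiting lock always lands on the data the previous writer just produced.

```python
import fcntl
import json
import os


class JSONCacheInPlace:
    """Sketch of option 2: keep the flock, drop the tmp file and the rename."""

    def __init__(self, cache_path):
        self.path = cache_path
        # create the file once, as the current __init__ does
        if not os.path.exists(self.path):
            with open(self.path, "w") as f:
                json.dump({}, f)

    def set(self, key, value):
        with open(self.path, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            try:
                data = json.load(f)
                data[key] = value
                # rewrite the same inode in place instead of renaming a tmp file over it
                f.seek(0)
                f.truncate()
                json.dump(data, f, default=str)
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)
```

get() and delete() would follow the same pattern, and clear() becomes a locked seek/truncate/dump of `{}` on the same descriptor.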

> +
> +    def _lock(self, file, mode):
> +        if mode == "r":
> +            fcntl.flock(file, fcntl.LOCK_SH)
> +        if mode == "w":
> +            fcntl.flock(file, fcntl.LOCK_EX)
> +
> +    def _unlock(self, file):
> +        fcntl.flock(file, fcntl.LOCK_UN)
> +
> +    def get(self, key):
> +        with open(self.path, "r") as f:
> +            self._lock(f, "r")
> +            try:
> +                data = json.load(f)
> +                return data.get(key, None)
> +            finally:
> +                self._unlock(f)
> +
> +    def set(self, key, value):
> +        with open(self.path, "r+") as f:
> +            self._lock(f, "w")
> +            try:
> +                data = json.load(f)
> +                data[key] = value
> +                self._write(data)
> +            finally:
> +                self._unlock(f)
> +
> +    def delete(self, key):
> +        with open(self.path, "r+") as f:
> +            self._lock(f, "w")
> +            try:
> +                data = json.load(f)
> +                if key in data:
> +                    del data[key]
> +                    self._write(data)
> +            finally:
> +                self._unlock(f)
> +
> +    def clear(self):
> +        if os.path.exists(self.path):

Same kind of issue here as above. It would be best to lock the file and write `{}` to it, without bothering with the removal.
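
Something along these lines, as a sketch reusing the existing _lock/_unlock helpers (and assuming __init__ keeps creating the file up front so "r+" always works), would avoid the remove-and-recreate window entirely:

```python
    def clear(self):
        # rewrite the existing file in place rather than removing and recreating it
        with open(self.path, "r+") as f:
            self._lock(f, "w")
            try:
                f.seek(0)
                f.truncate()
                json.dump({}, f)
            finally:
                self._unlock(f)
```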

> +            os.remove(self.path)
> +        with open(self.path, "w") as f:
> +            json.dump({}, f)


-- 
https://code.launchpad.net/~uralt/autopkgtest-cloud/+git/autopkgtest-cloud/+merge/470495
Your team Canonical's Ubuntu QA is requested to review the proposed merge of ~uralt/autopkgtest-cloud:allowed-user-cache into autopkgtest-cloud:master.