Re: [Merge] ~andersson123/autopkgtest-cloud:charm-fixes into autopkgtest-cloud:master
Review: Needs Fixing
Don't bother testing, it's probably broken :-)
Diff comments:
> diff --git a/charms/focal/autopkgtest-web/webcontrol/db-backup b/charms/focal/autopkgtest-web/webcontrol/db-backup
> index b03100b..e060556 100755
> --- a/charms/focal/autopkgtest-web/webcontrol/db-backup
> +++ b/charms/focal/autopkgtest-web/webcontrol/db-backup
> @@ -47,40 +52,10 @@ def db_connect() -> sqlite3.Connection:
>
>
> def backup_db(db_con: sqlite3.Connection):
> - db_backup_con = sqlite3.connect(DB_BACKUP_PATH)
> - with db_backup_con:
> - db_con.backup(db_backup_con, pages=1)
> - db_backup_con.close()
> -
> -
> -def compress_db():
> - """
> - use gzip to compress database
> - """
> - with open(DB_BACKUP_PATH, "rb") as f_in, gzip.open(
> - "%s.gz" % DB_BACKUP_PATH, "wb"
> - ) as f_out:
> - shutil.copyfileobj(f_in, f_out)
> -
> -
> -def init_swift_con() -> swiftclient.Connection:
> - """
> - Establish connection to swift storage
> - """
> - swift_creds = {
> - "authurl": os.environ["OS_AUTH_URL"],
> - "user": os.environ["OS_USERNAME"],
> - "key": os.environ["OS_PASSWORD"],
> - "os_options": {
> - "region_name": os.environ["OS_REGION_NAME"],
> - "project_domain_name": os.environ["OS_PROJECT_DOMAIN_NAME"],
> - "project_name": os.environ["OS_PROJECT_NAME"],
> - "user_domain_name": os.environ["OS_USER_DOMAIN_NAME"],
> - },
> - "auth_version": 3,
> - }
> - swift_conn = swiftclient.Connection(**swift_creds)
> - return swift_conn
> + with open(DB_BACKUP_PATH, "wb") as bkp_file:
> + for line in db_con.iterdump():
> + compressed_line = zstd_compress(data=f"{line}\n".encode())
Hmm, I doubt this is very efficient. Compressing each line separately will lose a lot of the compression performance, since the algorithm can't find patterns across big chunks of data, only within really tiny ones.
Moreover, the decoding won't work (or not as expected), because you will try to decode the whole file at once while it was compressed as many small blocks, so you would need to split it back into those blocks and decompress each one separately.
In addition, that spawns **a lot** of zstd processes, and spawning a process isn't cheap.
What I meant was something like this:
```python
bkp_file.write(zstd_compress(data="\n".join(db_con.iterdump()).encode()))
```
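For illustration, here is a minimal sketch of how the whole-dump approach pairs with a matching restore. It uses the python-zstandard package rather than the branch's zstd_compress() helper (whose interface isn't shown in this hunk), and DB_PATH / the backup filename are placeholders:
```python
# Sketch only: python-zstandard instead of the branch's zstd_compress() helper.
import sqlite3
import zstandard

DB_BACKUP_PATH = "autopkgtest.db.zst"  # placeholder path for the sketch


def backup_db(db_con: sqlite3.Connection):
    # Compress the whole SQL dump as one block, so a restore can
    # decompress the file in a single pass.
    dump = "\n".join(db_con.iterdump()).encode()
    with open(DB_BACKUP_PATH, "wb") as bkp_file:
        bkp_file.write(zstandard.ZstdCompressor().compress(dump))


def restore_db(db_path: str) -> sqlite3.Connection:
    # The matching restore: decompress once, then replay the dump.
    with open(DB_BACKUP_PATH, "rb") as bkp_file:
        dump = zstandard.ZstdDecompressor().decompress(bkp_file.read())
    con = sqlite3.connect(db_path)
    con.executescript(dump.decode())
    return con
```
If holding the whole dump in memory ever becomes a concern, ZstdCompressor().stream_writer(bkp_file) could write it incrementally instead, but for this database size the one-shot version above should be fine.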
> + bkp_file.write(compressed_line)
>
>
> def create_container_if_it_doesnt_exist(swift_conn: swiftclient.Connection):
--
https://code.launchpad.net/~andersson123/autopkgtest-cloud/+git/autopkgtest-cloud/+merge/472678
Your team Canonical's Ubuntu QA is requested to review the proposed merge of ~andersson123/autopkgtest-cloud:charm-fixes into autopkgtest-cloud:master.