Re: Docker/Singularity images for production (and possibly development)
Hi Bruno,
this is very interesting! I had never heard about Singularity before. Thanks
for the information.
From my point of view, it is not a problem at all to build docker images
with yadedaily inside, if that is helpful for you or anybody else. I have some
concerns about large dev images, but I am open to that as well.
I would propose to organize a short zoom/bbb/jitsi video meeting and to
discuss it by voice. We already do this with Janek and Klaus for some other
(paper) stuff and it works perfectly, and it is simply faster than writing
long emails.
Best regards
Anton
On Sat, 6 March 2021 at 19:18, Bruno Chareyre <
bruno.chareyre@xxxxxxxxxxxxxxx> wrote:
>
> On 06/03/2021 17:06, Janek Kozicki (yade) wrote:
>
> I am not exactly sure what you want to discuss,
>
> I don't know either LOL. That's more of an announcement in advance so someone
> can raise issues, request features, etc.
>
> Do you want to create some sort of packages with yade installed inside?
>
> You can call it a package, but it's more like a docker image in a
> different format (from a very macroscopic point of view). The main thing is
> that it is allowed on HPC clusters (where compiling yade can be a big pain)
> and it seems to be becoming more popular.
>
> Hence people will look for a yade-docker target (one with yade inside) in
> order to build their singularity images, and it is fairly easy to offer
> some.
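>
> For example, on an HPC node with Singularity installed, turning one of these
> docker images into a Singularity image should boil down to something like the
> following (the image name and tag below are only placeholders, check the
> registry for the real ones):
>
>     singularity pull yade.sif docker://registry.gitlab.com/bchareyre/docker-yade/<image>:<tag>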
>
> Mind that before using Singularity I had never been able to get all
> checkTests to pass on our HPC cluster. I was able to run what I needed most
> of the time, but never to pass all the tests; there was always an issue with
> something.
>
> I am not sure if yade-dev registry will be able to hold big
> docker images.
>
> Good point, though the images have no reason to be much bigger than our
> current docker images. The problem would be more images, not larger images.
> I will check the registry limit. If it is a problem I can keep pushing to the
> gitlab.com/bchareyre registry, not an issue. See:
> https://gitlab.com/bchareyre/docker-yade/container_registry/1672064
>
> As you can see, the images range from 1.1 GB to 1.7 GB, not a big increment.
>
> We may run out of space if we don't start paying
> gitlab for hosting.
>
> Not an issue. What I described is what I'm already doing under my account
> (without paying). If migrating one thing from gitlab/bchareyre to
> gitlab/yade-dev is the cause of running out of space, then I'll just not
> migrate. It is not a problem to provide the images to the users under my
> registry.
>
> Perhaps these singularity_docker packages should also be on yade-dem.org ?
>
> Excessively complex. We would have to set up a registry on our local server
> while gitlab does that very well.
>
>
> The interesting stuff for me would be if we could use these HPC
> singularity servers in our gitlab continuous integration pipeline :)
>
> If you mean accessing more hardware resources, no, it will not work in
> Grenoble.
> The HPC clusters are dedicated to scientific computing. They have special
> job submission systems and will absolutely not integrate into a CI framework.
>
> The yade-runner-01 quickly runs out of space whenever I try to enable
> it ;-)
>
> Yeah, but these are completely different types of resources, even if they
> are provided by the same people overall.
> Maybe it is a good time to check again how I could get gitlab runners for
> yade. They have improved a number of things and offered new services in
> recent years. There might be docker farms more easily accessible now than
> when Rémi configured yade-runner-01. Rémi was basically ahead of things.
>
>
> Maybe it is only a matter of a single line in
> the file /etc/gitlab-runner/config.toml , changing:
>
> executor = "docker"
>
> to
>
> executor = "singularity"
>
> I think this is quite likely.
>
>
> Very likely, but there is no point in doing that, I think.
> Why would you generate a singularity image from a docker image to achieve
> something the docker image does just as well?
> In the context of using gitlabCI/docker we have root privileges, hence no
> issue with docker.
>
>
> We already have incremental recompilation in our gitlab CI pipeline.
> ccache is used for that. The trick was to mount, inside docker
> (for you: inside singularity), a local directory from the host
> filesystem where the ccache files are stored.
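>
> For instance, something along these lines (the image name and working
> directory below are only placeholders):
>
>     # reuse compiled objects across container runs by mounting the host ccache
>     docker run --rm -w /build \
>         -v "$HOME/.ccache:/root/.ccache" -e CCACHE_DIR=/root/.ccache \
>         <yade-build-image> make -j4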
>
> The fact that the gitlab compilation is incremental doesn't make my own local
> compilation incremental.
> However, if I can download a snapshot of the gitlab pipeline as a virtual
> machine, I can compile incrementally, locally, even though the initial
> compilation wasn't local.
>
> Note that the docker images are re-downloaded from gitlab only when
> they have been rebuilt on https://gitlab.com/yade-dev/docker-yade/-/pipelines
> And this download is pretty slow. Fortunately it happens only every
> few weeks. Otherwise docker uses the cached linux distro image.
>
> I see where I lost you. Singularity images (at least in my project) are
> not in any way related to CI.
> They are related primarily to how actual users get actual results
> (production),
> and optionally to how devs actually compile locally.
>
> Well, download once (wait for the download to finish) then start working.
> Not much different from waiting for local compilation (for me that's
> inside chroot, sometimes inside docker) and then starting to work :)
>
> With my university connection speed, downloading a docker image and
> recompiling just one *.cpp is way faster than downloading trunk and
> compiling everything from scratch. Like incredibly faster.
> I'm not speaking of what happens on gitlab, I'm speaking of what happens
> on my own computer.
>
>
>
> Pushing to the registry is part of the pipeline on docker-yade:
> https://gitlab.com/yade-dev/docker-yade/-/blob/master/.gitlab-ci.yml#L17
>
> Yes, that's my fault
> <https://gitlab.com/yade-dev/docker-yade/-/commit/c0674c4aacdd3207bb156d2f385704ac5bf5d763>.
> :)
>
> But I was speaking of pushing from the trunk pipeline. Anyway, the
> incremental compilation part is a secondary point.
>
> The main point is that we can easily provide some docker/singularity images
> that people could use directly to run yade (on HPC especially). Currently we
> don't. Depending on storage limits they can be under yade-dev or bchareyre,
> I don't mind. Either way we can point to them in the doc.
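>
> For a user on a cluster, running a simulation from such an image would then
> look roughly like this (the image file name comes from the pull above, and
> the script name is just an example):
>
>     singularity exec yade.sif yadedaily myScript.py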
>
> Cheers
>
> Bruno
>