yade-mpi team mailing list archive
Re: Yade-MPI sending/receiving serialized bodies
Deepak Kn <deepak.kn1990@xxxxxxxxx>
Janek Kozicki <janek_listy@xxxxx>
Sun, 28 Apr 2019 21:05:45 +0200
You could add a new flag ``mpiRequired`` to the attribute flags and
by default assume that an attribute does not need to be sent over MPI.
Or maybe this flag is simply the ``noSave``, ``readonly`` and
``hidden`` flags taken together? That could work: why send over MPI
something that does not need to be saved? Why send something
read-only? Why send something hidden from the GUI inspector?
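To make the idea concrete, here is a minimal sketch, independent of Yade's
actual attribute-flag macros: each attribute carries a bitmask, and the MPI
path either checks a dedicated ``mpiRequired`` bit or simply skips anything
marked noSave, readonly or hidden. The AttrFlags enum and the two
shouldSendOverMPI_* helpers are illustrative names only, not identifiers from
the Yade source tree.

#include <cstdint>

enum AttrFlags : std::uint32_t {
    noSave      = 1 << 0,  // not written to saved simulation files
    readonly    = 1 << 1,  // cannot be modified from Python
    hidden      = 1 << 2,  // not shown in the GUI inspector
    mpiRequired = 1 << 3   // hypothetical: explicitly needed by the MPI exchange
};

// Variant 1: an explicit opt-in flag per attribute.
inline bool shouldSendOverMPI_explicit(std::uint32_t flags) {
    return (flags & mpiRequired) != 0;
}

// Variant 2: derive it from the existing flags, as suggested above --
// anything that is not saved, is read-only, or is hidden from the GUI
// is assumed not to be needed on the remote subdomain either.
inline bool shouldSendOverMPI_derived(std::uint32_t flags) {
    return (flags & (noSave | readonly | hidden)) == 0;
}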
Deepak Kn said (on Sun, 28 Apr 2019 19:57:54 +0200):
> Hello All,
> I have finished implementing the functions to serialize and deserialize the
> bodies and the necessary communications to exchange the serialized bodies.
> The serialized (binary) messages are several times larger than the usual
> state messages in mpy/mpf.py (vel, pos, ori, id, bounds, etc.), and as a
> result the simulations are far slower! Here's how it scales: for
> exchanging 220 bodies, the serialized messages were around 215456 elements
> (chars), whereas in the previous version the message sizes were 4180
> (doubles). (testMPI2D, no merge, 3 procs, 20000 bodies.)
> Most of the functions are implemented in Subdomain.cpp, Subdomain.hpp and
> MPIBodyContainer.hpp (branch: mpi_dpk on GitLab). I followed the same
> logic as the previous version (mpy/mpf): non-blocking sends and blocking
> recvs (with probes) are used for the communications between the workers,
> and blocking communication for sending the forces to the master (the same
> as in mpy/mpf).
> I think this can be improved: we could serialize just the states and bounds
> and send these instead of the whole bodies? What do you think? (All the
> functions to serialize, deserialize and communicate are ready; we just need
> a container to hold only the required info.)
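For reference, a minimal sketch of the pattern described in the quoted
message: pack only the per-body state and bounds into a small container,
serialize it with Boost, and exchange the buffer with a non-blocking send plus
a probed blocking receive. This is not the actual MPIBodyContainer/Subdomain
code from the mpi_dpk branch; all identifiers here (MinimalBodyState,
exchangeWithNeighbour, TAG_BODIES) are hypothetical.

#include <mpi.h>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/serialization/vector.hpp>
#include <boost/serialization/array.hpp>
#include <array>
#include <sstream>
#include <string>
#include <vector>

struct MinimalBodyState {                        // only what the neighbour needs
    int id = -1;
    std::array<double,3> pos{}, vel{};           // position, velocity
    std::array<double,4> ori{};                  // orientation quaternion
    std::array<double,3> boundMin{}, boundMax{}; // axis-aligned bounds
    template<class Archive> void serialize(Archive& ar, const unsigned int) {
        ar & id & pos & vel & ori & boundMin & boundMax;
    }
};

static const int TAG_BODIES = 42;                // arbitrary message tag

// Serialize our states, Isend the buffer, then probe for the neighbour's
// message so the receive buffer can be sized before the blocking Recv.
std::vector<MinimalBodyState> exchangeWithNeighbour(
        const std::vector<MinimalBodyState>& toSend, int neighbourRank) {
    std::ostringstream oss;
    { boost::archive::binary_oarchive oa(oss); oa << toSend; }
    std::string outBuf = oss.str();

    MPI_Request req;
    MPI_Isend(outBuf.data(), static_cast<int>(outBuf.size()), MPI_CHAR,
              neighbourRank, TAG_BODIES, MPI_COMM_WORLD, &req);

    MPI_Status status; int incomingSize = 0;
    MPI_Probe(neighbourRank, TAG_BODIES, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_CHAR, &incomingSize);
    std::string inBuf(incomingSize, '\0');
    MPI_Recv(&inBuf[0], incomingSize, MPI_CHAR,
             neighbourRank, TAG_BODIES, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    std::istringstream iss(inBuf);
    std::vector<MinimalBodyState> received;
    { boost::archive::binary_iarchive ia(iss); ia >> received; }
    return received;
}

A container like this keeps the Boost serialization machinery that is already
in place while sending only the handful of doubles per body that mpy/mpf used
to send, which is where the size difference reported above comes from.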