
yade-mpi team mailing list archive

Re: Yade-MPI sending/receiving serialized bodies

 

Deepak, could you please provide an example script to test this feature?

On Mon, 29 Apr 2019 at 13:39, François <francois.kneib@xxxxxxxxx> wrote:

> Hi, and nice work Deepak.
> I would add that we can't directly compare the number of elements
> transmitted, but rather the total volume, since a char is 8 bits and a
> double is 64 bits by default on modern CPUs.
> As a result, we should compare 4180*8 = 33440 bytes with 215456 bytes,
> which still represents a factor of about 6.
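> For example, the check is just (plain Python, assuming the usual 1-byte
> chars and 8-byte doubles mentioned above):
>
>     old_msg_bytes = 4180 * 8    # 4180 doubles -> 33440 bytes
>     new_msg_bytes = 215456      # 215456 chars -> 215456 bytes
>     print(new_msg_bytes / old_msg_bytes)  # ~6.4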
>
> On Mon, 29 Apr 2019 at 13:16, Bruno Chareyre <
> bruno.chareyre@xxxxxxxxxxxxxxx> wrote:
>
>> Hi Deepak,
>> Clearly there is no reason to compare the speed of sending pos+vel with
>> the speed of sending the entire body, since these communications are not
>> used at the same stages.
>> Sending serialized bodies is an alternative to sending the whole scene,
>> and I guess (?) that sending 5% of the bodies in a scene will take less
>> time than sending the whole scene.
>> The communication that occurs at each iteration will always be pos+vel
>> only.
>>
>> @Janek, when we break a scene into multiple pieces we want to send
>> everything, because we don't want to worry about what needs to be sent in
>> each particular case, and if an attribute is there we can safely assume
>> there is a reason for it. Once the scene is already broken into pieces we
>> update (communicate) only positions and velocities, because anything more
>> is too expensive, as Deepak found, but it is not foolproof. For instance,
>> differential growth of particles in each subdomain would break this
>> scheme, since the radius is not communicated at every iteration.
>>
>> Bruno
>>
>>
>> On Sun, 28 Apr 2019 at 19:58, Deepak Kn <deepak.kn1990@xxxxxxxxx> wrote:
>>
>>> Hello All,
>>>
>>> I have finished implementing the functions to serialize and deserialize
>>> the bodies, and the communications needed to exchange the serialized
>>> bodies. The serialized (binary) messages are several times larger than
>>> the usual state messages in mpy/mpf.py (vel, pos, ori, id, bounds, etc.),
>>> and as a result the simulations are far slower! Here is how it scales:
>>> for exchanging 220 bodies, the serialized messages were around 215456
>>> elements (chars), whereas in the previous version the message sizes were
>>> 4180 (doubles). (testMPI2D, no merge, on 3 procs, 20000 bodies.)
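>>> The same effect can be seen on a toy scale (a rough Python sketch, not
>>> the actual C++ serialization; the FakeBody class and its fields are made
>>> up): serializing a whole object carries type and attribute names along
>>> with the numbers, while packing only the state doubles does not.
>>>
>>>     import pickle, struct
>>>
>>>     class FakeBody:  # made-up stand-in for a full body object
>>>         def __init__(self):
>>>             self.pos = (0.0, 0.0, 0.0)
>>>             self.vel = (0.0, 0.0, 0.0)
>>>             self.ori = (1.0, 0.0, 0.0, 0.0)
>>>             self.radius = 0.01
>>>             self.material = "default"
>>>
>>>     body_bytes = len(pickle.dumps(FakeBody()))            # whole object
>>>     state_bytes = len(struct.pack('10d', *([0.0] * 10)))  # 10 doubles
>>>     # the full object costs noticeably more bytes than the packed state
>>>     print(body_bytes, state_bytes)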
>>>
>>> Most of the functions are implemented in Subdomain.cpp, Subdomain.hpp and
>>> MPIBodyContainer.hpp (branch mpi_dpk on gitlab). I followed the same
>>> logic as the previous version (mpy/mpf): non-blocking sends and blocking
>>> recvs (with probes) are used for the communications between the workers,
>>> and blocking communication is used for sending the forces to the master
>>> (the same as in mpy/mpf).
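>>> The pattern looks roughly like this in mpi4py (a minimal sketch of the
>>> idea, not the Subdomain.cpp code; the tag and payload are made up):
>>>
>>>     from mpi4py import MPI
>>>
>>>     comm = MPI.COMM_WORLD
>>>     rank = comm.Get_rank()
>>>     TAG_BODIES = 7  # arbitrary tag for this sketch
>>>
>>>     if rank == 1:
>>>         # worker: non-blocking send of a pickled payload to worker 2
>>>         bodies = ["serialized bodies would go here"]
>>>         req = comm.isend(bodies, dest=2, tag=TAG_BODIES)
>>>         req.wait()
>>>     elif rank == 2:
>>>         # worker: blocking probe first, so the incoming message size is
>>>         # known, then a blocking receive
>>>         status = MPI.Status()
>>>         comm.Probe(source=1, tag=TAG_BODIES, status=status)
>>>         nbytes = status.Get_count(MPI.BYTE)
>>>         payload = comm.recv(source=1, tag=TAG_BODIES)
>>>
>>>     # run with e.g.: mpiexec -n 3 python thisScript.py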
>>>
>>> I think this can be improved: we could serialize only the states and
>>> bounds and send those instead of the whole bodies. What do you think?
>>> (All the functions to serialize, deserialize and communicate are ready;
>>> we just need a container holding only the required info.)
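>>> For instance, such a container could look like this (a rough Python
>>> sketch; BodyStateMsg and its fields are illustrative names, not an
>>> existing Yade class):
>>>
>>>     from dataclasses import dataclass
>>>     from typing import Tuple
>>>
>>>     @dataclass
>>>     class BodyStateMsg:
>>>         # only what the receiving subdomain needs at every exchange
>>>         id: int
>>>         pos: Tuple[float, float, float]
>>>         vel: Tuple[float, float, float]
>>>         ori: Tuple[float, float, float, float]
>>>         bound_min: Tuple[float, float, float]
>>>         bound_max: Tuple[float, float, float]
>>>
>>>     # a list of BodyStateMsg can then be pickled and exchanged with the
>>>     # same isend/probe/recv pattern as above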
>>
>>
>> --
>> --
>> _______________
>> Bruno Chareyre
>> Associate Professor
>> ENSE³ - Grenoble INP
>> Lab. 3SR
>> BP 53
>> 38041 Grenoble cedex 9
>> Tél : +33 4 56 52 86 21
>> ________________
>>
>> Email too brief?
>> Here's why: email charter
>> <https://marcuselliott.co.uk/wp-content/uploads/2017/04/emailCharter.jpg>
>>
>
