yade-mpi team mailing list archive

Re: deadlock fixed (?)

Great work Bruno, it looks like this one wasn't easy to debug. Some answers
below.

On Sat, 1 Jun 2019 at 15:31, Bruno Chareyre <bruno.chareyre@xxxxxxxxxxxxxxx>
wrote:

> Hi,
>
> On Fri, 31 May 2019 at 20:22, Deepak Kn <deepak.kn1990@xxxxxxxxx> wrote:
>
>> I think there is one more fix to be done (from commit 7dd44a4a in mpi):
>> the bodies are sent using the non-blocking MPI_Isend, and this has to be
>> completed with MPI_Wait. As of now the MPI_Waits are not called, and
>> there is a minor memory issue to be fixed, which François is working on.
>>
> Yes, I'll do it soon.
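
As a side note, here is a minimal sketch of that pattern with mpi4py (not
the actual Subdomain.cpp code; the payloads are placeholders). The point is
just that every nonblocking send returns a request that must eventually be
completed:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    others = [r for r in range(comm.size) if r != comm.rank]

    # Post one nonblocking send per peer and keep the request objects.
    pending = [comm.isend(["dummy body"], dest=r, tag=7) for r in others]

    # Matching receives can progress while the sends are in flight.
    received = [comm.recv(source=r, tag=7) for r in others]

    # The missing step: complete every isend, otherwise the requests
    # (and their buffers) are never released.
    MPI.Request.waitall(pending)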

> There are a few additional changes which I think are useful even if no bug
> has been identified yet (namely: do not try to insert interaction b1-b2
> until both b1 and b2 are inserted), and some simplifications in the
> Subdomain.cpp implementations. Let me know what you think (better check my
> last version, since some changes have been reverted after finding the main
> problem).
>
Concerning your comments/changes in Subdomain.cpp, it looks like two of the
concerned functions are neither used nor exposed to Python (setIDstoSubdomain
and setBodyIntrsMerge). Shall we remove them?

> 1/ @Francois, isn't there an obvious deadlock at [2], since if there is
> nothing to send we don't even send an empty list?
>
No, I don't think so: we have two communications here, and the Python one
does send empty lists when there is nothing to exchange.
1- First "for" loop: every rank>0 isends, then recvs, the bodiesToImport
list into the requestedIds list (even if len==0).
2- Only if len>0 (requestedIds on the sender side, second "for" loop;
requestedSomethingFrom on the receiver side, third "for" loop) are the
bodies actually sent/received with sendBodies/receiveBodies.
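
To make the handshake concrete, here is a rough mpi4py sketch of that
two-phase exchange (not the actual code; bodiesToImport, requestedIds and
the sendBodies/receiveBodies step are simplified placeholders):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.rank, comm.size

    if rank > 0:
        workers = [r for r in range(1, size) if r != rank]
        # Phase 1: always exchange the request lists, even empty ones,
        # so every posted recv is matched and nothing can deadlock.
        bodiesToImport = {w: [] for w in workers}  # fill with needed ids
        reqs = [comm.isend(ids, dest=w, tag=1)
                for w, ids in bodiesToImport.items()]
        requestedIds = {w: comm.recv(source=w, tag=1) for w in workers}
        MPI.Request.waitall(reqs)

        # Phase 2: bodies move only where len > 0, which is safe because
        # both sides derived the same information in phase 1.
        for w, ids in requestedIds.items():
            if len(ids) > 0:
                pass  # sendBodies(w, ids) in the real code
        for w, ids in bodiesToImport.items():
            if len(ids) > 0:
                pass  # receiveBodies(w) in the real code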

> 3/ checkcollider is the most expensive communication according to the
> script output, is it real or artificial? If it's real we can easily
> combine it with another comm to remove that barrier.
>
That's an interesting point, as it makes me realize that calling allreduce
in checkcollider prevents workers from computing DEM time-steps
asynchronously (it's a barrier, as you said). The communication itself is
not time-consuming; I presume that most of the time you are actually waiting
for the other workers to finish their steps before the allreduce (calling
comm.barrier() at the top of checkcollider should confirm that, see the
sketch below). But the fact is that each worker has to know, at each
time-step, whether it should do a split/merge or updateMirrorIntersections
operation, don't you think?
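
Something like this hypothetical probe (mpi4py; local_flag stands for
whatever each worker contributes to the allreduce, and MAX is just an
example op) would separate the load-imbalance wait from the cost of the
collective itself:

    import time
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    local_flag = 0  # e.g. 1 if this worker wants a collision-detection pass

    t0 = time.time()
    comm.barrier()  # time spent here = waiting for the slower workers
    t1 = time.time()
    flag = comm.allreduce(local_flag, op=MPI.MAX)  # the communication itself
    t2 = time.time()
    print("rank %d: wait %.4fs, allreduce %.4fs"
          % (comm.rank, t1 - t0, t2 - t1))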

> p.s. François, if you reply to the yade-mpi list please make sure you are
> not in quarantine this time :)
>
Well, we will see if it's fixed :-P
