
yade-mpi team mailing list archive

Re: deadlock fixed (?)

 

>
> Concerning the non-blocking MPI_Isend, using MPI_Wait was not necessary
> with a basic global barrier. I'm afraid that looping on the send requests
> and waiting for them to complete can slow down the communications, as you
> force the send order one more time (the receive order is already forced
> here <https://gitlab.com/yade-dev/trunk/blob/mpi/py/mpy.py#L641>).
>
... but not using a global barrier lets the first threads that finish their
sends/recvs start the next DEM iteration before the others, so +1 for your
fix; in the end I don't know which is better. Anyway, that's probably not
significant compared to the interaction loop timings.
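
For reference, a minimal mpi4py sketch of the two patterns being compared
(the neighbour list, buffer names and sizes are made up for illustration,
this is not the actual mpy.py code):

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    neighbours = [r for r in range(size) if r != rank]  # illustrative only

    sendBufs = {d: np.full(10, rank, dtype='d') for d in neighbours}
    recvBufs = {s: np.empty(10, dtype='d') for s in neighbours}

    # Post all non-blocking sends and receives.
    sreqs = [comm.Isend(sendBufs[d], dest=d, tag=0) for d in neighbours]
    rreqs = [comm.Irecv(recvBufs[s], source=s, tag=0) for s in neighbours]

    # Pattern 1 (waiting on the requests): each worker can move on to the
    # next DEM iteration as soon as its own messages are done, in whatever
    # order they complete.
    MPI.Request.Waitall(sreqs + rreqs)

    # Pattern 2 (global barrier): every worker stalls here until the slowest
    # one arrives, so nobody starts the next iteration early. (Strictly, MPI
    # still expects the requests to be completed, e.g. by the Waitall above,
    # before the buffers are reused.)
    comm.Barrier()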
