
yade-users team mailing list archive

Re: all yade users, please introduce yourself!


> > Bruno: "The time gain with "parallel" option is still very low". What
> > speedups do you get? For TriaxialTest(fast=True,numberOfGrains=10000), I
> > get 48 vs. 28 iter/sec with OMP_NUM_THREADS set to 4 and 1
> > respectively. If you have ideas for improvements, they will be very
> > welcome.
> Oh!! I still have to try this. I was only looking at this yesterday:
> http://yade.wikia.com/wiki/Triaxial_Test_Parallel
> which suggests a maximum 25% speedup. Has it changed?
> I had a few ideas, like merging geomEngineUnit and physEngineUnit (fewer
> loops and less sync), or at least not dispatching interactions when
> i->physics already exists. I tried that; it gives a 3% speedup just from
> moving if(!interaction->interactionPhysics) from the engine unit to the
> dispatcher.
> However, I saw you already merged geom+physics+contact law, so I gave
> up... Is this merged version in the current SVN?
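The quoted idea (skipping the physics dispatch entirely when i->physics already exists, by checking in the dispatcher loop instead of inside the engine unit) can be sketched in plain Python. This is a toy model with hypothetical names, not Yade's actual C++ classes:

```python
class Interaction:
    """Toy stand-in for Yade's Interaction; `physics` plays the role
    of interactionPhysics (hypothetical name, for illustration only)."""
    def __init__(self, physics=None):
        self.physics = physics

def dispatch_physics(interactions, make_physics):
    """Create physics only where it is missing; the existence check
    lives in the dispatcher loop, so interactions that already have
    physics never reach the engine-unit call at all."""
    created = 0
    for i in interactions:
        if i.physics is None:  # check moved out of the engine unit
            i.physics = make_physics(i)
            created += 1
    return created
```

With the check inside the loop, an already-initialized interaction costs only a pointer test instead of a full functor dispatch, which is where the quoted 3% comes from.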

Yes, for perhaps more than a month now. It is the InteractionDispatchers
that is activated for TriaxialTest::fast. If you have other ideas
for speed, add them somewhere at the bottom of

> Btw, the scripts/test/triax-perf.py you refer to on this page is not in
> the trunk.
Sorry, I fixed that page. It should be examples/triax-perf/triax-perf.py
(see the comments in it).

> I don't see any parallel command in the triaxialTest or in 
> ElasticContactLaw. How can it work?
Dispatchers handle that (either the triplet
InteractionGeometryEngineUnit, InteractionPhysicsEngineUnit,
ConstitutiveLawDispatcher, or InteractionDispatchers in one loop) with a
few OpenMP #pragma statements. The number of threads is set by the
OMP_NUM_THREADS env var in the shell.
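For reference, the 48 vs. 28 iter/sec figures quoted above (4 vs. 1 threads) are a speedup of about 1.71x. Inverting Amdahl's law gives a rough estimate of the fraction of per-step work that actually runs in parallel; this is back-of-the-envelope only, assuming a single parallel region and no per-thread overhead:

```python
def amdahl_parallel_fraction(speedup, nthreads):
    """Invert Amdahl's law S = 1 / ((1-p) + p/n) for the parallel
    fraction p, given a measured speedup S on n threads."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / nthreads)

# 48 vs. 28 iter/sec with OMP_NUM_THREADS=4 vs. 1:
p = amdahl_parallel_fraction(48.0 / 28.0, 4)  # p ~ 0.56
```

Under those assumptions only a bit over half of the per-iteration work is parallelized, which is consistent with the modest gains discussed above.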

> Do you still get those big CPU times for the collider with 50k spheres?
> I might have a few ideas for this.

I didn't run that with InsertionSortCollider (it didn't exist back
then), but as seen at http://yade.wikia.com/wiki/Colliders_performace,
the collider should be about 2x faster. It scales the same way, though,
so you will again see about 80% of the time spent in the collider once
you go back to 100k spheres.
(BTW, is it OK if I use InsertionSortCollider in TriaxialTest by default,
even without "fast"?)
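To see why the 80% comes back, assume (hypothetically) that only the collider got 2x faster while all other per-step costs are unchanged; its share of the total step time then follows directly, and a collider that used to dominate still dominates:

```python
def collider_share(old_share, speedup=2.0):
    """New fraction of total step time spent in the collider after the
    collider alone is made `speedup` times faster (other costs fixed).
    `old_share` is the collider's old fraction of the step time."""
    fast = old_share / speedup
    return fast / (fast + (1.0 - old_share))
```

For example, with illustrative numbers: a collider that used to take ~89% (8/9) of a 100k-sphere step still takes 80% of it after a 2x speedup, so the asymptotic picture barely changes.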

> Regarding parallelization, we are working on solid-fluid coupling and we
> will use a solver that supports parallelization for solving large linear
> systems (an open-source library called "taucs", used for sparse systems
> in mathematica, matlab, comsol and various good commercial codes). I
> think it will be a good time to start using our multiproc server with
> parallel Yade (I admit I have not really used this server yet...).

(taucs was last released in 2003; does that give you a great deal of
confidence in its future?) How often are you going to solve the sparse
system? If at every step, most of the time will probably be spent there
(unless you have 100k+ particles). If taucs runs multi-threaded (it
seems it does), it may interact badly with the OpenMP threads, which
are allocated independently. You will see.
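taucs itself is a direct (sparse-Cholesky) solver, so the following is not its algorithm; it is only a minimal, self-contained conjugate-gradient sketch of what "solving the sparse system at every step" amounts to, with the matrix supplied as a matrix-vector product so sparsity can be exploited:

```python
def conjugate_gradient(matvec, b, tol=1e-10, maxiter=1000):
    """Solve A x = b for symmetric positive-definite A, given only the
    (sparse) matrix-vector product `matvec`; this product dominates the
    per-step cost, which is why it is worth parallelizing."""
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(x))]  # residual b - A x
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        if rs ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

If the fluid system is re-solved every DEM step, this inner loop (or taucs's factorization) is where the wall-clock time will go, independently of Yade's own OpenMP threads.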

> > Anton: "Good to have YADE deb-package in repositories.". The
> > infrastructure is there, https://launchpad.net/~yade-users/+archive/ppa
> > has (one) package, but given the speed at which yade evolves, and since
> > until recently you couldn't practically use it without writing c++
> > code, it didn't make much sense to distribute binaries.
> You can still run triaxial tests with various numbers of grains,
> friction, granulometry, compacity, etc. Not a negligible thing, as the
> triaxial test is the first simulation for more than 50% of DEM users,
> I think.

Hm, good point. I would like to release yade at some point in the near
future, just for such purposes. Please add bugs and attach them to the
0.20-0 milestone so that we can track what remains to be done. It looks
more or less stabilized now.

