Hi there,
I'm having an issue with results differing between initially identical
simulations. I know we have two stock explanations:
* the fairytale, authored by me back then, that the (force, ...)
summation order is non-deterministic under OpenMP (thread scheduling is
controlled by the kernel), and the resulting small rounding errors
propagate in chaos-like fashion, having a macroscopic effect at the end
(see the toy summation sketch after this list);
* the CPU may "randomly" keep some doubles in registers (which are
wider, IIRC 80 bit?) and others in cache/RAM, where they are only
64 bit wide; thus some numbers will be randomly more precise than
others, which again leads to non-determinism and propagation as above
(a small illustration also follows below). I heard this from Stefan
Luding (IIRC) some 10 years ago at a conference, but I am not sure how
valid it is at all; I have not heard about it from other sources.
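To make the first explanation concrete, here is a minimal,
self-contained sketch (not Yade/Woo code; file name and array size are
made up): double-precision summation is not associative, so when the
accumulation order is not pinned down across runs, bit-identical inputs
give results that differ in the last bits.

    // toy.cpp -- NOT Yade/Woo code, just a made-up demonstration that
    // double-precision summation is not associative: the same numbers
    // summed in two different orders (as can happen when OpenMP does
    // not fix the order of (force, ...) accumulation) give slightly
    // different results.
    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main(){
        std::mt19937 gen(42);   // fixed seed: the inputs are bit-identical
        std::uniform_real_distribution<double> dist(-1.,1.);
        std::vector<double> f(1000000);
        for(double& x: f) x=dist(gen);
        // one summation order
        double a=std::accumulate(f.begin(),f.end(),0.);
        // the same numbers, different order
        std::shuffle(f.begin(),f.end(),gen);
        double b=std::accumulate(f.begin(),f.end(),0.);
        std::printf("order 1: %.17g\norder 2: %.17g\ndelta: %g\n",a,b,a-b);
    }

The per-sum difference is only on the order of machine epsilon relative
to the sum, so by itself it is tiny; the open question is whether
chaotic propagation really amplifies it to the percent level.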
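And for the second explanation, a sketch that mimics the claimed
register-width effect explicitly, by using long double (the 80-bit x87
format on x86/Linux; other platforms may differ) for the "kept in a
register" case. Note that, if I recall correctly, default SSE2 math on
x86_64 does all double arithmetic at 64 bits, so the effect should
mostly concern x87 code paths.

    // x87.cpp -- NOT Yade/Woo code; the same three additions done once
    // with wide intermediates (long double, 80-bit x87 format on
    // x86/Linux) and once with every intermediate rounded to a 64-bit
    // double, as happens when a value is spilled from a register to
    // memory.
    #include <cstdio>

    int main(){
        double a=1e16, b=1.0, c=-1e16;
        // wide intermediates: the 1.0 survives, result is 1
        long double wide=(long double)a+(long double)b+(long double)c;
        // 64-bit intermediates: 1e16+1 rounds back to 1e16, result is 0
        double narrow=a+b;
        narrow+=c;
        std::printf("wide intermediates  : %.17Lg\n", wide);
        std::printf("narrow intermediates: %.17g\n", narrow);
    }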
I am not convinced by either explanation; the scatter seems to be too
big (roller screens -- I get something like 2% scatter in the resulting
PSD (particle size distribution) curves when the same long simulation
is run multiple times from a completely identical initial state). Even
though I use Woo, I assume the problem is the same as in Yade. So I
would like to hear:
* Has someone else run into this problem at all?
* Has someone done a proper (quantitative) in-depth investigation of it?
* Has someone run Yade through valgrind recently, to check for
uninitialized memory reads?
* I know that PFC 5.0 has what they call a "deterministic mode" for
simulations; does someone know the internals of what it does?
(Emanuele? If you are allowed to disclose that --)
Cheers,
Václav