
fenics team mailing list archive

Re: Application domains

 

"Robert C. Kirby" <kirby@xxxxxxxxxxxx> writes:

>>
>> The main point I was attacking with my post was the suggestion that we
>> should sacrifice performance by not pre-compiling code, since the
>>
> 1.) We don't know that not-precompiling code sacrifices significant
> performance. This is not a settled question. It could be that the
> optimal run-time code is only epsilon more expensive than the optimal
> compiled code. It largely depends on how the systems are engineered.
> An interesting comparison (don't want to ruffle any feathers though)
> would be to make a run of solving some problem in DOLFIN and in
> Sundance and compare the time to assemble the matrix.

  I have talked to Kevin about this many times. The problems he solves are
EXPLICITLY solve-dominated. This is by no means true for all problems. Many,
many problems are dominated by assembly time, particularly for complex,
nonlinear physics, since you are constantly computing the residual. Here,
precompilation clearly wins (I think Kevin agrees).

> The first is neither possible nor practical.
> 1.) Blue Gene runs a very stripped-down kernel (parts of it are highly
> guarded as they contain proprietary IBM secrets, I believe) to allow
> maximum memory usage per node. The claim is that when you start adding
> stuff to the kernel (like dynamic loading) you will lose available
> memory for the highest-end runs. Hence, even if you wanted to spend
> all your time rewriting the BlueGene kernel, they wouldn't let you.

  This is complete crap, I think. IBM is lazy and unmotivated here. If you can
load a program, you can load my damn dynamic library with the exact same
system call (patching is a little different, but easy). Memory is also a red
herring, since this code will be loaded by the static executable anyway.
These guys are blowing smoke.
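  And just so we are talking about the same thing: on an ordinary POSIX
system, loading generated code at run time is nothing more exotic than
dlopen/dlsym (the names "libkernel.so" and "tabulate_tensor" below are made
up; this is only a sketch, not anybody's actual loader). Mechanically it is
the same loader machinery that already puts the static executable in memory,
which is the path the compute-node kernel would have to support.

/* Sketch only: load a generated element kernel from a shared library using
 * the standard POSIX calls. "libkernel.so" and "tabulate_tensor" are
 * hypothetical names. Link with -ldl. */
#include <stdio.h>
#include <dlfcn.h>

typedef void (*tabulate_fn)(double *A, const double *coords);

int main(void)
{
    /* open the generated library; RTLD_NOW resolves all symbols up front */
    void *handle = dlopen("./libkernel.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* look up the generated kernel's entry point */
    tabulate_fn tabulate = (tabulate_fn) dlsym(handle, "tabulate_tensor");
    if (!tabulate) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    double A[9] = {0.0};
    double coords[6] = {0.0, 0.0, 1.0, 0.0, 0.0, 1.0};
    tabulate(A, coords);                   /* call into the loaded code */

    printf("A[0] = %g\n", A[0]);
    dlclose(handle);
    return 0;
}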

      Matt
-- 
"Failure has a thousand explanations. Success doesn't need one" -- Sir Alec Guiness
