dolfin team mailing list archive
Message #22101
Re: [Question #149552]: CUDA and OpenMP/Threads
On Friday March 18 2011 03:01:23 Gennadiy Rishkin wrote:
> New question #149552 on DOLFIN:
> https://answers.launchpad.net/dolfin/+question/149552
>
> Hi,
>
> I've just installed FEniCS and it looks like an excellent library.
Thanks!
> I have a few questions.
> CUDA :
> Is it possible to generate (GPU) CUDA code in general and for massively
> parallel assembly in particular? If it is, how is this done?
There is experimental code which does this, but none of it is merged into main.
Take a look at:
https://code.launchpad.net/~florian-rathgeber/dolfin/gpu-wrappers
and this email discussion:
https://lists.launchpad.net/dolfin/msg04040.html
Even though there is a DOLFIN branch, you also need patches for FFC, and I guess
also for UFC, to make this work.
> OpenMP/Threads:
> Is it possible to use OpenMP/Threads? Is it turned on by default, i.e.,
> does DOLFIN automatically recognise a multicore machine? If it's not
> automatic, what would I have to do to use it?
In Python you can just set the number of threads to a nonzero value:
parameters["num_threads"] = 4
But it is still under development. If I am not mistaken, only cell assembly is
supported for now.
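A minimal sketch of enabling threaded assembly from Python, assuming the legacy DOLFIN Python interface is installed (the try/except fallback is only there so the snippet runs in environments without DOLFIN):

```python
# Sketch: turn on threaded (OpenMP) assembly in legacy DOLFIN.
# Assumes DOLFIN's Python module is importable; falls back if not.
try:
    from dolfin import parameters
    parameters["num_threads"] = 4  # use 4 threads for cell assembly
    num_threads = parameters["num_threads"]
except ImportError:
    num_threads = 0  # DOLFIN not available in this environment

print(num_threads)
```

Note that threading is not enabled automatically; you opt in per script by setting this parameter before assembling.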
> Linear Algebra Backend:
> How do I change the backend to use Trilinos? I notice that it uses PETSc by
> default.
parameters["linear_algebra_backend"] = "Epetra"
But you need to have compiled DOLFIN with Trilinos support.
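A hedged sketch of switching the backend, checking first that the Epetra (Trilinos) backend was compiled in; the try/except is only a guard for environments without DOLFIN:

```python
# Sketch: select the Epetra (Trilinos) linear algebra backend in
# legacy DOLFIN, but only if DOLFIN was built with Trilinos support.
try:
    from dolfin import parameters, has_linear_algebra_backend
    if has_linear_algebra_backend("Epetra"):
        parameters["linear_algebra_backend"] = "Epetra"
    backend = parameters["linear_algebra_backend"]  # "PETSc" if unchanged
except ImportError:
    backend = None  # DOLFIN not available in this environment

print(backend)
```

Checking with has_linear_algebra_backend avoids a runtime error when DOLFIN was configured without Trilinos.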
Johan
> Gennadiy