Re: GenericVector and PyDOLFIN
> On Wednesday 19 August 2009 19:59:00 kent-and@xxxxxxxxx wrote:
>> > On Wednesday 19 August 2009 19:11:42 Garth N. Wells wrote:
>> >> Johan Hake wrote:
>> >> > On Wednesday 19 August 2009 18:50:13 Garth N. Wells wrote:
>> >> >> Johan Hake wrote:
>> >> >>> On Wednesday 19 August 2009 17:29:52 Garth N. Wells wrote:
>> >> >>>> What's going on behind the scenes when I copy one vector to
>> >> >>>> another in PyDOLFIN using
>> >> >>>>
>> >> >>>> u0.vector()[:] = u.vector()[:]
>> >> >>>
>> >> >>> When I add:
>> >> >>>
>> >> >>> u2 = Function(V)
>> >> >>> u2.vector()[:] = u.vector()[:]
>> >> >>>
>> >> >>> # Plot solution
>> >> >>> plot(u2)
>> >> >>>
>> >> >>> In the Poisson demo it works fine.
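
For reference, a minimal self-contained version of that test could look as
follows (the mesh and function space are placeholder choices standing in
for the demo's actual Poisson setup):

    from dolfin import *

    # Placeholder mesh and space; the demo solves Poisson on its own setup
    mesh = UnitSquare(8, 8)
    V = FunctionSpace(mesh, "CG", 1)

    u = Function(V)    # would hold the computed solution in the demo
    u2 = Function(V)

    # Full-slice copy; this should stay on the GenericVector
    # assignment path rather than element-wise get/set
    u2.vector()[:] = u.vector()[:]

    # Plot solution
    plot(u2)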
>> >> >>
>> >> >> In parallel?
>> >> >
>> >> > Yupp!
>> >> >
>> >> >>> This type of slice only uses the assignment operator.
>> >> >>
>> >> >> The GenericVector assignment operator?
>> >> >
>> >> > Yes. This is done in __setslice__/__getslice__ in the extended
>> >> > Python class.
>> >>
>> >> I don't see then why we end up inside
>> >>
>> >> _compare_vector_with_vector
>> >>
>> >> for the Cahn-Hilliard demo when running in parallel?
>> >
>> > Are you sure it is in
>> >
>> > _compare_vector_with_vector
>> >
>> > This should only kick in if you used '==' or some other comparison
>> > operator.
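
For comparison, with u and u0 being the demo's Functions, only an
expression of roughly this shape should reach the comparison helper; a
plain full-slice copy should not:

    # Reaches the rich-comparison machinery, i.e.
    # _compare_vector_with_vector in the generated SWIG layer
    result = u.vector() == u0.vector()

    # Stays on the assignment-operator path
    u0.vector()[:] = u.vector()[:]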
>> >
>> > When I run this demo in parallel, I get past the assignment line, but
>> > it stops when calling the solve function with the following message:
>> >
>> > Traceback (most recent call last):
>> >   File "demo.py", line 86, in <module>
>> >     solver.solve(problem, u.vector())
>> > RuntimeError: *** Error: MUMPS is required for parallel symbolic LU.
>> >
>> >> >> I delved into the problem and the program ends up in the function
>> >> >> _compare_vector_with_vector from dolfin_la_get_set_items.i. It uses
>> >> >> GenericVector::get and GenericVector::set. These need to be used
>> >> >> with caution in parallel.
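
As a small sketch of what that caution amounts to in practice (assuming
single-entry indexing in dolfin_la_get_set_items.i is wired up to those
two calls):

    x = u.vector()

    # Single entries map onto GenericVector::get/set with an explicit
    # row index; in parallel this is only safe for rows that this
    # process actually owns
    x[0] = 1.0      # ends up in set(block, m, rows) with m = 1
    value = x[0]    # ends up in get(block, m, rows) with m = 1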
>> >> >
>> >> > Yes, when other types of slices are used we need to find a way to
>> >> > do that in parallel. I have just started digging into the parallel
>> >> > code, and need to look into all the changes to get familiar with it
>> >> > before I try to solve this.
>> >>
>> >> This shouldn't be too hard.
>> >>
>> >> set(const double* block, uint m, const uint* rows)
>> >>
>> >> works in parallel but
>> >>
>> >> get(double* block, uint m, const uint* rows)
>> >>
>> >> doesn't yet (not too hard to fix though). The really evil functions are
>> >>
>> >> set(const double*)
>> >>
>> >> and
>> >>
>> >> get(double*)
>> >
>> > I think I am only using
>> >
>> > set(const double* block, uint m, const uint* rows)
>> > get(double* block, uint m, const uint* rows)
>> >
>> > However, GenericVector.array is using the get(double*) function and
>> > returns some fishy numbers in the numpy.array.
>> >
>> > Johan
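
To make the symptom concrete, a two-line sketch (fine in serial; in
parallel each process fills its copy of the array through get(double*),
hence the fishy numbers):

    a = u.vector().array()   # NumPy array filled through get(double*)
    print a.min(), a.max()   # sensible in serial, garbage in parallel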
>>
>> I guess anything that involves NumPy will turn into garbage in parallel,
>> right?
>
> Should we issue an error, or should we try to work with the local values?
>
> Johan
>
One option is to put some ifdefs into the SWIG code that turn off stuff
when compiled in a parallel setting. Otherwise I guess a lot of checks
will be needed?

What has PyTrilinos, for example, done about this?
Kent
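
A sketch of the runtime-check alternative on the Python side (assuming
MPI.num_processes() is exposed by the dolfin module in this version,
which may not be the case):

    from dolfin import MPI

    # Guard the NumPy path explicitly instead of silently returning
    # garbage when running on more than one process
    def checked_array(x):
        if MPI.num_processes() > 1:
            raise RuntimeError("GenericVector.array() is not parallel-safe")
        return x.array()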