
dolfin team mailing list archive

Re: GenericVector and PyDOLFIN

 

>
>
> Johan Hake wrote:
>> On Wednesday 19 August 2009 19:59:00 kent-and@xxxxxxxxx wrote:
>>>> On Wednesday 19 August 2009 19:11:42 Garth N. Wells wrote:
>>>>> Johan Hake wrote:
>>>>>> On Wednesday 19 August 2009 18:50:13 Garth N. Wells wrote:
>>>>>>> Johan Hake wrote:
>>>>>>>> On Wednesday 19 August 2009 17:29:52 Garth N. Wells wrote:
>>>>>>>>> What's going on behind the scenes when I copy one vector to another
>>>>>>>>> in PyDOLFIN using
>>>>>>>>>
>>>>>>>>>      u0.vector()[:] = u.vector()[:]
>>>>>>>> When I add:
>>>>>>>>
>>>>>>>> u2 = Function(V)
>>>>>>>> u2.vector()[:] = u.vector()[:]
>>>>>>>>
>>>>>>>> # Plot solution
>>>>>>>> plot(u2)
>>>>>>>>
>>>>>>>> In the Poisson demo it works fine.
>>>>>>> In parallel?
>>>>>> Yupp!
>>>>>>
>>>>>>>> This type of slice only uses the assignment operator.
>>>>>>> The GenericVector assignment operator?
>>>>>> Yes. This is done in __setslice__/__getslice__ in the extended Python
>>>>>> class.
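
In sketch form, the full-slice path described above would look roughly like
the following. This is only an illustration of the idea, not the actual
PyDOLFIN extension code, and assign() stands in for however the wrapper
reaches the C++ assignment operator:

    # Sketch only: a full slice delegates to plain vector assignment,
    # anything else falls back to indexed access.
    from dolfin import GenericVector

    class ExtendedGenericVector(GenericVector):

        def __setslice__(self, i, j, values):
            if i == 0 and j >= self.size() and isinstance(values, GenericVector):
                # u0.vector()[:] = u.vector()[:] ends up here: assignment
                # operator only, no element-wise get/set involved
                self.assign(values)
            else:
                # partial slices need indexed access, which is where the
                # parallel issues discussed below come in
                self.__setitem__(slice(i, j), values)
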
>>>>> I don't see, then, why we end up inside
>>>>>
>>>>>    _compare_vector_with_vector
>>>>>
>>>>> for the Cahn-Hilliard demo when running in parallel?
>>>> Are you sure it is in
>>>>
>>>>   _compare_vector_with_vector?
>>>>
>>>> This should only kick in if you used '==' or some other comparison
>>>> operator.
>>>>
>>>> When I run this demo in parallel, I get past the assignment line, but it
>>>> stops when calling the solve function with the following message:
>>>>
>>>> Traceback (most recent call last):
>>>>   File "demo.py", line 86, in <module>
>>>>     solver.solve(problem, u.vector())
>>>> RuntimeError: *** Error: MUMPS is required for parallel symbolic LU.
>>>>
>>>>>>> I delved into the problem and the program ends up in the function
>>>>>>> _compare_vector_with_vector from dolfin_la_get_set_items.i. It uses
>>>>>>> GenericVector::get and GenericVector::set. These need to be used with
>>>>>>> caution in parallel.
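
Judging from that description, the helper presumably does something
element-wise over global indices, roughly like the sketch below. The
get(values, rows) call is an assumption about how the SWIG typemaps expose
GenericVector::get; the point is only that asking every process for all
global rows cannot work once the vector is distributed:

    # Hypothetical sketch of an element-wise comparison built on indexed get.
    # In parallel each process owns only part of the vector, so requesting
    # every global row touches entries the process does not hold.
    import numpy

    def compare_vector_with_vector(v0, v1):
        rows = numpy.arange(v0.size(), dtype='I')
        a0 = numpy.zeros(v0.size())
        a1 = numpy.zeros(v1.size())
        v0.get(a0, rows)
        v1.get(a1, rows)
        return (a0 == a1).all()
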
>>>>>> Yes, when other types of slices are used we need to find a way to do
>>>>>> that in parallel. I have just started digging into the parallel code,
>>>>>> and need to look into all the changes to get familiar with it before I
>>>>>> try to solve this.
>>>>> This shouldn't be too hard.
>>>>>
>>>>>     set(const double* block, uint m, const uint* rows)
>>>>>
>>>>> works in parallel but
>>>>>
>>>>>     get(double* block, uint m, const uint* rows)
>>>>>
>>>>> doesn't yet (not too hard to fix though). The really evil functions
>>>>> are
>>>>>
>>>>>    set(const double*)
>>>>>
>>>>> and
>>>>>
>>>>>    get(double*)
>>>> I think I am only using
>>>>
>>>>    set(const double* block, uint m, const uint* rows)
>>>>    get(double* block, uint m, const uint* rows)
>>>>
>>>> However, GenericVector.array is using the get(double*) function and
>>>> returns some fishy numbers in the numpy array.
>>>>
>>>> Johan
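
The fishy numbers are consistent with each process only holding its own
block: get(double*) can only fill the locally owned entries, so an array
sized for the global vector is largely uninitialised on every process. A
safer read path, sketched below, restricts the indexed get to the owned
rows. The x.range() call and the get(values, rows) signature are assumptions
standing in for however the owned index range and the indexed get end up
being exposed to Python:

    # Sketch: read only the locally owned entries instead of calling array().
    import numpy

    x = u.vector()                     # u: a dolfin Function, as in the demos
    offset, upper = x.range()          # assumed owned [offset, upper) range
    rows = numpy.arange(offset, upper, dtype='I')
    local_values = numpy.zeros(len(rows))
    x.get(local_values, rows)          # indexed get over owned rows only
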
>>> I guess anything that involves NumPy will turn into garbage in parallel,
>>> right?
>>
>> Should we issue an error, or should we try to work with the local
>> values?
>>
>
> If we use get/set(double*) very carefully, use the longer forms of get/set
> more, and give an offset to numpy arrays if possible (this can be obtained
> using the MPI::range function or GenericVector::range), things could work
> out ok.
>
> Garth
>
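
Concretely, the offset bookkeeping Garth describes could look something like
the sketch below: keep the local values in a numpy array, remember where that
block starts globally, and scatter changes back with the indexed set. The
range(), get(values, rows), set(values, rows) and apply() calls are
assumptions standing in for GenericVector::range / MPI::range and the longer
get/set forms mentioned above:

    # Sketch: work on local values in numpy, using the offset to translate
    # between global dof indices and positions in the local array.
    import numpy

    x = u.vector()                      # u: a dolfin Function, as in the demos
    offset, upper = x.range()           # assumed owned range on this process
    rows = numpy.arange(offset, upper, dtype='I')
    local_values = numpy.zeros(len(rows))
    x.get(local_values, rows)

    k = offset                          # some global dof index owned here
    local_values[k - offset] += 1.0     # the offset maps global -> local

    x.set(local_values, rows)           # write back only the owned rows
    x.apply()                           # finalise the parallel update
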

I guess the numpy typemaps etc. assume contiguous memory (at least earlier
versions of the typemaps did). Should the parallel arrays be made contiguous
as well?
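
Plain numpy can at least check and enforce contiguity itself, independent of
the typemaps, so one option would be to pass candidate arrays through
numpy.ascontiguousarray before they reach the wrapped C++ calls:

    import numpy

    a = numpy.arange(12.0).reshape(3, 4)[:, 1]   # a non-contiguous view
    print a.flags['C_CONTIGUOUS']                # False
    b = numpy.ascontiguousarray(a)               # copies into contiguous storage
    print b.flags['C_CONTIGUOUS']                # True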

Kent


