dolfin team mailing list archive
Message #18938
Re: Interpolation in parallel
On Mon, Aug 09, 2010 at 03:02:18PM +0100, Garth N. Wells wrote:
> On Mon, 2010-08-09 at 15:54 +0200, Ola Skavhaug wrote:
> > On Mon, Aug 9, 2010 at 3:39 PM, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
> > > On Mon, 2010-08-09 at 15:32 +0200, Ola Skavhaug wrote:
> > >> Hi,
> > >>
> > >> when running DOLFIN in parallel, data reduction would be a nice thing
> > >> to have. Typically, it would be good to have something like this:
> > >>
> > >> mesh = UnitSquare(2000, 2000)
> > >> reduced_mesh = UnitSquare(200, 200)
> > >>
> > >> V = FunctionSpace(mesh, "CG", 1)
> > >> reduced_V = FunctionSpace(reduced_mesh, "CG", 1)
> > >>
> > >> u = Function(V)
> > >>
> > >> ...
> > >>
> > >> reduced_u = interpolate(u, reduced_V)
> > >> f << reduced_u
> > >>
> > >> However, the problem is that the automatic partitioning of the meshes
> > >> constructs two different non-overlapping meshes on each local node. I
> > >> really don't want to allow extrapolation in this case. I would be
> > >> happy to contribute fixing this problem; does anybody see how to
> > >> attack it?
> > >>
> > >
> > > We just need to have the program correctly handle the case in which a
> > > point is not found in the domain. It shouldn't be hard to fix (it may
> > > just be a case of removing an error message and stressing in the code
> > > comments that points must reside on the same process).
> > >
> > > Start by looking in the Function::interpolate functions.
> >
> > Isn't the issue here that since the local meshes don't overlap, we
> > need to fetch the values from one or more neighboring processes? So,
> > during interpolation in parallel, all processes send a list of
> > coordinates for the values needed that are not local, and then all
> > processes send and receive these values. Or is this too simple for
> > more advanced elements?
> >
>
> The first step of finding the cell which contains a point doesn't depend
> on the element. We don't have this in place for points which reside on
> other processes. The next step would be for the value of the function
> to be evaluated and communicated.
I suggest that when a point is found which is not inside the domain
and MPI::num_processes > 1, the point is added to a list of points
which are not inside the domain. Then all the missing points are
distributed to all processes and each process goes through the list of
points to see which points it can handle. Then the results are
communicated back. I think this is the same as what Ola suggests.
The problem with this is that it can potentially lead to quite a lot
of communication. In the worst case, all points would be missing for
all processes and then all points would have to be passed around
between all processes. But I don't see any other way since it's not
possible for a process to know where a missing point may be located.
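The scheme above can be sketched in plain Python. This simulates the communication pattern with ordinary lists rather than real MPI calls (the gathered `missing` list standing in for an all-gather), and the 1D partitioning, the function `f`, and all names are assumptions for illustration only.

```python
# Sketch of the proposed scheme: each "process" owns a slab of the
# domain [0, 1); points it cannot evaluate locally are collected,
# gathered by everyone, answered by the owning process, and merged.

def owner(x, num_procs):
    # Hypothetical partition: process p owns [p/n, (p+1)/n).
    return min(int(x * num_procs), num_procs - 1)

def f(x):
    # Stand-in for the finite element function being interpolated.
    return 2.0 * x

def parallel_eval(points_per_proc, num_procs):
    # Step 1: each process evaluates its local points and flags the
    # rest as missing (in practice: an MPI all-gather of coordinates).
    missing = []
    values = {}
    for p, pts in enumerate(points_per_proc):
        for x in pts:
            if owner(x, num_procs) == p:
                values[x] = f(x)
            else:
                missing.append(x)
    # Step 2: every process scans the gathered list and answers the
    # points it can handle; the results are communicated back (merged).
    for x in missing:
        values[x] = f(x)  # evaluated on the owning process
    return values

values = parallel_eval([[0.1, 0.9], [0.6, 0.2]], num_procs=2)
```

As noted above, the worst case is expensive: if no point is local to the process requesting it, every coordinate in `points_per_proc` ends up in the gathered `missing` list and is broadcast to all processes.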
--
Anders