
dolfin team mailing list archive

Re: Interpolation in parallel


I've considered this problem a little in the case of multigrid; however, I
haven't really implemented anything parallel yet.  My serial implementation
handles meshes whose domains are not nested, using a nearest-point-in-domain
search.  The problem is a little different from yours in that I have to
create an interpolation matrix rather than just interpolate between
vectors.  This matrix is built by doing a search-by-traversal across the two
meshes in tandem, with special cases for when the traversal gets lost;
processor boundaries will be one of these cases.  When I parallelize the
multigrid remeshing I intend to keep the domains local, but it wouldn't be
such an issue not to, since I want to be able to supply meshes to the
multigrid routines as well.  That would require the kind of search you're
talking about here, though.
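
To make that concrete, here is the kind of operator I build, sketched for
P1 elements on two non-nested 1D meshes with plain numpy/scipy. The
brute-force point location stands in for the tandem traversal, and the
names (and the 1D restriction) are just for illustration:

    import numpy as np
    import scipy.sparse as sp

    def p1_interpolation_matrix(x_from, x_to):
        # Rows are vertices of the target mesh, columns vertices of the
        # source mesh. Row i holds the P1 hat-function weights of the
        # source cell containing x_to[i], so I.dot(u_from)[i] equals
        # u_from interpolated at x_to[i].
        rows, cols, vals = [], [], []
        for i, x in enumerate(x_to):
            # brute-force point location; the tandem traversal replaces this
            k = int(np.clip(np.searchsorted(x_from, x) - 1, 0, len(x_from) - 2))
            x0, x1 = x_from[k], x_from[k + 1]
            t = (x - x0) / (x1 - x0)
            rows += [i, i]; cols += [k, k + 1]; vals += [1.0 - t, t]
        return sp.csr_matrix((vals, (rows, cols)),
                             shape=(len(x_to), len(x_from)))

    # fine-to-coarse transfer of vertex values, sizes as in Ola's example
    x_fine = np.linspace(0.0, 1.0, 2001)
    x_coarse = np.linspace(0.0, 1.0, 201)
    I = p1_interpolation_matrix(x_fine, x_coarse)
    u_coarse = I.dot(np.sin(np.pi * x_fine))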

The difference is that my interpolator doesn't have to be very accurate,
because I'm interpolating errors, not solutions.  This means that I can take
some shortcuts and/or do things by integral averaging (a Scott-Zhang-type
interpolant).
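
(For reference, the averaging interpolant I have in mind is roughly the
Scott-Zhang construction: for each vertex x_i pick a face sigma_i
containing it and let psi_i be the dual basis on that face, with
\int_{\sigma_i} \psi_i \phi_j ds = \delta_{ij}; then

    (\Pi_h u)(x) = \sum_i \phi_i(x) \int_{\sigma_i} \psi_i(s) u(s) ds,

so the operator needs only integrals of u over faces, never point values.)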

I had some trouble finding the interpolation routines in DOLFIN that support
multiple meshes.  Could you point me to them?  This problem is quite
important to what I'm trying to do here and I might be able to contribute
something.

Doing this through a matrix object might also be for the best, as these
operations tend to be fairly heavyweight.  You could also use the
communication patterns generated here to weight a redistribution of the
meshes if you're doing this operation a lot.
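
To put a number on that last point: given the assembled matrix and
ownership maps for the rows and columns, the nonzero count per pair of
processes estimates how many values get shipped on every apply, and a
partitioner could take that table as edge weights.  A rough sketch (the
names are mine):

    import numpy as np

    def comm_volume(I, row_owner, col_owner, nprocs):
        # row_owner[i] / col_owner[j]: rank owning target dof i / source
        # dof j (numpy integer arrays). Off-diagonal entries of the
        # returned table count matrix entries whose source value must be
        # communicated between ranks each time I is applied.
        C = I.tocoo()
        volume = np.zeros((nprocs, nprocs), dtype=np.int64)
        np.add.at(volume, (row_owner[C.row], col_owner[C.col]), 1)
        return volume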

Thanks,

- Peter

On Mon, Aug 9, 2010 at 9:19 AM, Anders Logg <logg@xxxxxxxxx> wrote:

> On Mon, Aug 09, 2010 at 03:02:18PM +0100, Garth N. Wells wrote:
> > On Mon, 2010-08-09 at 15:54 +0200, Ola Skavhaug wrote:
> > > On Mon, Aug 9, 2010 at 3:39 PM, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
> > > > On Mon, 2010-08-09 at 15:32 +0200, Ola Skavhaug wrote:
> > > >> Hi,
> > > >>
> > > >> when running DOLFIN in parallel, data reduction would be a nice thing
> > > >> to have. Typically, it would be good to have something like this:
> > > >>
> > > >> mesh = UnitSquare(2000, 2000)
> > > >> reduced_mesh = UnitSquare(200, 200)
> > > >>
> > > >> V = FunctionSpace(mesh, "CG", 1)
> > > >> reduced_V = FunctionSpace(reduced_mesh, "CG", 1)
> > > >>
> > > >> u = Function(V)
> > > >>
> > > >> ...
> > > >>
> > > >> reduced_u = interpolate(u, reduced_V)
> > > >> f << reduced_u
> > > >>
> > > >> However, the problem is that the automatic partitioning of the meshes
> > > >> constructs two different non-overlapping meshes on each local node. I
> > > >> really don't want to allow extrapolation in this case. I would be
> > > >> happy to contribute fixing this problem; does anybody see how to
> > > >> attack it?
> > > >>
> > > >
> > > > We just need the program to correctly handle the case in which a
> > > > point is not found in the domain. It shouldn't be hard to fix (it
> > > > may be just a case of removing an error message and stressing in
> > > > the code comment that points must reside on the same process).
> > > >
> > > > Start by looking in the Function::interpolate functions.
> > >
> > > Isn't the issue here that since the local meshes don't overlap, we
> > > need to fetch the values from one or more neighboring processes? So,
> > > during interpolation in parallel, all processes send a list of
> > > coordinates for the values needed that are not local, and then all
> > > processes send and receive these values. Or is this too simple for
> > > more advanced elements?
> > >
> >
> > The first step of finding the cell which contains a point doesn't depend
> > on the element. We don't have this in place for points which reside on
> > other processes. The next step would be for the value of the function to be
> > evaluated and communicated.
>
> I suggest that when a point is found which is not inside the domain
> and MPI::num_processes > 1, the point is added to a list of points
> which are not inside the domain. Then all the missing points are
> distributed to all processes and each process goes through the list of
> points to see which points it can handle. Then the results are
> communicated back. I think this is the same as what Ola suggests.
>
> The problem with this is that it can potentially lead to quite a lot
> of communication. In the worst case, all points would be missing for
> all processes and then all points would have to be passed around
> between all processes. But I don't see any other way since it's not
> possible for a process to know where a missing point may be located.
>
> --
> Anders
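
P.S. A minimal sketch of the exchange you describe, using mpi4py;
can_evaluate() and evaluate() are hypothetical stand-ins for the
point-in-partition test and the local function evaluation:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    def exchange_missing_points(missing, can_evaluate, evaluate):
        # missing: list of points this process could not evaluate locally
        # every process broadcasts its missing points to every other process
        all_missing = comm.allgather(missing)
        # answer what we can: one reply dict per requesting process,
        # mapping the point's index in that process's list to its value
        replies = [{i: evaluate(p) for i, p in enumerate(pts) if can_evaluate(p)}
                   for pts in all_missing]
        # route each reply back to the process that asked for it
        answers = comm.alltoall(replies)
        # merge; points on partition boundaries may be answered twice
        found = {}
        for vals in answers:
            found.update(vals)
        return found  # maps index into 'missing' to the function value

In the worst case you mention (every point missing on every process) the
allgather is exactly the all-to-all blow-up, so batching all missing points
per interpolation call at least keeps it to one round trip.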
