dolfin team mailing list archive

Re: Interpolation in parallel

 

On Wed, 2010-08-11 at 13:33 +0200, Ola Skavhaug wrote:
> On Wed, Aug 11, 2010 at 1:06 PM, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
> > On Wed, 2010-08-11 at 13:01 +0200, Ola Skavhaug wrote:
> >> On Wed, Aug 11, 2010 at 12:09 PM, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
> >> > On Wed, 2010-08-11 at 11:59 +0200, Ola Skavhaug wrote:
> >> >> On Wed, Aug 11, 2010 at 11:46 AM, Anders Logg <logg@xxxxxxxxx> wrote:
> >> >> > On Wed, Aug 11, 2010 at 11:38:12AM +0200, Ola Skavhaug wrote:
> >> >> >> On Mon, Aug 9, 2010 at 4:19 PM, Anders Logg <logg@xxxxxxxxx> wrote:
> >> >> >> > On Mon, Aug 09, 2010 at 03:02:18PM +0100, Garth N. Wells wrote:
> >> >> >> >> On Mon, 2010-08-09 at 15:54 +0200, Ola Skavhaug wrote:
> >> >> >> >> > On Mon, Aug 9, 2010 at 3:39 PM, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
> >> >> >> >> > > On Mon, 2010-08-09 at 15:32 +0200, Ola Skavhaug wrote:
> >> >> >> >> > >> Hi,
> >> >> >> >> > >>
> >> >> >> >> > >> when running DOLFIN in parallel, data reduction would be a nice thing
> >> >> >> >> > >> to have. For example, it would be good to have something like this:
> >> >> >> >> > >>
> >> >> >> >> > >> from dolfin import *
> >> >> >> >> > >>
> >> >> >> >> > >> mesh = UnitSquare(2000, 2000)
> >> >> >> >> > >> reduced_mesh = UnitSquare(200, 200)
> >> >> >> >> > >>
> >> >> >> >> > >> V = FunctionSpace(mesh, "CG", 1)
> >> >> >> >> > >> reduced_V = FunctionSpace(reduced_mesh, "CG", 1)
> >> >> >> >> > >>
> >> >> >> >> > >> u = Function(V)
> >> >> >> >> > >>
> >> >> >> >> > >> ...
> >> >> >> >> > >>
> >> >> >> >> > >> reduced_u = interpolate(u, reduced_V)
> >> >> >> >> > >> f << reduced_u
> >> >> >> >> > >>
> >> >> >> >> > >> However, the problem is that the automatic partitioning of the meshes
> >> >> >> >> > >> constructs two different non-overlapping meshes on each local node. I
> >> >> >> >> > >> really don't want to allow extrapolation in this case. I would be
> >> >> >> >> > >> happy to contribute to fixing this problem; does anybody see how to
> >> >> >> >> > >> attack it?
> >> >> >> >> > >>
> >> >> >> >> > >
> >> >> >> >> > > We just need to have the program correctly handle the case in
> >> >> >> >> > > which a point is not found in the domain. It shouldn't be hard to
> >> >> >> >> > > fix (it may be just a case of removing an error message and
> >> >> >> >> > > stressing in the code comment that points must reside on the same
> >> >> >> >> > > process).
> >> >> >> >> > >
> >> >> >> >> > > Start by looking in the Function::interpolate functions.
> >> >> >> >> >
> >> >> >> >> > Isn't the issue here that, since the local meshes don't overlap, we
> >> >> >> >> > need to fetch the values from one or more neighboring processes? So,
> >> >> >> >> > during interpolation in parallel, all processes send a list of
> >> >> >> >> > coordinates for the non-local values they need, and then all
> >> >> >> >> > processes send and receive these values. Or is this too simple for
> >> >> >> >> > more advanced elements?
> >> >> >> >> >
> >> >> >> >>
> >> >> >> >> The first step of finding the cell which contains a point doesn't depend
> >> >> >> >> on the element. We don't have this in place for points which reside on
> >> >> >> >> other processes. The next step would be for the value of the
> >> >> >> >> function to be evaluated and communicated.
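A minimal sketch of these two steps, assuming a hypothetical helper
locate_cell(mesh, x) that returns the local cell containing x, or None
if x lies outside the local partition (the point-location machinery for
off-process points is exactly what the thread says is missing):

    # Hedged sketch; locate_cell is hypothetical, not an existing
    # DOLFIN function.
    def eval_local(u, mesh, x):
        cell = locate_cell(mesh, x)  # step 1: point location,
                                     # independent of the element
        if cell is None:
            return None              # x resides on another process
        return u(x)                  # step 2: evaluate u at x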
> >> >> >> >
> >> >> >> > I suggest that when a point is found which is not inside the domain
> >> >> >> > and MPI::num_processes > 1, the point is added to a list of points
> >> >> >> > which are not inside the domain. Then all the missing points are
> >> >> >> > distributed to all processes and each process goes through the list of
> >> >> >> > points to see which points it can handle. Then the results are
> >> >> >> > communicated back. I think this is the same as what Ola suggests.
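A rough sketch of this exchange (which matches Ola's description
above), using mpi4py directly rather than the DOLFIN internals;
eval_local is the hypothetical helper sketched earlier, and missing is
the list of points this process could not evaluate locally:

    from mpi4py import MPI

    def exchange_missing(u, mesh, missing):
        comm = MPI.COMM_WORLD
        # Distribute every process's missing points to all processes.
        all_missing = comm.allgather(missing)
        # Evaluate the points this process can handle locally.
        found = []
        for rank, points in enumerate(all_missing):
            for i, x in enumerate(points):
                value = eval_local(u, mesh, x)
                if value is not None:
                    found.append((rank, i, value))
        # Send the results back and keep the values we asked for.
        values = {}
        for chunk in comm.allgather(found):
            for rank, i, value in chunk:
                if rank == comm.Get_rank():
                    values[i] = value
        return values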
> >> >> >>
> >> >> >> Yes, this is more or less what I have in mind. Should I add a
> >> >> >> structure in Data to hold the points?
> >> >> >
> >> >> > In Data? That doesn't sound like an optimal solution. Data is supposed
> >> >> > to be a small data structure for local data. We probably need a new
> >> >> > data structure for that, or maybe just some std::vector<>s as part of
> >> >> > the algorithm that does the interpolation.
> >> >>
> >> >> This is kind of messy, since restrict_as_ufc_function is responsible
> >> >> for calling evaluate, which in turn calls eval, right? Hence, for this
> >> >> to work, we need to store the list of missing points outside of eval.
> >> >> Do you suggest some vectors in GenericFunction?
> >> >>
> >> >
> >> > It's easy for FooFunction to get out of hand. What about some separate
> >> > objects for
> >> >
> >> >  1) Finding/communicating off-process points
> >> >
> >> >  2) Evaluating off-process Functions
> >> >
> >> > I feel that this functionality is a bit sophisticated given what
> >> > we're still lacking in parallel. It would be nice to try to get some
> >> > more of the fundamentals in place and tested first. It will then become
> >> > clearer how best to do more complicated things.
> >> >
> >> > Garth
> >>
> >> That's a very good point. Is there a blueprint on the parallel effort
> >> and what's missing there?
> >>
> >
> > There are a few (the most obvious are prefixed with parallel-).
> >
> > A starting point would be to get more demos working in parallel. Stokes
> > would be a good start.
> >
> > Garth
> 
> I have to be a bit more applied these days, so I can't really take
> responsibility for fixing the demos. 

It's not the demos that are broken - they expose basic functionality
that's lacking.

Garth 

> However, getting mesh functions
> to work in parallel, at least for cell indicators, is something I
> really need. I will probably start with that.
> 
> Ola
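For context, a cell-indicator MeshFunction in the Python interface
looks roughly like this in serial (a sketch; the parallel behaviour,
i.e. distributing the markers consistently across partitions, is
exactly what is missing):

    from dolfin import *

    mesh = UnitSquare(20, 20)
    # One integer marker per cell.
    indicators = MeshFunction("uint", mesh, mesh.topology().dim())
    indicators.set_all(0)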
> 
> >
> >
> >> Ola
> >>
> >> >> Ola
> >> >>
> >> >> > --
> >> >> > Anders
> >> >> >
> >> >> >
> >> >> >> > The problem with this is that it can potentially lead to quite a lot
> >> >> >> > of communication. In the worst case, all points would be missing for
> >> >> >> > all processes and then all points would have to be passed around
> >> >> >> > between all processes. But I don't see any other way since it's not
> >> >> >> > possible for a process to know where a missing point may be located.
> >> >> >>
> >> >> >> Yes, this might be pretty bad. Also, if the two different meshes
> >> >> >> don't overlap at all, all points in each mesh are outside the other
> >> >> >> local mesh, giving lots of values to send around. But let's not worry
> >> >> >> too much about this yet :)
> >> >> >>
> >> >> >> Ola
> >> >> >>