
dolfin team mailing list archive

Re: Distance to boundary

 


On 12/08/11 11:18, Johan Hake wrote:
> On Friday August 12 2011 06:20:50 Mikael Mortensen wrote:
>> On 12 August 2011 14:31, Garth N. Wells <gnw20@xxxxxxxxx> wrote:
>>> On 12/08/11 02:52, Johan Hake wrote:
>>>> Any objections to me merging this into trunk?
>>>>
>>>> Will add unit tests for all methods soonish.
>>>
>>> How soon?
>
> Early next week.
>
>>>> The code now resides in:
>>>>
>>>> lp:~dolfin-core/dolfin/hake
>>>>
>>>> Also, what would be the best way to make it work in parallel? The
>>>> distance from all vertices in a mesh to the closest boundary might
>>>> not be easy to compute in parallel, as some vertices residing in
>>>> one mesh might have their closest distance in the mesh on another
>>>> processor.
>>>
>>> With difficulty. Sounds like an operation that is inherently not
>>> parallel.
>>
>> I agree, because one has to run through the entire boundary mesh for
>> all vertices in the domain. However, perhaps it's not all that bad,
>> because only a few of the processors will contain a boundary, and
>> even fewer a marked boundary. So perhaps the speed will not be all
>> that bad, even though one has to cycle through all processors?
>
> You might be correct. It just sounds very non-scalable...
>
>> I guess that for each vertex in the domain one has to broadcast that
>> vertex's position to all processors, compute the nearest distance on
>> the local mesh if it contains a boundary, and then in the end do an
>> MPI_Reduce with MPI_MIN?
>
> The "only" thing that needs to be broadcast is the boundary mesh.
> Then each processor measures the distance to the whole boundary for
> its local vertices.
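The scheme quoted above (gather the boundary mesh, then let each process handle its own vertices) can be sketched in plain numpy. The points, the two-rank partitioning, and the helper name `local_min_distances` below are all made up for illustration; a real run would gather the boundary vertex coordinates with mpi4py (e.g. `MPI.COMM_WORLD.allgather`) and use dolfin's actual mesh data.

```python
import numpy as np

def local_min_distances(local_vertices, boundary_points):
    """Brute-force distance from each local vertex to the nearest
    boundary point (a stand-in for the closest-point query that the
    CGAL implementation would perform)."""
    # (n_local, n_boundary) matrix of pairwise distances
    d = np.linalg.norm(
        local_vertices[:, None, :] - boundary_points[None, :, :], axis=2)
    return d.min(axis=1)

# Simulate the scheme on two "processes". In real code each rank would
# hold its own mesh partition, and `boundary` would be the gathered
# boundary vertex coordinates from all ranks.
boundary = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
rank0_vertices = np.array([[0.0, 0.0], [0.5, 0.0]])
rank1_vertices = np.array([[0.0, 0.5], [1.0, 1.0]])

d0 = local_min_distances(rank0_vertices, boundary)  # distances on "rank 0"
d1 = local_min_distances(rank1_vertices, boundary)  # distances on "rank 1"
```

Note that once every rank holds the full boundary, no reduction is needed; the cost is O(n_local x n_boundary) per rank, which is what a spatial search structure such as CGAL's AABB tree would cut down.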

Getting the distance of all vertices from the boundary will scale
poorly, but there are some tricks that can mitigate this.

Broadcasting the BoundaryMesh is a no-no. Asking each process to
compute the distance for a subset of vertices at a time will work, but
will become slow. Broadcasting the BoundaryMesh is very bad because
for a large problem an individual process will not have enough memory
and the method will break.
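One memory-bounded alternative, sketched below on made-up data with a hypothetical helper (not dolfin API): stream the boundary points through in fixed-size chunks and keep a running minimum, so no process ever needs to hold the whole BoundaryMesh. In parallel, each rank's boundary chunk would circulate between ranks (e.g. in a ring), finishing with an elementwise MPI_MIN reduction.

```python
import numpy as np

def min_distances_chunked(vertices, boundary_points, chunk_size=2):
    """Running minimum over boundary chunks, so at most chunk_size
    boundary points are held at once."""
    best = np.full(len(vertices), np.inf)
    for start in range(0, len(boundary_points), chunk_size):
        chunk = boundary_points[start:start + chunk_size]
        d = np.linalg.norm(vertices[:, None, :] - chunk[None, :, :], axis=2)
        np.minimum(best, d.min(axis=1), out=best)  # elementwise min update
    return best

# Toy data: two query points, four boundary points, processed in chunks
pts = np.array([[0.2, 0.0], [0.9, 0.9]])
bnd = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = min_distances_chunked(pts, bnd, chunk_size=2)
```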

>> I have no idea how to merge that algorithm with CGAL, though.
>
> That will not be any problem. Each processor uses its own local
> boundary mesh of the whole mesh and just checks its local vertices.
>
>> But then again, in my tests on one processor the CGAL implementation
>> is more robust and as much as 20-50 times faster than the Eikonal
>> solver, so it might be worth the trouble. The Eikonal solver works
>> in parallel though.
>
> Ok.
> 
>>>> I am inclined to think that this is a bad side effect a user needs
>>>> to be aware of when using this function in parallel. But then I
>>>> know someone who would disagree ;)
>>>
>>> Add a line to throw an error in parallel until we figure out what's
>>> best.
>
> Ok.
>>>> Another thing is that the present implementation takes a
>>>> GenericVector representing the return values of the distances at
>>>> each vertex. Something like:
>>>>
>>>> distances = Function(V)
>>>>
>>>> # Compute distance to boundary for each vertex
>>>> distance_computation = DistanceToBoundaryComputation(mesh)
>>>> distance_computation.vertex_distances(distances.vector())
>>>>
>>>> In vertex_distances() I check that the local size of the passed
>>>> vector is the same as mesh.num_vertices(). This gives an error
>>>> when running in parallel:
>>>>
>>>> Expected a vector with the same local size as the number of
>>>> vertices (1449) in the mesh, got 1427
>>>>
>>>> Expected a vector with the same local size as the number of
>>>> vertices (1457) in the mesh, got 1441
>>>
>>> I would suggest not using Vector. It's really for linear algebra
>>> operations. Use std::vector or dolfin::Array.
>
> The nice thing with using a vector from a Function is that I can then
> use it directly in a form. Not sure
>>>> I suspect that it has something to do with shared vertices. How do
>>>> I access the "correct" number of vertices of a mesh partition, and
>>>> how do I know which ones are owned only by the local mesh?
>>>
>>> Boundary vertices are not 'owned' by any one process. Dofs on a
>>> partition boundary are owned by one process. This information is
>>> available via the DofMap.
>
> Ok, it looks like there isn't an easy way to map vertices on a mesh
> to a degree of freedom in a Vector without a dofmap, certainly not in
> parallel. Maybe the vertex_distance function should be renamed to
> degree_of_freedom_distance and also include a dofmap?
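The dofmap-based variant discussed above would amount to something like the toy sketch below. The mapping and ownership sets are invented stand-ins for what dolfin's DofMap actually provides; the point is only that each process writes just the dofs it owns, which sidesteps the double counting of shared partition-boundary vertices.

```python
# Hypothetical vertex -> dof data; in dolfin this information would
# come from the DofMap, and ownership is decided by the partitioner.
vertex_to_dof = {0: 7, 1: 3, 2: 5}    # local vertex index -> global dof
owned_dofs = {3, 7}                   # dofs this process owns (dof 5 is
                                      # a shared dof owned elsewhere)

vertex_distances = [0.25, 0.10, 0.40]  # per-vertex result on this process

# Scatter into dof space, skipping dofs owned by other processes
dof_values = {}
for v, dof in vertex_to_dof.items():
    if dof in owned_dofs:
        dof_values[dof] = vertex_distances[v]
```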


There is a lot of work to be done on parallel meshes, especially with
respect to communication. This is needed for interior facet
integration and mesh adaptation. When this is in place, it should be
easy to work with shared vertices, facets, etc.

Garth

>>>> I figure I have to look into ParallelData, which btw is not
>>>> wrapped to Python. We need to add it to dolfin_mesh.h. Will do
>>>> later...
>>>
>>> Don't rush to expose it. It shouldn't be required in user code.
>
> Ok, but it is nice to have a look at these structures to better
> understand them :)
>
> Johan
>
>>> Garth
>>>
>>>> Johan
>>>>
>>>> On Wednesday August 10 2011 14:02:34 Johan Hake wrote:
>>>>> Hello!
>>>>>
>>>>> I have created a class, DistanceToBoundaryComputation, similar to
>>>>> IntersectionOperator, which takes a Mesh or a FacetFunction and
>>>>> computes the distance from any point to the closest point on the
>>>>> boundary, or on a subset of the boundary given by the
>>>>> FacetFunction.
>>>>>
>>>>> I have published it together with two demos, cpp and python, to
>>>>> illustrate some of its functions.
>>>>>
>>>>> lp:~johan-hake/dolfin/distance-to-boundary
>>>>>
>>>>> If the distances from each vertex are computed, the result will
>>>>> be similar (not always equal) to the signed distance function
>>>>> obtained from the eikonal equation, but it computes faster.
>>>>>
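The relationship claimed above can be checked in one dimension: the exact distance to the boundary satisfies the eikonal equation |grad d| = 1 with d = 0 on the boundary, away from points where d is not differentiable. A quick numpy check on the interval [0, 1] with boundary {0, 1}:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
d = np.minimum(x, 1.0 - x)  # exact distance to the boundary {0, 1}

# |d'| = 1 almost everywhere, i.e. d satisfies the eikonal equation,
# except at the kink x = 0.5 where the two boundary distances meet.
grad = np.gradient(d, x)
interior = np.abs(x - 0.5) > 0.025  # exclude the non-differentiable point
assert np.allclose(np.abs(grad[interior]), 1.0)
```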
>>>>> I am not sure how this can best be integrated into the present
>>>>> dolfin. It generates some data, like a BoundaryMesh, which it
>>>>> needs to store, so it might not be something we want to put into
>>>>> the Mesh class? If I use the same lazy initialization that Andre
>>>>> used, it might be possible.
>>>>>
>>>>> Please feel free to have a look at one of the demos to see it in
>>>>> action.
>>>>>
>>>>> Johan
>>>>>
>>>>> _______________________________________________
>>>>> Mailing list: https://launchpad.net/~dolfin
>>>>> Post to : dolfin@xxxxxxxxxxxxxxxxxxx
>>>>> Unsubscribe : https://launchpad.net/~dolfin
>>>>> More help : https://help.launchpad.net/ListHelp


