Parallelization and PXMLMesh
Ola and I have started to look into the parallelization again and have
come to some conclusions and suggestions:
The code in PXMLMesh.cpp is a complex (and impressive) beast which
handles quite a few things at the same time: parsing XML, partitioning
with ParMetis, redistribution of mesh data with MPI, and building a
distributed mesh with DynamicMeshEditor.
It would be good to break all this code up into pieces. In particular,
it seems suboptimal to let an XML parser handle partitioning and
parallel communication.
We therefore suggest that the XML parser should only read in data from
file. No calls to MPI, no calls to ParMetis, just reading data.
Instead, the mesh would be distributed by first parsing a portion of
the mesh XML file on each processor and then calling Mesh::distribute()
to do the actual work (partitioning and distribution). Something like
this:
Mesh mesh;
ParallelMeshData pdata("mesh.xml"); // parse local portion only, no MPI
mesh.distribute(pdata);             // calls MeshDistribution::distribute()
I'm not entirely happy with the above syntax so suggestions are
welcome.
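To make the proposed division of labor concrete, here is a rough
sketch of how the pieces could fit together. All class names, members
and signatures below are suggestions for discussion only, not existing
DOLFIN code:

#include <string>
#include <vector>

// Holds the portion of the mesh data parsed locally on one processor.
// The constructor only reads data: no MPI, no ParMetis.
class ParallelMeshData
{
public:
  explicit ParallelMeshData(const std::string& filename)
  {
    // Parse the local chunk of the XML file (details elided)
  }

  std::vector<double> vertex_coordinates;   // local vertex coordinates
  std::vector<unsigned int> cell_vertices;  // local cell connectivity
};

class Mesh;  // forward declaration

// Computes the partition (wrapping ParMetis), no mesh building here
class MeshPartitioning
{
public:
  static std::vector<unsigned int>
  partition(const ParallelMeshData& data, unsigned int num_processes)
  {
    // Call ParMetis on the distributed dual graph and return, for
    // each local cell, the process it should be assigned to
    // (trivial placeholder below, assuming tetrahedra)
    return std::vector<unsigned int>(data.cell_vertices.size() / 4, 0);
  }
};

// Redistributes the data with MPI and builds the local mesh
class MeshDistribution
{
public:
  static void distribute(Mesh& mesh, ParallelMeshData& data)
  {
    // 1. Compute the partition (MeshPartitioning / ParMetis)
    // 2. Exchange vertices and cells with MPI according to the partition
    // 3. Build the local mesh with MeshEditor/DynamicMeshEditor
  }
};

With this split, the XML parser stays a pure reader and
Mesh::distribute(pdata) just forwards to
MeshDistribution::distribute().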
Based on the above algorithms, we can also create a simple script
which allows offline partitioning of meshes:
dolfin-partition mesh.xml 16
This will create 16 mesh files named mesh-0.xml, mesh-1.xml, ...,
mesh-15.xml.
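As a rough sketch of what such a driver could look like (the mesh
reading and partitioning calls are hypothetical placeholders, not
existing code):

#include <cstdlib>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char* argv[])
{
  if (argc != 3)
  {
    std::cerr << "Usage: dolfin-partition mesh.xml n" << std::endl;
    return 1;
  }

  const std::string infile = argv[1];
  const int num_parts = std::atoi(argv[2]);

  // Strip the .xml suffix to get the output prefix: mesh.xml -> mesh
  const std::string prefix = infile.substr(0, infile.rfind(".xml"));

  // Read the full mesh serially and partition the cells into
  // num_parts pieces, e.g.
  //
  //   MeshData data(infile);            // hypothetical
  //   partition_cells(data, num_parts); // hypothetical

  for (int p = 0; p < num_parts; ++p)
  {
    std::ostringstream filename;
    filename << prefix << "-" << p << ".xml";
    // Write the vertices and cells assigned to piece p
    std::cout << "Writing " << filename.str() << std::endl;
  }

  return 0;
}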
One may then read in the partitioned data on each processor by
Mesh mesh("mesh-*.xml");
So in summary, we suggest that we let all XML parsers just parse mesh
data, and move the parallel partitioning and distribution to separate
classes MeshPartitioning and/or MeshDistribution. We have made some
progress on reworking PXMLMesh.cpp already and can hopefully have a
first prototype ready pretty soon (unless I get tied up with finishing
the Python interface for the next release).
--
Anders