fenics team mailing list archive

build system

 

Point taken. I will look at the GTK build process and see how they do native
Windows builds and hope to learn how they solve this problem of having one
build process for both Linux and Windows.

Personally, I like the build process to be explicit so that I can quickly
get a sense of how the software is built, by whom, and for what reason. For
a library or application of any size, this implies that the source is
organized so that it simplifies the build process. However, I see your
problem with mixing two complex libraries and somehow making it work. I
haven't been part of an organization where the build process wasn't an
issue, and it is always a problem of divergent requirements. I would love
to build this stuff natively on Windows, so I am biased in the Windows
direction. I figured I'd nudge you a bit now since it would be easier to
fix early in the process than two years from now when the environment is
200k LOC.

Another cool idea is to start using an automated build system like
CruiseControl, a bug tracker like JIRA (http://www.atlassian.com/software/jira/)
and a wiki like Confluence (http://www.atlassian.com/software/confluence).
This is all stuff I can do for FEniCS since I can't do squat yet on the FEA
side, and you guys shouldn't be sidetracked by this peripheral stuff.

Theo

> P.S. I see that the build process is moving further away from what is
> customary in the Windows world. I need to redo most of the build process
> anyway to get it included in a .NET world, so I don't know yet if this is
> problematic, but tricks like `dolfin-config --cflags` are just not very
> Windows friendly, nor are lots of shell-scripted conditionals or pre- and
> post-build file moves like the "make install" target.
> 
> I am a big fan of the jam/bjam cross-platform build tools, mostly because
> I think Perforce and BOOST are well done, and these tools are very
> lightweight and comprehend the Windows world natively, something that
> autoconf just doesn't. So for now, I'll hold off judgment until I get
> more familiar with the environment.

We modified the build process for DOLFIN 0.5.11 to follow the (Unix)
standard better. Before, we had sort of a home-grown solution, where
the demos could be compiled against the library without needing to
install the library. The new version requires the library to be
installed (make install) to compile the demos. This is the standard
that 99% of all Unix libraries follow: headers installed under
$prefix/include and libraries under $prefix/lib.
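
As a sketch of that flow (the prefix, demo name, and link flags below are
illustrative placeholders, not copied from the DOLFIN build):

  # standard Unix layout: headers end up under $prefix/include,
  # libraries under $prefix/lib
  ./configure --prefix=/usr/local
  make
  make install

  # a demo can then be compiled against the installed copy
  # (assuming the library installs as libdolfin)
  c++ -I/usr/local/include -o demo demo.cpp -L/usr/local/lib -ldolfin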

Then there is the problem of compiling and linking against a big
modular library consisting of a large number of smaller
libraries. Getting all the libraries (-lfoo, -lbar, etc.) listed in the
correct order can be difficult, especially if the library (in this
case DOLFIN) uses stuff from other libraries (like PETSc). Then these
libraries must also be listed.
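
To make the ordering problem concrete, a hand-written link line can end up
looking something like this (all sub-library names here are invented for
illustration only):

  c++ -o demo demo.o \
      -ldolfin_mesh -ldolfin_la -ldolfin_io \
      -lpetscksp -lpetscmat -lpetscvec -lpetsc \
      -llapack -lblas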

To make life easier for a user, the big library must thus communicate
in some way the correct libraries to the user. We do this through the
dolfin-config script, which is automatically generated and installed
during a build of DOLFIN. This may not be a standard approach, but it
is common enough and it's modelled after what GTK and LibXML2 do.
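
In practice a user just lets the script fill in the compile line, for
example (assuming dolfin-config also provides a --libs option, in the same
way gtk-config and xml2-config do):

  c++ `dolfin-config --cflags` -o demo demo.cpp `dolfin-config --libs`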

PETSc uses a different approach and forces you to extract the includes
and libs through its makefiles:

PETSC_CFLAGS=`make -s -C $PETSC_DIR getincludedirs PETSC_DIR=$PETSC_DIR`
PETSC_LIBS=`make -s -C $PETSC_DIR getlinklibs PETSC_DIR=$PETSC_DIR`
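
The extracted variables can then go on an ordinary compile line, for
instance (a generic sketch, not taken from an actual PETSc example):

  c++ $PETSC_CFLAGS -o demo demo.cpp $PETSC_LIBS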

So, I wouldn't say dolfin-config is a trick. It's something I consider
to be close to standard and it's there to help the user. If you can
get your programs to compile in another way, you don't have to use it.

/Anders
