
dolfin team mailing list archive

Re: Integer type transition


On Mon, Nov 19, 2012 at 9:08 AM, Anders Logg <logg@xxxxxxxxx> wrote:
> On Mon, Nov 19, 2012 at 09:04:29AM +0000, Garth N. Wells wrote:
>> On Mon, Nov 19, 2012 at 8:41 AM, Anders Logg <logg@xxxxxxxxx> wrote:
>> > On Mon, Nov 19, 2012 at 08:13:10AM +0000, Garth N. Wells wrote:
>> >
>> >> Then configure the backend to support 64 bit integers.
>> >>
>> >> > So why do we need size_t? Shouldn't we rather use the LA int type for
>> >> > everything that needs to be big (and dolfin::uint (typedef for
>> >> > unsigned int as before) for all other integers?
>> >>
>> >> Because Mesh, etc should not depend on the backend int type.
>> >
>> > ok. So the logic is:
>> >
>> > - Bigger problems need bigger numbers
>> >
>> > - This implies bigger numbers for LA backends and we can't dictate what
>> >   they use, hence PetscInt in PETScMatrix::add
>> >
>>
>> Yes, although for our own backends we can dictate the type (e.g.
>> STLMatrix, which uses std::size_t for data storage).
>
> Is that relevant for big problems?
>

Could be. I use STLMatrix to interface to third-party solvers that
cannot be used via PETSc, or for which the PETSc interface is too old
or too limited.

For consistency with the GenericMatrix interface, STLMatrix has to use
DolfinIndex for the add/set functions.

> If so, would an option be to always assemble into our own matrix, then
> copy to whatever backend and isolate the special indexing that way?
>
>> > - It also implies bigger numbers for numbering of mesh entities and we
>> >   don't want to use PetscInt for the mesh, hence size_t
>> >
>>
>> Yes.
>>
>> Here's a current example: I would like to test for some very large
>> meshes (including for parallel mesh refinement), but I can't get PETSc
>> to build with 64 bit integers. I don't want a PETSc build problem to
>> mean that I can't move on with testing a big mesh.
>>
>> > ?
>> >
>> > If so, why don't we use size_t all over, except for the linear algebra
>> > interfaces? Any performance hit will mostly be visible for large
>> > arrays and for those we use size_t anyway so it would just mean
>> > changing the small numbers from dolfin::uint to size_t.
>>
>> Yes, I think that std::size_t should be the default integer type.
>
> ok, good. The more standard types we can use, the better.
>
>> This however does not preclude using 'unsigned int' if there is a
>> compelling reason, just as there may be cases where 'unsigned short
>> int' might be appropriate (in both cases likely due to memory
>> issues).
>
> Sure.
>
>> >> I think we should use std::size_t because it signals intent. It also
>> >> maps naturally onto the STL and numpy (numpy.uintp). I would like to
>> >> remove dolfin::uint and use 'unsigned int' when we want an 'unsigned
>> >> int'. The integer type for backend compatibility should be localised as
>> >> much as possible. Otherwise we will get into the situation we have
>> >> now: a typedef for uint that cannot be changed without breakages all
>> >> over.
>> >
>> > I'll think more about it. I'm still not comfortable with 3 different
>> > integer types.
>> >
>>
>> C++ provides more than three integer types.
>>
>> We should get rid of dolfin::uint and default to std::size_t. That gives
>> one integer type in over 95% of the interface.
>
> Sounds good then.
>

OK. Let's get rid of dolfin::uint usage from within DOLFIN, but keep
it as a typedef for the next release.

Garth

> --
> Anders

