
dolfin team mailing list archive

Re: Integer type transition

 

On Mon, Nov 19, 2012 at 8:41 AM, Anders Logg <logg@xxxxxxxxx> wrote:
> On Mon, Nov 19, 2012 at 08:13:10AM +0000, Garth N. Wells wrote:
>
>> Then configure the backend to support 64 bit integers.
>>
>> > So why do we need size_t? Shouldn't we rather use the LA int type for
>> > everything that needs to be big (and dolfin::uint (typedef for
>> > unsigned int as before) for all other integers?
>>
>> Because Mesh, etc should not depend on the backend int type.
>
> ok. So the logic is:
>
> - Bigger problems need bigger numbers
>
> - This implies bigger numbers for LA backends and we can't dictate what
>   they use, hence PetscInt in PETScMatrix::add
>

Yes, although for our own backends we can dictate the type (e.g.
STLMatrix, which uses std::size_t for data storage).
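
To illustrate the idea (this is only a sketch, not the actual DOLFIN
classes): the backend-neutral interface keeps std::size_t, and only the
PETSc wrapper converts to the backend index type at the call boundary.
PetscIntLike stands in for the real PetscInt from petscsys.h.

#include <cstddef>
#include <vector>

// Stand-in for PetscInt; the real type comes from petscsys.h and is
// 32 or 64 bits wide depending on how PETSc was configured.
typedef long long PetscIntLike;

class GenericMatrixSketch
{
public:
  virtual ~GenericMatrixSketch() {}

  // Backend-neutral interface: indices are plain std::size_t
  virtual void add(const double* block,
                   std::size_t m, const std::size_t* rows,
                   std::size_t n, const std::size_t* cols) = 0;
};

class PETScMatrixSketch : public GenericMatrixSketch
{
public:
  void add(const double* block,
           std::size_t m, const std::size_t* rows,
           std::size_t n, const std::size_t* cols)
  {
    // Convert to the backend index type only at this boundary
    std::vector<PetscIntLike> _rows(rows, rows + m);
    std::vector<PetscIntLike> _cols(cols, cols + n);

    // A real implementation would now call something like
    // MatSetValues(A, m, &_rows[0], n, &_cols[0], block, ADD_VALUES);
    (void)block;
  }
};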

> - It also implies bigger numbers for numbering of mesh entities and we
>   don't want to use PETScInt for the mesh, hence size_t
>

Yes.

Here's a current example: I would like to test some very large meshes
(including parallel mesh refinement), but I can't get PETSc to build
with 64-bit integers. I don't want a PETSc build problem to stop me
from testing a big mesh.

> ?
>
> If so, why don't we use size_t all over, except for the linear algebra
> interfaces? Any performance hit will mostly be visible for large
> arrays and for those we use size_t anyway so it would just mean
> changing the small numbers from dolfin::uint to size_t.
>

Yes, I think that std::size_t should be the default integer type.
This does not, however, preclude using 'unsigned int' where there is a
compelling reason, just as there may be cases where 'unsigned short
int' is appropriate (in both cases most likely for memory reasons).
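
As a toy illustration of that memory trade-off (sizes below are the
typical ones on a 64-bit Linux/x86-64 build; the standard only
guarantees minimums):

#include <cstddef>
#include <iostream>

int main()
{
  // Typically prints 2, 4 and 8, so per-entity arrays of std::size_t
  // cost 2-4x the memory of the smaller types; that memory cost is the
  // main reason to keep the smaller types available.
  std::cout << "unsigned short int: " << sizeof(unsigned short int) << " bytes\n"
            << "unsigned int:       " << sizeof(unsigned int) << " bytes\n"
            << "std::size_t:        " << sizeof(std::size_t) << " bytes\n";
  return 0;
}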

>> >> With the change I just pushed, if you make DolfinIndex a typedef for
>> >> 'long long int', Trilinos works with 64 bit integers. I did some basic
>> >> testing and it worked fine (with Krylov solvers). You need Trilinos 11
>> >> for this.
>> >
>> > Does Trilinos have a typedef for this like PETSc?
>> >
>>
>> No. It overloads functions.
>
> ok.
>
>> >> > And if we want to use PETSc for large index ranges, then we need to
>> >> > use PETScInt which may or may not be compatible with size_t.
>> >> >
>> >>
>> >> Which a typedef can handle. Block sizes are passed by value, which
>> >> takes care of any casting.
>> >
>> > Sure, but it's confusing with 3 index types: the LA type (now
>> > dolfin::DolfinIndex), size_t and dolfin::uint).
>> >
>> > My suggestion would be:
>> >
>> > - typedef dolfin::Index (not dolfin::DolfinIndex) used for integers
>> >   that need to be large
>> >
>> >   This can be a typedef for either PetscInt or whatever Epetra uses.
>> >
>>
>> We shouldn't couple all of DOLFIN to the backend typedef.
>>
>> > - typedef dolfin::uint for unsigned int as now for all other integers
>> >   that don't need to be large
>> >
>> > - don't use size_t
>> >
>>
>> I think we should use std::size_t because it signals intent. It also
>> maps naturally onto the STL and numpy (numpy.uintp). I would like to
>> remove dolfin::uint and use 'unsigned int' when we want an 'unsigned
>> int'. The integer type for backend compatibility should be localised as
>> much as possible. Otherwise we will get into the situation we have
>> now: a typedef for uint that cannot be changed without breakages all
>> over.
>
> I'll think more about it. I'm still not comfortable with 3 different
> integer types.
>

C++ provides more than three integer types.

We should get rid of dolfin::uint and default to std::size_t. That
gives one integer type in over 95% of the interface.
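
A rough sketch of that direction (the names and the configure-time
macro are illustrative, not the final DOLFIN definitions): one backend
index typedef localised to the linear algebra layer, std::size_t
everywhere else.

#include <cstddef>

namespace sketch
{
  // Seen only by the linear algebra wrappers; chosen to match the
  // backend build (e.g. a 32-bit or 64-bit PetscInt).
#ifdef SKETCH_USE_64BIT_LA_INDICES   // hypothetical configure-time flag
  typedef long long int LAIndex;
#else
  typedef int LAIndex;
#endif

  // Mesh numbering, dof maps, etc. use std::size_t directly, so no
  // dolfin::uint typedef is needed outside the LA layer.
  inline std::size_t global_index(std::size_t local_index,
                                  std::size_t offset)
  { return offset + local_index; }
}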

Garth

> --
> Anders

